Best "fake it till you make it" story I've ever heard
This strategy is often called a "concierge" MVP. You deliver the service you claim, but behind the scenes everything is incredibly manual. Once you've proved people like the service, you then go make the process less manual. Zappos and Amazon are both famous for doing this.
p.s. -- I already put this in a chain, but the majority of comments are just claiming this is fraud. Thought it might be worth posting something slightly more visible.
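If it helps to make the pattern concrete, here's a minimal sketch (all names hypothetical, not from any real product) of how a concierge MVP keeps the customer-facing contract stable while the backend gets swapped from humans to automation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Transcript:
    meeting_id: str
    text: str

# Both backends share one signature, so the customer-facing API never changes.
Backend = Callable[[str, str], Transcript]

def human_backend(meeting_id: str, audio: str) -> Transcript:
    # Stage 1 ("concierge"): a person listens and writes the notes by hand.
    return Transcript(meeting_id, f"[notes typed by a human] {audio}")

def automated_backend(meeting_id: str, audio: str) -> Transcript:
    # Stage 2: the same contract, now served by an automated pipeline.
    return Transcript(meeting_id, f"[machine transcript] {audio}")

class NotetakerService:
    def __init__(self, backend: Backend):
        self.backend = backend  # swap this once automation actually works

    def transcribe(self, meeting_id: str, audio: str) -> Transcript:
        return self.backend(meeting_id, audio)

svc = NotetakerService(human_backend)  # launch manually, learn fast
svc.backend = automated_backend        # later: make the process less manual
```

The point of the sketch is only that nothing the customer touches changes when the backend does; the ethical question in this thread is about what the customer was told, not the architecture.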
There are many cases where a service like that might not be fraud. For example, if you never explicitly said it was an AI, and every detail of the service remains the same from the customer's perspective.
The privacy implications alone make the difference between a human sitting in on your meeting and an actual AI enough to call this fraud. Giving it a fancy name doesn't change that.
Even better, its subclass the Wizard of Oz MVP: https://www.rabitsolutions.com/blog/examples-of-mvp/
When I was teaching product development I called this the Mechanical Turk strategy. Completely agree that it isn’t automatically fraud, it can also be a cheap prototype that lets you start testing hypotheses ASAP.
Reminds me a lot of Theranos, but without the jail time (yet).
https://en.wikipedia.org/wiki/Elizabeth_Holmes
Also, there was this, which also originally claimed to be AI:
https://spectrum.ieee.org/untold-history-of-ai-mechanical-tu...
I think what PG meant by "do things that don't scale" is earnest effort in service of building a real product: talking to users, manually onboarding, hand-holding early customers so you can learn fast and iterate toward something that eventually does scale.
What this startup did isn't that, AFAICT. It wasn't manual work in service of learning...it was just fraud as a business model, no? Like, they were pretending the technology existed before it actually did. There's a bright line between unscalable hustle and misleading customers about what your product actually is.
Doing unscalable things is about being scrappy and close to the problem. Pretending humans are AI is just straight up deceiving people.
Totally agree with this point. A lot of the advice pg and people in similar roles give isn't universally true. To reiterate your point: "doing things that don't scale" is meant specifically for seeking out 1-1 user experience and learning from it.
A similar example is "Make something people want". This is generally good advice for focusing your efforts on solving customers' problems. Yet it's disastrous advice if taken literally to its fullest extent (you can only imagine).
Isn't that something the startup people always advocate, to do things by hand till you've proved there's a market? Sounds like that's what they did.
Not sure if this is what PG had in mind, but yes, the famous "do things that don't scale at first" advice to startups
Yeah imo this is a success story.
If it had been sold as a transcription service alone, yes, it's a success.
Claiming that the transcripts were generated by a nonexistent AI is fraud and should be treated as such.
Lots of people joke about how their job is "just attending a bunch of meetings", but can you imagine how horrific your job would be if you were this guy and your job actually was just "attend a bunch of meetings"?
AND you didn't have context or interest in the content?
AND you were required to write an essay at the end proving that you paid attention?!
AND you got a billion dollar valuation out of it?
Wait...
you mean a billion dollar valuation that'll collapse long before you can actually get an exit?
On the other hand you can make goofy stuff up and people will think it was "the AI".
https://youtu.be/85fx0LrSMsE?si=D5WU1DEtBeHvjVaN&t=115
From LI
> this was for our first few beta customers from 2017 and we made it clear that there was a human in the loop of the service. LLMs didn't exist yet. It was like offering an EA for $100/mo - several other startups did that as well, but obviously it doesn't scale.
So not necessarily fraud unless they deceived investors. Or he’s covering up his mistake. Getting the popcorn!!
That's not from the LI post by the CTO that the story (or the Futurism story the linked story is apparently re-reporting) is based on [0], which has the same straight-up fraud description that appears in both stories: “We told our customers there's an 'AI that'll join a meeting.' In reality it was just me and my co-founder calling in to the meeting, sitting there silently and taking notes by hand.”
Your quote seems likely to be from an after-the-fact damage-control “clarification” post by the CEO [1], which describes the early users as close friends who knew it was human-assisted and not machine transcription. (I say "seems likely" because it expresses something similar to what you claim, but slightly more distant from the original story, and doesn't contain the quote you present; it is marked as edited, though, so it seems plausible that it at one point had your quote and was then rewritten into an even stronger revision of the narrative for PR reasons.)
[0] https://www.linkedin.com/posts/sudotong_we-charged-100month-...
[1] https://www.linkedin.com/posts/krishramineni_we-charged-100m...
1. It's not fraud. They were proving the market at an early stage while living on a pizza diet.
2. Their startup now does what it says on the tin. And it's now a unicorn.
3. To those claiming this was "unethical" - a large company providing this service would still record calls and have QA / engineers listening to calls to improve the service.
+1, this strategy is often called a "concierge" MVP. You deliver the service you claim, but behind the scenes everything is actually incredibly manual. Once you've proved people like the service, you then go make the process less manual. Zappos and Amazon are both famous for doing this.
And as long as you make a buck or two, then who cares if you lie a little to the customers about what is actually happening!
Customers pay for the service, not the method by which the service is provided. If they explicitly sold it as a service without a human in the loop, then I think that's bad. If they just sold transcription... then this is transcription.
How isn't this just fraud?
The expectation is that sensitive meetings run through a pipeline without being exposed to actual people (and if they are, it's for very specific reasons, with audit trails).
Here, they literally listen to sensitive information and can act on it.
How do you trust they won't do it again to "enhance summaries" or something in the future?
This is fraud. Their exposure here is not just being sued by clients, though there's that as well, but being charged with one or more crimes, convicted, and going to prison. This was an incredibly stupid scheme, made even more stupid by publicly confessing to it.
Relevant essay, Do things that don’t scale by Paul Graham.
https://paulgraham.com/ds.html
I can understand why people are incensed the founders sat in stealthily on meetings, but I don't understand why they don't feel the same level of mistrust for the company's bots.
Don't they realize the company will store their whole conversation in the cloud, and a rogue employee/founder can just as easily pull it up and listen after the fact?
As always, amazed how accepting this place is of this sort of behaviour. I remember people here trying to defend Theranos when all that came out...
CS courses really have to start placing a bit more emphasis on ethics.
> "Sitting in someone's meeting uninvited is violation of privacy. They wanted a bot in the meeting, not an uninvited person," said automation expert Umar Aftab. "This way you sabotage trust and could incur legal implications."
> "Good luck with all the lawsuits," added another. "This might read like a gritty founder hustle story," said software engineer Mauricio Idarraga. "But it's actually one of the most reckless and tone-deaf posts I've seen in a while."
> "We told our customers there's an 'AI that'll join a meeting'," said Udotong. "In reality it was just me and my co-founder calling in to the meeting sitting there silently and taking notes by hand."
They charged $100/month for this. If it were free then whatever, but lying to paying customers about the service is not okay.
It's a lot worse than that. This is a breach, from the perspective of the customers. They now have to explain to whoever was there how they disclosed their confidential information. That's going to become a nice boomerang.
Unless there was a violated promise of an on-prem notetaker app, there's absolutely no difference between having a third-party AI and a third-party contractor listening to your meetings. You should ALWAYS assume their engineers have access to stored data for maintenance and debugging.
Except they were pretty transparent about there being a human in the loop. They were essentially selling an MIT engineering grad as your note taker for $100/mo, which is a steal. Google hires associate product managers from MIT to be note takers for $20k/mo.
The quote taken from the article is: "We told our customers there's an 'AI that'll join a meeting'," said Udotong.
How do you get from 'AI that'll join a meeting' to 'an MIT engineering grad as your note taker'?
The rest about note takers is irrelevant when the problem is lying about the "note taker", as that could be the deciding factor for choosing a service, not price.
Disclaimer: I actually met Sam when he was launching this, and I felt like he pitched it at the time as aspiring to make it do all the things. I guess that's a lie since the AI didn't work at the time.
> They charged $100/month for this. If it were free then whatever, but lying to paying customers about the service is not okay.
Erm, to the customer, what is the difference between a bunch of humans transcribing your meetings or an AI doing it? If I'm paying $100/month to get my meetings transcribed, why do I care whether it's the founder, an AI, or magic pixie dust (but I repeat myself)?
Misleading investors is a different problem.
> If I'm paying $100/month to get my meetings transcribed, why do I care whether it's the founder, an AI, or magic pixie dust (but I repeat myself)?
Why you, personally, care or not is your business. If you were one of the customers who bought an automated (AI) service and instead got 2 guys gathering info during meetings, and are okay with it and see no difference between them -- then cool-emojis.
or for a more pointed response, see: https://news.ycombinator.com/item?id=45935354
The whole point of running a business is that I deliver a particular good for a price we agree on but then how I deliver it is up to me within fairly wide latitude.
If, for example, I have found some way to make some equivalent part cheaper, it is NOT incumbent upon me to disclose how I did it to you or my competitors. In order to protect trade secrets that may give away the answer or process to a competitor, I may lie straight to your face to misdirect you or possibly to throw a competitor down the wrong trail. As long as I'm not causing you harm and am delivering to spec, I consider myself to have a lot of latitude.
In any AI implementation, the company principals would have access to the data anyway so security wasn't compromised. In fact, because of the fallibility of human memory, security is probably better than running the audio through possibly compromised systems running an AI in a data center god knows where. So, data security and provenance was not harmed.
Sure, if you specified not to use indentured labor and I subcontract it to Bangladeshi orphans, you have a right to be upset as the bad PR could harm you. That didn't happen in this instance--the labor was company principals.
I see lots of posturing, but nobody pointing to what violations occurred other than not using <jazz hands> "AI". At no point has anyone in this thread demonstrated any harm to the end user beyond some very vague insinuations of "wrongdoing".
Instead, what I see is a bunch of people getting upset that they used "Disgusting Pixie Dust (direct employee labor)" instead of "Delicious Pixie Dust (AI)". If I'm being snide, I would point out that when the direction of substitution is reversed (AI instead of labor)--the HN legions would be celebrating the cleverness. If I'm being particularly snide, I would chalk the anger up to the fact that it nicely demonstrated that the AI Emperor isn't wearing any clothes and is making a bunch of people angry whose paychecks depend upon the emperor remaining naked.
Now, if they were pitching their "Custom AI" to investors while hand transcribing, that is a very, very different kettle of fish.
> what is the difference between a bunch of humans transcribing your meetings or an AI doing it?
Probably depends upon how sensitive the information is. I.e., "was PII involved?" would be a fairly clear example.
The joke that AI stands for "Actually, Indians", combined with the fact that the co-founder (and presumably the other guy typing) is Indian, is crazy.
"Actually, Indians" is not meant as a joke.
And it's not rare at all. It is also not rare to see such efforts funded.
> However, some LinkedIn commenters seem to see very little wrong with Fireflies' dubious early business practices.
Seems to be a good example of today’s zeitgeist.
Many of the comments on this very post, seem to take the same position.
I’m not horrified about what they did. This kind of shysterism is pretty common, these days.
What does disturb me, though, is an “end justifies the means” acceptance of these practices.
In law (and law enforcement), there's the “fruit of the poisonous tree” doctrine, where starting from something wrongful immediately taints everything that follows, even if it solves the case.
Coming from a perspective of wanting a lot more ethics and integrity in technology, I think we might be well-served to consider something like this. I’m deeply disturbed by the blatant moral decay in tech. I keep flashing on Dr. Malcolm, talking about “could” and “should.”
If this wasn’t related to software, it’d simply be called fraud. Selling an AI service that has no AI is lying.
But when it’s a SaaS product, it becomes an inspirational hustle-culture story.
A business charged for a service and the service was delivered. What's the problem here?
I would bet the TOS mentioned manual reviews.
Confidentiality breach. You are supposed to have processes in place to guarantee that your customers' data is safe from access by employees who haven't been explicitly disclosed to the customer. Saying 'a program makes annotations' versus 'me and my buddy are sitting in, undisclosed, on your confidential meeting' are two entirely different things from a legal perspective.
"I'll get you to the moon safely" = NASA
"I'll get you to the moon" = me with a wood chipper and a very powerful potato gun
The startup charged for "AI note-taking", not simply "note taking".
IMO the real fraud was against the investors.
If I invest in your AI startup and find out it's really people doing the work, I'm going to be pissed.
sounds fake, founders these days make up all sorts of edgy stories to PG signal to investors
Sounds similar to builder.ai but on a smaller scale?
Why even cheat here? Sounds like this is almost trivial with current AI systems.
Maybe it wasn’t when they started the company?
surprise, surprise, ai being used for fakery instead of value
Was it fraud? Or were they a couple of plucky capitalists rigging the system to get by?
I guess it just depends on your perspective...
The Lean Startup.
"Make it exist, first. You can make it good later"
Bias towards bullshit
It wouldn't be a surprise if YC had invested in this company.
$10M revenue in 2024, now $1B tender offer lol.