It was inevitable when ads were no longer enough to sustain websites, I suppose. It is a shame how critical news may be paywalled out from the public, though. Tragic, but fitting.
> It was inevitable when ads were no longer enough to sustain websites, I suppose. It is a shame how critical news may be paywalled out from the public, though. Tragic, but fitting.
Also when any public content started being gulped up by scrapers and resold as AI.
YC remains a great source of creative inspiration for me. However, I tend to skip most AI-related content on HN, as the topic does more harm than good in my daily life. Some people around me delegate more and more decisions to the chat, and that frightens me, especially if you are somehow dependent on them or your work gets evaluated by some creepy AI-driven bossware. We should admit that AI, particularly LLMs, is not just eating the world: it is destroying society, human communication, the education system, and the scientific community. And that list covers only the aspects I have personally encountered.
I work with some undergrads and see this delegation increasing year over year. Unfortunately it's also happening at the expense of reading books, using library search tools to find proper sources, and information gathering in general.
"An LLM might be able to explain something to you, but it can never understand it for you."
I was recently in a work AI training where we were encouraged to have AI review all our vendor and contractor budgets, find holes, and create rebuttals to line items. I was left wondering: what if the vendor has AI review our rebuttals and create counter-points to our AI-created arguments? At some point it will just be AI talking against itself to another AI chatbot.
> At some point it will just be AI talking against itself to another AI chatbot.
And then something like this [0] will happen, creating a weird wasteful meta-game about the model(s) used by each company.
It makes intuitive sense: If you outsourced "generate awesome assertions" to a contractor, then someone else hired the same contractor to "judge the awesomeness of these assertions", they are more likely to get lots of "awesome" results—whether they're warranted or not. The difference might even come down to quirks of word-choice and formatting which a careful human inspector would judge irrelevant.
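The mechanism described above is easy to simulate. A toy sketch (the scoring rules and the "delve" quirk are entirely made up, purely to illustrate the bias, not any real model's behavior): a judge that shares the generator's stylistic quirks systematically rates that generator's own output higher, regardless of substance.

```python
import random

random.seed(0)

QUIRK = "delve"  # a shared stylistic tell between generator and judge; illustrative only

def generate(uses_quirk):
    # Outputs from the "same contractor" carry its characteristic quirk word.
    words = ["analysis"] * 9 + ([QUIRK] if uses_quirk else ["report"])
    random.shuffle(words)
    return " ".join(words)

def judge(text):
    # The judge nominally scores quality, but leaks a bonus for its own quirk.
    base = 5
    bonus = 3 if QUIRK in text else 0
    return base + bonus

own = [judge(generate(True)) for _ in range(100)]
other = [judge(generate(False)) for _ in range(100)]
print(sum(own) / 100, sum(other) / 100)  # 8.0 5.0
```

The substance ("analysis" filler) is identical in both cases; only the quirk differs, yet the scores diverge, which is the careful-human-inspector-would-judge-irrelevant point above.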
I visibly cringe when I hear people talking about using AI for anything. With the help of the social disease we call 'social media', AI is destroying thought on all levels.
Not sure about destroying everything. Definitely benefiting a few at the great, great expense of the many. But trickle-down benefits are a thing. And I see no way to avoid this future; the genie's out of the bottle.
It's funny because it's causing a tragedy of the commons: there are so many AI startups that YC is funding companies that compete with each other. Case in point: the PearAI incident a year ago, where a YC company forked and rebranded an open source repo from another YC company against its license: https://unfashionable.blog/p/yc/
Is placing bets on competing companies against YC's business model?
Like I'd assume the best case for YC's business model is to pick a winner, fund them, sell off at the company's peak, then find a competitor likely to unseat them and fund that competitor so they can enjoy the profits of capturing the same market again.
Yeah, to me that’s always been the obvious most "expensive" part about doing YC. It’s not the chunk of equity, it's that they now know everything about your company, your road map or ideas, and the way you think and operate. And that's a pretty good deal for whatever little money they're spending on you. The whole thing has a "frenemies" vibe.
"How paid subscriptions ate the spread of information" — I can't subscribe to yet another site, and the archive link doesn't seem to be working for me either. I would love to read it, though, because while Y Combinator just follows where it sees the money going, AI really has seen the majority of investment recently.
https://www.ycombinator.com/companies/industry/investing
Just doing a search for "AI", it's clear there is a preference. Maybe for good reason, though. If I could read the article, that is.
pretty entitled take. why use the phrasing about subscriptions preventing spread of information instead of something like "stingy readers preventing authors from eating"
Because, as I just counted for you, I'm already subscribed to six other sites at different rates, some of them overlapping in articles. To you I sound "entitled", but there's a limit to my miscellaneous spending.
> "How paid subscriptions ate the spread of information"
The "irony" here is that news organizations are moving to gate their content precisely because that's the only effective technique for preventing AI crawlers from ingesting it all ...
So we, the real people, have to pay because the AI might consume it? If you believe the AI isn't consuming those paywalled articles too, I have a bridge to sell you.
Exactly my thoughts. Couldn't agree more, really. There is so much to discuss in this short time we have on earth, but a lot of that energy seems to be going into talking about AI, again and again. I have nothing against the tech, but c'mon, I would argue that open source overall is more valuable than AI. That's an apples-to-oranges comparison, but still, I see far fewer discussions of open source as compared to AI.
Maybe open source has just been on my mind a lot these days.
> I see far fewer discussions of open source as compared to AI.
Because the big discussions of open source happened decades ago. If you added up all the past discussions of open source, I'd assume they would outnumber the AI discussions happening now. Also, there is very little that is novel to discuss about open source. Now, novel and 'important' are different things, but novel is what tends to generate articles and attract eyeballs.
I don't really know, man. Half the population doesn't know or care what open source means, whereas a lot of people seemingly have lots of opinions about AI.
One can liberate people from big tech, and the other ties them to it, in a sense. And there are far more conversations about the latter than the former.
The point of discussion is to bring change. There has been real change in how usable Linux is compared to the past, but now it's really about user adoption, I suppose. I genuinely think we might need to reopen that discussion window, as a lot of people are getting interested in Linux, homelabbing, and realizing they can degoogle and so on to really get privacy.
Those discussions changed how open source software is written today (git, etc.), but now we probably need discussions about making the general public aware of these open source products if we truly want mass adoption.
The percentage of the population that is going to run things on their own is comparatively tiny. Mass adoption isn't going to happen, because convenience and support are what most people want. Again, these things have been discussed for decades, and yet we keep seeing tighter and tighter centralization of data and services. In phones it's pretty much dead: you have Apple (totally locked) and Google (on their way to totally locked).
Yet I still feel projects like GrapheneOS are really viable nowadays, and there are definitely some low-hanging fruits in open source where you still get the same-ish level of convenience and support. The only barrier I can think of is awareness.
My country is practically filled with people using WhatsApp. When I ask them, why not Signal, they don't even know what Signal is.
Everyone has always chatted on WhatsApp, and now there is this weird lock-in, but all it takes is for the masses to use Signal and spread the word; that would be a small victory on a path toward something good. I will try to do my part, I suppose. We can all be pessimists, but deep down I feel we can create a system of convenience and support for other open source projects too, if we donate to them and make a reasonable assessment of them as well.
Too many people think of open source as just free, and yes, it is free, but I also think it is our responsibility as a society to fund open source if we want support; otherwise we should stop expecting it.
Regarding Google locking things down, what can I say except that this decision should be fought with as much scrutiny as possible by the people advocating for freedom.
There is a workaround with adb, but still, it is a shame that Google went down this path, and we should definitely fight this too. I have hope in GrapheneOS + F-Droid, though. I definitely need to learn more about whether Google's change is going to cause problems for projects like GrapheneOS as well; that would be a real deal breaker and cause even more severe issues, since from what I know GrapheneOS is one of the safest, most privacy-friendly Android ROMs and offers a lot of security advantages that are definitely something to look at and admire.
> I don't really know, man. Half the population doesn't know or care what open source means, whereas a lot of people seemingly have lots of opinions about AI.
I'm not the person you responded to, but I'm certain that far less than half the population knows about or considers open source. That's one of those living-in-a-bubble illusions.
Ultimately those people aren't here and have no interest in being here.
I agree that those people aren't here and have no interest.
I would say it is an issue of lack of knowledge, if anything.
Yes, there is an infinite amount of knowledge available on the internet regarding open source, and that's honestly where I learned this stuff, but it's overwhelming at first and requires something to kickstart the whole process, a catalyst of sorts.
Using Linux made me realize that privacy matters a lot, and then realizing that I could just search for an open source alternative to X really helped me in the beginning (and still does sometimes). There are a lot of low-hanging fruits people can simply be pointed to (like using Signal instead of WhatsApp), so there's definitely that.
Honestly, the more I think about it, the more I consider doing something about this, but I think part of why people don't is that AI seemingly sells, and open source fundamentally doesn't.
It's not knowledge, it's interest. You go looking because you care. But people don't even use desktops anymore. They could, but they don't; what's the point? I know devs who haven't put together their own PCs in decades, if they ever did at all.
I don't know how you shift that, but getting people interested would be the first step. They need to have a reason to want to learn.
There's an enormous amount to discuss about "open source," and it's neglected enough that the bro generation of programmers doesn't even understand it, or the difference between it and Free software, and what those differences materially mean to developers and the public at large.
The reason HN is eaten up by LLMs is that it's eaten by any trendy topic in the mainstream news. HN used to be directed by an active and opinionated mind in pg, and it's been left to salaried, caretaking censors whose primary job is to make sure the site doesn't become an embarrassment. This mainly consists of artificially excluding discussions that may lead to energetic debate; debate that usually becomes swamped with low-quality comments by people speaking outside their expertise, and can make the place look like a cesspool.
But energetic debate is where all of the energy is. The problem isn't those topics, the problem is that those topics also take a lot of energy to moderate the problems out of. We're experiencing an abandoned place, not a place driven by anything internal. It's an old barn that is kept clean just so it doesn't catch on fire.
We don't talk about FOSS because it isn't a general topic that is in the mainstream news, not because it isn't an important topic that is more vital to discuss than ever, in the face of monopoly, walled-gardens, verified signatures and centralization.
A positive editor like pg was aware of this, and treated the site as his personal playground. It partially revolved around what were essentially his journal entries. I only ever ended up on this site because he decided one day to have every front page article be about Erlang, of all things.
edit: and to add to the penultimate paragraph, an energetic discussion of FOSS that lead to productive projects and statements by people of influence would influence the mainstream. This place used to make stories, not just Digg them. The purpose of the site (other than to run something on Arc) seemed to me to be to juice new YC startups in a way that would leak into the general media. It doesn't even seem useful for YC any more.
You've really hit the nail on the head on how I feel. Especially with this line:
> This place used to make stories, not just Digg them.
I've started to dread most conversations about FOSS on this site because they just turn into the same tired old high-energy, low-quality conversation repeated over and over again. There's little incentive for anyone of influence or expertise to contribute because, well, all of these conversations end up the same way.
I guess I disagree with your view of the moderation of this site. While it's true that pg used to do a lot of guidance and tastemaking on HN, the scale of the site was small enough that he could. At this point the site is massive and still growing, and this new userbase expects a Digg- or Reddit-like norm, not one driven by tastemaking. I think the site would require a fundamental overhaul to give an individual or a group the tastemaking ability pg had when the site was a fraction of its size.
I also think, for better or for worse, that HN has "accepted" not being the tastemaker anymore and has become another tech news aggregator. The eyeballs of folks new to these issues don't really fall on this site anymore. For a while that had been Twitter, but now that Twitter is under Musk it's lost that distinction, and tech discussions don't seem to have a good home.
Even if pg were here to do so, it wouldn't happen the way you think it would. Open source got eaten by AWS and locked behind paid services. We had those discussions in the past; we lost; the market spoke.
More so, the internet evolved into something we don't like. In the early days it wasn't that hard to have an energetic conversation, because you were having conversations with real people at their own discretion. Yes, there were some trolls to deal with, but over time they went from being the exception to being the rule. Any site you create now, the moment it gathers a worthwhile market, will be flooded with people marketing their wares to the point that actual conversation can no longer exist. The walled gardens and verified signatures are a side effect of the infection the internet has become.
The world you lived in aged and died, and now a brave new world screams for your attention 24/7 without care for your health or sanity. The days when a large portion of people were into programming because they enjoyed it are long past. People need paychecks to support the ever-spiraling costs of basic needs. Consolidation and monopolies aren't just a thing in websites and computer hardware; they're a thing everywhere, with everything, especially in the US. The power scaling laws of technology have come home to roost. The cyberpunk dystopia we were warned of is already here and the masses invited it with open arms.
The FOSS utopians need to realize they lost, and that human laziness and apathy delivered the killing blow. If you can come to terms with that, maybe a rebirth of FOSS that targets our base instincts can arise.
Man, this is really a great comment, and I agree so much that maybe a rebirth of FOSS that targets our base instincts can arise.
I think a very solid (energetic) discussion could take place on that too, and it's a pleasure that even now, to me, this thread is full of energy, maybe echoing the spirit the parent comment was referring to.
I actually wanted to share that energy: I read this comment and recorded a video of myself reading the whole thread from the start just to reach your post. I know it isn't going to go anywhere, but I wanted this message to travel beyond this thread. It definitely gave me some new insights and was a fun exercise in making me less shy around the camera. I want to create memes (like burialgoods, or anything) to really spread this message, and other messages about open source too. There is so much to be done :)
But I also want to do it in a hopeful way. We can come to terms with things, and we should, while still advocating for the rebirth of FOSS, as you say.
It genuinely makes one feel a bit hopeless, but I think the approach of looking at the uncomfortable and still wanting a rebirth, still fighting for it, is something worth pursuing in our lifetimes.
> More so, the internet evolved into something we don't like.
> If you can come to terms with that, maybe a rebirth of FOSS that targets our base instincts can arise.
> The cyberpunk dystopia we were warned of is already here and the masses invited it with open arms.
Mature conversations need to accept reality to move forward. I disagree with the implication in your comment that we are in uniquely lost times. I think FOSS was under much, much more threat in the Microsoft and proprietary software times than it is now. Remember when encryption was locked by the NSA? I just think the community on this site has locked itself into a local minimum of getting frustrated and sad over the state of things they don't like. Once any upvote-based site gets locked into one of these local minima it becomes really hard to escape as the incentive structure of voting continues to reward tapping into the same emotions.
That is an insight that my age simply didn't bring.
I am not sure about its accuracy, as I arrived after the AI era began and have only seen its slow but steady creep.
> This place used to make stories, not just Digg them
This does give me goosebumps.
Honestly, I like this place because it has a lot of nice people; we are alike and different, and this just seems to be the place I have decided to call home for all of my ramblings and thoughts on essentially everything.
The only other social media that I operate is maybe discord and a very small dose of reddit.
It does seem to me that somewhere along the way, we might have lost the internet we had, or at least it's hidden, waiting to be rediscovered.
I can't help but share things like Julia Evans's zines, and how she inspired me one day when I emailed her about being worried about AI: she pointed me to somebody who had created a better UI for man pages, and to so many other interesting pages that really reflected a personality I didn't know existed.
I don't really think we can do a do-over of Hacker News, but there is certainly the possibility of at least having niche discussions, like the Erlang ones, and discussing them could expose more people to the topic, which can benefit them or the community!
I can imagine we might need to use something like Matrix to at least create a public community with better moderation, if that's the issue.
I think the HN mods are also trying their best, but if we want opinions, the best fix I can think of right now might be to create a "Hacker News, but not for AI," which I actually suggested once. Everybody just told me to block posts about AI, or at most to build a filter that removes all AI references. The post is definitely lost somewhere in my comments, but I remember it.
I can understand where you are coming from, and I think this place isn't a substitute for a place like that; we still might need one if we become too pessimistic about Y Combinator.
With the quality of modern AI this really isn't possible. Or as someone else said "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists"
On the internet a community is almost always growing or dying; there is no real in-between. Communities that make the news do so because those sites are accessible. The problem is that being accessible to humans ensures that bots will show up and cause problems, so now you have the added responsibility of moderation and spam control, which involves politics at the end of the day.
Yeah, I don't know how to write this sentiment without whining, but I am Tired of seeing Boring articles about AI where very little of interest is said or done, with mealy-mouthed comments. I swear the development lifecycle of really neat and novel tools used to be more than "Poll AltmanBot."
I remember when UBI was consuming too much bandwidth. Foreplay to the AI saturation now. Suppose it was useful to get the hype up enough to arrive where we are. Still odd we aren’t having any UBI threads now, when it’s actually relevant. (Not that I endorse the idea, which afaict traces back to Charles Murray.)
It seems to vary a lot by the day. HN still has enough good days so I keep checking, but on the bad days I just go to lobste.rs and usually there’s something quality to read over there
> Going through YC’s startup directory reveals that of the 170 startups in the most recent summer batch 154 are AI startups.
The author calculates this by searching for the term "AI" in the name and description of each startup's YC page. But presence of the term "AI" doesn't make a startup an "AI startup," so to speak. For example, I picked one startup at random, Topological, which is "developing physics-based foundation models for CAD optimization." Just because the company uses AI doesn't make it an "AI startup." AI is rapidly proving itself to be an extremely useful and workflow-changing tool, and many companies now have adopted it somewhere in their product without suddenly becoming AI companies.
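For concreteness, the counting method described (search for "AI" in each startup's name and description) can be sketched as a simple keyword match. The startup entries below are hypothetical stand-ins, not real YC directory data; note how a foundation-model company without the literal token "AI" is missed, while anything with "AI" in its name is flagged, which is exactly the methodological weakness being argued about.

```python
# Toy reproduction of the article's counting method: flag a startup as an
# "AI startup" if the token "AI" appears in its name or description.
# The entries are made-up examples, not actual YC directory records.
import re

startups = [
    {"name": "Topological", "desc": "Physics-based foundation models for CAD optimization."},
    {"name": "LedgerLift", "desc": "Bookkeeping automation for small businesses."},
    {"name": "ChartWell AI", "desc": "AI agents that draft clinical notes."},
]

def mentions_ai(startup):
    # Word-boundary, case-sensitive match so words like "aid" or "CAD"
    # don't count as hits.
    text = f"{startup['name']} {startup['desc']}"
    return bool(re.search(r"\bAI\b", text))

flagged = [s["name"] for s in startups if mentions_ai(s)]
print(flagged)  # ['ChartWell AI']
```

Here "Topological" escapes the filter despite building foundation models, while a buzzword-laden name is caught, so keyword counts can both undercount and overcount genuine AI companies.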
> For example, I picked one startup at random, Topological, which is "developing physics-based foundation models for CAD optimization." Just because the company uses AI doesn't make it an "AI startup."
It is interesting that you have an opinion on what a real AI startup is and either (1) don't know what foundation models are, or (2) do know what foundation models are but think that a company whose core focus is developing them for a particular field is just a startup that uses AI, not one whose core business is AI.
I agree in general that keyword searching for a buzzword will have false positives, as people work it in to their description just to ride the trend. However, that is a bad example. If the core of your business is developing new AI models, then you are an AI startup. Personally I think it is great that someone is using the recent breakthroughs in AI and applying it to something other than LLMs, as well as trying to fuse rules based systems (physics) and AI. Way more interesting than AI startups that are just trying to slap an LLM onto a task that it isn't ready for. But still an AI startup.
If anything, building a foundation model is a more legitimate "AI startup" than wrapping slick marketing and a bit of tuning around someone else's LLM or diffusion model...
> Just because the company uses AI doesn't make it an "AI startup."
I think you need to expand on this a bit more. It does sound to me that if you are using AI in the main product, you are an AI startup.
It's similar to how, if you offer a web version of your app, you are an internet company, and if you offer a mobile version, you are a mobile company.
And a company can be more than one: an internet AI mobile company :)
If you put "AI" into your name, description, or domain, you are intentionally playing the AI angle/game, so it's going to guide everything from the core values on down, and that's what I think is the critical influence, more so than actually doing "real" AI (whatever that means).
Topological doesn't just have "AI" in the description, they are tagged with the "AI" tag, and for good reason, they are developing machine learning models (which is what they mean by "foundation model" in the text you quoted).
Sort of like how so many people having nothing to do with AI/ML on LinkedIn put "AI/ML" in their headline. I always wondered whether that even works: do people get found, recruited, or hired more for having it?
It's a fool's game to go looking for signal/value on LI. It's an echo chamber of group delusion & fake it till you make it. Use it when you need a new job and avoid like measles the rest of the time.
By that metric, every single software company is now an AI company.
I can't think of a SINGLE company (except ours, maybe, and only because we haven't updated our website yet, not because we don't want to) that doesn't vomit AI terms _everywhere_ in its product messaging.
I don't know whether consumers/buyers are demanding that, or what. But it seems everywhere and in every discipline. It's in research grant applications, it's in legal tech, everywhere.
AI is the new "we use the internet" - it will just get taken for granted.
I agree with the GP and you: every company IS desperate to be an AI company today. They've been told three things: 1. this time it's different; 2. AI is disrupting everything; 3. if you don't disrupt yourself, someone else will.
#1 has been said repeatedly before every previous AI winter.
#2 is a lie from people trying to sell you something and the FOMOs parroting the same line.
#3: someone is going to disrupt you regardless, and it's impossible to disrupt yourself (unless you're prepared to stop doing everything that made you successful in the first place).
> I don't know whether consumers/buyers are demanding that, or what.
Consumers aren't demanding it, investors are.
They see the hype and the ridiculous amounts of money being invested into AI startups, and they're afraid of missing out, so they throw their own money in. No time to stop and think about it; there's too much hype and things are moving too fast. Then other investors see even more money being thrown in and throw more money in themselves — monkey see, monkey do. Repeat ad infinitum, and the next thing you know, the majority of venture capital is being blindly funneled into anything AI-related.
If you're leading some unrelated company and paying attention to where all this money is flowing, the path forward is obvious: simply become an AI company. Doesn't matter if you're actually building something valuable with it or not, because as long as your landing page fulfills a certain quota of AI buzzwords, the investors won't care. They only care about making the line go up, because it's good when the line goes up.
I was at a meeting of a bunch of VCs a while ago (a couple of months back). The principal partner of one of the larger funds said they will now only consider investments that are AI-related. Just a data point; I'm not saying that's a trend. But if that's representative of the market, you can't fault the supply side for adapting to it.
My view on this now is that a company whose product or service is based on 'AI' is openly admitting incompetence in what should be their core competence.
Or put another way, their competency relies completely on people not associated with the company.
Until recently I worked on learning management systems (LMS), including content creation, libraries, and delivery. At a recent event with many VCs, they all considered the learning space dead because AI was going to do it all. Don't hold your breath!
The one field I have seen impacted beyond the periphery is translation services. What was worth 20 cents per word (or even more, for niche content) not that long ago is now essentially free.
No, you really can't. You either are going to play ball or you'll simply not get funded. It's pretty dumb but that's the way it is right now, maybe they'll come to their senses again once the bubble inevitably bursts but that could be quite a while.
The one thing LLMs are good at is prototyping super fast. For accelerators, incubators, etc., this is a game changer, since more ideas can be realized and tested for market fit.
Now, if the market fit indeed exists, then someone needs to rearchitect and rewrite the thing. But wasn't that always the case? The POC was always a hacked-together solution with no real viability as the final product.
LLMs are good at prototyping because they draw on data from _all_ the similar projects that exist.
It is not a 1-1 copy.
Most frontend is a dozen components.
Most backend again is a handful of architectures when it comes to DB/business logic, CRUD.
That is to say, if you can guide the LLM to build something innovative you can think of, it will put those components together in a reasonable way, good enough to start with.
Exactly. One could build a new video platform with revolutionary customer-facing features, and the tech stack will likely be the same: some frontend, some backend, maybe some calls to an endpoint that happens to be an LLM.
Startups that typically end up in incubators etc are not about new fundamental systems (languages, frameworks, theoretical methodologies etc), but rather about new products.
This seems to suggest a failure in our model of software. We were supposed to have reusable components. Writing the same thing more than once was not supposed to be necessary.
I recognize that in reality this hasn't always worked out. But I also don't think that the answer is a black box that can churn out questionable boiler-plate.
Or they're including AI as a buzzword in whatever they were already doing: not exactly ignoring a new tool, but perhaps overselling how useful AI is to their thing?
Why almost lie, though? If I'm a company and I integrate AI just so I can tell my investors I have AI and spare them their FOMO, that's utterly bonkers. The investors definitely don't sound reasonable, and the people lending money to investors who invest on such a basis need to think about their life choices, if companies make it into the portfolio just because of this bizarre checkbox that most of the general public is actually against.
I don't think anyone anywhere on the totem pole: from the junior engineer, to the engineering manager, to the founder, to the VC, to the investors, cares if it's actually AI. They just want to see the word there. Someone got it into their head that "AI is the thing now" and now the junior engineer isn't going to get hired unless he says AI. The Eng manager is not going to get promoted unless he talks about managing AI. The Founder is not going to be funded unless he says the company relates to AI. The VC is not going to line up investors unless he says AI. And investors have no clue what to do with their money, but heard somewhere that "AI is the thing now" and that's where they want to flush their money.
Nobody really wants any of this shit as a product.
Literally every tech startup company out there is spinning itself as AI. I’m familiar with some of them well enough to know that the AI is bullshit. You could be running an SVM on some data and you’re now “powered by AI”. I’m not joking.
I've also noticed many startups from prior YC batches that haven't found traction yet have pivoted to AI-related offerings. It's been kind of amusing to watch them become absorbed into the hype cycle, one by one.
So like internet startups in the 90s. Like .com in the late 90s. Like social media startups in the late 2000s/2010s. It's all cycles. Every bubble is talked about on HN (and similar sites) because that's the purpose of sites like this.
Maybe a big difference is that those hype cycles were net-positive for developers. This is the first one in a while that has VCs and upper management salivating at the idea of cutting all the devs loose. Who do YOU think this is likely to replace first, technical developers or administrators?
It worries me a lot: what will those devs without jobs do next? Do they switch to the dark side and start battling AI startups and technologies?
The article starts by assuming there’s an AI bubble.
Then it goes to label all these startups as AI companies.
This is how they are categorized (2025 batch only):
AI Agents (15%)
Productivity/Ops Tools (11.4%)
AI Infrastructure (10%)
Healthcare (10%)
Consumer Apps (9.3%).
No one would argue that AI infrastructure isn't AI, but what about, say, back-office automation and healthcare? Are these AI companies? Or are they automation/healthtech companies that happen to use LLMs?
Many Fintech and e-commerce companies use ML, are they ML companies?
If a company's supposed advantage is predicated on using LLMs, it's an AI company. Of course, if that strategy fails, it could pivot to a non-AI approach which would make it cease to be an AI company.
If all the AI startups are using Anthropic and OpenAI, does this investment in AI startups continue to be the norm until the Anthropic and OpenAI investors see a return?
I'm predicting there will be a startup focused on large-scale forensic identification of archived samples of various things, particularly items that were important enough to save in a freezer for 70 or 80 years. The ingredients are there: refined, widely available devices that measure microscopic amounts, plus AI to process and sift through mundane results. From my experience with science, laboratory procedures before the 1980s were generally inconsistent and inadequate. Contamination was not uncommon, something biological investigators had to deal with when investigating early HIV cases. I wouldn't be surprised by an "empty the freezers" order from the FDA or CDC. Probably located underground in a BSL 4/5 facility in Arizona or Utah.
AI (or specifically LLM-related) seems to have sucked so much oxygen out of the room that it's harder to focus on other things. I moved on about 18 months ago and I'd like to work as an analyst again on some other things but AI seems to pretty much dominate the conversation. I'm not disinterested in AI but it seems to simultaneously be the topic with the most interest and the most noise.
I'm not sure I'm exhausted but, given a bunch of other things going on (that have been exhausting), I've been less inclined to keep as involved in tech as I would otherwise have done. Probably do a reboot after the holidays and decide what I want to do (or not).
According to The Information (which is hard-paywalled unfortunately) venture capital as a whole is set to run out of money in 18 months if things continue at the current pace. AI companies had better turn profitable sharpish.
How would that even work? To be honest, I don't see how they can suddenly become profitable when the fundamental economics of the thing seem a little broken to me...
We're all in trouble but OpenAI more so if they're depending on consumer spending (specifically via AI) to float the general economy - let alone theirs.
Summarized by Ed Zitron: The Information recently published some worrying data from venture capitalist Jon Sakoda — that “at today’s clip, the industry would run out of money in six quarters,” adding that the money wouldn’t run out until the end of 2028 if it wasn’t for Anthropic and OpenAI.
Captain Obvious chiming in here, but as one of those boring douche canoes whose concerns are data privacy, data locality and generally the ability to understand how AI trains off of and manipulates the data we feed it, I'm resistant to the tech. We're testing the use of AI to aggregate and explain patterns in the data we have, but this is limited to our ticketing systems and Slack. Until our current choice of AI provider can guarantee that our transactional data won't be stored outside of the EU, the people driving this internally and those keen to make the sale externally can take a long walk off of a short pier. I can almost smell the aroma of the coffee machine from the chamber in which an EU subcommittee is working on the Artificial Intelligence Act (AIA) (EU) 2025/xxxx to add to the raft of regulations that financial entities in Europe need to concern themselves with. Maybe they'll be nice and just append it to an existing act.
AI might be great. AI might be terrible. I'm not at all convinced that most data aggregation features baked into AI and used by most normal companies couldn't be implemented in R or SQL (disclaimer: I couldn't code my way out of a wet paper bag if the tool I was told to use was an axe). It's just wanted so that someone can crawl over data sets to ask simple questions like 'how many merchants exceeded n number of transactions between a and b date' or 'My customer needs an eIDAS certificate. What do I ask them to send us without having to talk to Captain Obvious?'. I mean, we're busting a gut on revamping our developer docs, but given that the spec is already public, I'm pretty sure that developers can already vibe code against that. Going to test that and see how it gets on.
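For what it's worth, the merchant question really is a few lines of SQL. A minimal sketch (the `transactions` table, its columns, and the data are all made up for illustration), shown here via Python's built-in sqlite3 so it runs anywhere:

```python
import sqlite3

# Toy data: a hypothetical transactions table (merchant_id, tx_date).
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE transactions (merchant_id TEXT, tx_date TEXT)")
con.executemany(
    "INSERT INTO transactions VALUES (?, ?)",
    [
        ("m1", "2025-01-05"), ("m1", "2025-01-06"), ("m1", "2025-01-07"),
        ("m2", "2025-01-10"),
        ("m3", "2025-02-01"), ("m3", "2025-02-02"), ("m3", "2025-02-03"),
    ],
)

# "How many merchants exceeded n transactions between dates a and b?"
n, a, b = 2, "2025-01-01", "2025-01-31"
(count,) = con.execute(
    """
    SELECT COUNT(*) FROM (
        SELECT merchant_id
        FROM transactions
        WHERE tx_date BETWEEN ? AND ?
        GROUP BY merchant_id
        HAVING COUNT(*) > ?
    )
    """,
    (a, b, n),
).fetchone()
print(count)  # prints 1: only m1 exceeds 2 transactions in January
```

No model, no inference cost, and the same answer every time you run it.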
AI ate HN also. It’s ridiculous how many articles are about AI now. Are we truly out of any new ideas? Can we not talk about something else? I kind of like AI but it’s a tool, I don’t talk about my screwdriver constantly.
"AI" (quotes deliberate) is the biggest story in tech, right now, for better or for worse. HN is a news site, so it's fairly logical that "AI" would dominate.
It wasn't that long ago, that you couldn't find a pitch deck, anywhere, that was missing the word "Blockchain."
I'm finding it interesting. It's definitely a bubble, but unlike crypto, there's some real utility here. I do think that we'll be seeing some excellent stuff (and some awful crap) down the road.
Brother, go to something like r/cryptocurrency to see people trying to justify the real utility...
I am not a crypto advocate but there are definitely a lot of similarities.
The one thing I can respect Sam Altman for is his line about bubbles: "When bubbles happen, smart people get overexcited about a kernel of truth."
This is the similarity: both had some use cases (one might have more than the other), but we are so overexcited about this tech. Forced to be overexcited, I think.
Tangential pet-peeve: "That's fixed with private blockchain", which is the equivalent of "That's fixed with Segways that have a second set of wheels and the self-balancing is disabled."
There's a bunch of people busy reinventing decades-old tech/accounting, either because they never bothered to learn what was actually new about "blockchain", or because they can't bear to admit that the new stuff wasn't really good/necessary.
________
[Edit] To digress into specifics, the characteristic that sets "Blockchain" apart from what we already had (distributed databases, Merkle trees etc.) is fulfilling a product requirement: That node-membership is distributed and uncontrollable. That's what causes a cascade of anti-abuse features.
But outside of wild-west cryptocurrency, that usually isn't necessary, nor even desirable when you consider the rest of the cycle-burning baggage and complexity that comes with it.
Oh, there’s going to be some heavy-duty snake oil, but I can say that I’ve already found it quite useful, in its current, larval state. I just haven’t found it to be a “game changer,” for me, yet.
That said, I am not really an industrial-scale user.
It is your duty to read HN's /new page and upvote articles you find interesting, or that you think others might find interesting. You can bias towards non-LLM ones.
If Hacker News had existed during the dawn of personal computing, you bet your ass that every single CPU release by Intel or IBM or whoever would be front-page news, and everyone would be talking about how they were going to use computers to automate all paperwork, etc.
A bubble doesn't mean there's no value whatsoever, but that we've surpassed rational thought and validation. Example: we ask good developers to evaluate tools and options; not jumping in blindly to the next big thing is what makes them good in the first place. With AI, executives are literally saying if you're not all-in you have no future here. THAT is a bubble.
A tangential note -- these days when I'm casually reading through job postings, a lot of companies mention in their job requirements something along the lines of "Must be familiar/knowledgeable about GenAI technologies (RAG/MCP)" even for roles that are not strictly related to GenAI.
When you look deeper into most of these companies, they seem better off using third-party services (like Claude/ChatGPT) instead of building their own GenAI models, MCP servers, and whatnot. And that's before considering that the domains they operate in have no apparent or obvious gain from utilizing GenAI (sure, they can buy a ChatGPT/Claude organization subscription and that's all they need). I don't know if this is an indicator of how big the AI bubble is, but it sure is interesting...
HN's AI hate-boner has always been a bit off-putting to me. This is a technology forum, and it's pretty much the biggest advance in recent technology that has potential implications for all of our lives. I definitely also get AI-fatigue, but it's no mystery why there's a preponderance of content about LLMs, diffusion models, self-driving cars, etc.
YC's goals are to manage risk and to make money, and new tech like this is almost certain to make someone a lot of money. All these YC companies are just different random initializations of potential ways that this new generation of AI can affect the world. It's a given that most startups of this breed will fizzle out with no impact, but I imagine that a few of them will actually change how something is done (and make a lot of money in the meantime).
The hate boner comes from HN's love for technology - software and hardware - and AI is so dominant in tech news. Once you learn the basics of LLMs and agents, which are really not that complicated, it gets sort of dull to hear about again and again and again.
Off-putting? I think skepticism over marketing-hype from workers in a field is how things are supposed to work, especially for a group that spends lots of time plotting things out looking for edge-cases and ways for it to fail.
I'd be far more disturbed by the opposite, where everybody on HN is expected to gush over the thing-du-jour.
As aptly-put for a prior hype-cycle:
> Tech Enthusiasts: "Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via Alexa! I love the future!"
> Programmers/Engineers: "The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise."
If you drive through San Francisco on I-80, every single billboard between the Bay Bridge and the 101 is for some kind of AI service (except, amusingly, the one for Yudkowsky's new book about AI doom). All of them look terrible and completely useless. There was one that said "Still using PowerPoint? Use our AI slide maker instead" with a picture of a three-eyed cat, as though to brag that their service makes unappetizing and unreliable slop.
It's total insanity; comparisons to tulip mania no longer even apply now that people are tossing around numbers like $500 billion when talking about their capex buildouts.
Yeah, a few times per year I take a drive to SFO, and I am again struck by the absurdity. It's not just the current AI fixation; the billboards there have always seemed bizarrely niche to me.
I assumed billboards were for mass consumer marketing. What tiny percentage of the people on these highways are actually in a position to act on any B2B tech marketing? I don't understand the economic choice to pay for a billboard like that. The ones along the highway that make sense to me are for iPhones and such.
> I assumed billboards were for mass consumer marketing.
My hunch is that billboards on I-80 through San Francisco are a vanity product. Their actual purpose isn't marketing, but to flatter the egos of the CEOs that own the service - they get to dominate the sky, the eyes, and the brains for thousands of tech commuters twice a day. The one thing I'm not sure about is whether or not the people paying for the billboards actually think that their sales are going to materially benefit, but I'm pretty sure the sellers are clear that they absolutely will not - and either way, I'm sure people see it as a mark of prestige to have an ad there.
It’s basically sifting through ore; 99% of the people who see it aren’t the target, it’s the 1% of viewers who are buyers or funders who you otherwise couldn’t directly advertise to. Same reason you see defense contractors putting up ads for weapons systems in the DC metro.
Every company is an AI company and you should probably be less interested in building an AI and more interested in how you can apply AI to solve business needs that are not directly AI-related.
Is this my CTO posting on HN? I've heard this EXACT statement repeatedly over the past 18 months. It's one of those that sounds smart but when you pull it apart there's nothing of value here.
The self-described "startup community" hasn't produced much of any value to me in a really long time. Dropbox-like things are somewhat useful, but that started in 2007 or so, and the one I actually use launched in maybe 2011 or so.
I just took a look at yclist.com, which ends in 2019. Noticed only a few that I've ever heard of and none that I use. Possibly there are some since then but I sort of doubt it.
There are a couple of things that started later than say, 2015 that I actually use but none that come immediately to mind came out of the "startup community."
There are some others, like Stripe, from which I probably benefit somewhat indirectly, but even that dates back to 2010.
So yeah, for me, the SV "startup community" has been a wasteland for a long time. There are probably a few things that I'm not thinking of at the moment, but the fact that they don't come to mind suggests that they just aren't very important.
To be fair, there are probably quite a few useful startups hidden in the B2B world, especially given how much accepted startup strategy appears to have shifted to focusing on that segment.
This is true, but as far as I can tell, does not benefit me much. That's why I mentioned Stripe as something that probably does benefit me indirectly.
But a lot of the YC companies in particular seem to be making software I don't care about to sell to other companies making other software I don't care about to sell to other companies making other software I don't care about, and so on. The benefit to me of all this activity seems to be so close to zero that the difference doesn't matter.
I think the most important self-driving car company is Waymo: apparently founded as the Stanford Self-Driving Car Team in 2004, it somehow became the Google Self-Driving Car Project in 2009 and was renamed Waymo in 2016.
All that is pretty long ago now and doesn't really have much connection to the SV "startup community" (Google ceased being meaningfully described as a "startup" a long, long time ago).
And anyway, I don't care about self-driving cars. Don't have one, don't particularly want one.
The direct benefits to me of OpenAI are nearly negligible. I use it (and others) now and then for various things but if it disappeared tomorrow, my life and work would not be meaningfully impacted.
It's because, from 2004/2005 on, it was seen as a way to get rich quick to "just found something" and try to sell it:
the "early days" were driven by engineers, tech people, technicians; then the MBA crowd joined and funding became a "pipeline processed thing" to get tons of money quickly.
Why do you need a community? Build the thing, and then launch the thing by talking to prospective customers. If you're building a SaaS, you don't need funding if you keep working a day job and build your product slowly.
The clever people are still doing good work, they're just doing so quietly.
Big money has sucked the life out of it. It’s just a returns game, there’s less care for changing the world.
I do know many funds and people working to build community and startups that actually help people but it’s an uphill battle because it’s not as sexy and the returns aren’t as immediate.
> Everyone I know who raised money is a moron or narcissist.
Not all, though. Okay, I'm mentioning this a lot nowadays, but I'm in high school and this is relevant to me.
When I first wanted to do a startup, my idea was that people would invest in me so I could grow, and then I could sell it much later, after 15 years or so of working, to have financial freedom.
I wanted to build things that could make a profit for 15 years, things that genuinely needed capital to expand, just like any other business.
Just because it's in coding/tech and it's spicy right now doesn't make the principles of sound business go away.
Yet the more time I've spent here / watching YC, the more the story seems to be about hype, growth, operating at a loss knowing things aren't sustainable, and building wrappers.
Only to sell them at insane profit to somebody later on, while the company never made a single dollar of profit, all while hiring many people...
What can I say? I could understand it if someone were doing it with real fundamentals, but until I find something like that, I'm pretty sure I'm not going to just slap an AI sticker on things the way some other people are doing right now...
It's a matter of moral backbone. I can't wrong my investors knowing that my project doesn't have potential and is just a hype thing... Idk. There are a lot of systemic issues in the whole world that we have to think through to discover how we got here.
> And the genuine hackers I know of are wasting life away working on pointless projects.
Man, this is something that I grapple with a lot. We must do something to survive, so most of us work a dead-end job even though we can be really passionate about something, and there's definitely that which I resonate with a lot.
I feel you. Disconnect for a while, touch grass, be the change you want to see, maybe explore parts of the internet that are less “startup” focused and more hacker focused?
Everyone's panicking about "AI features" being bolted onto products like it's 2010 and we're adding social login buttons. That's not the bubble. The bubble is the assumption that current software companies have defensibility.
Here's the thing: we're not adding AI to products. We're removing the need for most products entirely.
Nobody shipped without search after Google. But search was an enhancement—it made existing software better. AI is a solvent. It dissolves the economic moat that justified building the software in the first place.
YC's entire thesis rests on startups capturing value during the window between "this problem is painful" and "an incumbent solves it." But what happens when that window collapses to zero? When any reasonably clever person can get Claude or GPT to generate a bespoke solution to their specific problem in 20 minutes?
I'm watching food service managers—people who optimize labor, inventory, and customer flow in real-time—get told they can't build software. That's a lie we told ourselves to justify $150k engineer salaries and $10M seed rounds. Those managers have exactly the cognitive toolkit needed. They just didn't know C++.
In three years, they won't need to.
The SaaS model assumes friction. It assumes most people can't build the thing themselves, so they'll pay you $50/month forever. Coding AI doesn't make software easier to build—it makes the *act of building* indistinguishable from the *act of using*.
You don't need a project management tool with 600 features. You need to tell your computer what you're trying to coordinate. You don't need Photoshop's menus. You need to describe what you want the image to convey.
Every software company selling picks and shovels to the AI gold rush is missing that they're about to get disintermediated by the prospectors themselves. The cloud was never about technical superiority—it was about control and recurring revenue. What happens when capable models run locally and people can spin up exactly what they need?
VCs are investing in moats that evaporate the moment non-engineers realize they don't need us anymore. Network effects, switching costs, proprietary data—all predicated on software remaining expensive to create.
It's not.
The actual bubble is venture capital itself. You can't invest in defensibility that doesn't exist. And you can't charge rent on the gap between intention and execution when that gap is closing.
We're not in an AI bubble. We're watching the software bubble pop in slow motion.
If what you're claiming is correct, it goes for nearly every profession.
Why do I need a lawyer if I can just get the AI to do it all for me? Filings, briefs, legal arguments etc. are all just output generated from specific inputs after all.
Why do I need to go to a doctor if I can just have AI diagnose, and eventually even run tests and then operate?
Why would I need artists / marketers for whatever my product, the magical AI can just do it all.
It could be we're headed down this road, but I don't see how software is somehow special in that it's the only thing AI can do competently.
The fun part is what looks like the core assumption here, that the current AI thing will actually work, doesn't even need to be true; it just needs enough people to believe it for the negative effects of it to become true.
Even without AI the software industry is not in a good state for a range of reasons. The big tech companies barely sell software - it's custom software operated to perform some other much stickier function, but the value is not in the software per se beyond it enabling selling the other thing.
This - unfortunately - goes for a lot of domains. Things that you used to be able to make a career out of now have a shelf life of a few years, sometimes less than that. This time compression is not just upsetting the VC world, it is upsetting just about everything in high tech societies.
I don't know how you can claim all this when AI is nondeterministic and hallucinates. Menu-based software is merely a way to trigger a function, the same function, every time you want to run it. AI in its current incarnation cannot even do that.
Yeah, having to type out, or even speak, "rotate this by 90 degrees about the Z axis" would get annoying fairly quickly. Not to mention the inference is probably going to consume several seconds.
A Blender user, or probably any other, would be able to pull that off with a mouse click followed by a few keystrokes. Bam, done.
Now, for something time-consuming like UV unwrapping, sure, go ahead and incorporate AI. But I'd bet it would need quite a bit of tweaking for a "pro" job, although of course that's not always necessary.
> I don’t know how you can claim all this when ai is nondeterministic and hallucinates.
It's so bad. I was thinking that it would at least replace people whose job is largely to give their worthless opinions, but it won't even do that. Those people's real job is often to add head count, to give your uncle's kid a job or to be fired when things go wrong. AI can't do any of that, it just generates the worthless opinions. Now the idiot won't even have to imitate the verbiage they sort-of heard in college; they're the ones that are using AI the most, to bullshit for them.
It's good for helping you think through stuff you're thinking about by repeating back to you in different words (and getting even that wrong often, forcing you to clarify.) It's horrifically bad at finding or following references, reasoning, or coming up with anything new.
It's not eating software, it's barely even touching software, other than being shoveled into it randomly. The obvious proof that AI is bad is that there are actual geniuses who came up with the algorithms to speed up, parallelize and to bias in a way that makes them seem more productive and creative. LLMs don't seem to be helping amazing minds like that improve AI itself. If LLMs were even going to be fertile, that pairing would be a semi-singularity even if these exceptional humans couldn't be taken out of the loop. My bet is that they don't help at all.
To be fair, AI can actually be useful. Crypto projects at one point were tokenizing every random idea, and unsurprisingly most of it turned into scams. Is there any relevant crypto product that truly works differently than just being a tool to buy other coins, apart from Polymarket?
Some sites, especially international ones, accept crypto in lieu of other payments like credit cards. Useful especially if you don't 100% trust the site: you just pay them and don't worry about overcharging or having to dispute payments, etc.
You are right. I actually know people that pay for "fake" streaming accounts through some coins so they can watch Netflix or HBO. Usual cards get blocked there.
Well, it's nice to see crypto (semi-anonymous payments) used for its original purpose at least somewhere.
“All the smart kids of the Bay Area want to work on AI,”
Operative word here is "kids", and from what I can tell they're not even the smart ones, just the ones who are more geared toward being money and status-obsessed.
It's the same thing as kids that say they want to be a star athlete, rock star, influencer, or pro gamer. It's just something they are interested in that they think won't take a lot of work to be paid well. Hype train moves a lot of mass
There's always the illusion of a quick and easy path to being rich. But the only one of those that really exists is winning the birth lottery and inheriting.
All else is a combination of luck, skill, and grit, because for every rock star, star athlete, and successful influencer, there are tens or hundreds of thousands who aren't famous because they didn't win that game.
The smart kids might be happy to work on model interpretability research at Anthropic. But 90% of AI companies are ChatGPT wrappers which don't sound particularly fun.
The vast majority of engineers out there don't have the skills to work in core AI development, so the only option for "working in AI" is building one of these API wrappers.
You could have said that about any previous investment cycle and the people applying and being accepted to YC. It's a thing for generating money, not great code or whatever.
FWIW i think that AI (or at least the LLM and some of the other neural network based stuff) can be useful for some stuff and interesting from a technical perspective for many people. Personally i find 99.999% of AI to be way, WAY overhyped and i do not really care about the commercial side of it, but at the same time i have found some neat uses for (local) LLMs and i do find the tech behind it interesting (and experimented a bit with it, at least whatever i can do on my own computer).
And on that topic, WRT Hacker News, personally i find someone running GPT2 on their 90s Silicon Graphics retro workstation much more interesting than yet another topic on how AI is eating everything :-P.
I do think another topic on how AI is eating everything could be nice, given how few people still believe it. Somebody shared the list of YC companies doing AI, and you had to see how long and how unimaginative that list was. To me it felt like "X but AI", "Y but AI"...
You yourself say that AI is overhyped and that overhype has formed into a bubble of sorts which has eaten Y combinator.
Also, to be honest, I don't quite understand how your point is relevant to the parent comment. Or maybe I'm just not seeing the relevance, in which case feel free to explain it so that I learn something new.
Forgive my ignorance, I'm just a mid-tier dev, but isn't AI pretty much PhD-with-multiple-publications territory at this point? I'm not talking about writing wrappers around ChatGPT, but working at OpenAI or Anthropic.
My guess (paywall): they wanted to drive up demand for their existing investments in AI. In other words, the bubble didn't eat Y combinator, Y combinator did it to themselves on purpose.
AI, if it pans out, allows VC gambling to succeed with fewer dollars due to downward pressure on total demand and total pay for workers, specifically and especially skilled engineers without leadership qualities.
AI is the pinnacle of everything that came before it on HN.
It is the logical conclusion to all our efforts before it to build machines that do work for us; every idea pursued to get computers to do more for us, in every possible way, in every possible location, ALL at once and in synchrony, faster, better, stronger.
Artificial intelligence IS HN.
Resistance to that truth is resistance to a coming singularity that already always was.
Paywalled articles ate the Internet
It was inevitable when ads were no longer enough to sustain websites, I suppose. It is a shame how critical news may be paywalled out from the public, though. Tragic, but fitting.
It feels like a self-imposed trap though.
Just loading the one article results in:
335 requests, 7.9 MB transferred, 23.6 MB of resources
Hosting would be less expensive if they didn't bloat up what could be a simple static document.
I doubt their hosting costs matter compared to other expenses. Journalists still gotta get paid.
> It was inevitable when ads were no longer enough to sustain websites, I suppose. It is a shame how critical news may be paywalled out from the public, though. Tragic, but fitting.
Also when any public content started being gulped up by scrapers and resold as AI.
YC remains a great source of creative inspiration for me. However, I tend to skip most AI-related content on HN, as the topic does more harm than good in my daily life. Some people around me delegate more and more decisions to the chat, and that frightens me, especially when you somehow depend on them or your work gets evaluated by some creepy AI-driven bossware. We should admit that AI, particularly LLMs, is not just eating: it is destroying society, human communication, the education system, and the scientific community. And that list covers only the sides I have personally faced.
I work with some undergrads and see this delegation increasing year over year. Unfortunately it's also happening at the expense of reading books, using library search tools to find proper sources, and information gathering in general.
"An LLM might be able to explain something to you, but it can never understand it for you."
I was recently in a work AI training where we were encouraged to have AI review all our vendor and contractor budgets, find holes, and create rebuttals to line items. I was left wondering: what if the vendor has AI review our rebuttals and create counter-points to our AI-created arguments? At some point it will just be AI talking against itself to another AI chatbot.
> At some point it will just be AI talking against itself to another AI chatbot.
And then something like this [0] will happen, creating a weird wasteful meta-game about the model(s) used by each company.
It makes intuitive sense: If you outsourced "generate awesome assertions" to a contractor, then someone else hired the same contractor to "judge the awesomeness of these assertions", they are more likely to get lots of "awesome" results—whether they're warranted or not. The difference might even come down to quirks of word-choice and formatting which a careful human inspector would judge irrelevant.
[0] https://www.theregister.com/2025/09/03/ai_hiring_biased/
> At some point it will just be AI talking against itself to another AI chatbot.
As anyone in the US healthcare insurance claims-adjacent spaces can tell you: yes, in about 2 years.
I visibly cringe when I hear people talking about using AI for anything. With the help of the social disease we call 'social media', AI is destroying thought on all levels.
Not sure about destroying everything. Definitely benefitting a few at the great, great expense of the many. But trickle-down benefits are a thing. And I see no way to avoid the future; the genie's out of the bottle.
One should aim to be at a level of the food chain where there is no bossware above you.
I welcome the bossware. Bye bye bosses.
Bossware, where
the boss is nowhere
Better the devil that you know...
It's funny because it's causing a tragedy of the commons where there are so many AI startups that YC is funding companies that compete with each other. Case in point: the PearAI incident a year ago, where a YC company forked and rebranded an open source repo from another YC company against its license: https://unfashionable.blog/p/yc/
HN thread on the apology: https://news.ycombinator.com/item?id=41701265
Speaking of which, apparently PearAI deleted their apology in the intervening year.
Is placing bets on competing companies against YC's business model?
Like I'd assume the best case for YC's business model is to pick a winner, fund them, sell off at the company's peak, then find a competitor likely to unseat them and fund that competitor so they can enjoy the profits of capturing the same market again.
Is it fair for someone with privileged information to back a direct competitor? I say no but YC clearly thinks otherwise.
Yeah, to me that’s always been the obvious most "expensive" part about doing YC. It’s not the chunk of equity, it's that they now know everything about your company, your road map or ideas, and the way you think and operate. And that's a pretty good deal for whatever little money they're spending on you. The whole thing has a "frenemies" vibe.
For many investors that's a bridge they will not cross because it will always lead to conflicts of interest.
"How paid subscriptions ate the spread of information" I can't subscribe to another site and seems the archive link is not working for me either. Would love to read it though because while YCombinator just follows where they see the money being, AI has really seen the majority of investment recently. https://www.ycombinator.com/companies/industry/investing Just doing a search for "AI" its clear there is a preference. Maybe for good reason though. If I could read the article that is.
pretty entitled take. why use the phrasing about subscriptions preventing spread of information instead of something like "stingy readers preventing authors from eating"
> "stingy readers preventing authors from eating"
Because, as I just counted for you, I'm already subscribed to 6 others at different rates, some of them overlapping in coverage. To you I sound "entitled", but there's a limit to my miscellaneous spending.
> "How paid subscriptions ate the spread of information"
The "irony" here is that news organizations are moving to gate their content precisely because that's the only effective technique for preventing AI crawlers from ingesting it all ...
So we, the real people, have to pay because the AI might consume it? If you believe the AI isn't consuming those paywalled articles also, I have a bridge to sell you.
It's certainly eating up a lot of the threads on HN.
Exactly my thoughts. Couldn't agree more, really. There is so much to discuss in this short time that we have on earth, but a lot of that energy is going into talking about AI and AI again, it seems. I have nothing against the tech, but c'mon. I would argue that open source itself is overall more valuable than AI, though that may be an apples-to-oranges comparison. But still, I see far fewer discussions of open source as compared to AI.
Maybe open source is just on my mind a lot these recent days.
>I see far fewer discussions of open source as compared to AI.
Because the big discussions of open source were decades ago now. If you added up all the discussions of open source in the past, I'd assume they'd outnumber the AI discussions now. Also, there is very little novel to discuss about open source. Now, novel and 'important' are different things, but novel is what tends to drive articles and get eyeballs.
I don't really know, man. Half the population doesn't know what open source means or cares, whereas a lot of people seemingly have lots of opinions about AI.
One can liberate people from big tech and the other, in a sense, ties them to it. And there are far more conversations about the latter than the former.
The point of discussion is to bring change. There has been a real change in how usable Linux is compared to the past, but now it's really about user adoption, I suppose. I genuinely think we might need to reopen that discussion window, as a lot of people are getting interested in Linux and homelabbing, and realizing that they can degoogle and whatnot to really get privacy.
Those discussions brought change to how open source software is written today (git, etc.), but now we probably need discussions about raising the general public's awareness of these open source products if we truly want mass adoption, I suppose.
What are your thoughts?
>One can liberate people from big tech
A very small portion of the population, yes.
The percentage of the population that is going to run things on their own is comparatively tiny. Mass adoption isn't going to happen, because convenience and support are what most people want. Again, these things have been discussed for decades, and yet we keep seeing tighter and tighter centralization of data and services. In phones it's pretty much dead. You have Apple (totally locked) and Google (on their way to totally locked).
Yes, I understand, and your points are valid.
Yet I still feel like things like GrapheneOS are really viable nowadays, and there are definitely some low-hanging fruits of open source where you still get the same-ish level of convenience and support. The only reason I can think of that the same shift isn't happening is lack of knowledge.
My country is literally filled with everyone using WhatsApp, and when I ask people why not Signal... they don't even know what Signal is.
Everyone has always been chatting on WhatsApp, and now there is this weird lock-in, but all it takes is for the masses to use Signal and spread the word; that would be a small victory on a path toward good. I will try to do my part, I suppose. We can all be pessimists, but at least I feel deep down that we can create a system of convenience and support for other open source projects too, if we donate to these projects and make a reasonable assessment of them.
Too many people think of open source as free, and yes, it's free, but I also think it's our responsibility as a society as a whole to fund open source if we want support; otherwise we should stop expecting it.
Regarding Google locking down, what can I say except that I think this decision should be fought with as much scrutiny as possible by the people advocating for freedom.
There is a way to do things with adb, but still, it is a shame that Google went down this path, and we should definitely fight against this too. I have hope in GrapheneOS + F-Droid, though. I definitely need to learn more about whether Google's move is going to cause an issue for things like GrapheneOS, as that would be a real deal breaker or cause even more severe issues; from what I know, GrapheneOS is one of the safest and most privacy-friendly Android ROMs, and it offers a lot of security advantages that are definitely something to look at and admire.
> I don't really know, man. Half the population doesn't know what open source means or cares, whereas a lot of people seemingly have lots of opinions about AI.
I'm not the person you responded to, but I'm certain that way less than half the population knows about or considers open source; that's one of those living-in-a-bubble illusions.
Ultimately those people aren't here and have no interest in being here.
I agree that those people aren't here and have no interest.
I would say it is an issue of lack of knowledge, if anything.
Yes, there is an infinite amount of knowledge available on the internet regarding open source, and that's honestly where I learnt about this stuff, but it's overwhelming at first and requires something to kickstart the whole process, to be a catalyst of sorts.
Using Linux made me realize that privacy matters a lot, and then realizing that I could just search "open source alternative to X" really helped me in the beginning (and still sometimes does). There are a lot of low-hanging fruits people can simply be told to pick (like use Signal instead of WhatsApp), so there's definitely that.
I definitely am thinking about doing something about it, to be honest. The more I think about it, the more I suspect that part of why people don't go around doing this is that AI seemingly sells and open source fundamentally doesn't.
It's not knowledge, it's interest. You go looking because you care. But people don't even use desktops anymore. They could, but they don't, what's the point? I know devs who haven't even put together their own PCs in decades if they ever did at all.
I don't know how you shift that, but getting people interested would be the first step. They need to have a reason to want to learn.
There's an enormous amount to discuss about "open source," and it's neglected enough that the bro generation of programmers doesn't even understand it, or the difference between it and Free software, and what those differences materially mean to developers and the public at large.
The reason that HN is eaten up by LLMs is because it's eaten by any trendy topic that's in the mainstream news. HN used to be directed by an active and opinionated mind in pg, and it's been left to salaried, caretaking censors whose primary job is to make sure the site doesn't become an embarrassment. This mainly consists of artificially excluding discussions that may lead to energetic debate; debate that usually becomes swamped with low-quality comments by people speaking outside of their expertise, and can make the place look like a cesspool.
But energetic debate is where all of the energy is. The problem isn't those topics, the problem is that those topics also take a lot of energy to moderate the problems out of. We're experiencing an abandoned place, not a place driven by anything internal. It's an old barn that is kept clean just so it doesn't catch on fire.
We don't talk about FOSS because it isn't a general topic that is in the mainstream news, not because it isn't an important topic that is more vital to discuss than ever, in the face of monopoly, walled-gardens, verified signatures and centralization.
A positive editor like pg was aware of this, and treated the site as his personal playground. It partially revolved around what were essentially his journal entries. I only ever ended up on this site because he decided one day to have every front page article be about Erlang, of all things.
edit: and to add to the penultimate paragraph, an energetic discussion of FOSS that lead to productive projects and statements by people of influence would influence the mainstream. This place used to make stories, not just Digg them. The purpose of the site (other than to run something on Arc) seemed to me to be to juice new YC startups in a way that would leak into the general media. It doesn't even seem useful for YC any more.
You've really hit the nail on the head on how I feel. Especially with this line:
> This place used to make stories, not just Digg them.
I've started to dread most conversations about FOSS on this site because they just turn into the same tired old high-energy, low-quality conversation repeated over and over again. There's little incentive for anyone of influence or expertise to contribute because, well, all of these conversations end up the same way.
I guess I disagree on your view of the moderation of this site. While it's true that pg used to do a lot of guidance and tastemaking on HN, the scale of the site was small enough where he could. At this point the site is massive and only growing and this new userbase expects a Digg or Reddit like norm, not one that drives tastemaking. I think the site would require a fundamental rehaul to offer an individual or a group the tastemaking that pg could do when the site was a fraction of the size.
I also think, for better or for worse, that HN has "accepted" not being the tastemaker anymore and becoming another tech news aggregator. It's because the eyeballs of folks new to these issues doesn't really fall onto this site anymore. For a while that had been Twitter but now that Twitter is under Musk, it's lost that distinction and now tech discussions don't seem to have a good home.
Even if pg were here to do so, it wouldn't happen the way you think it would. Open source got eaten by AWS and closed behind paid-for services. We had those discussions in the past, we lost, the market spoke.
Moreso the internet evolved into something we don't like. In the early days it wasn't that hard to have an energetic conversation, because you were having conversations with real people at their discretion. Yes, there were some number of trolls you had to deal with, but over time they went from being the exception to being the rule. Any and every site you create now will, the moment it collects a worthwhile market, be flooded with those attempting to market their wares, to the point that actual conversation can no longer exist. The walled gardens and verified signatures are a side effect of the infection that the internet has become.
The world you lived in aged and died and now a brave new world screams for your attention 24/7 without care for your health or sanity. The days where a large portion of people were into programming because they enjoyed it are long past. People need paychecks to support the ever spiraling costs of basic needs. Consolidation and monopolies aren't just a thing in websites and computer hardware, it's a thing everywhere with everything especially in the US. The power scaling laws of technology have come home to roost. The cyberpunk dystopia we were warned of is already here and the masses invited it with open arms.
The FOSS utopianists need to realize they lost, and that human laziness and apathy dealt the killing blow. If you can come to terms with that, maybe a rebirth of FOSS that targets our base instincts can arise.
Man, this is really a great comment that I just read, and I agree so much about "maybe a rebirth of FOSS that targets our base instincts can arise".
I think a very solid (energetic) discussion can take place on that too, and it's just a pleasure that even now, to me, this thread is full of energy and maybe mimics the spirit the parent comment was referring to.
I actually wanted to share that energy: I read this comment and recorded myself a video of reading this whole text from the start just to reach your post, which I know isn't going to go anywhere, but I just wanted this message to travel beyond this thread. It definitely gave me some new insights and was a fun exercise in making me less shy around the camera. I want to create memes like burialgoods or anything else that would really spread this message, and other messages about open source too. There is so much to be done :)
But I also want to do it in a hopeful way. We can come to terms with things, and we should, while still advocating for the rebirth of FOSS, as you say.
It genuinely makes one feel a bit hopeless, but I think the approach of looking at the uncomfortable and still wanting a rebirth, still fighting for it, is something worth striving for in our lifetimes.
> Moreso the internet evolved into something we don't like.
> If you can come to terms with that, maybe a rebirth of FOSS that targets our base instincts can arise.
> The cyberpunk dystopia we were warned of is already here and the masses invited it with open arms.
Mature conversations need to accept reality to move forward. I disagree with the implication in your comment that we are in uniquely lost times. I think FOSS was under much, much more threat in the Microsoft and proprietary software times than it is now. Remember when encryption was locked by the NSA? I just think the community on this site has locked itself into a local minimum of getting frustrated and sad over the state of things they don't like. Once any upvote-based site gets locked into one of these local minima it becomes really hard to escape as the incentive structure of voting continues to reward tapping into the same emotions.
That is an insight that my age simply didn't bring.
I am not sure about its accuracy, as I was only there after the AI era began and have only seen its slow but steady creep.
> This place used to make stories, not just Digg them
This does give me goosebumps.
Honestly, I like this place because it has a lot of nice people and we are more alike and different and it's just that this seems to be the place I have decided to call a home to all of my ramblings/thoughts on essentially everything.
The only other social media that I operate is maybe discord and a very small dose of reddit.
It does seem to me that somewhere along the way, we might have lost the internet we had, or at least it's hidden, waiting to be rediscovered.
I can't help but share things like Julia Evans's zines, and how she inspired me one day when I emailed her about being worried about AI: she pointed me to somebody who had created a better UI for the man pages, and to so many other interesting pages that really reflected a personality I didn't know existed.
I don't really think we can do a do-over of Hacker News, but there is certainly a possibility of at least having niche discussions, like the Erlang one, and discussing them could lead to more people's exposure to those topics, which can benefit them or the community!
I can maybe imagine that we might need to use something like Matrix to at least create a public community with better moderation, if that's the issue.
I think the HN mods are also trying their best, but if we want opinions, the best fix I can think of right now might be creating a hackernews but not for AI. I actually suggested this once, but everybody just told me to block posts about AI or to create a filter that removes all AI references. The post is lost somewhere in my comment history, but I definitely remember it.
I can understand where you are coming from, and I think that this place isn't a substitute for a place like that; we still might need such a place if we become too pessimistic about Y Combinator.
Never give up hope I suppose :)
> This does give me goosebumps.
In earlier times, Slashdot used to make stories.
>creating a hackernews but not for AI
With the quality of modern AI this really isn't possible. Or as someone else said "There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists"
On the internet a community is almost always growing or dying, there is no real inbetween. Communities that make the news do so because those sites are accessible. The problem is being accessible to humans ensures that bots will show up and cause problems, so now you have the added responsibility of moderation and spam control which involves politics at the end of the day.
Yeah, I don't know how to write this sentiment without whining, but I am Tired of seeing Boring articles about AI where very little of interest is said or done, with mealy-mouthed comments. I swear the development lifecycle of really neat and novel tools used to be more than "Poll AltmanBot".
It’s retro computing, DIY, or AI at this point.
I remember when UBI was consuming too much bandwidth. Foreplay to the AI saturation now. Suppose it was useful to get the hype up enough to arrive where we are. Still odd we aren’t having any UBI threads now, when it’s actually relevant. (Not that I endorse the idea, which afaict traces back to Charles Murray.)
It seems to vary a lot by the day. HN still has enough good days so I keep checking, but on the bad days I just go to lobste.rs and usually there’s something quality to read over there
[flagged]
> Going through YC’s startup directory reveals that of the 170 startups in the most recent summer batch 154 are AI startups.
The author calculates this by searching for the term "AI" in the name and description of each startup's YC page. But presence of the term "AI" doesn't make a startup an "AI startup," so to speak. For example, I picked one startup at random, Topological, which is "developing physics-based foundation models for CAD optimization." Just because the company uses AI doesn't make it an "AI startup." AI is rapidly proving itself to be an extremely useful and workflow-changing tool, and many companies now have adopted it somewhere in their product without suddenly becoming AI companies.
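The counting method itself is easy to check. A minimal sketch (with made-up descriptions, not real YC data) of why a plain substring search over-counts, and why even a word-boundary match only tells you a company *mentions* AI, not that it *is* an AI company:

```python
import re

# Hypothetical startup descriptions, purely illustrative
descriptions = [
    "AI agents for accounting teams",
    "Developing physics-based foundation models for CAD optimization",
    "Air-quality sensors for smart buildings",
]

# Naive case-insensitive substring match: "Air-quality" contains "ai",
# so it counts as a false positive
naive = [d for d in descriptions if "ai" in d.lower()]

# Word-boundary match avoids that particular error, but still can't
# distinguish an AI-first company from one that mentions AI in passing
word = [d for d in descriptions if re.search(r"\bAI\b", d)]

print(len(naive), len(word))
```

Note that neither count flags the "foundation models" description at all, which is exactly the Topological case: keyword matching misses AI companies that don't say "AI" and catches non-AI companies that do.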
> For example, I picked one startup at random, Topological, which is "developing physics-based foundation models for CAD optimization." Just because the company uses AI doesn't make it an "AI startup."
It is interesting that you have an opinion on what a real AI startup is and either (1) don’t know what foundation models are, or (2) do know what foundation models are but think that a company whose core focus is on developing them for a particular field is just a startup that uses AI but not one one whose core business is AI.
If building a foundation model [1] isn’t “AI” then I’m not sure what is.
[1] https://en.m.wikipedia.org/wiki/Foundation_model
Turns out 100% of YC startups are focused on Cookies and Privacy Policy. Is this the hot new trend?
I agree in general that keyword searching for a buzzword will have false positives, as people work it in to their description just to ride the trend. However, that is a bad example. If the core of your business is developing new AI models, then you are an AI startup. Personally I think it is great that someone is using the recent breakthroughs in AI and applying it to something other than LLMs, as well as trying to fuse rules based systems (physics) and AI. Way more interesting than AI startups that are just trying to slap an LLM onto a task that it isn't ready for. But still an AI startup.
If anything, building a foundation model is a more legitimate "AI startup" than wrapping slick marketing and a bit of tuning around someone else's LLM or diffusion model...
> Just because the company uses AI doesn't make it an "AI startup."
I think you need to expand on this a bit more. It does sound to me that if you are using AI in your main product, you are an AI startup.
It is similar to how, if you offer a web version of your app, you are an internet company, and if you offer a mobile version, you are a mobile company.
and a company can be more than one: An internet AI mobile company :)
If you put "AI" into your name or TLD, or into your description or domain, you are intentionally playing the AI angle/game, so it's going to guide everything from the core values on down. That's what I think is the critical influence, more so than actually doing "real" AI (whatever that means).
Topological doesn't just have "AI" in the description, they are tagged with the "AI" tag, and for good reason, they are developing machine learning models (which is what they mean by "foundation model" in the text you quoted).
Sort of like how so many people having nothing to do with "AI/ML" on Linkedin put "AI/ML" in their Headline. I always wondered if that even works, do people get found/recruited/hired more for having that?
It's a fool's game to go looking for signal/value on LI. It's an echo chamber of group delusion & fake it till you make it. Use it when you need a new job and avoid like measles the rest of the time.
i disagree. if your product or business model are tied to the use of AI so much that you put AI in your description, then you are an AI company.
By that metric, every single software company is now an AI company.
I can't think of a SINGLE company (except ours maybe, only because we haven't updated our website yet, not because we don't want to) that doesn't vomit AI terms _everywhere_ in their product messaging.
I don't know whether consumers/buyers are demanding that, or what. But it seems everywhere and in every discipline. It's in research grant applications, it's in legal tech, everywhere.
AI is the new "we use the internet" - it will just get taken for granted.
I agree with the GP and you - every company IS desperate to be an AI company today. They've been told three things: 1. this time it's different, 2. AI is disrupting everything, 3. if you don't disrupt yourself someone else will.
#1 has been said repeatedly before every previous AI winter.
#2 is a lie from people trying to sell you something and the FOMOs parroting the same line.
#3 someone is going to disrupt you regardless, and it's impossible to disrupt yourself (unless you're prepared to stop doing everything that made you successful in the first place)
> I don't know whether consumers/buyers are demanding that, or what.
Consumers aren't demanding it, investors are.
They see the hype and ridiculous amounts of money being invested into AI startups, and they're afraid of missing out, so they throw their own money in. No time to stop and think about it, there's too much hype and things are moving too fast. Then other investors see even more money being thrown in, and throw more money in themselves - monkey see, monkey do. Repeat ad infinitum and the next thing you know, majority of venture capital is being blindly funneled into anything AI related.
If you're leading some unrelated company and paying attention to where all this money is flowing, the path forward is obvious: simply become an AI company. Doesn't matter if you're actually building something valuable with it or not, because as long as your landing page fulfills a certain quota of AI buzzwords, the investors won't care. They only care about making the line go up, because it's good when the line goes up.
It's a bubble.
I agree the AI hype needs to be put in check, but the article feels a lot like massaging data adhoc to fit a very specific world view.
The perverse incentive is that if you mention the word AI, you get a multiplier.
We're entering Juicero levels of delusion.
>CAD optimization.
Guess what, you aren't getting into YC as a CAD company and you aren't raising further rounds either.
I was at a meeting of a bunch of VCs a while ago (a couple of months). The principal partner of one of the larger funds said that they will now only consider investments that are AI related. Just to give you a data point, I'm not saying that that is a trend. But if that's a market representative you can't fault the supply side for adapting to it.
My view on this now is that a company whose product or service is based on 'AI' is openly admitting incompetence in what should be their core competence.
Or put another way, their competency relies completely on people not associated with the company.
Until recently I worked in learning management systems (LMS), including content creation, libraries, and delivery. At a recent event with many VCs, they all considered the learning space dead because AI was going to do it all. Don't hold your breath!
The one field that I have seen impacted more than just on the periphery is translation services. What was worth 20 cts (or even more, for niche stuff) per word not that long ago is now essentially free.
That impact is very real, but you aren’t getting the same thing for free that you were getting with 20¢/word.
Yes, fully agreed.
You can always fault the supply side.
No, you really can't. You either are going to play ball or you'll simply not get funded. It's pretty dumb but that's the way it is right now, maybe they'll come to their senses again once the bubble inevitably bursts but that could be quite a while.
It’s why it’s called “stupid money”.
The one thing that LLMs are good at is prototyping super fast. For accelerators/incubators etc this is a game changer since more ideas can be realized and tested for market fit.
Now, if the market fit indeed exists, then someone needs to rearchitect and rewrite the thing. But wasn't that always the case? The POC was always a hacked-together solution with no real viability as the final product.
LLMs are good at prototyping something that is similar to something that already exists and is open sourced.
It may be that there are such projects which can be monetized or need better marketing.
Innovative it is not, however.
I suspect the vast majority of innovations are a combination of a small number of things that already exist.
Almost nobody is making things that are so unlike everything that previously existed that LLMs can't help.
I don't understand this position.
LLMs are good at prototyping using data across _all_ similar projects that exist.
It is not a 1-1 copy.
Most frontend is a dozen components. Most backend again is a handful of architectures when it comes to DB/business logic, CRUD.
That is to say, if you can guide the LLM to build something innovative you can think of, it will put those components together in a reasonable way, good enough to start off with.
Exactly. One could build the new video platform with revolutionary customer facing features. The tech stack will likely be the same. Some frontend, some backend maybe some calls to an endpoint that happens to be an LLM.
Startups that typically end up in incubators etc are not about new fundamental systems (languages, frameworks, theoretical methodologies etc), but rather about new products.
there's only so many times (in fact, 3!) I want to implement oauth from absolute scratch
This seems to suggest a failure in our model of software. We were supposed to have reusable components. Writing the same thing more than once was not supposed to be necessary.
I recognize that in reality this hasn't always worked out. But I also don't think that the answer is a black box that can churn out questionable boiler-plate.
think about the early days of Facebook, when it was called TheFacebook.com
a bunch of PHP scripts :-D
90% (154/170) of the latest YC summer batch are AI startups. I was expecting a lot, but not this much.
Or they're including AI as a buzzword in whatever they were already doing, and not exactly ignoring a new tool, but might be overselling how useful AI is to their thing?
Just like every existing SaaS and Enterprise platform is. It's just a new checkbox that you must have.
Why almost lie, though? If I'm a company and I integrate AI just so I can tell my investors I have AI and spare them their FOMO, that's utterly bonkers. The investors definitely don't sound reasonable, and the people lending money to investors who invest on that basis need to think about their life choices, if a company makes it into the portfolio just because of this bizarre checkbox that most of the general public is actually against.
I don't think anyone anywhere on the totem pole: from the junior engineer, to the engineering manager, to the founder, to the VC, to the investors, cares if it's actually AI. They just want to see the word there. Someone got it into their head that "AI is the thing now" and now the junior engineer isn't going to get hired unless he says AI. The Eng manager is not going to get promoted unless he talks about managing AI. The Founder is not going to be funded unless he says the company relates to AI. The VC is not going to line up investors unless he says AI. And investors have no clue what to do with their money, but heard somewhere that "AI is the thing now" and that's where they want to flush their money.
Nobody really wants any of this shit as a product.
Literally every tech startup company out there is spinning itself as AI. I’m familiar with some of them well enough to know that the AI is bullshit. You could be running an SVM on some data and you’re now “powered by AI”. I’m not joking.
I would love to see the same stat for submissions as a whole. I bet it’s even higher.
shownew is flooded with those
I've also noticed many startups from prior YC batches that haven't found traction yet have pivoted to AI-related offerings. It's been kind of amusing to watch them become absorbed into the hype cycle, one by one.
"AI startups."
Only around 10% of them will succeed at best.
If a massive crash happens then it would be 2% at best.
So like internet startups in the 90s. Like .com in the late 90s. Like social media startups in the late 2000s/2010s. It's all cycles. Every bubble is talked about on HN (and similar sites) because that's the purpose of sites like this.
Maybe a big difference is that those hype cycles were net-positive for developers. This is the first one in a while that has VCs and upper management salivating over the idea of cutting all the devs loose. Who do YOU think this is likely to replace first, technical developers or administrators?
It worries me a lot: what will those devs without jobs do next? Do they switch to the dark side and start battling AI startups and technologies?
"Developers surprised automation takes them out after decades of taking other people out with automation"
But hey, mention the word union here and you'll still get beat with a stick.
The real AI startups are those who are using AI to make profits on day 0. They will outlive a crash. They don't need Y Combinator.
The article starts by assuming there’s an AI bubble.
Then it goes to label all these startups as AI companies.
This is how they are categorized (2025 batch only):
AI Agents (15%)
Productivity/Ops Tools (11.4%)
AI Infrastructure (10%)
Healthcare (10%)
Consumer Apps (9.3%).
AI infrastructure is unarguably AI, but what about, say, back-office automation and healthcare? Are these AI companies? Or are they automation/healthtech companies that happen to use LLMs?
Many Fintech and e-commerce companies use ML, are they ML companies?
If a company's supposed advantage is predicated on using LLMs, it's an AI company. Of course, if that strategy fails, it could pivot to a non-AI approach which would make it cease to be an AI company.
I wonder how much of this is perverse incentives.
If all the AI startups are using Anthropic and OpenAI, does this investment in AI startups continue to be the norm until the Anthropic and OpenAI investors see a return?
I'm predicting there will be a startup focused on large-scale forensic identification of archived samples, particularly items important enough to have been kept in a freezer for 70 or 80 years, enabled by the refinement and availability of devices that measure microscopic amounts, plus AI to process and sift through the mundane results. From my experience with science, laboratory procedures before the 1980s were generally inconsistent and inadequate. Contamination was not uncommon, something biological investigators had to deal with when looking into early HIV cases. I wouldn't be surprised by an "empty the freezers" order from the FDA or CDC. Probably located underground in a BSL 4/5 facility in Arizona or Utah.
BSL-5 is a thing? I think BSL-4 is exclusively hemorrhagic fevers.
At an individual scale, working on AI makes sense, no one wants to miss out.
Though I would love if the people who take care of the forest take more responsibility to shepherd the chaos.
In a federated society like ours, I doubt that's possible, for good or bad or extinction.
Bubble or not, AI is eating everything, not just YC.
In particular, I've noticed the topic of AI is eating up corporate meetings, turning them into a kind of human slop.
AI (or specifically LLM-related) seems to have sucked so much oxygen out of the room that it's harder to focus on other things. I moved on about 18 months ago and I'd like to work as an analyst again on some other things but AI seems to pretty much dominate the conversation. I'm not disinterested in AI but it seems to simultaneously be the topic with the most interest and the most noise.
I've been exhausted by this for at least a year, so much so that (for a big part) I left my good job in August. The past 7 weeks have been amazing!
I'm not sure I'm exhausted but, given a bunch of other things going on (that have been exhausting), I've been less inclined to keep as involved in tech as I would otherwise have done. Probably do a reboot after the holidays and decide what I want to do (or not).
Corporate meetings were never not slop though. I can count the actually useful ones in my 35 year career on 1 hand.
According to The Information (which is hard-paywalled unfortunately) venture capital as a whole is set to run out of money in 18 months if things continue at the current pace. AI companies had better turn profitable sharpish.
How would they even become profitable? Honestly, I don't see how they can suddenly turn a profit when the fundamental economics of the thing seem broken to me...
Additional revenue services aside from selling LLM tokens, such as the commerce play OpenAI announced yesterday.
We're all in trouble but OpenAI more so if they're depending on consumer spending (specifically via AI) to float the general economy - let alone theirs.
article link ?
https://www.theinformation.com/articles/dry-powder-running-s...
Summarized by Ed Zitron: The Information recently published some worrying data from venture capitalist Jon Sakoda — that “at today’s clip, the industry would run out of money in six quarters,” adding that the money wouldn’t run out until the end of 2028 if it wasn’t for Anthropic and OpenAI.
Captain Obvious chiming in here, but as one of those boring douche canoes whose concerns are data privacy, data locality and generally the ability to understand how AI trains off of and manipulates the data we feed it, I'm resistant to the tech. We're testing the use of AI to aggregate and explain patterns in the data we have, but this is limited to our ticketing systems and Slack. Until our current choice of AI provider can guarantee that our transactional data won't be stored outside of the EU, the people driving this internally and those keen to make the sale externally can take a long walk off of a short pier. I can almost smell the aroma of the coffee machine from the chamber in which an EU subcommittee is working on the Artificial Intelligence Act (AIA) (EU) 2025/xxxx to add to the raft of regulations that financial entities in Europe need to concern themselves with. Maybe they'll be nice and just append it to an existing act.
AI might be great. AI might be terrible. I'm not at all convinced that most of the data-aggregation features baked into AI and used by most normal companies couldn't be implemented in R or SQL (disclaimer: I couldn't code my way out of a wet paper bag if the tool I was told to use was an axe). It's mostly wanted so that someone can crawl over data sets to ask simple questions like 'how many merchants exceeded n transactions between dates a and b' or 'My customer needs an eIDAS certificate. What do I ask them to send us without having to talk to Captain Obvious?'. I mean, we're busting a gut on revamping our developer docs, but given that the spec is already public, I'm pretty sure that developers can already vibe code against that. Going to test that and see how it gets on.
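For what it's worth, the first kind of question above really is plain SQL. A minimal sketch using Python's built-in sqlite3, against a hypothetical `transactions` table (all names and data invented for illustration):

```python
import sqlite3

# Hypothetical schema, invented for illustration; one row per transaction.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE transactions (merchant TEXT, amount REAL, tx_date TEXT)")
conn.executemany(
    "INSERT INTO transactions VALUES (?, ?, ?)",
    [
        ("acme", 10.0, "2025-01-05"),
        ("acme", 20.0, "2025-01-06"),
        ("acme", 5.0, "2025-01-07"),
        ("globex", 99.0, "2025-01-06"),
    ],
)

# 'How many merchants exceeded n transactions between dates a and b?'
n, a, b = 2, "2025-01-01", "2025-01-31"
count = conn.execute(
    """
    SELECT COUNT(*) FROM (
        SELECT merchant FROM transactions
        WHERE tx_date BETWEEN ? AND ?
        GROUP BY merchant
        HAVING COUNT(*) > ?
    )
    """,
    (a, b, n),
).fetchone()[0]
print(count)  # acme has 3 transactions in range, globex has 1, so: 1
```

No LLM involved; a GROUP BY plus HAVING answers it deterministically.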
AI ate HN also. It’s ridiculous how many articles are about AI now. Are we truly out of any new ideas? Can we not talk about something else? I kind of like AI but it’s a tool, I don’t talk about my screwdriver constantly.
I wouldn't say that.
"AI" (quotes deliberate) is the biggest story in tech, right now, for better or for worse. HN is a news site, so it's fairly logical that "AI" would dominate.
It wasn't that long ago that you couldn't find a pitch deck, anywhere, that was missing the word "Blockchain."
I'm finding it interesting. It's definitely a bubble, but unlike crypto, there's some real utility here. I do think that we'll be seeing some excellent stuff (and some awful crap) down the road.
Brother go to something like r/cryptocurrency to see people trying to justify the real utility...
I am not a crypto advocate but there are definitely a lot of similarities.
The one thing I can respect Sam Altman for is his line about bubbles: "When bubbles happen, smart people get overexcited about a kernel of truth."
That's the similarity... Both have some use cases, one maybe more than the other, but we are so overexcited about this tech, or maybe being forced to be overexcited.
> see people trying to justify
Tangential pet-peeve: "That's fixed with private blockchain", which is the equivalent of "That's fixed with Segways that have a second set of wheels and the self-balancing is disabled."
There's a bunch of people busy reinventing decades-old tech/accounting, either because they never bothered to learn what was actually new about "blockchain", or because they can't bear to admit that the new stuff wasn't really good/necessary.
________
[Edit] To digress into specifics, the characteristic that sets "Blockchain" apart from what we already had (distributed databases, Merkle trees etc.) is fulfilling a product requirement: That node-membership is distributed and uncontrollable. That's what causes a cascade of anti-abuse features.
But outside of wild-west cryptocurrency, that usually isn't necessary, nor even desirable when you consider the rest of the cycle-burning baggage and complexity that comes with it.
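As a concrete illustration of the "we already had this" point: a tamper-evident append-only log is just a hash chain, which long predates "blockchain" and needs no distributed node membership at all. A minimal Python sketch (purely illustrative):

```python
import hashlib

def chain(records):
    """Tamper-evident append-only log: each entry's hash commits to the
    previous hash plus the record. This is the decades-old building block;
    no consensus protocol or open node membership involved."""
    h = "0" * 64  # genesis value
    hashes = []
    for r in records:
        h = hashlib.sha256((h + r).encode()).hexdigest()
        hashes.append(h)
    return hashes

original = chain(["alice->bob: 5", "bob->carol: 3"])
tampered = chain(["alice->bob: 500", "bob->carol: 3"])

# Rewriting an early record changes every subsequent hash:
print(original[-1] != tampered[-1])  # True
```

The part that's genuinely new in "blockchain" is everything layered on top to make this work when you can't trust who runs the nodes.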
Oh, there’s going to be some heavy-duty snake oil, but I can say that I’ve already found it quite useful, in its current, larval state. I just haven’t found it to be a “game changer,” for me, yet.
That said, I am not really an industrial-scale user.
It is your duty to read HN new, and upvote articles you find interesting, or that you think others might find interesting. You can bias towards non-LLM ones.
If Hacker News had existed during the dawn of personal computing, you bet your ass that every single CPU release by Intel or IBM or whoever would be front-page news, and everyone would be talking about how they were going to use computers to automate all paperwork, etc.
you just need some perspective.
And that's just the articles. Then there are the comments...
But what if the screwdriver had a really cool website with fancy animations that lag even on modern hardware?
I am sure then you will talk about screwdrivers...
Screwdriver wrappers probably lol /s
(This was a joke of sorts: replace "screwdriver" with AI/AI wrappers and the websites those wrappers run.)
I’m a few chapters into Empire of AI and finding it a fascinating chronicling of how we got into this AI mess.
https://archive.is/lMpbi
lol even that is paywalled
None of the usual suspects bypass it either, seems like it only delivers that reduced section of page in the HTML response.
how gpt-wrappers ate y combinator.
Not entirely sure it's a bubble.
LLMs have real applications today, usable in places where prior software wasn't. I don't see them going away.
Where folks are trying to apply it to replace deterministic software..? Mileage may vary.
A bubble doesn't mean there's no value whatsoever, but that we've surpassed rational thought and validation. Example: we ask good developers to evaluate tools and options; not jumping in blindly to the next big thing is what makes them good in the first place. With AI, executives are literally saying if you're not all-in you have no future here. THAT is a bubble.
A tangential note -- these days when I'm casually reading through job postings, a lot of companies mention in their job requirements something along the lines of "Must be familiar/knowledgeable about GenAI technologies (RAG/MCP)" even for roles that are not strictly related to GenAI.
When you look deeper into most of these companies, they seem better off using third-party services (like Claude/ChatGPT) instead of building their own GenAI models, MCP servers, and whatnot. And that's before considering that the domain they operate in has no apparent or obvious gain from GenAI (sure, they can buy a ChatGPT/Claude organization subscription, and that's all they need). I don't know if this is an indicator of how big this AI bubble is, but it sure is interesting...
HN's AI hate-boner has always been a bit off-putting to me. This is a technology forum, and it's pretty much the biggest advance in recent technology that has potential implications for all of our lives. I definitely also get AI-fatigue, but it's no mystery why there's a preponderance of content about LLMs, diffusion models, self-driving cars, etc.
YC's goals are to manage risk and to make money, and new tech like this is almost certain to make someone a lot of money. All these YC companies are just different random initializations of potential ways that this new generation of AI can affect the world. It's a given that most startups of this breed will fizzle out with no impact, but I imagine that a few of them will actually change how something is done (and make a lot of money in the meantime).
The hate boner comes from HN's love for technology - software and hardware - and AI is so dominant in tech news. Once you learn the basics of LLMs and agents, which are really not that complicated, it gets sort of dull to hear about again and again and again.
Off-putting? I think skepticism over marketing-hype from workers in a field is how things are supposed to work, especially for a group that spends lots of time plotting things out looking for edge-cases and ways for it to fail.
I'd be far more disturbed by the opposite, where everybody on HN is expected to gush over the thing-du-jour.
As aptly-put for a prior hype-cycle:
> Tech Enthusiasts: "Everything in my house is wired to the Internet of Things! I control it all from my smartphone! My smart-house is bluetooth enabled and I can give it voice commands via Alexa! I love the future!"
> Programmers/Engineers: "The most recent piece of technology I own is a printer from 2004 and I keep a loaded gun ready to shoot it if it ever makes an unexpected noise."
It’s because this is a tech forum that we actually put our thinking caps on rather than just accept whatever BS comes out in corporate press releases.
[flagged]
If you drive through San Francisco on I-80, every single billboard between the Bay Bridge and the 101 is for some kind of AI service (except, amusingly, the one for Yudkowsky's new book about AI doom). All of them look terrible and completely useless. There was one that said "Still using PowerPoint? Use our AI slide maker instead" with a picture of a three-eyed cat, as though to brag that their service makes unappetizing and unreliable slop.
It's total insanity; comparisons to tulip mania no longer even apply now that people are tossing around numbers like $500 billion when talking about their capex buildouts.
Yeah, a few times per year, I take a drive to SFO, and I am again struck by the absurdity. It's not just the current AI fixation. They always seemed bizarrely niche to me.
I assumed billboards were for mass consumer marketing. What tiny percentage of the people on these highways are actually in a position to act on any B2B tech marketing? I don't understand the economic choice to pay for a billboard like that. The ones along the highway that make sense to me are for iPhones and such.
There is one on 101 that says "My boss really wants you to know Redis is an AI company". Blows my mind how that marketing team is still employed.
> I assumed billboards were for mass consumer marketing.
My hunch is that billboards on I-80 through San Francisco are a vanity product. Their actual purpose isn't marketing, but to flatter the egos of the CEOs that own the service - they get to dominate the sky, the eyes, and the brains for thousands of tech commuters twice a day. The one thing I'm not sure about is whether or not the people paying for the billboards actually think that their sales are going to materially benefit, but I'm pretty sure the sellers are clear that they absolutely will not - and either way, I'm sure people see it as a mark of prestige to have an ad there.
It’s basically sifting through ore; 99% of the people who see it aren’t the target, it’s the 1% of viewers who are buyers or funders who you otherwise couldn’t directly advertise to. Same reason you see defense contractors putting up ads for weapons systems in the DC metro.
I was in San Francisco a couple of months ago, and the BART was completely full of ads for an _AI pull-request-reviewing tool_. Horrifying concept.
Also, oddly, a few for Mullvad of all things.
Every company is an AI company and you should probably be less interested in building an AI and more interested in how you can apply AI to solve business needs that are not directly AI-related.
Is this my CTO posting on HN? I've heard this EXACT statement repeatedly over the past 18 months. It's one of those that sounds smart but when you pull it apart there's nothing of value here.
It has eaten Hacker News too. Hello LLMs!
>Hello LLMs!
"alexander grahm beLLM invented a new greeting for the telephone during the telephone bubble: heLLMo! to you too!"
Is it just me, or is anyone else fed up with the startup community? I cannot find anyone who actually cares about values and has a backbone.
The community no longer resembles the stories I grew up with of Wozniak, Jobs, Larry, and the hacker culture.
Everyone I know who raised money is a moron or narcissist.
YC has become a resume builder.
And the genuine hackers I know of are wasting life away working on pointless projects.
Maybe it's just me not being able to not get out of the daily job schedule and taking the piss out.
The self-described "startup community" hasn't produced much of any value to me in a really long time. Dropbox-like things are somewhat useful, but that started in 2007 or so, and the one I actually use launched in maybe 2011 or so.
I just took a look at yclist.com, which ends in 2019. Noticed only a few that I've ever heard of and none that I use. Possibly there are some since then but I sort of doubt it.
There are a couple of things that started later than say, 2015 that I actually use but none that come immediately to mind came out of the "startup community."
There are some others, like Stripe, from which I probably benefit somewhat indirectly, but even that dates back to 2010.
So yeah, for me, the SV "startup community" has been a wasteland for a long time. There are probably a few things that I'm not thinking of at the moment, but the fact that they don't come to mind suggests that they just aren't very important.
To be fair, there are probably quite a few useful startups hidden in the B2B world, especially given how much accepted startup strategy appears to have shifted to focusing on that segment.
This is true, but as far as I can tell, does not benefit me much. That's why I mentioned Stripe as something that probably does benefit me indirectly.
But a lot of the YC companies in particular seem to be making software I don't care about to sell to other companies making other software I don't care about to sell to other companies making other software I don't care about, and so on. The benefit to me of all this activity seems to be so close to zero that the difference doesn't matter.
Some of the self driving car companies were startups, no? OpenAI as well.
I think the most important self-driving car company is Waymo, apparently founded as the Stanford Self-Driving Car Team in 2004, which somehow became the Google Self-Driving Car Project in 2009 and was then renamed Waymo in 2016.
All that is pretty long ago now and doesn't really have much connection to the SV "startup community" (Google ceased being meaningfully described as a "startup" a long, long time ago).
And anyway, I don't care about self-driving cars. Don't have one, don't particularly want one.
The direct benefits to me of OpenAI are nearly negligible. I use it (and others) now and then for various things but if it disappeared tomorrow, my life and work would not be meaningfully impacted.
It's because from 2004/2005 on, "just founding something" and trying to sell it was seen as a way to get rich quick. The early days were driven by engineers, tech people, technicians; then the MBA crowd joined, and funding became a pipeline-processed thing to get tons of money quickly.
Why do you need a community? Build the thing, and then launch the thing by talking to prospective customers. If you're building a SaaS, you don't need funding if you keep working a day job and build your product slowly.
The clever people are still doing good work, they're just doing so quietly.
Everyone needs a community, even if it isn't large.
Big money has sucked the life out of it. It’s just a returns game, there’s less care for changing the world.
I do know many funds and people working to build community and startups that actually help people but it’s an uphill battle because it’s not as sexy and the returns aren’t as immediate.
American capitalism is degenerating into nothing but bullshitting/scamming as many people as possible and getting out before they notice.
> Everyone I know who raised money is a moron or narcissist.
Not all, though. Okay, I'm mentioning this a lot nowadays, but I'm in high school and it's relevant here.
When I first wanted to do a startup, my idea was that people would invest in me so I could grow, and then I could sell much later, after 15 years or so of work, to have financial freedom.
I wanted to build things that could make profit for 15 years and be something that could've needed capital to expand the growth just like any other business.
Just because it is in coding/tech and its spicy right now doesn't make the principles of sound business go away.
Yet the more time I spend here watching YC, the more the story seems to be about hype, growth, operating at a loss while knowing things aren't sustainable, and building wrappers.
Only to sell to somebody later at an insane markup, while the company never made a single dollar of profit despite hiring many people...
What can I say: I can understand why someone does it, and there might be companies with real fundamentals, but until I find one, I'm pretty sure I'm not going to just slap an AI sticker on things the way some people are doing right now...
It's a matter of moral backbone. I can't take an investor's money knowing my project has no real potential and is just a hype thing... I don't know. There are a lot of systemic issues in the world that we have to think through to understand how we got here.
> And the genuine hackers I know of are wasting life away working on pointless projects.
Man, this is something I grapple with a lot. We must do something to survive, so most of us work dead-end jobs even though we could be really passionate about something, and that part resonates with me a lot.
I feel you. Disconnect for a while, touch grass, be the change you want to see, maybe explore parts of the internet that are less “startup” focused and more hacker focused?
The AI bubble isn't what everyone thinks it is.
Everyone's panicking about "AI features" being bolted onto products like it's 2010 and we're adding social login buttons. That's not the bubble. The bubble is the assumption that current software companies have defensibility.
Here's the thing: we're not adding AI to products. We're removing the need for most products entirely.
Nobody shipped without search after Google. But search was an enhancement—it made existing software better. AI is a solvent. It dissolves the economic moat that justified building the software in the first place.
YC's entire thesis rests on startups capturing value during the window between "this problem is painful" and "an incumbent solves it." But what happens when that window collapses to zero? When any reasonably clever person can get Claude or GPT to generate a bespoke solution to their specific problem in 20 minutes?
I'm watching food service managers—people who optimize labor, inventory, and customer flow in real-time—get told they can't build software. That's a lie we told ourselves to justify $150k engineer salaries and $10M seed rounds. Those managers have exactly the cognitive toolkit needed. They just didn't know C++.
In three years, they won't need to.
The SaaS model assumes friction. It assumes most people can't build the thing themselves, so they'll pay you $50/month forever. Coding AI doesn't make software easier to build—it makes the *act of building* indistinguishable from the *act of using*.
You don't need a project management tool with 600 features. You need to tell your computer what you're trying to coordinate. You don't need Photoshop's menus. You need to describe what you want the image to convey.
Every software company selling picks and shovels to the AI gold rush is missing that they're about to get disintermediated by the prospectors themselves. The cloud was never about technical superiority—it was about control and recurring revenue. What happens when capable models run locally and people can spin up exactly what they need?
VCs are investing in moats that evaporate the moment non-engineers realize they don't need us anymore. Network effects, switching costs, proprietary data—all predicated on software remaining expensive to create.
It's not.
The actual bubble is venture capital itself. You can't invest in defensibility that doesn't exist. And you can't charge rent on the gap between intention and execution when that gap is closing.
We're not in an AI bubble. We're watching the software bubble pop in slow motion.
If what you're claiming is correct, it goes for nearly every profession.
Why do I need a lawyer if I can just get the AI to do it all for me? Filings, briefs, legal arguments etc. are all just output generated from specific inputs after all.
Why do I need to go to a doctor if I can just have AI diagnose, and eventually even run tests and then operate?
Why would I need artists / marketers for whatever my product, the magical AI can just do it all.
It could be we're headed down this road, but I don't see how software is somehow special in being the only thing AI can do competently.
The fun part is what looks like the core assumption here, that the current AI thing will actually work, doesn't even need to be true; it just needs enough people to believe it for the negative effects of it to become true.
Even without AI the software industry is not in a good state for a range of reasons. The big tech companies barely sell software - it's custom software operated to perform some other much stickier function, but the value is not in the software per se beyond it enabling selling the other thing.
This - unfortunately - goes for a lot of domains. Things that you used to be able to make a career out of now have a shelf life of a few years, sometimes less than that. This time compression is not just upsetting the VC world, it is upsetting just about everything in high tech societies.
Chill, you are going to freak a lot of people here out.
Start a foundation model company ?
I don’t know how you can claim all this when AI is nondeterministic and hallucinates. Menu-based software is merely a way to trigger a function, the same function, every time you want to run it. AI in its current incarnation cannot even do that.
Yeah, having to type out, or even speak "rotate this by 90 degrees about the Z axis" would get annoying fairly quick. Not to mention the inference is probably going to consume several seconds.
A Blender user, or probably any other, would pull that off with a mouse click followed by a few keystrokes. Bam, done.
Now, for something time-consuming like UV unwrapping, sure, go ahead and incorporate AI. But I'd bet it would need quite a bit of tweaking for a "pro" job, although of course that's not always necessary.
> I don’t know how you can claim all this when ai is nondeterministic and hallucinates.
It's so bad. I was thinking that it would at least replace people whose job is largely to give their worthless opinions, but it won't even do that. Those people's real job is often to add head count, to give your uncle's kid a job or to be fired when things go wrong. AI can't do any of that, it just generates the worthless opinions. Now the idiot won't even have to imitate the verbiage they sort-of heard in college; they're the ones that are using AI the most, to bullshit for them.
It's good for helping you think through stuff you're thinking about by repeating back to you in different words (and getting even that wrong often, forcing you to clarify.) It's horrifically bad at finding or following references, reasoning, or coming up with anything new.
It's not eating software, it's barely even touching software, other than being shoveled into it randomly. The obvious proof that AI is bad is that there are actual geniuses who came up with the algorithms to speed up, parallelize and to bias in a way that makes them seem more productive and creative. LLMs don't seem to be helping amazing minds like that improve AI itself. If LLMs were even going to be fertile, that pairing would be a semi-singularity even if these exceptional humans couldn't be taken out of the loop. My bet is that they don't help at all.
Looks a little like the bitcoin/NFT/crypto/web3 days :)
Every consulting business was adding "blockchain" expertise to their homepage. Now it's replaced by AI. What's next?
To be fair, AI can actually be useful. Crypto projects at one point were tokenizing every random idea, and unsurprisingly most of it turned into scams. Is there any relevant crypto product that truly works differently than just being a tool to buy other coins, apart from Polymarket?
Some sites, especially international ones, accept crypto in lieu of other payments like credit cards. Especially useful if you aren't 100% sure about the site: you just pay them and don't worry about overcharging or having to dispute payments, etc.
You are right. I actually know people that pay for "fake" streaming accounts through some coins so they can watch Netflix or HBO. Usual cards get blocked there.
Well, it's nice to see crypto (semi-anonymous payments) used for its original purpose somewhere, at least.
what a terrible site
YC has been an anti-signal for a while, but now it's basically a guarantee of a company putting out pure AI slop.
“All the smart kids of the Bay Area want to work on AI,”
Operative word here is "kids", and from what I can tell they're not even the smart ones, just the ones who are more geared toward being money and status-obsessed.
It's the same thing as kids saying they want to be a star athlete, rock star, influencer, or pro gamer. It's just something they're interested in that they think will pay well without a lot of work. The hype train moves a lot of mass.
There's always the illusion of a quick and easy path to being rich. But the only one of those that really exists is winning the birth lottery and inheriting.
All else is a combination of luck, skill, and grit, because for every rock star, star athlete, and successful influencer, there are tens or hundreds of thousands who aren't famous because they didn't win that game.
The smart kids might be happy to work on model interpretability research at Anthropic. But 90% of AI companies are ChatGPT wrappers which don't sound particularly fun.
The vast majority of engineers out there don't have the skills to work in core AI development, so the only option for "working in AI" is building one of these API wrappers.
The vast majority of engineers don't have access to the machine resources either.
Yup. They couldn't devops/mlops their way out of a paper bag if they had to.
You could say the barrier to entry in tech has hit a new low.
I'd be interested in demographic information about devs working on AI that'd back this claim.
In 2021 they all wanted to work in crypto / "web3".
You could have said that about any previous investment cycle and the people applying to and being accepted by YC. It's a thing for generating money, not great code or whatever.
FWIW, I think that AI (or at least LLMs and some of the other neural-network-based stuff) can be useful for some things and technically interesting to many people. Personally I find 99.999% of AI to be way, WAY overhyped, and I don't really care about the commercial side of it, but at the same time I've found some neat uses for (local) LLMs and I do find the tech behind it interesting (I've even experimented with it a bit, at least whatever I can do on my own computer).
And on that topic, WRT Hacker News, I personally find someone running GPT-2 on their 90s Silicon Graphics retro workstation much more interesting than yet another thread on how AI is eating everything :-P.
I do think another thread on how AI is eating everything could be worthwhile, given how many people still don't believe it. Somebody shared the list of YC companies doing AI, and you had to see how long and unimaginative it was; to me it felt like "X, but AI", "Y, but AI"...
You yourself say that AI is overhyped, and that overhype has formed a bubble of sorts which has eaten Y Combinator.
Also, to be honest, I probably don't understand how your point is relevant to the parent comment, or maybe I'm just not seeing the relevance, in which case feel free to explain it so that I learn something new.
Forgive my ignorance, I'm just a mid-tier dev, but isn't AI pretty much PhD-with-multiple-publications territory at this point? I'm not talking about writing wrappers around ChatGPT, but about working at OpenAI or Anthropic.
https://archive.is/cucZu
My guess (paywall): they wanted to drive up demand for their existing investments in AI. In other words, the bubble didn't eat Y combinator, Y combinator did it to themselves on purpose.
AI, if it pans out, allows VC gambling to succeed with fewer dollars, thanks to downward pressure on total demand and total pay for workers, specifically and especially skilled engineers without leadership qualities.
AI is the pinnacle of everything that came before it on HN.
It is the logical conclusion of all our prior efforts to build machines that do work for us; every idea pursued to get computers to do more for us, in every possible way, in every possible location, ALL at once and in synchrony, faster, better, stronger.
Artificial intelligence IS HN.
Resistance to that truth is resistance to a coming singularity that already always was.
Have we had any articles about people vibe coding Doom onto a toaster yet? If not, I'd say you're likely incorrect.