I suppose everyone on HN reaches a certain point with these kinds of thought pieces and I just reached mine.
What are you building? Does the tool help or hurt?
People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.
After five or six cycles it does become a bit fatiguing. Use the tool sanely. Work at a pace where your understanding of what you are building does not fall behind the reality of the mess you and your team are actually building, if budgets allow.
This seldom happens, even in solo hobby projects once you cost everything in.
It's not about agile or waterfall or "functional" or abstracting your dependencies via Podman or Docker or VMware or whatever that nix crap is. Or using an agent to catch the bugs in the agent that's talking to an LLM you have next to no control over that's deleting your production database while you sleep, then asking it to make illustrations for the postmortem blog post you have it write, which you think elevates your status in the community but probably doesn't.
I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
This x1000. The last 10 years in the software industry in particular seems full of meta-work. New frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. Ultimately so we can build... what exactly? Are these necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
Hard to shake the feeling that this looks like one big pyramid scheme. I strongly suspect that the vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than actual software engineering.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It was, and is. But not universally.
If you formulate questions scientifically and use the answers to make decisions, that's engineering. I've seen it happen. It can happen with LLMs, under the proper guidance.
If you formulate questions based on vibes, ignore the answers, and do what the CEO says anyway, that's not engineering. Sadly, I've seen this happen far too often. And with this mindset comes the Claudiot mindset - information is ultimately useless so fake autogenerated content is just as valuable as real work.
* the ability to find essentially any information ever created by anyone anywhere at anytime,
* the ability to communicate with anyone on Earth over any distance instantaneously in audio, video, or text,
* the ability to order any product made anywhere and have it delivered to our door in a day or two,
* the ability to work with anyone across the world on shared tasks and projects, with no need for centralized offices for most knowledge work.
That was a massive undertaking with many permutations requiring lots of software written by lots of people.
But it's largely done now. Software consumes a significant fraction of all waking hours of almost everyone on Earth. New software mainly just competes with existing software to replace attention. There's not much room left to expand the market.
So it's difficult to see the value of LLMs that can generate even more software even faster. What value is left to provide for users?
LLMs themselves have the potential to offer staggering economic value, but only at huge social cost: replacing human labor on scales never seen before.
All of that to say, maybe this is the reason so much more time is being spent on meta-work today than on actual software engineering.
I have watched artists thoughtfully integrate digital lighting and the like at a scale I'd never seen before the LLMs rolled up and made it possible to get programs to work without knowing how to program.
The fundamental ceiling of what an LLM can do when connected to an IDE is incredible, and orders of magnitude higher than the limits of any no-code / low-code platform conceived thus far. "Democratizing" software, where the only limits are your imagination, tenacity, and ability to keep the bots aligned with your vision, is allowing incredible things that wouldn't have happened otherwise, because you no longer strictly need to learn to program for a programming-involved art project to work out.
Should you learn how to code if you're doing stuff like that? Absolutely. But is it letting people who have no idea about computing dabble their feet in and do extremely impressive stuff for the low cost of $20/month? Also yes.
Now this is the right take. It's one thing for us to do navel-gazing into the recursive autonomous future; it's another to step back and see what Normal People can do, now that the walls are coming down around our profession. Creating new walls is probably not the answer! From the Cathedral and the Bazaar, we now have an entire metaphorical city of development happening, by people who would not have thought it possible a few years ago.
I don't know what the future of my job holds other than what it always had: helping people who have good ideas to get them done properly.
The thing is though it all still feels so…rudderless/pointless sometimes?
When digital cameras came out, it democratized filmmaking immensely. But it wasn’t just people screwing around - amazing new works of art, received positively by audiences and critics alike, exploded in number. They wound up winning film fests, garnering millions of views (and fans) online, and even on big screens worldwide, almost immediately.
Where are the vibe coded apps that are actually good? Where are the new, innovative creations built by “normal” people? Because by now you’d think we’d see them. It’s all been parlor tricks, proofs of concept, and post mortems on how a bot ruined half a year’s work or whatever. The “good stuff” is still happening behind closed doors, led by experienced engineers on existing projects. It’s a productivity multiplier more than anything it seems, but it doesn’t seem useful as a tool for new people to make new things in any given space.
Emacs can be configured with no code written by the user and Linux can be controlled with minimal user knowledge of the command line. Still some knowledge is necessary in most cases, but nowhere near what was required a handful of years back.
I see the next really big task for software as the ability to separate the signal from the noise. Sifting the wheat from the chaff has gone from a 'nice to have' to 'rescue my sanity'.
Maybe agents and AI in general will help with that. Maybe it will just make the problem worse.
A spreadsheet editor at most a couple of hundred MB in size that can compete against Excel, for example, while also not eating up RAM. The same goes for a new browser and a new browser engine; it's time for Chrome to have a real competitor, it has become a mess. I can think of other such examples, but these are the 2 biggest ones.
> The last 10 years in the software industry in particular seems full of meta-work. New frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. Ultimately so we can build... what exactly? Are these necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
The overwhelming majority of real jobs are not related to these things you read about on Hacker News.
I help a local group with resume reviews and job search advice. A common theme is that junior devs really want to do work in these new frameworks, tools, libraries, or other trending topics they've been reading about, but discover that the job market is much more boring. The jobs working on those fun and new topics are few and far between, generally reserved for the few developers who are willing to sacrifice a lot to work on them or very senior developers who are preferred for those jobs.
There’s a whole world out there that doesn’t seem to be addressed by the original comment. On one end of that scale you have things like bespoke software for small businesses, some niche inventory management solution that just sits quietly in the corner for years. On the other end, there’s the whole world of embedded software, game dev, design software, bespoke art pipeline tools…
It can seem that the majority of software in the world is about generating clicks and optimising engagement, but that’s just the very loud minority.
Not that you asked… But I would be happy with a junior position writing production C or ASM - but I assume that those sorts of positions are on the other end of the same boat. Who the hell has any use for an amateur dev with an autistic fascination and _zero_ practical experience?
Someone shared an article here recently espousing something along the lines of "home garden programming." I see software development moving in this direction, just like machining did: either in a space-age shop that looks more like a lab, with a five-axis "machining center," or in the garage with Grandpappy's clapped-out Atlas - and nothing in between.
This is a good point. I've seen people with really complex AI setups (multiple agents collaborating for hours). But what are they building? Are they building a react app with an express backend? A next js app? Which itself is a layer on top of an abstraction?
I haven't tried this myself but I'm curious whether an LLM could build a scalable, maintainable app that doesn't use a framework or external libraries. It could be dangerous due to lack of training data, but I think it's important to build stuff that people use, not stuff that people use to build stuff that people use to build stuff that....
Not that meta frameworks aren't valuable, but I think they're often solving the wrong problem.
When it comes time to debug would you rather ask questions about and dig through code in a popular open source library, or dig through code generated by an LLM specifically for your project?
The copout answer is it depends. I've debugged sloppy code in React both before and after LLMs were commonly used. I've also debugged very well-written custom frameworks before and after LLMs.
I think with proper guardrails and verification/validation, a custom framework could be easier to maintain than sloppy React code (or insert popular framework here).
My point is that as long as we keep the status quo of how software is built (using popular tools that make it fast and easy to build software without LLMs, but that were often unperformant), we'll keep heading down this path of trying to solve the problems of frameworks instead of directly solving the problems with our app.
You are going to allow a product from a company you have no reason to trust write important software for you and put it into production without checking the code to see what it does?
I agree with you, which makes me seem like the laggard at work. Devil's advocate is that AI-native development will use AI to ask these questions and such. So whether it's a framework or standard lib, def agree knowing your stuff is what matters, but the tools to demonstrate this knowledge are in fast flux.
Again, I am on the slow train. But this seems to be all I hear. "code optimized for humans" is marked for death.
had another thought on my drive just now. nextjs is really fantastic with LLM usage because there's such a large body of work to source from. previously i found nextjs unbearable to work with, with its bespoke isomorphic APIs. too dense, too many nuances, too much across the stack.
with LLMs it got spit out amazingly fast. but does that make nextjs the framework better or worse in its design paradigms, if an LLM is a requirement in order to navigate it?
> Are these tools necessary to build what we actually need?
I think the entire software industry has reached a saturation point. There's not really anything missing anymore. Existing tools do 99% of what we humans could need, so you're just getting recycled and regurgitated versions of existing tools... slap a different logo and a veneer on it, and it's a product.
The tools are mostly there, but there is a lot of need. Quality can be much better. Quality is UI, reliability, security, and a bunch of other similar things I can't think of offhand.
We still don’t have truly transparent transference in locally-run software. Go anywhere in the world, and your locally running software tags along with precisely preserved state no matter what device you happen to be dragging along with you, with device-appropriate interfacing.
We still don’t have single source documentation with lineage all the way back to the code.
We still don’t treat introspection and observability as two sides of a troubleshooting coin (I think there are more “sides” but want to keep the example simple). We do not have the kind of introspection on modern hardware that Lisp Machines had, and SOTA observability conversations still revolve around sampling enough at the right places to make up for that.
We still don’t have coordination planes, databases, and systems in general capable of absorbing the volume of queries generated by LLMs. Even if LLM models themselves froze their progress as-is, they’re plenty sophisticated enough when deployed en masse to overwhelm existing data infrastructure.
The list is endless.
IMHO our software world has never been so fertile with possibilities.
> I strongly suspect that vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than actual software engineering.
Feels like there’s a counter to the frequent citation of Jevons’ paradox in there somewhere, in the context of LLM impact on the software dev market. Overestimation of external demand for software, or at least any that can be fulfilled by a human-in-the-loop / one-dev-to-many-users model? The end goal of LLMs feels like, in effect, the Last Framework, and the end of (money in) meta-engineering by devs for devs.
> The last 10 years in the software industry in particular seems full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly?
Don't forget App Stores. Everyone's still trying to build app stores, even if they have nothing to sell in them.
It's almost as if every major company's actual product is their stock price. Every other thing they do is a side quest or some strategic thing they think might convince analysts to make their stock price move.
Well that's the thing, AI can mean anyone with an idea can build it, but only the people that own stuff will be able to leverage that to own more stuff.
The legal doctrine that a company's primary responsibility is to maximize shareholder value dates from the 1970s. It started with Milton Friedman's 1970 essay in the NYTimes [1] and then gained a lot of currency throughout the 70s stagflation and economic malaise. The final death-knell of the corporation as a social enterprise came during the 1980s era of corporate raiders and PE buyouts.
Note that the system that came before it had problems too. In the 50s and 60s, the top marginal tax rate was about 90%, which meant that above a certain level it made almost no sense for a corporate executive to be paid more. This kept executive salaries to a reasonable multiple of employee salaries, but it meant that executives and high-ranking managers tended to pay themselves in perks. This was the "Mad Men" era of private jets, private company apartments, secretaries who were playthings, etc. Friedman's essay was basically arguing against this world of corporate unaccountability and corruption, where formal pay and compensation were reasonable, but informal perks and arrangements managed to privilege the people in power in a completely opaque, unaccountable way.
Turns out that power is a hell of a drug, and the people in power will always find ways to use that to enrich themselves regardless of what the laws and incentives are.
>> The last 10 years in the software industry in particular seems full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly? Are these tools necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
This is because all the low-hanging fruit has already been built. CRM. Invoicing. HR. Project/task management. And hundreds of others in various flavors.
It may exist (with a loose definition of "exist") but they are all mostly garbage. There's still plenty of opportunity to make non-garbage versions of things that already exist.
This is technically true but also a bit naive. Established incumbents are very difficult to dislodge with merely a better version of their products. This becomes more true the larger the product and the average customer size. A good example is QuickBooks, which is a really janky accounting/bookkeeping software that is almost universally hated, but newer and better solutions haven't been able to capture much market share from it.
It’s hard to actually build a better QuickBooks because to build a better QuickBooks you need 1000+ integrations that each took hundreds of man hours to build.
People don't realize how much software engineering has improved. I remember when most teams didn't use version control, and if we did have it, it was crappy. Go through the Joel Test [1] and think about what it was like at companies where the answer to most of those questions was "no."
At the same time, systems have become far more complex. Back when version control was crap, there weren't a thousand APIs to integrate and a million software package dependencies to manage.
Sure everything seems to have gotten better and that's why we now need AIs to understand our code bases - that we created with our great version control tooling.
Fundamentally we're still monkeys at keyboards just that now there are infinitely many digital monkeys.
Perrow’s book Normal Accidents postulates that, given advances which could improve safety, people just decide to emphasize throughput, speed, profits, etc. He turned out to be wrong about aviation (it got much safer over time) and maritime shipping (there was a perception of a safety crisis in the late 1970s with oil tankers exploding; now you just hear about the odd exceptional event.)
> Perrow argues that multiple and unexpected failures are built into society's complex and tightly coupled systems, and that accidents are unavoidable and cannot be designed around.[1]
This is definitely something that is happening with software systems. The question is: is having an AI that is fundamentally undecipherable in its intention to extend these systems a good approach? Or is an approach of slowing down and fundamentally trying understand the systems we have created a better approach?
Has software become safer? Well, planes don't fall from the sky, but the number of zero-day exploits built into our devices has vastly increased. Is this an issue? Does it matter that software is shipped broken? Only to be fixed with the next update.
I think it's hard to have the same measure of safety for software. A bridge is safe because it doesn't fall down. Is email safe when there are spam and phishing attacks? Fundamentally email is a safe technology, except that it allows attacks via phishing. Is that an email safety problem? Probably not, just as someone having a car accident on a bridge is generally not a result of the bridge.
I think that we don't learn from our mistakes. As developers we tend to coat over the accidents of our software. When was the last time a developer was sued for shipping broken software? When was the last time an engineer was sued for building a broken bridge? Notice that there is an incentive as engineer to build better and safer bridges, for developers those incentives don't exist.
The other day I was thinking about how stupid little things in the Javascript ecosystem where you have to change your configuration file "just because" are a real billion-dollar mistake and speculating that I could sue some of the developers in small claims court.
Right away I scoffed when I heard people had 20 agents running in parallel because I've been at my share of startups with 20 person teams that tend to break down somewhere between:
- 20 people that get about as much done as an optimal 5 person team with a lot more burnout and backlash
- There is a sprint every two weeks but the product is never done
and people who are running those teams don't know which one they are!
I'm sure there are better ones out there but even one or two SD north of the mean you find that people are in over their heads. All the ceremony of agile hypnotizes people into thinking they are making progress (we closed tickets!) and have a plan (Sprint board!) and know what they are doing (user stories!)
Put on your fieldworker hat and interview the manager about how the team works [1] and the state of the code base, compare that to the ground truth of the code, and you tend to find the manager's mental model is somewhere between "just plain wrong" and "not even wrong". Teams like that get things done because there are a few members, maybe even dyads and triads, who know what time it is and quietly make sure the things that are important-but-ignored-by-management are taken care of.
Take away those moral subjects and eliminate the filtering mechanisms that make that 20-person manager better than average and I can't help but think 'gas town' is a joke that isn't even funny. Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if it wasn't for all that weed there wouldn't be any Google today.
[1] I'll take even odds he doesn't know how long the build takes!
> Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if it wasn't for all that weed there wouldn't be any Google today.
I remember a lot of Steve Yegge's impressive claims from back when he and Zed Shaw were what I would call "fringe contemporaries" in the early 2010s - like all the time he spent gassing on about his unmaintainable, barely usable nightmare of a Javascript mode for Emacs. (I did like the MozRepl integration, for what that's worth.)
I don't particularly recall him talking about smoking pot, and I think I would have, if he'd been as memorably effusive there as about js2-mode. But it's been a lot of years and I couldn't begin to remember where to look for an archive of his old blog. Would you happen to have a link?
Version control is useful but it has nothing to do with software engineering per se. Most software development is craft work which doesn't meet the definition of engineering (and that's usually fine). Conversely, it's possible to do real software engineering without having a modern version control system.
... but it helps tremendously to have a solid computer engineering background since you are (finding and) transforming hard facts of reality into working code. I'd say it's a mix of both; you can't just vibecode (or hack together, before current times) a properly beautiful design (whatever that means in a given instance).
> People don't realize how much software engineering has improved.
It has, but we have gotten there by stacking turtles, by building so many layers of abstraction that things no longer make sense.
Think about this: hardware -> hypervisor -> VM -> container -> Python/Node/Ruby runtime, all to compile it back down to bytecode to run on a CPU.
Some layers exist because of the push/pull between systems being single user (PC) and multi user (mainframe). We exacerbated the problem when "installable software" became a "hard problem" and wanted to mix in "isolation".
And most of that software is written on another pile of abstractions. Most codebases have disgustingly large dependency trees. People keep talking about how "no one is reviewing all this ai generated code"... Well the majority of devs sure as shit aren't reviewing that dependency tree... Just yesterday there was yet another "supply chain attack".
How do you protect yourself from such a thing... stack on more software. You can't really use "sub repositories/modules" in git. It was never built that way because Linus didn't need that. The rest of us really do... so we add something like Artifactory to protect us from the massive pile of stuff that you're dependent on but NOT looking at. It's all just more turtles on more piles.
Lots of corporate devs I know are really bad at reviewing code (open source much less so). The PR code review process in many orgs is to find the person who rubber-stamps and avoid the people who only bike-shed. I suspect it's because we have spent the last 20 years on the leet code interview where memorizing algorithms and answering brain teasers was the filter. Not reading, reviewing, debugging and stepping through code... Our entire industry is "what is the new thing", "next framework" pilled because of this.
You are right that it got better, but we got there by doing all the wrong things, and we're going to have to rip a lot of things apart and "do better".
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
If I engineer a bridge I know the load the bridge is designed to carry. Then I add a factor of safety. When I build a website can anyone on the product side actually predict traffic?
When building a bridge I can consult a book of materials and understand how much a material deforms under load, what its breaking point is, its expected lifespan, etc. Does this exist for servers, web frameworks, network load balancers, etc.?
I actually believe that software “could” be an engineering discipline but we have a long way to go
> can anyone on the product side actually predict traffic
Hypothetically, could you not? If you engineer a bridge you have no idea what kind of traffic it'll see. But you know the maximum allowable weight for a truck of X length is Y tons and factoring in your span you have a good idea of what the max load will be. And if the numbers don't line up, you add in load limits or whatever else to make them match. Your bridge might end up processing 1 truck per hour but that's ultimately irrelevant compared to max throughput/load.
Likewise, systems in regulated industries have strict controls for how many concurrent connections they're allowed to handle[1], enforced with edge network systems, and are expected to do load testing up to these numbers to ensure the service can handle the traffic. There are entire products built around this concept[2]. You could absolutely do this, you just choose not to.
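To make that concrete, here is a minimal sketch of that kind of "rated load" limit, assuming an asyncio-style service; the numbers, names, and the 503 response are illustrative, not taken from any of the products referenced above:

    # Minimal sketch of an engineered load limit: cap concurrent requests at a
    # designed maximum (with headroom) and shed the excess instead of degrading.
    # All numbers and names are illustrative.
    import asyncio

    DESIGN_MAX_CONCURRENT = 100          # the "rated load" you load-tested up to
    SAFETY_FACTOR = 0.8                  # leave headroom, like a bridge's margin

    _slots = asyncio.Semaphore(int(DESIGN_MAX_CONCURRENT * SAFETY_FACTOR))

    async def handle(request_id: int) -> dict:
        if _slots.locked():              # at capacity: reject instead of queueing
            return {"status": 503, "body": "over rated capacity, retry later"}
        async with _slots:
            await asyncio.sleep(0.01)    # stand-in for the real work
            return {"status": 200, "body": "ok"}

    async def main() -> None:
        results = await asyncio.gather(*(handle(i) for i in range(500)))
        print(sum(r["status"] == 503 for r in results), "requests shed")

    asyncio.run(main())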
If I need a bridge, and there's a perfectly beautiful bridge one town over that spans the same distance - that's useless to me. Because I need my own bridge. Bridges are partly a design problem but mainly a build problem.
In software, if I find a library that does exactly what I need, then my task is done. I just use that library. Software is purely a design problem.
With agentic coding, we're about to enter a new phase of plenty. If everyone is now a 10x developer then there's going to be more software written in the next few years than in the last few decades.
That massive flurry of creativity will move the industry even further from the calm, rational, constrained world of engineering disciplines.
> Bridges are partly a design problem but mainly a build problem.
I think this vastly underestimates how much of the build problem is actually a design problem.
If you want to build a bridge, the fact one already exists nearby covering a similar span is almost meaningless. Engineering is about designing things while using the minimal amount of raw resources possible (because cost of design is lower than the cost of materials). Which means that bridge in the other town is designed only within its local context. What are the properties of the ground it’s built on? What local building materials exist? Where local can be as small as only a few miles, because moving vast quantities of material over long distances is really expensive. What specific traffic patterns and loadings is it built for? What time and access constraints existed when it was built?
If you just copied the design of a bridge from a different town, even one only a few miles up the road, you would more than likely end up with a design that either won't stand up in your local context, or simply can't be built. Maybe the other town had plenty of space next to the location of the bridge, making it trivial to bring in heavy equipment and use cranes to move huge pre-fabbed blocks of concrete, but your town doesn't. Or maybe the local ground conditions aren't as stable, and the other town's design has the wrong type of foundation, resulting in your new bridge collapsing after a few years.
Engineers in other disciplines don't have the luxury of building for a very uniform, tightly controlled target environment where it's safe to make assumptions that common building blocks will "just work" without issue. As a result engineering is entirely a design problem, i.e. how do you design something that can actually be built? The building part is easy; there's a reason construction contractors get paid comparatively little compared to the engineers and architects that design what they're building.
Software packages are more complicated than you make them out to be. Off the top of my head:
- license restrictions, relicensing
- patches, especially to fix CVEs, that break assumptions you made in your consumption of the package
- supply chain attacks
- sunsetting
There’s no real “set it and forget it” with software reuse. For that matter, there’s no “set it and forget it” in civil engineering either, it also requires monitoring and maintenance.
I have talked to colleagues who wrote software running on microcontrollers a decade ago, that software still runs fine. So yes there is set and forget software. And it is all around us, mostly in microcontrollers. But microcontrollers far outnumber classical computers (trivially: each classical computer or phone contain many microcontrollers such as SSD controllers, power management, wifi, ethernet, cellular,... And then you can add appliances, cars etc to that).
If something in software works and isn't internet connected it really is set and forget. And far too many things are being connected needlessly these days. I don't need or want an online washing machine or car.
The way the authors of the book on material strengths got those numbers, was through testing. If you're using mature technologies, that testing has been done by others and you can rely on it for your design, at least in a general way. Otherwise you have to do the testing yourself, which is something a structural engineering project might do also, if it's unusual in some way.
We have a long way to go but large software companies have gotten really, really good at scaling to handle larger and larger traffic loads. It's not like there are no materials to consult to learn current best practices, even if there are still more improvements to be made.
There are also fundamentally different acceptance criteria for a bridge vs a website. Failure modes differ. Consequences of failure are nowhere near the same, so risk tolerance is adjusted accordingly. Perhaps true "engineering" really boils down to risk management... is what you're building so potentially destructive that it requires extremely careful thought and risk management? Engineering. If what you're building can fail, and really cause no harm, that's just building.
I think it is in certain very limited circumstances. The Space Shuttle's software seems like it was actually engineered. More generally, there are systems where all the inputs and outputs are well understood along with the entire state space of the software. Redundancy can be achieved by running different software on different computers such that any one is capable of keeping essential functions running on its own. Often there are rigorous requirements around test coverage and formal verification.
This is tremendously expensive (writing two or more independent copies of the core functionality!) and rapidly becomes intractable if the interaction with the world is not pretty strictly limited. It's rarely worth it, so the vast majority of software isn't what I'd call engineered.
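For anyone who hasn't seen the technique, here's a toy sketch of one flavor of that redundancy (majority voting across independently written implementations); real avionics does far more, with separate hardware, formal specs, and lockstep comparisons, and everything below is made up for illustration:

    # Toy majority vote across independently written implementations of the
    # same function. Illustrative only; not how any real flight software works.
    from collections import Counter

    def impl_a(x: float) -> float:
        return x * x

    def impl_b(x: float) -> float:
        return x ** 2

    def impl_c(x: float) -> float:        # imagine a latent bug in this version
        return x * x if x < 1000 else 0.0

    def voted_square(x: float) -> float:
        results = [impl_a(x), impl_b(x), impl_c(x)]
        value, count = Counter(results).most_common(1)[0]
        if count < 2:                     # no majority: fail loudly, don't guess
            raise RuntimeError("implementations disagree; halt and alert")
        return value

    print(voted_square(3.0))      # 9.0, all three agree
    print(voted_square(2000.0))   # 4000000.0, the buggy version is outvoted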
Maybe back in the beginning, but I don't think it's an engineering discipline now. I don't think that's bad though. I always thought we tagged on the word "engineer" so that we could make more money. I'm ok with not being one. The engineers I've known are very strict in their approach which is good since I don't want my deck to fall down. Most of us are too risky with our approach. We love to try new things and patterns, not just use established ones. This is fine with me, and when we apply the term "engineer" to work, I get a little uneasy, because I think it implies us doing something that most of us really don't want to do. That is, absolutely prove our approach works and will work for years to come. Just my opinion though.
I’ve had jobs where my title was “software engineer”, but I never refer to myself as such outside of work. When I tell others what I do, I say I am a software developer. It may seem a pointless distinction, but to me there is a distinction.
Neither myself nor the vast majority of other “software engineers” in our field are living up to what it should mean to be an “engineer”.
The people that make bridges and buildings, those are the engineers. Software engineers, for the very very most part, are not.
I was just reading "how the world became rich" and they made an interesting distinction economic "development" vs plain "growth". Amusingly, "development" to them means exactly what you're saying "engineer" should mean. It's sustainable, structural, not ephemeral. Development in the abstract hints at foundational work. Building something up to last. It seems like this meaning degradation is common in software. It still blows my mind how the "full-stack" naming stuck, for example.
Edit: on a related note, are there any studies on the all-in long-term cost between companies that "develop" vs. "engineer"? I doubt there would be clean data, since the managers that ignored all of the warnings about "tech debt" would probably have the say on both compiling and releasing such data.
Does the cost of "tech-debt" decrease as the cost of "coding" decreases, or is there a phase transition in the quality of the code? I bet there will be an inflection point if you plotted the adoption time of AI coding by companies. Late adopters that timed it after the models and harnesses and practices were good enough (probably still some time in the near future) would have less all-in cost per same codebase quality.
I'm similar, except for me the reason is no degree. So some jobs eng, others just developer... although at my current job I'm a "technology specialist" which is funny. But I'm getting paid so whatever.
Most recently I wrote CloudFormation templates to bring up infra for AWS-based agents. I don't use AI-assisted coding, except Googling, which I acknowledge now comes with an AI summary.
A friend of mine is in a toxic company where everyone has to use AI and they're looked down upon if they don't use it. Every minute of their day has to be logged doing something. They're also going to lay off a bunch of people soon since "AI has replaced them" this is in the context of an agency.
It’s a bit of a misclassification. In my mind we tend to be more like architects where there are a fair amount of innovative ideas that don’t work all that well in practice. Train stations with beautiful roofs that leak and slippery marble floors, airports with smoke ventilation systems in the floor, etc.
Of course, we use that term for something else in the software world, but architecture really has two tiers, the starchitects building super fancy stuff (equivalent to what we’d call software architects) and the much more normal ones working on sundry things like townhomes and strip malls.
That being said I don’t think people want the architecture pay grades in the software fields.
It's an understandable mistake to make; culturally an engineer is defined by the building of physical objects that have extremely high reliability expectations. But "engineer" originally referred to someone who used their ingenuity to build or do things in a manner not routine or primarily physical [1]. Basically an inventor who produced. The main engineering accreditation body in the United States adds the requirement of a professional education, but it is more or less the same [2].
At the same time, if you remove 'engineer', informatics should fall under the faculty of Science, so we'd be scientists, who are even more rigorous than engineers ;)
Computer Science (kind of a misnomer) should be in the faculty of Mathematics. Software Development should be in the faculty of Performing Arts. Informatics should be in the faculty of Business Administration.
It's a Systems Engineering job. You provide context, define interfaces to people, tests for critical failure modes affecting customer, describe system behavior, and translate to other people.
> A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot.".
- Edsger Dijkstra, 1988
I think, unfortunately, he may have had us all dead to rights on this one.
One would as sensibly dismiss the concept of an assembly line as "how to build a car if you cannot."
Dijkstra was a mathematician. It is a necessary discipline. If it alone were sufficient, then the "program correctness" fans would have simply and inarguably outdone everyone else forty years ago at the peak of their efforts, instead of having resorted to eloquently whiny, but still whiny, thinkpieces (such as the 1988 example [1] quoted here above) about how and why they would like history to understand them as having failed.
[2] I will freely grant that the man both wrote and lettered with rare beauty, which shames me even in this photocopier-burned example when I compare it to the cheerful but largely unrefined loops and scrawls of my own daily hand.
The formal methods people may yet have the last laugh. I did not have Lean becoming a hyped programming language / proof assistant on my bingo card for 2025-26 and yet here we are, because these tools help us close the validation loop for LLM agents. That is not dead which can eternal lie...
But yes, I think the best rebuttal to Dijkstra-style griping is Perlis' "one can't proceed from the informal to the formal by formal means". That said I also believe kind of like Chesterton's quote about Christianity, they've also mostly not been tried and found wanting but rather found hard and left untried. By myself included, although I do enjoy a spot of the old dependent types (or at least their approximations). There's an economic argument lurking there about how robust most software really needs to be.
Certainly, and it's at that economic argument that I strive to get, I think.
Every so often an article makes the rounds on the correctness and verification methods used for Space Shuttle avionics software and applications of similar import, or if not that then Nancy Leveson's comprehensive 1995 review of the Therac-25 accidents. [1]
Most software doesn't need to be nearly so robust, but Dijkstra constructs his argument as though all did, hinging the inversion on the obvious and frankly shocking cheat across the gap between his pages 14 and 15, ie, that paragraph beginning "But before a computer is ready to perform..." Here he casually, and without direct acknowledgement much less justification, assumes as rhetorically axiomatic that a program, not the machine that executes it, is the original artifact of computing, of which any reification merely constitutes less than perfect instantiation, which he is then free to criticize on the wholly theoretical grounds of mathematical beauty; that is, on the grounds he prefers to inhabit in all cases, whether to do so in any given example makes any sense or not.
If that's his preferred ground, fair enough; after all, he was a mathematician. But his hypocrisy in concealing the insistence by means of subtle rhetoric - mere pages after inveighing against "medieval thinking" by way of an example, his "reasoning by analogy," faulting specifically that argument made by way of specious rhetoric! - casts suspicion on all that both precedes and follows. From a layperson, I could regard it as honest error, but I have known and loved academic mathematicians, and I really can't conceive of any of them leaving intact so consequential a mistake.
Perhaps Dijkstra was different, or merely becoming old, but for someone so heavily invested in pushing a paradigm of programming with mathematical rigor at its core, it seems a remarkable flaw in what should be a crucial argument (especially in advance of a solution for the halting problem). I regret that flaw, because he isn't all wrong about what an engineering paradigm can do to the agency and optionality of programmers especially in industry - not that his one extremely privileged position therein, parallel with Feynman's time at Thinking Machines, would much acquaint him with our desiderata or our constraints - and I would like to find that point made in better company than he was able to give it.
But then, his conception never offered much in preference, did it? The labor of mathematicians is scarce and expensive: what good is a proof assistant to anyone who can't understand its output, much less give it input? And Dijkstra himself, not less strange a bird than any other mathematician, famously did all he could to avoid actually using the machines on whose correct use he here wrote. (Hence his hand, which I complimented so highly before. I also use a fountain pen, but as I said, not so beautifully - and I'm glad I know how to use a keyboard well, instead.)
There would not be more programmers or more software in a world run on such principles, I think, than in this one - on the contrary, less by far. Maybe that would be preferable, but mostly not for the reasons Dijkstra claimed.
I think the real tragedy here is that we can spend *all* of our time trying to improve the quality of our output, but it simply doesn't matter, because as long as the button is where the boss wants it to be and is the right color, all is right with the world.
Literally nothing else matters, and we (or at least I) have wasted a ton of time getting good at writing software.
> One would as sensibly dismiss the concept of an assembly line as "how to build a car if you cannot."
I agree, but I'm not sure this says what you think it does.
The people on the car assembly line may know nothing of engineering, and the assembly line has theoretically been set up where that is OK.
The people on the software assembly line may also (and arguably often do) know nothing of engineering, but it's not clear that it is possible to set up the assembly line in such a way so as to make this OK.
Arguably, the use of LLMs will at least have some utility in helping us to figure this out, because a lot of LLMs are now being used on the assembly line.
Exactly! I’ve noticed a resounding amount of people are writing the same pieces recently, it’s almost like everyone’s sounding their alarm for the upcoming tsunami. Who’s listening? Here’s my piece: https://humantodo.dev
> What are you building? Does the tool help or hurt?
> People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.
I'm assuming you're saying these tools hurt more than help?
In that case I disagree so much that I'm struggling to reply. It's like trying to convince someone that the Earth is not flat, to my mental model.
PHP, Ruby and VB have more successful code written in them than all current academic or disproportionately hyped languages will ever have combined.
And there's STILL software being written in them. I did Visual Basic consulting for a greenfield project last week despite my current expertise being more with Go, Python, C# and C. And there's RoR work lined up next. So the presence gap between these helpful tools and other minor but over-indexed tools is still increasing.
It's easy to think that the languages one sees more often on HN are the prevalent ones, but they are just the tip of the iceberg.
People built a lot of great stuff with Ruby, PHP, Notes and VB. I don't know what the problem really is.
Personally I think that whole Karpathy thing is the slowest thing in the world. I mean you can spin the wheels on a dragster all you like and it is really loud and you can smell the fumes but at some point you realize you're not going anywhere.
My own frustration with the general slowness of computing (iOS 26, file pickers, build systems, build systems, build systems, ...) has been peaking lately and frankly the lack of responsiveness is driving me up the wall. If I wasn't busy at work and loaded with a few years worth of side projects I'd be tearing the whole GUI stack down to the bottom and rebuilding it all to respect hard real time requirements.
Hey, Visual Basic is still there, and last time I checked it was still the go-to option for OLE Automation.
RoR is no longer at its peak, but it still has its marginal, stable share of the web, while PHP gets the lion's share [1].
Ok, Lotus Notes is really a relic from another era now. But it’s not a PL, so not the same kind of beast.
Well, LLMs are also a different beast compared to PLs. They really are the things that most evoke the expression "taming the beast" when you need to deal with them. So it is indeed about as far from engineering as one can get while still using a computer to build any automation. Maybe, to stay within scientific realms, ethology would be a better starting point than a background in informatics/CS for handling these things.
I'm watching a team which is producing insane amounts of code for their team size, but the level of thought that has gone into all of the details that would make their product a fit predator to run at scale and solve the underlying business problem has been neglected.
Moving really fast in the wrong direction is no help to anyone.
1. Applied physics - Software is immediately disqualified. Symbols have no physics.
2. Ethics - Lives and livelihoods depend on you getting it right. Software people want to be disqualified because that stuff is so boring, but this is becoming a more serious issue with every passing day.
That might vary by country, but in France we have an official "engineering degree" (diplôme d'ingénieur), which is also a master's degree, and most software developers have this.
So most software developers in France are absolutely software engineers.
>After five or six cycles it does become a bit fatiguing. Use the tool sanely.
That's increasingly not possible. This is the first time for me in 20 years where I've had a programming tool rammed down my throat.
There's a crisis of software developer autonomy and it's actually hurting software productivity. We're making worse software, slower, because the C levels have bought this fairy tale that you can replace 5 development resources with 1 development resource + some tokens.
In 18 years, AI is the third or fourth tool forced upon a shop/team I've been on, and I will say that of those it is the first one that is genuinely able to make me more productive overall, even with the drawbacks.
Software was an engineering discipline... at some places. And it still is, at some places.
Other places were "hack it until we don't know of any major bugs, then ship it before someone finds one". And now they're "hey, AI agents - we can use that as a hack-o-matic!" But they were having trouble with sustainability before, and they're going to still, except much faster.
All (not some) of the most successful devs I've known in the sense of building something that found market fit and making money off it were terrible engineers. They were fairly productive at building features. That's it. And they were productive - until they weren't. Their work ultimately led to outages, lost data, and sensitive data being leaked (to what extent, I don't even know).
The ones who got acquired - never really had to stand up to any due diligence scrutiny on the technical side. Other sides of the businesses did for sure, but not that side.
Many of you here work for "real" tech companies with the budget and proper skin in the game to actually have real engineers and sane practices. But many of you do not, and I am sure many have seen what I have seen and can attest to this. If someone like the person I mentioned above asks you to join them to help fix their problems, make sure the compensation is tremendous. Slop clean-up is a real profession, but beware.
There used to be a saying along the lines of “while you’re designing your application to scale to 1m requests/min, someone out there is making $1m ARR with php and duct tape”
It feels like this takes on a whole new meaning now we have agents - which I think is the same point you were making
As far as I can tell, the only reason agents exist is that large contexts increase the probability of context poisoning, purely because of the inability of these models to actually make conceptual decisions about the context.
I was interested in making a semi-autonomous skill improvement program for open code, and I wired up systemd to watch my skills directory; when a new skill appeared, it'd run a command prompt to improve it and make it cohere with a skill specification.
It was told to make a lock file before making a skill, then remove the lock file. Multiple times it'd ignore that, make the skill, then lock and unlock on the same line. I also wanted to lock the skill from future improvements, but that context overrode the skill locking, so instead I used the concept of marking the skills as readonly.
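For concreteness, here is a rough sketch of that kind of watcher, done with plain polling instead of systemd path units; the paths, the agent command, and the read-only trick are placeholders, not the actual setup:

    # Rough sketch of a skills-directory watcher with a lock file and a
    # read-only marker. Everything here (paths, the agent command) is made up.
    import subprocess
    import time
    from pathlib import Path

    SKILLS_DIR = Path.home() / "skills"      # hypothetical skills directory
    AGENT_CMD = ["agent", "improve"]         # hypothetical agent CLI

    def improve(skill: Path) -> None:
        lock = skill.with_suffix(".lock")
        if lock.exists():                    # someone (or something) holds the lock
            return
        lock.touch()                         # take the lock *before* editing
        try:
            subprocess.run([*AGENT_CMD, str(skill)], check=True)
            skill.chmod(0o444)               # mark read-only so it isn't re-improved
        finally:
            lock.unlink(missing_ok=True)     # always release the lock

    def watch(poll_seconds: float = 2.0) -> None:
        seen: set[Path] = set()
        while True:
            for skill in SKILLS_DIR.glob("*.md"):
                if skill not in seen:
                    seen.add(skill)
                    improve(skill)
            time.sleep(poll_seconds)

    if __name__ == "__main__":
        watch()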
So in reality, agents only exist because of context poisoning and overlap; they're not some magical balm for improving the speed of work, or for multiplying the effort, they simply prevent context poisoning via what are essentially subprocesses.
Once you realize that, you really have to scale back your expectations, because not only are they just dumb, they're not integrating any real information about what they're doing.
Software engineering is real engineering because we rigorously engineer software the way real engineers engineer real things.
Software engineering is not real engineering because we do not rigorously engineer software the way "real" engineers engineer real things. <--- YOU ARE HERE
Software engineering is real engineering because we "rigorously" engineer software the way "real" engineers engineer real things.
Largely a problem of VCs and shareholders. After my 12th year of "we'll get around to bug fixes" and "this is an emergency" I realize I am absolutely not doing anything related to engineering. My job means less than the moron PM who graduated bottom of their class in <field>. The lack of trust in me despite having almost a life in software is actually so insulting it's hard to quantify.
Now I barely look at ticket requirements, feed it to an LLM, have it do the work, spend an hour reviewing it, then ship it 3 days later. Plenty of fuck off time, which is time well spent when I know nothing will change anyway. If I'm gonna lose my career to LLMs I may as well enjoy burning shareholder capital. I've optimized my life completely to maximize fuck off time.
At the end of the day they created the environment. It would be criminal to not take advantage of their stupidity.
same experience here. trust deficits so rampant i question if i've ever been right once in my career. don't forget the lack of the word 'iterate' in the decision makers' vocabulary. and as soon as the word sunset is uttered you know you're in for a bumpy ride once again
Change it to "Some people" if your pedanticism won't let you follow the flow.
Or better yet point out the better paths they chose instead. Were they wrestling with Java and "Joda Time"? Talking to AWS via a Python library named after a dolphin? Running .NET code on Linux servers under Mono that never actually worked? Jamming apps into a browser via JQuery? Abstracting it up a level and making 1,400 database calls via ActiveRecord to render a ten item to-do list and writing blog posts about the N+1 problem? Rewriting grep in Rust to keep the ruskies out of our precious LLCs?
Asking the wrong questions, using the wrong tools, then writing dumb blog posts about it is what we do. It's what makes us us.
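(For anyone who hasn't actually been bitten by the N+1 problem name-checked above, a self-contained illustration using Python's sqlite3; the schema is made up, but the shape of the mistake is the real thing.)

    # One query for the rows, then one more query *per row*: the N+1 pattern.
    # Schema and data are made up; the point is the query count.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE lists (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE items (id INTEGER PRIMARY KEY, list_id INTEGER, title TEXT);
        INSERT INTO lists VALUES (1, 'todo');
        INSERT INTO items VALUES (1, 1, 'buy milk'), (2, 1, 'write blog post');
    """)

    # N+1: what a naive ORM loop does behind your back.
    item_ids = conn.execute("SELECT id FROM items WHERE list_id = 1").fetchall()
    titles_n_plus_1 = [
        conn.execute("SELECT title FROM items WHERE id = ?", (i,)).fetchone()[0]
        for (i,) in item_ids
    ]

    # The fix: one query with a join (an "eager load", in ORM terms).
    titles_joined = [
        row[0] for row in conn.execute(
            "SELECT i.title FROM items i JOIN lists l ON i.list_id = l.id "
            "WHERE l.id = 1"
        )
    ]

    assert titles_n_plus_1 == titles_joined
    print(titles_joined)   # ['buy milk', 'write blog post']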
There's this interesting issue that we've never had occupational licensing for software developers despite the sheer incompetence that we see all the time.
On one hand there's an approach to computing where it is a branch of mathematics that is universal. There are some creatures that live under the ice on a moon circling a gas giant around another star, and if they have computers they are going to understand the halting problem (even if they formulate it differently), know bubble sort is O(N^2), and know about algorithms that sort in O(N log N).
On the other hand we are divided by communities of practice that don't like one another. For instance there is the "OO sux" brigade which thinks I suck because I like Java. There still are shops where everything is done in a stored procedure (oddly like the fashionable architecture where you build an API server just because... you have to have an API) and other shops where people would think you were brain damaged to go anywhere near stored procs, triggers or any of that. It used to be Linux enthusiasts thought anybody involved in Windows was stupid and you'd meet Windows admins who were click-click-click-click-clicking over and over again to get IIS somewhat working who thought IIS was the only web server good enough for "the enterprise"
Now apart from the instinctual hate for the tools, there really are those chronic conceptual problems for which datetime is the poster child. I think every major language has been through multiple datetime libraries in and out of the standard lib in the last 20 years because dates and times just aren't the simple things that we wish they would be, and the school of hard knocks keeps knocking us to accept a complicated reality.
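A small taste of why, using only Python's standard library (zoneinfo, 3.9+): "add a day" across a DST boundary gives two defensible answers depending on whether you mean absolute time or wall-clock time.

    # Adding "one day" across the US spring-forward (2025-03-09), two ways.
    from datetime import datetime, timedelta
    from zoneinfo import ZoneInfo

    tz = ZoneInfo("America/New_York")
    before = datetime(2025, 3, 8, 12, 0, tzinfo=tz)   # noon, the day before DST starts

    # 24 absolute hours later: convert to UTC, add a day, convert back.
    absolute = (before.astimezone(ZoneInfo("UTC")) + timedelta(days=1)).astimezone(tz)

    # "Same time tomorrow" in wall-clock terms: arithmetic on the aware value.
    wall_clock = before + timedelta(days=1)

    print(absolute.time())    # 13:00 - an hour "late" on the local clock
    print(wall_clock.time())  # 12:00 - same wall time, only 23 elapsed hours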
> There's this interesting issue that we've never had occupational licensing for software developers despite the sheer incompetence that we see all the time.
I'm laughing over the current Delve/SOC2 situation right now. Everyone pulls for 'licenses' as the first card, but we all know that is equally fraught with trauma. https://xkcd.com/927/
Pedanticism (or pedantry) is the excessive, tiresome concern for minor details, literal accuracy, or formal rules, often at the expense of understanding the broader context.
I don't think this had anything to do with minor details at all. You're trying to convey a point while ignoring the half of the population who didn't go down that route.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It isn't. Show me the licensing requirements to be a "software engineer." There are none. A 12 year old can call himself a software engineer and there are probably some who have managed to get remote work on major projects.
That's assuming the axiom that "engineer" must require licensing requirements. That may be true in some jurisdictions, but it's not axiomatically or definitionally true.
Some kinds of building software may be "engineering", some kinds may not be, but anyone seeking to argue that "licensing requirements" should come into play will have to actually argue that rather than treat it as an unstated axiom.
Depends on the country. In some countries, it is a legal axiom (or at least identity).
For the other countries, though, arguing "some countries do it that way" is as persuasive as "some countries drive on the other side of the road." It's true, but so what? Why should we change to do it their way?
> Depends on the country. In some countries, it is a legal axiom (or at least identity).
As I said, "That may be true in some jurisdictions, but it's not axiomatically or definitionally true." The law is emphatically not an axiom, nor is it definitionally right or wrong, or correct or incorrect; it only defines what's legal or illegal.
When the article raised the question of whether "building software is an engineering discipline", it was very obviously not asking a question about whether the term 'engineering' is legally restricted in any particular jurisdiction.
To my mind, the term "engineering discipline" implies something roughly analogous to Electrical Engineering, Civil Engineering, Mechanical Engineering, Chemical Engineering.
There is no such rigorous definition for "software engineer" which normally is just a self-granted title meaning "I write code."
In Europe they are. Call yourself an engineer without a degree and you and your company can be sued and hit with a big fine, because here you must be legally accountable for disasters, and of course there are hard constraints.
Where specifically? I've been working as a "software engineer" for multiple decades, across three countries in Europe and 2-3 countries outside of Europe, and I've never been sued or received a "big fine" for it. I've even given presentations to government teams and the like, and not a single person has reacted to me (or others) calling ourselves "software engineers" this whole time.
In Germany. I have a degree in mechanical engineering and am thus allowed to call myself an engineer, even though I write software professionally. Colleagues who have studied computer science cannot, as it is not considered an engineering degree but a science degree. This is why most people (in German) talk about "software developers" rather than "software engineers", to avoid this problem.
That being said, most people would not actually care.
An iron ring does not technically make you an engineer in Canada. It just says you graduated from an engineering program. A P.Eng, which is a professional engineer's license, is something you acquire after multiple years of experience and testing.
What the article doesn't touch on is the vendor lock-in that is currently underway. Many corps are now moving to an AI-based development process that is reliant on the big AI providers.
Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, the prices will start rising. After all, these loss-making AI companies will eventually need to recoup their investments.
Sure, it will perhaps be possible to swap the underlying AI used to develop the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem - something OPEC has so successfully done for the oil market.
Another issue: once the codebase is agentic and the price of developers has fallen enough that it becomes significantly cheaper to hire humans again, will those humans be able to understand the agentic codebase? Is this a one-way transition?
I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices and the global economy, fundamentally everything is getting better.
We will miss SaaS dearly. I think history is repeating itself, just like with DVDs and streaming - we simply bought the same movie twice.
AI more and more feels the same. Half a year ago Claude Opus was Anthropic's most expensive model - boy, using Claude Opus 4.6 in the 500k version is like paying a dollar per minute now. My once decent budgets get hit not after weeks but after days (!) now.
And I am not even using agents or subagents, which would only multiply the costs - for what?
So what we arrive at, more and more, is the same as always: low, medium, and luxury tiers. A boring service with different quality and payment structures.
Proof: you cannot compensate with prompt engineering anymore. A few months ago you could fix any model discrepancies by being more clever and elaborate with your prompts.
Not anymore. There is a hidden factor now that accounts for exactly that. It seems the reliance on skills and different tiers is simply moving us away from prompt engineering, which is treated more and more as jailbreaking rather than guidance.
Prompt engineering lately became so mundane that I wonder what vendors were really doing with the usage data they analyzed. It seems like vendors tied certain inquiries to certain outcomes modeled by multistep prompting, which was reduced internally to certain trigger sentences - creating the illusion that you prompted your way to the result when in fact you hadn't.
All you did was ask for the same result thousands of users had asked for before, and the LLM took a statistical approach to deliver it.
This is a great point, and I routinely use it as an argument for why seasoned professionals should work hard to keep their skills and why new professionals should build them in the first place. I would never be comfortable leasing my ability to perform detailed knowledge work from one of these companies.
Sometimes the argument lands, very often it doesn't. As you said, a common refrain is, "but prices won't go up, cost to serve is the highest it will ever be." Or, "inference is already massively profitable and will become more so in the future--I read so on a news site."
And that remark, for me, is unfortunately a discussion-ender. I just haven't ever had a productive conversation with somebody about this after they make these remarks. Somebody saying these things has placed their bets already and is about to throw the dice.
No one ever asks how much it costs Facebook or Uber to serve requests because it is irrelevant, they set prices to maximize their profit like any good monopolist. Similarly the future cartel of big providers will charge their captive users whatever they can get away with, not the cost of inference.
The current discourse around "AI", swarms of agents producing mountains of inscrutable spaghetti, is a tell that this is the future the big players are looking for. They want to create a captive market of token tokers who have no hope of untangling the mess they made when tokens were cheap without buying even more at full price.
Code is so low entropy that smaller and more economical models will be up to the task the same as gigantic models from big providers are today.
No worries there - the huge improvements we see today from GPT and Claude are, at their heart, just reinforcement learning (chain-of-thought and thinking tokens are just one example of many). RL is the cheapest kind of training one can perform, as far as I understand. Please correct me if that's not the case.
In the economy the invisible hand manages to produce everything cheaper and better all the time, but in the digital space the open source invisible hand makes everything completely free.
> the open source invisible hand makes everything completely free.
In this case the limitation is the compute. Very few people have the compute required for AI/LLMs locally or for free (comparable to the performance of Claude). So yes, there are plenty of Open Source models that can be used locally but you need to invest in hardware to make that happen and especially if you want the quality that is available from the commercial offerings.
Not to speak of the training of those models. It's all there to make it possible to do this locally; however, where's the hardware? AWS? Google? There are hidden costs to the open source model in this case.
I agree with most of your points, but computation can be done where energy is cheap and delivered to where it is expensive. Energy for cooking cannot be transferred that way.
See for example the Amazon and Google datacenters in the Gulf region. We've also got a whole continent, Australia, on which to put as many solar panels as we desire. Australia goes dark for half of every day? Put solar panels on the opposite side of the planet.
Energy is a concern for cooking, transportation etc. Energy for computation is not.
> the prices will start rising. After all, these loss making AI companies will eventually need to recoup on their investments.
I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.
Every genre-defining startup seems to go through this same cycle where the naysayers tell us that it's all going to collapse once the investment money runs out. This was definitely true for technologies without use cases (remember the blockchain-all-the-things era?) but it is not true for businesses that have actual users.
Some early players may go bust by chasing market share without a real business plan, like the infamous Webvan grocery delivery service. But even Webvan was directionally correct, with delivery services now a booming business sector.
Uber is another good example. We heard for years that ridesharing was a fad that would go away as soon as the VC money ran out. Instead, Uber became a profitable company and almost nobody noticed because the naysayers moved on to something else.
AI is different because the hardware is always getting faster and cheaper to operate. Even if LLM progress stalled at Opus 4.6 levels today, it would still be very useful and it would get cheaper with each passing year as hardware improved.
> I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices
Comparing compute costs to oil prices is apples to oranges. Oil is a finite resource that comes out of the ground and the technology to extract it doesn't improve much over decades. AI compute gets better and cheaper every year because the technology advances rapidly. GPU servers that were as expensive as cars a few years ago are now deprecated and available for cheap because the new technology is vastly faster. The next generation will be faster still.
If you're mentally comparing this to things like oil, you're not on the right track
> Oil is a finite resource that comes out of the ground
Yes but the chips, hardware, copper cables, silicon and all the rest of the components that make up a server are finite. Unless these magically appear from outer space, we'll face the same resource constraints as everything else that is pulled out of the ground.
These components are also far more fragile to source - see COVID and the collapse of global supply chains. The factories that create these components are also expensive to build and fragile to maintain. See the Dutch company that seems to be the sole supplier of certain manufacturing capabilities.[1]
> I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.
My bet would be that it would fuel the profits of AI companies and not make the price of AI come down. Oversupply makes prices come down, but if supply is kept artificially low, prices stay high.
That's the comparison to OPEC and oil. There is plenty of oil to go around, yet the supply is capped and thereby prices are kept high. There is no guarantee that savings in hardware or supply will be passed on by AI corps.
Indeed, there is no guarantee that there will be serious competition in the market - OPEC is a monopoly of sorts, so why not an AI monopoly? At the moment, all major players in AI are based in the same geopolitical sphere, making a monopoly more likely, IMHO.
In the end, it's all speculation about what will happen. It just depends on which fairy tale one believes in.
> Yes but the chips, hardware, copper cables, silicon and all the rest of the components that make up a server are finite. Unless these magically appear from outer space, we'll face the same resource constraints as everything else that is pulled out of the ground.
Raw material cost is not a driver of datacenter GPU costs.
> Oversupply makes prices come down, but if supply is kept artificially low, prices stay high.
Where are you getting "supply kept artificially low" when we're in the middle of an explosion of datacenter buildouts and AI companies?
We're in a race to the bottom on pricing. I haven't seen a realistic argument for why you think prices are going to go up. You're starting with a conclusion and trying to find reasons it might be true.
While I fundamentally agree with the premise of compute getting cheaper by the year, I think a missed consideration here is that these models also require exponentially more compute to train with each iteration, in a way that arguably has outpaced the advances in compute.
Whether a generalized and broadly usable model can be trained within some N multiple of our current compute availability, allowing the price to come down with iterative compute advances, is yet to be seen. With the current race to the top in SOTA models and increasingly smaller iterative improvements over previous generations, I have a feeling the scaling need for compute will outpace the improvements in our hardware architecture - and that's if Moore's law even holds as we start to reach the bounds of physics rather than engineering.
However, as it stands today, essentially none of these providers is profitable, so it's really a question of whether that disconnect resolves within their current runway, or whether they'll be forced to raise prices to stay alive and/or raise more capital. It's pure conjecture either way.
This is a good point. Some of the AI companies are trying to hook CS students so they'll only know "dev" as a function of their products. The first one's free, as they say (the drug dealers, that is).
I agree - the great danger is that CS students aren't even taught the fundamentals of "computer science" any longer. It would be the equivalent of physics students not learning Newton's laws or E=mc^2.
Probably there is an issue with how much there is in CS - each programming language basically represents a different fundamental approach to coding machines. Each paradigm has its application, even COBOL ;)
Perhaps CS has not - yet - found its fundamental rules and approaches, unlike other sciences that have hard rules and well-trodden approaches: the speed of light is fixed, but not the speed of a bit.
Useful context here is that the author wrote Pi, which is the coding agent framework used by OpenClaw and is one of the most popular open source coding agent frameworks generally.
> “Heard joke once: Man goes to doctor. Says he's depressed. Says life seems harsh and cruel. Says he feels all alone in a threatening world where what lies ahead is vague and uncertain. Doctor says, "Treatment is simple. Great clown Pagliacci is in town tonight. Go and see him. That should pick you up." Man bursts into tears. Says, "But doctor...I am Pagliacci.”
That's a great shout, because I'm sure a lot of people would otherwise just dismiss this take as yet another anti-AI skeptic's. But he probably has more experience working with LLMs and agents than most of us on this site, so his opinion holds more weight than most.
If you were going to dismiss an argument because of who it comes from rather than its content, that is a flaw in your thinking. The argument is correct, or it isn't, no matter who said it.
Your ability to evaluate whether the argument is correct is limited. In theory, the author and the correctness of the argument are unrelated; in practice, the degree of experience the author has with the topic they're arguing about does correlate with the quality of the argument, and should influence the attention you give it, especially for counterintuitive claims.
That doesn't work for me. Knowing who is making the argument is important for understanding how credible the parts of their argument that derive from their personal experience are.
If someone anonymous says "Using coding agents carelessly produces junk results over time" that's a whole lot less interesting to me than someone with a proven track record of designing and implementing coding agents that other people extensively use.
Someone making an argument needs relevant experience/context to substantiate their argument. Just because the end opinion is "correct", doesn't mean they arrived there in a reasonable way.
> The argument is correct, or it isn't, no matter who said it.
Yes, but we all have insufficient intelligence and knowledge to fully evaluate all arguments in a reasonable timeframe.
Argument from authority is, indeed, a logical fallacy.
But that is not what is happening here. There is a huge difference between someone saying "Trust me, I'm an expert" and a third party saying "Oh, by the way, that guy has a metric shitton of relevant experience."
The former is used in lieu of a valid argument. The latter is used as a sanity check on all the things that you don't have time to verify yourself.
I think it's kind of like technical indicators. Obviously they mean nothing, but because other people believe in them you have to take them into account. So when someone with authority says something assertively, many people's critical thinking faculties go out the window.
> Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes
One thing about the old days of DOS and original MacOS: you couldn't get away with nearly as much of this. The whole computer would crash hard and need to be rebooted, all unsaved work lost. You also could not easily push out an update or patch --- stuff had to work out of the box.
Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
Not that I want to go back to DOS, but WordPerfect 5.1 was pretty damn rock solid as I recall.
> Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
It's not the glut of compute resources - we've already accepted bloat in modern software. The new crutch is treating every device as "always online", paired with the mantra of "ship now! push fixes later." It's easier to set up a big, complex CI pipeline you push fixes into so it OTA-patches the user's system. This way you can justify pushing broken, unfinished products to beat your competitors doing the same.
I think you're just recalling the few software products that were actually good. There was plenty of crap software that would crash and lose your work in the old days.
I always found it funny how Word on Windows 3.1/95 would have a daydream moment and just completely lock up, usually when you were about to save the document.
I still save stuff every few minutes out of habits formed in the 90s.
Old DOS stuff could be either a total nightmare or some of the most brilliant code you had ever seen. That's just the way having no guard rails goes.
Another factor at work is the use of rolling updates to fix things that should better have been caught with rigorous testing before release. Before the days of 'always on' internet it was far too costly to fix something shipped on physical media. Not that everything was always perfect, but on the whole it was pretty well stress-tested before shipping.
The sad truth is that now, because of the ease of pushing your fix to everything while requiring little more from the user than that their machine be more or less permanently connected to a network, even an OS is dealt with as casually as an application or game.
> it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services
As somebody who has been running systems like these for two decades: the software has not changed. What's changed is that before, nobody trusted anything, so a human had to do everything manually. That slowed down the process, which made flaws happen less frequently. But it was all still crap. Just very slow-moving crap, with more manual testing and visual validation. Still plenty of failures, but it doesn't feel like it fails a lot if the failures are spaced far apart on the status page. The "uptime" is time-driven, not bugs-per-lines-of-code driven.
DevOps' purpose is to teach you that you can move quickly without breaking stuff, but it requires a particular way of working, that emphasizes building trust. You can't just ship random stuff 100x faster and assume it will work. This is what the "move fast and break stuff" people learned the hard way years ago.
And breaking stuff isn't inherently bad - if you learn from your mistakes and make the system better afterward. The problem is, that's extra work that people don't want to do. If you don't have an adult in the room forcing people to improve, you get the disasters of the past month. An example: Google SREs give teams error budgets; the SREs are acting as the adult in the room, forcing the team to stop shipping and fix their quality issues.
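For anyone unfamiliar with the term, the arithmetic behind an error budget is simple. Here is a rough sketch; the 99.9% target and 30-day window are just example numbers, not any particular company's policy:

    # Error budget: the unreliability you are allowed before you must stop shipping.
    SLO = 0.999                      # example availability target: 99.9%
    window_minutes = 30 * 24 * 60    # a 30-day window

    budget_minutes = (1 - SLO) * window_minutes
    print(budget_minutes)            # 43.2 minutes of allowed downtime

    # If incidents have already burned 50 minutes this window, the budget is
    # exhausted and the "adult in the room" freezes feature releases.
    downtime_so_far = 50
    print("freeze releases" if downtime_so_far > budget_minutes else "keep shipping")

The number itself matters less than the rule attached to it: blow the budget and you lose the right to keep shipping until quality is fixed.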
One way to deal with this in DevOps/Lean/TPS is the Andon cord. Famously a cord introduced at Toyota that allows any assembly worker to stop the production line until a problem is identified and a fix worked on (not just the immediate defect, but the root cause). This is insane to most business people because nobody wants to stop everything to fix one problem, they want to quickly patch it up and keep working, or ignore it and fix it later. But as Ford/GM found out, that just leads to a mountain of backlogged problems that makes everything worse. Toyota discovered that if you take the long, painful time to fix it immediately, that has the opposite effect, creating more and more efficiency, better quality, fewer defects, and faster shipping. The difference is cultural.
This is real DevOps. If you want your AI work to be both high quality and fast, I recommend following its practices. Keep in mind, none of this is a technical issue; it's a business process issue.
It's a systems engineering job. You need to provide context, acceptable failure modes, and test at each level for validation. Identify false coupling, poor interfaces, things that don't match business context during agent planning phase. Then communicate / translate to others so their decisions improve instead of destroying the system by optimizing only for their local situation.
It also seems like massive consolidation has caused issues. Everyone is on GitHub. Everyone is on AWS. Everyone is behind Cloudflare. Whenever an issue happens there, it affects everyone, and everyone sees it.
In the past with smaller services those services did break all the time, but the outage was limited to a much smaller area. Also systems were typically less integrated with each other so one service being down rarely took out everything.
The power company is massively consolidated, as is the water supply, telephone service. These are monolithic, monopolistic entities. But they are also very reliable (failures are usually isolated by region, or a result of natural disaster).
What leads to more failure is when you don't engineer those consolidated entities to be reliable. Tech companies have none of the legal requirements or incentives to be reliable, the way physical infrastructure companies do. I agree that the tighter integration is an issue, but the root cause is tech companies have no incentive other than profits. If they're making profits, everything's fine.
I mean, recommend professional software engineering licenses here on HN and it goes over like a turd in a punch bowl. Everyone knew where the search for more profit was going; no one wanted to get off the ride though.
> One way to deal with this in DevOps/Lean/TPS is the Andon cord.
Many years ago, I started working for chip companies. It was like a breath of fresh air. Successful chip companies know the costs (both direct money and opportunity) of a failed tapeout, so the metaphorical equivalent of this cord was there.
Find a bug the morning of tapeout? It will be carefully considered and triaged, and maybe delay tapeout. And, as you point out, the cultural aspect is incredibly important, which means that the messenger won't be shot.
I understand your pain; we're just at peak hype, and I think people will learn to backtrack and use the tool in a more sensible way. It always happens. I remember when MongoDB and other NoSQL databases came out, people went as far as to say that "SQL is dead" and refused to use a normal SQL database for anything, not even for the most obviously relational applications. People would store everything as key-value pairs with no schema and do all the joins in the application layer. Fast forward 10 years and we're back to using SQL for most of our applications. NoSQL hasn't disappeared; it has just been reduced to the niche where it's useful.
Just yesterday I was discussing many of the ideas presented here with a coworker. I had just walked out of a workshop led by $BIGTECHCOMPANY where someone presented the following toy example:
A service goes down. He tells the agent to debug it and fix it. The agent pulls some logs from $CLOUDPROVIDER, inspects the logs, produces a fix and then automatically updates a shared document with the postmortem.
This got me thinking that it's very hard to internalize both the issue and the solution (updating your model of the system involved) because there is not enough friction to make you spend time dealing with the problem (coming up with hypotheses, modifying the code, writing the doc). I thought about my very human limitation of having to write things down on paper so that I can better recall them.
Then I recalled something I read years ago: "Cars have brakes so they can go fast."
Even assuming it is now feasible to produce thousands of lines of quality code, there is a limitation on how much a human can absorb and internalize about the changes introduced to a system. This is why we will need brakes -- so we can go faster.
The gap in your example is that a human had to realize the system is broken so that he could nudge the agent into fixing it. He can fix that gap by updating the agent to recognize when the system breaks. This now becomes the level at which he debugs… did the agent recognize the failure and self-heal, or not?
And at that point, if the autonomous system breaks, realized it’s broken, and fixes itself before you even notice… then do you need to care whether you learn from it? I suppose this could obfuscate some shared root cause that gets worse and worse, but if your system is robust and fault-tolerant _and_ self-heals, then what is there to complain about? Probably plenty, but now you can complain about one higher level of abstraction.
This aligns with my observation from a product design standpoint as well.
Product design has a slightly different problem than engineering: the speed of development is so high that we cannot dogfood and play with new product decisions and features. By the time I've realized we made a stupid design choice and it doesn't really work in the real world, we've already built 4 features on top of it. Everyone makes bad product decisions, but it used to be easy and natural to back out of them.
It's all about how we utilize these things; if we focus on sheer speed it just doesn't work. You need to own architecture and product decisions. You need to use and test your products with humans (and automate that as regression testing). You need to be able to hold all of the product or architecture in your mind and help agents make the right decisions with all the best practice you've learned.
Agree. The issue was never, how can we get our engineers to squirt out more lines of code in a day? It has always been, how can we effectively iterate using customer feedback to deliver the highest quality product. That type of thing needs time to bake.
It occurred to me on my walk today that a program is not the only output of programming.
The other, arguably far more important output, is the programmer.
The mental model that you, the programmer, build by writing the program.
And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.
(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)
The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
Nature will handle this in time. Just expect to see a "Bear Stearns moment" in the software world if this spirals completely out of control (and companies don't take a hint from recent outages).
> You realize you can no longer trust the codebase.
This cuts to the problem and is excellent framing. A rogue employee can achieve the same, but probably less quickly, and we've designed systems to help catch them early.
It's not really malware, but it's a mess. It installed so much shit and it interfered with your git hooks and stuff. It was kind of messy. I kind of gave up on it and just went back to using the built-in Claude Code TodoWrite tasks.
It managed to throw itself into a global file that Claude used, which caused beads to appear in random projects on my machine. Because of how it was set up there, the agent attempted to re-install beads after I had already removed it, because the git hook errored.
I only have so long on earth. (I have no idea how long) I need things to be faster for me. Sometimes that means I need to take extra time now so they don't come back to me later.
I'm capturing videos of all the bugs I am seeing as of late. The folder is filling fast. I'll write a compilation post but I'm thinking a techno remix video could be fitting too.
If there are any common apps which are unhinged please do share your experiences. LinkedIn was never great quality but it's off the charts. Also catching some on Spotify.
But in many agent-skeptical pieces, I keep seeing this specific sentiment that “agent-written code is not production-ready,” and that just feels… wrong!
It’s just completely insane to me to look at the output of Claude code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.”
Yes, there are still issues, and yes, keeping mental context of your codebase’s architecture is critical, but I’m sorry, it just feels borderline archaic to pretend we’re gonna live in a world where these agents have to have a human poring over every single line they commit.
Were you not reviewing every line when a human wrote it before it went to prod? I think the output of these tools is about as good as a human would write - which means it needs thorough review if I’m going to be on the hook to resolve its issues at 2AM.
This is a weird analogy. You can ask the AI to fix the issue at any time of day (assuming the person asking has enough technical knowledge to at least evaluate the fix).
You won't always be able to get ahold of someone at 2am. You won't be able to get ahold of me at 2am, for example. It'll throw some notification on my screen and I won't see it until I wake up.
Maybe in the future humans won't need to pore over every line. However, I quickly learn which interns I can trust and whose code I need to pore over - I don't trust AI because it has been wrong too often. I'm not saying AI is useless - I do most of my coding with an agent - but I don't trust it until I verify every line.
I did this for a while… and until Opus 4.5, I couldn't fully trust the model. But at this point, while it does make the occasional mistake, I don't need to scrutinize every line. Unit and integration tests catch the bugs we can imagine, and the bugs we can't imagine take us by surprise, which is how it has always been.
Even with 4.6 I find there are a lot of mistakes it makes that I won't allow. Though it is also really good at finding complex thread issues that would take me forever...
We live in a world where every line of code written by a human should be reviewed by another human. We can't even do that! Nothing should go straight to prod ever, ever ever, ever.
Prod in this context doesn't refer to one person's website for their personal project. It refers to an environment where downtime has consequences, generally one that multiple people work on and that many people rely on.
There's a middle ground here. Code for your website? Sure, whatever, I assume you're not Dell and the cost of your website being unavailable to some subset of users for a minute doesn't have 5 zeroes on the end of it. If you're writing code being used by something that matters though you better be getting that stuff reviewed because LLMs can and will make absolutely ridiculous mistakes.
It's tough to not interpret this as "I don't care about my website". Do you not check the copy? Or what if AI one-shots something that will harm your reputation in the metadata?
That sounds better. I assume the stakes are low enough that you are happy reviewing after the fact, but setting up a workflow to check the diffs before pushing to production shouldn't be too difficult
That a personal website? Prod means different things in different contexts. Even then, I'd be a bit worried about prompt injection unless you control your context closely (no web access etc).
> Nothing should go straight to prod ever, ever ever, ever
Air traffic control software - sure. For the 99% of other software that is not mission-critical (like Facebook), just punch it to production - "move fast and break shit" has been cool since way before "AI".
There's a lot of software in between Air Traffic Controller and Facebook. And honestly would Meta be okay with Instagram or Facebook going down even for just a few minutes? I'd think at this point that'd be considered a fairly severe incident.
Even if we ignore criticality, things just get really messy and confusing if you push a bunch of broken stuff and only try to start understanding what's actually going on after it's already causing issues.
> And honestly would Meta be okay with Instagram or Facebook going down even for just a few minutes?
sure, they coined the term “move fast and break things”
and not every "bug" brings the system down; there are bugs after bugs after bugs in both Facebook and Insta being pushed to production daily, and it is fine… it is (almost) always fine. If you are at a place where "deploying to production" is a "thing", you had better be on some super mission-critical, lives-at-stake project, or you should find another project to work on.
These are the bugs after bugs after bugs after bugs after bugs.
Simply put, they are going through dev, QA, and UAT first before they become the bugs that we see. When you're running an organization on software of any size, writing bugs that take the software down is extremely easy, and data corruption is even easier.
> We live in a world where every line of code written by a human should be reviewed by another human. We can't even do that! Nothing should go straight to prod ever, ever ever, ever
Things should 100% go to prod whenever they need to go to prod. While this makes sense in theory, there is an insane amount of ceremony in a large number of places I have seen personally, where it takes an act of Congress to deploy to production - all the while it is just ceremony: people hunting other people with links to PRs sent to various Slack channels ("hey, anyone available to take a look at this?") and then someone going "I know nothing about that service/system but I'll look and approve." I would wager a high wager that this "we must review every line of code" - where actually implemented - is largely ceremony. Today I deployed three services to production without anyone looking at what I did. Deploying to production should absolutely be a non-event in places that are run well and where the right people are doing their jobs.
I'm sure some companies do this poorly but there's lots of places where code review happens on every PR and there's processes and systems in place to make sure it's an easy process (or at least, as easy as it should be). Many large tech companies have things pushed to prod automatically many, many times per day and still have code review for all changes going out.
Even with code review, a well configured CI/CD system is going to include a wealth of automated unit and integration tests, and then also a complex deploy system involving canaries and ramp-up and blue/green deployment and flags and monitoring and alerts that's backed by a pager and on-call rotation with runbooks. Code review simply will never be perfect and catch 100% of issues, so systems are designed with that in mind.
So then the question is: what's actually reasonable given today's code-generating tools? 0% review seems foolish, but 100% seems similarly unrealistic. Automated code review systems like CodeRabbit are, dare I even say, reasonable as a first line of defense these days. It all comes down to developer velocity balanced against system stability. Error budgets like the ones Google's SRE org is able to enforce against (some) services they support are one way of accomplishing that, but they are hard to put into practice.
So then, as you say, it takes an act of Congress to get anything deployed.
So in the abstract, imo it all comes down to the quality of the automated CI/CD system, and developers being on call for their service so they feel the pain of service unreliability and don't just throw code over the wall. But it's all talk at this level of abstraction. The reality of a given company's office politics and the amount of leverage the platform teams and whatever passes for SRE there have vs the rest of the company make all the difference.
>sure, they coined the term “move fast and break things”
Yeah I'm aware, but as any company gets larger and has more and more traffic (and money) dependent on their existing systems working, keeping those systems working becomes more and more important.
There's lots of things worth protecting to ensure that people keep using your product that fall short of "lives are at stake". Of course it's a spectrum but lots of large enterprises that aren't saving lives but still care a lot about making sure their software keeps running.
How do you know which lines you need to review and which you don't?
Does it feel archaic because LLMs are clearly producing output of a quality that doesn't require any review, or because having to review all the code LLMs produce clips the productivity gains we can squeeze out of them?
Honestly a lot of useful software is ‘unimportant’ in the sense that the consequences of introducing a bug or bad code smell aren’t that significant, and can be addressed if needed. It might well be for many projects the time saved not reviewing is worth dealing with bugs that escape testing. Also, it’s entirely possible for software to be both well engineered and useless.
It's a conversation I've had many times in my career and I'm sure I'll have many more. We've got code that seems plausible on a surface level, at a glance it solves the problem it's meant to solve - why can't we just send it to prod and address whatever problems we find with it later?
The answer is that it's very easy for bad code to cause more problems than it solves. This:
> Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn't allow your army of agents to make the change in a functioning way.
is not a hypothetical, but a common failure mode which routinely happens today to teams who don't think carefully enough about what they're merging. I know a team of a half-dozen people that has been working for years to dig themselves out of that hole; because of bad code they shipped in the past, changes that should have taken a couple of hours without agentic support take days or weeks even with agentic support.
> It’s just completely insane to me to look at the output of Claude code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.”
It's insane to me that someone can arrive at any other conclusion. LLMs very obviously put out bad code, and you have no idea where it is in their output. So you have to review it all.
You say it's borderline archaic. I say trusting agents enough to not look at every single line is an abdication of ethics, safety, and engineering. You're just absolving yourself of any problems. I hope you aren't working in medical devices or else we're going to get another Therac-25. Please have some sort of ethics. You are going to kill people with your attitude.
Almost nobody works on medical devices... And some of you lucky folks might be working with mega minds every day, but the rest of us are but shadows and dust. I trust 5.4 or 4.6 more than most developers. Through applying specific pressure using tests and prompts, I force it to build better code for my silly hobby game than I ever saw in real production software. Before those models I was still on the other side of the line, but the writing is on the wall.
Not having a code review process is archaic engineering practice at this point(at any point in history, really), be it for human written or AI written code.
If you keep the scope small enough it can be production ready ootb, and with some stuff (eg. a throwaway React component) who really cares. But I think it's insane to look at the output of Claude Code or Codex with frontier models and say "yep, that looks good to me".
Fwiw OP isn't an agent skeptic, he wrote one of the most popular agent frameworks.
This assumes that only (AI/agentic) stupidity comes into play, with no malice in sight. But if things go wrong because you didn't notice the stupidity, malice will pass through too. And there is a big profit opportunity, and a broad vulnerable market, for malice. It's not just correctness or uptime that comes into play, but bigger risks of vulnerabilities or other maliciously injected content.
> And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.
This is a great point.
I have been avoiding LLMs for a while now, but realized I might want to try working on a small PDF-book-to-Markdown conversion project[0]. I like Claude Code because it's command line. I'm realizing you really need to architect with very good, precise language to avoid mistakes.
I didn't try to have one prompt do everything at once. I prompted Claude Code to do the conversion process section by section of the document. That seemed to reduce the mistakes the agent would make.
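A rough sketch of what that section-by-section loop can look like in Python; convert_section is a hypothetical stand-in for however you invoke the agent (CLI call, API, copy-paste), and the input filename and heading pattern are just assumptions for illustration:

    import re
    from pathlib import Path

    def convert_section(section_text: str) -> str:
        # Hypothetical stand-in: in practice this is where you hand the agent
        # one section plus a focused prompt ("convert this section to Markdown,
        # change nothing else"). Returning the text unchanged keeps the sketch runnable.
        return section_text

    raw = Path("book_extracted.txt").read_text(encoding="utf-8")

    # Split on chapter/section headings so each prompt stays small and focused.
    sections = re.split(r"\n(?=Chapter \d+|Section \d+)", raw)

    out = []
    for i, section in enumerate(sections, start=1):
        print(f"converting section {i}/{len(sections)}")
        out.append(convert_section(section))

    Path("book.md").write_text("\n\n".join(out), encoding="utf-8")

Keeping each chunk small also makes the diffs reviewable, which was the whole point for me.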
> There were precursors like Aider and early Cursor, but they were more assistant than agent.
I use Aider on my private computers and Copilot at work. Both feel equally powerful when configured with a decent frontier model. Are they really generations apart? What am I missing?
I think before even being able to entertain the thought of slowing the fuck down, we need to seriously consider divorcing ourselves from productivity. Or at least asking for a break, so you can go for a walk in the park, meet some friends, and reflect on how you are approaching development.
> The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation.
That's partially true. I've also had instances where I could have very well done a simple change by myself, but by running it through an agent first I became aware of complexities I wasn't considering and I gained documentation updates for free.
Oh, and the best part: if in three months I'm asked to compile a list of things I did, I can just look at my session history, cross-reference it with the development history in my repositories, and paint a very good picture of what I've achieved. I can even rebuild the decision process behind designing the solution.
The problem is not the AI users who frequent this board and are shipping code they don't understand. It is the moronic MBA trained executives who can only think about speed, more speed, more revenue for less cost. Quality is an optional expense. A race where the finish line is the current fiscal quarter, to hell with everything after that. The "we can fix it later" Band-Aid over a tumor.
Sensible engineers who look at AI as another (potentially powerful) tool in the toolbox "aren't forward-looking enough". I watched this happen in real time at my previous company, where every discussion about quality was interpreted as slowing down progress, and the only thing looked on favorably was the idea of replacing developers with machines - because they are "cheaper and faster".
The logical minds here on HN are less prone to believing in magic and AI fairies, but they are often not the ones setting the rules. And the number of companies being run by people with critical thinking skills is getting smaller by the day.
It's a matter of affordances. The path of least resistance with agents is to let it commit whatever it wants. That's a natural outcome of the design and implementation of agents.
Yes, humans are accountable for the ultimate output. But so are the people who design and build these automation tools. As the saying goes, the purpose of a system is what it does.
Great take, spot on. Very similar to Armin's post the other day about things taking time. The need for speed and its ill effects are being rediscovered (again).
This is what I call content based on 'garbage', because garbage is a random collection of people's stuff. You can try to make sense of and comment on a society through its garbage dump, but it's pretty superficial; it doesn't tell you a lot about any real person's motivations. So it's not a great basis for commenting on real people. OP's comments are about the collection of things they happen to come across through news and social media. Sure, it looks like a lot is happening, but look at any one person's or business's approach and it will make a lot more sense. Yes, I realize people are producing content that appeals to the 'garbage' mindset, but it's obviously theater. A system that writes 10,000 lines of code for you a week is headline theater.
I think this post should be directed to every Typescript developer.
I think a lot of this is just Typescript developers. I bet if you removed them from the equation, most of the problems he's writing about would go away. Typescript developers didn't even understand what React was doing without an agent; now they are one-shot prompting features, web apps, CLIs, and desktop apps and spitting them out to the world.
The prime example of this is literally Anthropic. They are pumping out features, apps, and CLIs, and EVERY single one of them releases broken.
I am "playing" with both pi and Claude (in Docker containers) with local llama.cpp, and as an exercise I asked both the same question; the results are in this gist:
What I have learned from the exercise above is that we paid more attention and spent more resources on "metadata" than on real data. Metadata is the rabbit hole that leads us to more metadata and makes us forget what we really want.
I don't understand why we seem to always try to make things do more than what they were built for in the first place. Rather than waiting for modifications, we try to make the square fit the circle and then become disgusted when it doesn't work. I'm not in the 'slow down to be cautious' camp. I'm more in the 'slow down and find ways to work with what we actually have.' When you use the tools the way they were meant to be used, life does become easier, or at least mine has anyway.
It's always been this way - the people that rise to the top are the people who never had to deeply understand something, so they can't even comprehend what that would look like or why it should be important. They're trying to automate the "understanding" part, with predictably disastrous consequences that those of us who aren't the "rise to the top" type could see coming. Agentic AI is just another symptom.
I really don't get the author's conclusion here. I agree with his premises: organizations using LLMs to churn out software are turning out terrible quality software. But the conclusion from that shouldn't be "slow down", it should be "this tool isn't currently fit for use, don't use it". It feels like the author starts from the premise of "I want to use AI" and is trying to figure out how to make that work, rather than "I want to make good software" and trying to figure out how to do that.
It's not even the complexity. You have to realize: many managers and business types think it's just fine to have code no one understands, because AI will handle it.
I don't agree, but the bigger issue to me is that many (if not most) companies don't even know what they want or think about what the purpose is. So whereas in the past, devs coding something provided some throttle or sanity check, now we just throw shit over the wall even faster.
I'm seeing some LinkedIn lunatics brag about "my idea to production in an hour" and all I can think is: that is probably a terrible feature. No one I've worked with is that good or visionary where that speed even matters.
Every problem is self-correcting in that some new normal will emerge. Either through acceptance or because something is changed.
It's very hard to say right now what happens on the other side of this change.
All these new growing pains are happening in many companies simultaneously, and they are happening at elevated speed. While that change is taking place it can be quite disorienting, and if you want to take a forward-looking view it can be quite unclear how you should behave.
Unfortunately, I think the lesson from recent history seems to be that outside of highly-regulated industries, customers and businesses will accept terrible quality as long as it's cheap.
Yes, every slack is optimized out of systems. If something has an ounce more quality than would suffice to obtain the same profit, it must be cut out. It's an inefficiency. A quality overhang. If people buy it even if it's crap, then the conclusion is that it has to be crap, else money is left on the table. It's a large scale coordination issue. This gives us a world where everything balances exactly near the border where it just barely works, for just barely enough time.
Nah, there is a quality floor that consumers are willing to accept. Once you get below that, where it's actually affecting their lives in a meaningful way, it will self-correct as companies will exploit the new market created for quality products.
I keep returning to this thought: Assuming our abstraction architecture is missing something fundamental, what is it?
My gut says something simple is missing that makes all of the difference.
One thought I had was that our problem lives between all the things taking something in and spitting something out. Perhaps 90% of the work of writing a "function" should be to formally register it as taking in data type foo 1.54.32 and bar 4.5.2 and returning baz 42.0. The register will then tell you all the things you can make from baz 42.0 and the other data you have. A comment(?) above the function has a checksum that prevents anyone from changing it.
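A minimal sketch of what such a register could look like, in Python, using the foo/bar/baz names above purely as placeholders (the checksum part is left out):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class DataType:
        name: str
        version: str

    FOO = DataType("foo", "1.54.32")
    BAR = DataType("bar", "4.5.2")
    BAZ = DataType("baz", "42.0")

    REGISTRY = []  # each entry: (function, required input types, output type)

    def register(inputs, output):
        """Decorator that records a function's declared inputs and output."""
        def wrap(fn):
            REGISTRY.append((fn, tuple(inputs), output))
            return fn
        return wrap

    @register(inputs=[FOO, BAR], output=BAZ)
    def make_baz(foo, bar):
        return {"foo": foo, "bar": bar}

    def derivable(have):
        """Which registered data types can be produced from what you already have?"""
        known = set(have)
        changed = True
        while changed:
            changed = False
            for _, inputs, output in REGISTRY:
                if output not in known and all(i in known for i in inputs):
                    known.add(output)
                    changed = True
        return known - set(have)

    print(derivable({FOO, BAR}))  # {DataType(name='baz', version='42.0')}

The interesting part is the query at the end: given the data you hold, the register can enumerate everything reachable from it, which is the "tell you all the things you can make" idea.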
But perhaps the solution is something entirely different. Maybe we just need a good set of opcodes and have abstractions represent small groups of instructions that can be combined into larger groups until you have decent higher-level languages - with the only difference being that one can read what the abstraction actually does. The compiler can figure lots of things out, but it won't do architecture.
There's more to a function than just types. It's not sufficient to know that the function outputs a baz 42.0. You have to understand which one. The oldest? The latest? The one that matches the foo and bar input parameters?
I think that's the part where it remains difficult. Someone has to convey clearly what the semantics and side effects of the function are. Consumers have to read and understand it. Failing that, you get breakage.
Nice to read a fellow countryman on HN :) "Dere!"
I have disabled my coding agent by default. I first try to think, plan, and code something myself, and only when I get stuck or the code gets repetitive do I tell it to do the stuff.
But I get what you are saying, and I agree ... I am clearly pro-human in this debate, and the low-quality, bloated trash everywhere is annoying. I have come to the conclusion that if you find docs on something and they're plain HTML, they will probably be of high quality. If you find docs with a flashy, dynamic, effectful and unnecessary 100 MB JS booboo, then you know what you are about to read ...
I expected this to be yet another anti-AI rant, but the guy is actually right. You should guide the agents, and this is a full-time job where you have to think hard.
> While all of this is anecdotal, it sure feels like software has become a brittle mess
That may be the case wherever AI leaks in, but not every software developer uses or depends on AI. So not all software has become more brittle.
Personally, I try to avoid any contact with software developers using AI. This may not be possible, but I don't want to waste my own time "interacting" with people who aren't really the ones writing the code anymore.
If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent - you are literally helping remove as many jobs as possible, from your colleagues and from yourselves, not in the long term but in the short term.
Integration is the key to the agents. Individual usages don't help AI much because it is confined within the domain of that individual.
I think there is a line somewhere that people need to draw, when a technology such as AI invades ALL areas, threatening to reduce a percentage of jobs so quickly, without the potential to create new TYPES of jobs that can feed many. It is different from computers, and it is different from trains.
> If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent - you are literally helping remove as many jobs as possible, from your colleagues and from yourselves, not in the long term but in the short term.
I'm one of those people and I'm not going to slow down. I want to move on from bullshit jobs.
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
So are you aiming for death and poverty? Once those bullshit jobs go, we're going to find a lot of people incapable of producing anything of value while still costing quite a bit in upkeep. These people will have to be gotten rid of somehow.
> and think we are going to run out of things to do, or run out of problems to create and solve.
There will be plenty of problems to solve. Like who will wipe the ass of the very people that hate you and want to subjugate you.
Name a single time doomers were right about anything. Doomers consistently overstate their expected outcome in every single domain and consistently fail to predict how society evolves and adapts.
Again:
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
Also, there have been plenty of awful things caused by technological progress. Tons of death and poverty were created by the transition to factories and mechanization 150 years ago.
Did we come out the other end with higher living standards? Yes, but that doesn't make the decades of brutal transition any less awful for those affected.
That's generous. Climate scientists were right, climate doomers were definitely wrong.
Society is mostly unchanged by climate change. That's not to say climate has no effect, but certainly no doomer scenario has played out. New York and Florida are most certainly not underwater as predicted by the famous "An Inconvenient Truth". People still live in deserts just as they always have. Human lifespan is still increasing. We have less hunger worldwide than ever before, etc.
Climate change doomers conveniently leave out the part where climate has ALWAYS affected society and is one of the main inputs to our existence, which is exactly why we are extremely adaptable to it.
Before "climate change" ever entered the general consciousness, climate wiped out civilizations MORE FREQUENTLY than it does now. All signs point to the doomers being wrong, and yet they all hold onto it stubbornly.
Doomers were never impressive because they got anything right; they are impressive because they have the unique skill of moving the goalposts when they are wrong. Any time you think the goalposts can't be moved further out, they prove it's possible.
The effects of climate change are just starting to happen. Ecosystems are dying. Very few "climate doomers" thought the world would be like The Day After Tomorrow.
The earth is becoming more hostile to its inhabitants. There are famines caused by climate change. We will undoubtedly within the next 20 years see mass migration from the areas hardest hit.
Climate scientists, and climate reporting, often UNDERSTATED the worst of these effects.
I think it'd be worth stating what your definition of doomerism is. For me, seeing the increases in forest fires, seeing the sky reddened and the air quality diminish and floods and hurricanes increase... being able to buy a Big Mac doesn't make any of that less bleak.
> The earth is becoming more hostile to its inhabitants. There are famines caused by climate change. We will undoubtedly within the next 20 years see mass migration from the areas hardest hit.
If this is true, then how are there more people than ever, and fewer famines than ever? Migration due to climate has been a part of human history since the beginning of all human, and animal, history. It's almost as if that's the default state of being. Are people migrating more than ever? Yes, but not just because of climate change: because it's so goddamn easy to do so in modern times.
We aren't walking across a sheet of ice to try to survive a drought. We are on boats with motors and a life vest at our worst, in first class getting wined and dined at our most hedonistic. Entire (illegal) migration pipelines have been made and turned into a black-market economy. There are government-funded apps created to support these migration pipelines.
Again, you're a doomer that has failed to predict the impact of what you're observing, and it mostly comes down to the fact that you underestimate human creativeness and ingenuity, and human drive for progress.
You frame every scenario as if humans will just stare at impending doom like deer in headlights and let it wash over them, while at the same time arguing that mass amounts of people are so adaptable that they would be willing to traverse the entire globe to find a better life. Your model of reality contradicts itself from the very start.
The CO2 concentration continues to climb year after year, at an accelerating rate. The world hasn't ended yet because it's still 2026 but it doesn't mean it won't.
We're on a hothouse earth trajectory. All signs point to you not being aware of serious climate research and hanging on to a naive Steven Pinker "everything is always improving" outlook.
> The world hasn't ended yet because it's still 2026 but it doesn't mean it won't.
All signs point to you being a doomer that is excellent at moving the goal post. "If it doesn't happen tomorrow surely it will happen the next day."
You can do this until the end of time. A waste of brain cycles for anybody with a real job. This is the exact same pattern for every single kind of doomer and they are all wrong in the exact same way over and over. You still can't name a single doomer point of view that has played out to some kind of catastrophic society collapsing event accurately.
It's always "it's coming" eventually.
Running out of oil, overpopulation, financial system collapse that sends us back to the dark ages, climate change that causes everybody to migrate to Colorado, a coronavirus that permanently makes us board ourselves up indoors. None of it ever plays out the way you doomers fantasize about it playing out.
When some kind of catastrophic society collapsing event happens it's most likely going to be because of something that is not in the mainstream consciousness.
If doomers were good at predicting these events and how it will play out they'd all be rich as hell, but no, they are for the most part a bunch of broke whiners. (Except for those doomers that have made their wealth off of scaring people)
> All signs point to you being a doomer that is excellent at moving the goal post.
All signs point to it being really easy for you to dismiss "doomers" as wrong and "scientists" as right retroactively. If someone was wrong about the direction of the climate crisis 20 years ago they were a doomer. If they were right they were a scientist. Easy!
You can apply this to anything that went to shit with the world in the past, not just the climate. If someone predicted the financial crisis of 2008, they were not a doomer, they were a particularly savvy financial analyst. All the others who keep predicting crises are wrong, until they're right, and then they're not a doomer, so your point always stands no matter what. Super convenient!
> If someone predicted the financial crisis of 2008, they were not a doomer, they were a particularly savvy financial analyst.
Zoom out, buddy: the 2008 financial crisis is a blip. The world's financial system is almost exactly the same as it was pre-2008. Hardly the collapse that made the world stop spinning that doomers have a fetish for. That's not a good example to support your argument.
You fundamentally cannot grasp the concept of doomerism. Doomerism isn't simply observing some first order effect "The oceans will increase by 2 degrees".
Doomerism is observing that first order effect and trying to assert that we should change behavior at a societal level because they, above everybody else, can predict what the secondary or tertiary+ effects are for society.
"The oceans will increase by 2 degrees, all marine life will perish, hurricanes will make vast swaths of the world uninhabitable. Therefore we should stop eating beef!"
And they are wrong about it every - single - time. Do you need examples?
Society has a long history of ignoring doomers, and the impact? Society is right and Doomers are consistently wrong.
Society keeps going. We have all of history up until the current moment, and from what we understand so far, doomers have never been right about how disruptive their observations are for society at large. If you want to provide a contradiction to this statement, please do so.
Nuclear power doomers -> completely wrong. Fukushima was the latest example that proved this.
Covid doomers -> wrong. In 50 years covid will be as forgotten as the Spanish flu was.
Climate doomers -> wrong. famines are down across the globe and population still growing, still no clear example of the disruption to society or world in a way that is new. For any disruption we can find historical disruptions of the same category with more impact to humans and the world. Floods? More people killed in historical floods and more societies extinguished from them >100 years ago. Fires? More people killed in fires and more cities completely burned down from them >100 years ago.
Overpopulation doomers -> wrong, population still growing, but leveling off and not collapsing
AI doomers -> wrong on both sides so far. no bubble pop, capabilities still advancing, humans are also still relevant
Peak oil doomers -> completely wrong, more oil being discovered, didn't account for technology, didn't account for other forms of energy
With this kind of track record, you'd think that doomers would have enough self reflection to realize that their model of reality is insufficient at predicting outcomes and shut the fuck up, but nope - they just keep on trying to force a square peg into a round hole while annoying everybody around them who are trying to do something to move the needle towards a better life that doesn't involve becoming a vegetable so the earth can heal or whatever.
Compare this against another model of reality: Whatever challenges humans face, when it's dire enough, we will adapt and overcome.
You can backtest this model against all of human history. It would be dishonest to say that this model isn't more accurate so far than whatever model you're using as a doomer.
No need for doomers to virtue signal and lecture everybody about their shitty model of reality that fails to backtest
>If doomers were good at predicting these events and how it will play out they'd all be rich as hell, but no, they are for the most part a bunch of broke whiners.
Oh, the classic "if you're so smart then why aren't you rich" non-argument. I'm sure Carl Sagan was just a whiny loser because he didn't figure out how to become a billionaire from knowing how physics works. His prediction that the planet would warm several degrees by the mid to late 21st century failed to reward him what he was owed. By the way, we haven't even gotten halfway there yet, so your "shifting goalposts" thesis is null.
People who push dangerous neoliberal propaganda like carbon capture or "infinite growth on a finite planet is possible" on the other hand do get very rich, and they don't even need to make good predictions. Such is the planet governed by pedophiles.
> People who push dangerous neoliberal propaganda like carbon capture or "infinite growth on a finite planet is possible"
Good thing we are not confined to a closed system in any practical sense. You act like we haven't already used space for economic growth. It's also a good thing that the concept of "growth" in this context is not limited by physical constraints. You're talking about growth of value, not growth in a physical sense. Did you think the valuation of every company was based on something physical 1:1? Do you live somewhere whose financial system is based on a gold standard or something? There are multiple levels where your idea falls apart.
Crazy to so confidently assert an idea which is conceptually flawed on a surface level.
You actually think the economy has reached the point of maximum growth due to the laws of thermodynamics? Please tell me you didn't formulate your entire worldview on this idea because it's unlikely that you can function in this society in a way that makes your life better or those around you better with this flawed model of reality.
Doomers are always hurting themselves first and foremost and then dragging everybody else around them down with them.
>You actually think the economy has reached the point of maximum growth due to the laws of thermodynamics?
Of course it hasn't. The real problem is that the atmosphere is being poisoned beyond repair, at an increasing pace, and that is tied to economic growth. That will eventually un-terraform the planet into a place hostile to agriculture, be it in 50 or 100 years. We're nowhere near being able to reverse this in any way, and there are no signs of it slowing down.
>Good thing we are not confined to a closed system in any practical sense. You act like we haven't already used space for economic growth.
Oh, am I to believe space mining fantasies maybe? I'm sure we'll get there, just after AGI solves nuclear fusion for us in the next 5 years. Then we can have star trek replicators to go with them. I just wish it would happen sooner, that sea floor mining stuff is starting to gain traction and it isn't looking pretty.
>It's also a good thing that the concept of "growth" in this context is not limited by physical constraints
It actually is. The concept of "decoupling" of the economy from material resources has been debunked for a while now. Theoretically there can be efficiency gains that generate further growth, but those are usually quickly cannibalized by increasing demand, plus we're deep on the diminishing returns phase in a lot of fields.
> That will eventually un-terraform the planet into a place hostile to agriculture, be it in 50 or 100 years. We're nowhere near being able to reverse this in any way, and there are no signs of it slowing down.
Yet another classic bullshit doomer prediction that never plays out, and that you'll conveniently not be around to admit you were wrong about.
> Oh, am I to believe space mining fantasies maybe?
You don't need to; we're already at the point where we are using space for economic growth, so it's not some kind of fantasy. We also have this celestial body that is used throughout the planet called "the Sun", which already makes your closed-system argument fall apart, unless you're worried about us being close to peak utilization of the Sun too.
> It actually is. The concept of "decoupling" of the economy from material resources has been debunked for a while now.
Hilarious how something can be "debunked" yet it's exactly how the metric for "growth" that you're talking about functions today. Again, if you add up the combined valuation of every company today, did you think it's based on material resources? It's obviously not. Your first clue is that the valuation of companies is calculated and expressed in dollars, which are not backed by ANYTHING material. If the thing that you're using to measure growth in an economy is already an abstract concept with no basis in material resources, then it follows that "growth" is NOT CONSTRAINED by material resources.
Or did you think every time nvidia announces their quarterly results and the market puts a valuation on nvidia that we are allocating materials to nvidia? Again, your model of reality sucks. It doesn't fit.
Your closest peer is that guy that's a diehard fan of a sports team from their hometown that keeps losing but it doesn't matter because being a fan and supporting the hometown is more important than performing the sport well.
That's you. Your model of reality fails every single day but it doesn't matter because you're a fan of your bad model. The worst part? You keep telling everybody around you to place a bet on your model with their life savings despite never being able to produce an example where your model was right
> Name a single time doomers were right about anything.
- NFTs
- Surveillance schizos
- Global Pedophile Cabal schizos
- Anyone who didn’t believe we were a year out from Star Trek living when LLMs first started picking up steam
- People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
- Anyone amongst the sea of delusional democrats who did indeed believe Trump could win a second term.
All of those doomers were vindicated, and that’s just recently.
- NFT doomers? I mean I appreciate the humor here.
- Surveillance schizos - Society still works
- Global Pedophile Cabal schizos - Again, funny use of 'doomers' but that's what the current society seems to be run by so I wouldn't say it's fitting for doomerism.
- People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
-- I've been a software "engineer" for ~14 years now. I still have no concern.
None of these things are that disruptive to our society at large. You will still be able to walk down the street and grab a Big Mac pretty much any day of the week. A large portion of society is going to look at all of what you're worried about and say "it's not that serious" while consuming their 20 second videos.
You're asking the wrong person. I haven't seen a single example of a doomer warning that came true. Can you provide one? It seems like society still exists when I look out the window and the impact that doomers assert are greatly exaggerated in every instance.
So are you disingenuous or just stupid? Of course society still exists, but what society?
Only the very dumbest think “doom” is some apocalyptic scene from a Hollywood film in which humans are nearly wiped out.
“Doom” is instead when swaths of Roman citizens with rights amidst a powerful, civically and technologically impressive hegemony, over time find themselves reduced to unfree serfs. They and their descendants would remain in that position for centuries until a horrific disease came through and killed so many of them that the serfdom became untenable.
> Only the very dumbest think “doom” is some apocalyptic scene from a Hollywood film in which humans are nearly wiped out.
So you're all just out here telling everybody they should stop what they are doing because of the doom, but the doom isn't that impactful in the grand scheme of things?
That checks out with my understanding of doomers. Just a bunch of useless whiners that produce a bunch of meaningless noise for everybody else.
> “Doom” is instead when swaths of Roman citizens with rights amidst a powerful, civically and technologically impressive hegemony, over time find themselves reduced to unfree serfs. They and their descendants would remain in that position for centuries until a horrific disease came through and killed so many of them that the serfdom became untenable.
And look at where we are now. Rome has been surpassed many times over. The quality of life for the average living person FAR SURPASSES anything that anybody in Rome could dream of. Seems like it wasn't worth worrying about what happened in Rome. If you make "doom" some kind of local event that affects a small group of people in a short window of time while trying to tell everybody they should hit the brakes and pause - maybe you should reflect on how these two things contradict each other.
In other words, if the doom isn't that doomful in the grand scheme of things then your argument is just again, moving goalposts. There are clear examples for every doom scenario you're talking about where the world moved on and built bigger and better. I guess it's on you to wait until that's no longer true but until then the ball is in your court. Just realize that you should at some point reflect and realize that every swing and miss is just more evidence that doomers are consistently wrong about the impact of their observations.
> People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
How was this group vindicated? It absolutely has caused problems at orgs and in the industry.
Just look at all the linkedin/twitter/youtube garbage of influencers trying to post boot camp tier advice and a sizable portion of new developers latching on to often questionable advice/viewpoints.
> How was this group vindicated? It absolutely has caused problems at orgs and in the industry.
I think you misread. In fairness, I arranged the sentence awkwardly, as I often do. I think my mind was conjuring the various dooms and then trying to rephrase the doom into the doomer.
What I mean is the people who warned against it were vindicated.
Of course, vindicated may not be the best word to use. If I say the world blows up tomorrow and you say it never can, and then it blows up, perhaps I'm not necessarily vindicated. But I certainly get a brief moment of schadenfreude.
I was thinking the other day about why a "global pedophile cabal" would be a thing. I still think that phrase overstates it a bit, but not that much.
Committing a crime with someone bonds you to them.
First, it's a kind of shared social behavior, and it's one that is exclusive to you and your friends who commit the same kinds of crimes. Any shared experience bonds people, crimes included. Having a shared secret also bonds people.
Second, it creates an implied pact of mutually assured destruction. Everyone knows the skeletons in everyone else's closet, so it creates a web of trust. Anyone defecting could possibly be punished by selectively revealing their crimes, and vice versa. Game theoretically it overcomes tit-for-tat and enables all-cooperate interactions, at least to some extent, and even among people who otherwise don't like each other or don't have a lot in common.
Third, it separates the serious from the unserious. If you want to be a member of the club, do the bad thing. It's a form of high cost membership gating.
This works for other kinds of crimes too. It's not that unusual for criminal gangs to demand that initiates commit a crime and provide evidence, or commit a crime in front of existing members. These can be things like robbery, murder, and so on. Anyone not willing to do this probably isn't serious and can't be trusted. Once someone does do it, you know they're really in.
It naturally creates cabals. The crime comes first, the cabal second, but then the cabal can realize this and start using the crime as a gateway to admission.
Every mutual interest creates a community, but a secret criminal mutual interest creates a special kind of tight knit community. In a world that's increasingly atomized and divided, that's power. I think it neatly explains how the Epstein network could be so powerful and effective.
Ah yes, me on a high horse. Not the person whose entire worldview depends on defying Nash equilibrium. You're all wasting brain cycles to discuss some unrealistic cooperative agreement to slow down and sing 'kumbaya', and telling us that if we don't get to this state we will be on the streets, homeless. If this is me on a horse then you are on top of an ivory tower managing my beast of burden.
I suppose everyone on HN reaches a certain point with these kind of thought pieces and I just reached mine.
What are you building? Does the tool help or hurt?
People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.
After five or six cycles it does become a bit fatiguing. Use the tool sanely. Work at a pace where your understanding of what you are building does not exceed the reality of the mess you and your team are actually building if budgets allow.
This seldom happens, even in solo hobby projects once you cost everything in.
It's not about agile or waterfall or "functional" or abstracting your dependencies via Podman or Docker or VMware or whatever that nix crap is. Or using an agent to catch the bugs in the agent that's talking to an LLM you have next to no control over that's deleting your production database while you slept, then asking it to make illustrations for the postmortem blog post you ask it to write that you think elevates your status in the community but probably doesn't.
I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
> What are you building?
This x1000. The last 10 years in the software industry in particular seems full of meta-work. New frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. Ultimately so we can build... what exactly? Are these necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
Hard to shake the feeling that this looks like one big pyramid scheme. I strongly suspect that vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than actual software engineering.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It was, and is. But not universally.
If you formulate questions scientifically and use the answers to make decisions, that's engineering. I've seen it happen. It can happen with LLMs, under the proper guidance.
If you formulate questions based on vibes, ignore the answers, and do what the CEO says anyway, that's not engineering. Sadly, I've seen this happen far too often. And with this mindset comes the Claudiot mindset - information is ultimately useless so fake autogenerated content is just as valuable as real work.
In my lifetime software has given us:
* the ability to find essentially any information ever created by anyone anywhere at anytime,
* the ability to communicate with anyone on Earth over any distance instantaneously in audio, video, or text,
* the ability to order any product made anywhere and have it delivered to our door in a day or two,
* the ability to work with anyone across the world on shared tasks and projects, with no need for centralized offices for most knowledge work.
That was a massive undertaking with many permutations requiring lots of software written by lots of people.
But it's largely done now. Software consumes a significant fraction of all waking hours of almost everyone on Earth. New software mainly just competes with existing software to replace attention. There's not much room left to expand the market.
So it's difficult to see the value of LLMs that can generate even more software even faster. What value is left to provide for users?
LLMs themselves have the potential to offering staggering economic value, but only at huge social cost: replacing human labor on scales never seen before.
All of that to say, maybe this is the reason so much time is being spent on meta-work today rather than on actual software engineering.
I have watched artists thoughtfully integrate digital lighting and the like at a scale I'd never seen before the LLMs rolled up and made it possible to get programs to work without knowing how to program.
The fundamental ceiling of what an LLM can do when connected to an IDE is incredible, and orders of magnitude higher than the limits of any no-code / low-code platform conceived thus far. "Democratizing" software - where now the only limits are your imagination, tenacity, and ability to keep the bots aligned with your vision - is allowing incredible things that wouldn't have happened otherwise, because you now don't strictly need to learn to program for a programming-involved art project to work out.
Should you learn how to code if you're doing stuff like that? Absolutely. But is it letting people who have no idea about computing dabble their feet in and do extremely impressive stuff for the low cost of $20/month? Also yes.
Now this is the right take. It's one thing for us to do navel-gazing into the recursive autonomous future; it's another to step back and see what Normal People can do, now that the walls are coming down around our profession. Creating new walls is probably not the answer! From the Cathedral and the Bazaar, we now have an entire metaphorical city of development happening, by people who would not have thought it possible a few years ago.
I don't know what the future of my job holds other than what it always had: helping people who have good ideas to get them done properly.
The thing is though it all still feels so…rudderless/pointless sometimes?
When digital cameras came out, it democratized filmmaking immensely. But it wasn't just people screwing around - amazing new works of art, received positively by audiences and critics alike, exploded in number. They wound up winning film fests, garnering millions of views (and fans) online, and even showing on big screens worldwide, almost immediately.
Where are the vibe coded apps that are actually good? Where are the new, innovative creations built by “normal” people? Because by now you’d think we’d see them. It’s all been parlor tricks, proofs of concept, and post mortems on how a bot ruined half a year’s work or whatever. The “good stuff” is still happening behind closed doors, led by experienced engineers on existing projects. It’s a productivity multiplier more than anything it seems, but it doesn’t seem useful as a tool for new people to make new things in any given space.
Emacs can be configured with no code written by the user and Linux can be controlled with minimal user knowledge of the command line. Still some knowledge is necessary in most cases, but nowhere near what was required a handful of years back.
I’m not sophisticated enough to enjoy abstract art. Maybe AI will bring abstract software projects to the world next.
I can imagine all the people staring at these software projects amazed at the genius it must have taken to create them. :)
I see the next really big task for software as the ability to separate the signal from the noise. Sifting the wheat from the chaff has gone from a 'nice to have' to 'rescue my sanity'.
Maybe agents and AI in general will help with that. Maybe it will just make the problem worse.
> But it's largely done now
Somehow I doubt that. The monkey is never satisfied.
Agree. Productivity tools all the way down.
> What value is left to provide for users?
A spreadsheet editor at most a couple of hundred MB in size that can compete against Excel, for example, while also not eating up RAM. The same goes for a new browser and a new browser engine; it's time for Chrome to have a real competitor, it has become a mess. I can think of other such examples, but these are the two biggest ones.
None of that is blocking money from being made
> The last 10 years in the software industry in particular seems full of meta-work. New frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. Ultimately so we can build... what exactly? Are these necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
The overwhelming majority of real jobs are not related to these things you read about on Hacker News.
I help a local group with resume reviews and job search advice. A common theme is that junior devs really want to do work in these new frameworks, tools, libraries, or other trending topics they've been reading about, but discover that the job market is much more boring. The jobs working on those fun and new topics are few and far between, generally reserved for the few developers who are willing to sacrifice a lot to work on them or very senior developers who are preferred for those jobs.
There’s a whole world out there that doesn’t seem to be addressed by the original comment. On one end of that scale you have things like bespoke software for small businesses, some niche inventory management solution that just sits quietly in the corner for years. On the other end, there’s the whole world of embedded software, game dev, design software, bespoke art pipeline tools…
It can seem that the majority of software in the world is about generating clicks and optimising engagement, but that’s just the very loud minority.
Not that you asked… But I would be happy with a junior position writing production C or ASM - but I assume that those sorts of positions are on the other end of the same boat. Who the hell has any use for an amateur dev. with an autistic fascination and _zero_ practical experience?
Someone shared an article here recently espousing something along the lines of "home garden programming." I see software development moving in this direction, just like machining did: either in a space-age shop that looks more like a lab, with a five-axis "machining center," or in the garage with Grandpappy's clapped-out Atlas - and nothing in between.
This is a good point. I've seen people with really complex AI setups (multiple agents collaborating for hours). But what are they building? Are they building a react app with an express backend? A next js app? Which itself is a layer on top of an abstraction?
I haven't tried this myself but I'm curious whether an LLM could build a scalable, maintainable app that doesn't use a framework or external libraries. It could be risky due to lack of training data, but I think it's important to build stuff that people use, not stuff that people use to build stuff that people use to build stuff that....
Not that meta frameworks aren't valuable, but I think they're often solving the wrong problem.
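(For what it's worth, here is a rough idea of what "no framework or external libraries" could even look like - just a hedged sketch using nothing but the Python standard library, with made-up routes and names rather than code from any project discussed here:

    # Toy JSON API with no framework and no third-party packages:
    # Python standard library only. Routes and names are made up.
    import json
    from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

    TODOS = []  # in-memory store; a real app would persist this somewhere

    class TodoHandler(BaseHTTPRequestHandler):
        def _send_json(self, status, payload):
            body = json.dumps(payload).encode("utf-8")
            self.send_response(status)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

        def do_GET(self):
            if self.path == "/todos":
                self._send_json(200, TODOS)
            else:
                self._send_json(404, {"error": "not found"})

        def do_POST(self):
            if self.path != "/todos":
                self._send_json(404, {"error": "not found"})
                return
            length = int(self.headers.get("Content-Length", 0))
            try:
                item = json.loads(self.rfile.read(length))
            except json.JSONDecodeError:
                self._send_json(400, {"error": "invalid JSON"})
                return
            TODOS.append(item)
            self._send_json(201, item)

    if __name__ == "__main__":
        ThreadingHTTPServer(("127.0.0.1", 8000), TodoHandler).serve_forever()

The point isn't that this is production-ready; it's that the surface an LLM, or a human reviewer, has to understand is tiny compared to a framework plus its dependency tree.)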
When it comes time to debug would you rather ask questions about and dig through code in a popular open source library, or dig through code generated by an LLM specifically for your project?
The copout answer is it depends. I've debugged sloppy code in React both before and after LLMs were commonly used. I've also debugged very well-written custom frameworks before and after LLMs.
I think with proper guardrails and verification/validation, a custom framework could be easier to maintain than sloppy React code (or insert popular framework here).
My point is that as long as we keep the status quo of how software is built (using popular tools that make it fast and easy to build software - software that was often unperformant even before LLMs), we'll keep heading down this path of trying to solve the problems of frameworks instead of directly solving the problems with our app.
(BTW, it was your comment to my comment that inspired my comment, talk about meta! https://news.ycombinator.com/item?id=47512874 )
If the LLM is doing it, it doesn't matter - isn't that the point?
Not saying I personally believe in this scenario, but everything I've heard supports the idea that code is no longer for humans to consume.
You are going to allow a product from a company you have no reason to trust write important software for you and put it into production without checking the code to see what it does?
I agree with you, which makes me seem like the laggard at work. Devil's advocate is that AI-native development will use AI to ask these questions and such. So whether it's a framework or the standard lib, I definitely agree knowing your stuff is what matters, but the tools for demonstrating this knowledge are rapidly in flux.
Again, I am on the slow train. But this seems to be all I hear. "code optimized for humans" is marked for death.
Had another thought on my drive just now. Next.js is really fantastic with LLM usage because there's such a large body of work to source from. Previously I found Next.js unbearable to work with, with its bespoke isomorphic APIs. Too dense, too many nuances, too much across the stack.
With LLMs it gets spit out amazingly fast. But does that make Next.js, the framework, better or worse in its design paradigms, if an LLM is a requirement in order to navigate it?
A lot of us use software written by other people we have no reason to trust and we haven't reviewed - most of open source libraries.
At least with any open source library I use, many other people have.
Yeah a nice thing about OSS is that they usually come with a community and you can ask questions or even submit bug fixes.
I’ve seen so many articles of “introducing flimflam: a squiggle for burfy” it makes my head spin.
Yeah here you go
https://youtu.be/DSzUYX7n2_A?si=q0_0lePoQ6MEz5d5
> Are these tools necessary to build what we actually need?
I think the entire software industry has reached a saturation point. There's not really anything missing anymore. Existing tools do 99% of what we humans could need, so you're just getting recycled and regurgitated versions of existing tools... slap a different logo and a veneer on it, and it's a product.
The tools are mostly there, but there is a lot of need. Quality can be much better. Quality is UI, reliability, security, and a bunch of other similar things I can't think of offhand.
Oh ye of little faith in the possible.
We still don’t have truly transparent transference in locally-run software. Go anywhere in the world, and your locally running software tags along with precisely preserved state no matter what device you happen to be dragging along with you, with device-appropriate interfacing.
We still don’t have single source documentation with lineage all the way back to the code.
We still don’t treat introspection and observability as two sides of a troubleshooting coin (I think there are more “sides” but want to keep the example simple). We do not have the kind of introspection on modern hardware that Lisp Machines had, and SOTA observability conversations still revolve around sampling enough at the right places to make up for that.
We still don’t have coordination planes, databases, and systems in general capable of absorbing the volume of queries generated by LLM’s. Even if LLM models themselves froze their progress as-is, they’re plenty sophisticated enough when deployed en masse to overwhelm existing data infrastructure.
The list is endless.
IMHO our software world has never been so fertile with possibilities.
It's interesting how everything you list is a problem created by the tools themselves.
If you step back and just look at "can this do what I wanted" without worrying about what shit storm of software makes it work.
Mind you perfectionists will always have work. That doesn't mean anything.
Resume driven development.
> I strongly suspect that vast majority of the "innovation" in recent years has gone straight to supporting the funding model and institution of the software profession, rather than actual software engineering.
Feels like there's a counter to the frequent citation of Jevons' paradox in there somewhere, in the context of LLM impact on the software dev market. Overestimation of external demand for software, or at least any that can be fulfilled by a human-in-the-loop / one-dev-to-many-users model? The end goal of LLMs feels like, in effect, the Last Framework, and the end of (money in) meta-engineering by devs for devs.
> The last 10 years in the software industry in particular seems full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly?
Don't forget App Stores. Everyone's still trying to build app stores, even if they have nothing to sell in them.
It's almost as if every major company's actual product is their stock price. Every other thing they do is a side quest or some strategic thing they think might convince analysts to make their stock price move.
Well that's the thing, AI can mean anyone with an idea can build it, but only the people that own stuff will be able to leverage that to own more stuff.
> It's almost as if every major company's actual product is their stock price.
It's almost as if we lived under capitalism.
What other thing would they do? They are literally setting the Earth on fire to raise the stock price. No hostages taken.
The true alignment problem behind the ploy AGI alignment problem for prêt-à-penser SF philosophers. Or prestidigitators.
> It's almost as if every major company's actual product is their stock price.
They are pretty much legally obligated to act in this manner.
Has it always been this way? If not, did it used to be better? If so, how can we get back?
The legal doctrine that a company's primary responsibility is to maximize shareholder value dates from the 1970s. It started with Milton Friedman's 1970 essay in the NYTimes [1] and then gained a lot of currency throughout the 70s stagflation and economic malaise. The final death-knell of the corporation as a social enterprise came during the 1980s era of corporate raiders and PE buyouts.
Note that the system that came before it had problems too. In the 50s and 60s, the top marginal tax rate was about 90%, which meant that above a certain level it made almost no sense for a corporate executive to be paid more. This kept executive salaries to a reasonable multiple of employee salaries, but it meant that executives and high-ranking managers tended to pay themselves in perks. This was the "Mad Men" era of private jets, private company apartments, secretaries who were playthings, etc. Friedman's essay was basically arguing against this world of corporate unaccountability and corruption, where formal pay and compensation were reasonable, but informal perks and arrangements managed to privilege the people in power in a completely opaque, unaccountable way.
Turns out that power is a hell of a drug, and the people in power will always find ways to use that to enrich themselves regardless of what the laws and incentives are.
[1] https://www.nytimes.com/1970/09/13/archives/a-friedman-doctr...
>> The last 10 years in the software industry in particular seems full of meta-work. Building new frameworks, new tools, new virtualization layers, new distributed systems, new dev tooling, new org charts. All to build... what exactly? Are these tools necessary to build what we actually need? Or are they necessary to prop up an unsustainable industry by inventing new jobs?
This is because all the low-hanging fruit has already been built. CRM. Invoicing. HR. Project/task management. And hundreds of others in various flavors.
They may exist (using "exist" loosely), but they are all mostly garbage. There's still plenty of opportunity to make non-garbage versions of things that already exist.
This is technically true but also a bit naive. Established incumbents are very difficult to dislodge with merely a better version of their products. This becomes more true the larger the product and the average customer size. A good example is QuickBooks, which is a really janky accounting/bookkeeping software that is almost universally hated, but newer and better solutions haven't been able to capture much market share from it.
It’s hard to actually build a better QuickBooks because to build a better QuickBooks you need 1000+ integrations that each took hundreds of man hours to build.
People don't realize how much software engineering has improved. I remember when most teams didn't use version control, and if we did have it, it was crappy. Go through the Joel Test [1] and think about what it was like at companies where the answers to most of those questions was "no."
[1] https://www.joelonsoftware.com/2000/08/09/the-joel-test-12-s...
At the same time, systems have become far more complex. Back when version control was crap, there weren't a thousand APIs to integrate and a million software package dependencies to manage.
Sure everything seems to have gotten better and that's why we now need AIs to understand our code bases - that we created with our great version control tooling.
Fundamentally we're still monkeys at keyboards just that now there are infinitely many digital monkeys.
Perrow's book Normal Accidents postulates that, given advances which could improve safety, people just decide to emphasize throughput, speed, profits, etc. He turned out to be wrong about aviation (it got much safer over time) and maritime shipping (there was a perception of a safety crisis in the late 1970s with oil tankers exploding; now you just hear about the odd exceptional event).
> Perrow argues that multiple and unexpected failures are built into society's complex and tightly coupled systems, and that accidents are unavoidable and cannot be designed around.[1]
This is definitely something that is happening with software systems. The question is: is having an AI whose intentions are fundamentally undecipherable extend these systems a good approach? Or is slowing down and fundamentally trying to understand the systems we have created a better approach?
Has software become safer? Well, planes don't fall from the sky, but the number of zero-day exploits built into our devices has vastly increased. Is this an issue? Does it matter that software is shipped broken? Only to be fixed with the next update.
I think it's hard to have the same measure of safety for software. A bridge is safe because it doesn't fall down. Is email safe when there are spam and phishing attacks? Fundamentally email is a safe technology, except that it allows attacks via phishing. Is that an email safety problem? Probably not, just as someone having a car accident on a bridge is generally not the bridge's fault.
I think that we don't learn from our mistakes. As developers we tend to paper over the accidents of our software. When was the last time a developer was sued for shipping broken software? When was the last time an engineer was sued for building a broken bridge? Notice that there is an incentive for an engineer to build better and safer bridges; for developers those incentives don't exist.
[1]: https://en.wikipedia.org/wiki/Normal_Accidents
The other day I was thinking about how stupid little things in the Javascript ecosystem where you have to change your configuration file "just because" are a real billion-dollar mistake and speculating that I could sue some of the developers in small claims court.
Right away I scoffed when I heard people had 20 agents running in parallel, because I've been at my share of startups with 20-person teams that tend to break down somewhere between:
- 20 people that get about as much done as an optimal 5 person team with a lot more burnout and backlash
- There is a sprint every two weeks but the product is never done
and people who are running those teams don't know which one they are!
I'm sure there are better ones out there but even one or two SD north of the mean you find that people are in over their heads. All the ceremony of agile hypnotizes people into thinking they are making progress (we closed tickets!) and have a plan (Sprint board!) and know what they are doing (user stories!)
Put on your fieldworker hat and interview the manager about how the team works [1] and the state of the code base, compare that to the ground truth of the code, and you tend to find the manager's mental model is somewhere between "just plain wrong" and "not even wrong". Teams like that get things done because there are a few members, maybe even dyads and triads, who know what time it is and quietly make sure the things that are important-but-ignored-by-management are taken care of.
Take away those moral subjects and eliminate the filtering mechanisms that make that 20-person manager better than average, and I can't help but think 'gas town' is a joke that isn't even funny. Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if it wasn't for all that weed there wouldn't be any Google today.
[1] I'll take even odds he doesn't know how long the build takes!
> Seems folks have forgotten that Yegge used to blog that he owed all his success in software development to chronic cannabis use, like if wasn't for all that weed there wouldn't be any Google today.
I remember a lot of Steve Yegge's impressive claims from back when he and Zed Shaw were what I would call "fringe contemporaries" in the early 2010s - like all the time he spent gassing on about his unmaintainable, barely usable nightmare of a Javascript mode for Emacs. (I did like the MozRepl integration, for what that's worth.)
I don't particularly recall him talking about smoking pot, and I think I would have, if he'd been as memorably effusive there as about js2-mode. But it's been a lot of years and I couldn't begin to remember where to look for an archive of his old blog. Would you happen to have a link?
> planes don’t fall from the sky
Boeing would like a word (; https://en.wikipedia.org/wiki/Maneuvering_Characteristics_Au...
> that's why we now need AIs to understand our code bases
I don't need an AI to understand my code base, and neither do you. You're smarter than you give yourself credit for.
The better processes and tools made larger projects possible.
Version control is useful but it has nothing to do with software engineering per se. Most software development is craft work which doesn't meet the definition of engineering (and that's usually fine). Conversely, it's possible to do real software engineering without having a modern version control system.
And maybe it's dangerous for one to think they're doing engineering when in reality they're doing craft work.
... but it helps tremendously to have a solid computer engineering background, since you are (finding and) transforming hard facts of reality into working code. I'd say it's a mix of both; you can't just vibecode (or hack together, before current times) a properly beautiful design (whatever that means in a given instance).
> People don't realize how much software engineering has improved.
It has, but we have gotten there by stacking turtles, by building so many layers of abstraction that things no longer make sense.
Think about this: hardware -> hypervisor -> VM -> container -> Python/Node/Ruby runtime, all to compile it back down to bytecode to run on a CPU.
Some layers exist because of the push/pull between systems being single user (PC) and multi user (mainframe). We exacerbated the problem when "installable software" became a "hard problem" and wanted to mix in "isolation".
And most of that software is written on another pile of abstractions. Most codebases have disgustingly large dependency trees. People keep talking about how "no one is reviewing all this AI generated code"... Well, the majority of devs sure as shit aren't reviewing that dependency tree... Just yesterday there was yet another "supply chain attack".
How do you protect yourself from such a thing... stack on more software. You can't really use "sub repositories/modules" in git. It was never built that way because Linus didn't need that. The rest of us really do... so we add something like Artifactory to protect us from the massive pile of stuff that you're dependent on but NOT looking at. It's all just more turtles on more piles.
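(To put a rough number on that unreviewed surface, here is a small sketch - standard-library Python only, nothing specific to any project mentioned above - that counts the installed packages and the declared requirements fanning out from them in whatever environment you run it in:

    # Rough sketch: count the packages (and declared requirements) in the
    # current Python environment that nobody on the team has actually read.
    from importlib import metadata

    dists = list(metadata.distributions())
    total_requirements = sum(len(dist.requires or []) for dist in dists)

    print(f"{len(dists)} installed distributions")
    print(f"{total_requirements} declared requirements across them")

Run it inside any non-trivial project's virtualenv and the numbers tend to make the point by themselves.)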
Lots of corporate devs I know are really bad at reviewing code (open source much less so). The PR code review process in many orgs is to find the person who rubber-stamps and avoid the people who only bikeshed. I suspect it's because we have spent the last 20 years on the leetcode interview, where memorizing algorithms and answering brain teasers was the filter, not reading, reviewing, debugging and stepping through code... Our entire industry is "what is the new thing", "next framework" pilled because of this.
You are right that it got better, but we got there by doing all the wrong things, and we're going to have to rip a lot of things apart and "do better".
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
If I engineer a bridge I know the load the bridge is designed to carry. Then I add a factor of safety. When I build a website can anyone on the product side actually predict traffic?
When building a bridge I can consult a book of materials and understand how much a material deforms under load, what its breaking point is, its expected lifespan, etc. Does this exist for servers, web frameworks, network load balancers, etc.?
I actually believe that software “could” be an engineering discipline but we have a long way to go
> can anyone on the product side actually predict traffic
Hypothetically, could you not? If you engineer a bridge you have no idea what kind of traffic it'll see. But you know the maximum allowable weight for a truck of X length is Y tons and factoring in your span you have a good idea of what the max load will be. And if the numbers don't line up, you add in load limits or whatever else to make them match. Your bridge might end up processing 1 truck per hour but that's ultimately irrelevant compared to max throughput/load.
Likewise, systems in regulated industries have strict controls for how many concurrent connections they're allowed to handle[1], enforced with edge network systems, and are expected to do load testing up to these numbers to ensure the service can handle the traffic. There are entire products built around this concept[2]. You could absolutely do this, you just choose not to.
[1] See NIST 800-53 control SC-7 (3)
[2] https://learn.microsoft.com/en-us/azure/app-testing/load-tes...
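(To make that concrete, a minimal sketch of such a capped load test - plain Python standard library; the URL, the 200-connection ceiling, and the request count are placeholder assumptions, not values from the NIST control or the Azure product linked above:

    # Toy load test: drive an endpoint with a fixed concurrency ceiling and
    # report throughput. Target, ceiling, and request count are placeholders.
    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    TARGET = "http://127.0.0.1:8000/health"  # placeholder endpoint
    MAX_CONCURRENT = 200                      # the agreed connection ceiling
    TOTAL_REQUESTS = 5000

    def hit(_):
        try:
            with urlopen(TARGET, timeout=5) as resp:
                return 200 <= resp.status < 300
        except OSError:
            return False

    start = time.time()
    with ThreadPoolExecutor(max_workers=MAX_CONCURRENT) as pool:
        ok = sum(pool.map(hit, range(TOTAL_REQUESTS)))
    elapsed = time.time() - start

    print(f"{ok}/{TOTAL_REQUESTS} succeeded in {elapsed:.1f}s "
          f"(~{TOTAL_REQUESTS / elapsed:.0f} req/s at <= {MAX_CONCURRENT} concurrent)")

A regulated environment would use a proper tool and enforce the ceiling at the network edge as well, but the principle - test up to the agreed limit and measure what happens - is the same.)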
Software and bridges are entirely different.
If I need a bridge, and there's a perfectly beautiful bridge one town over that spans the same distance - that's useless to me. Because I need my own bridge. Bridges are partly a design problem but mainly a build problem.
In software, if I find a library that does exactly what I need, then my task is done. I just use that library. Software is purely a design problem.
With agentic coding, we're about to enter a new phase of plenty. If everyone is now a 10x developer then there's going to be more software written in the next few years than in the last few decades.
That massive flurry of creativity will move the industry even further from the calm, rational, constrained world of engineering disciplines.
> Bridges are partly a design problem but mainly a build problem.
I think this vastly underestimates how much of the build problem is actually a design problem.
If you want to build a bridge, the fact one already exists nearby covering a similar span is almost meaningless. Engineering is about designing things while using the minimal amount of raw resources possible (because the cost of design is lower than the cost of materials), which means that bridge in the other town is designed only within its local context. What are the properties of the ground it's built on? What local building materials exist? ("Local" can be as small as only a few miles, because moving vast quantities of material over long distances is really expensive.) What specific traffic patterns and loads was it built for? What time and access constraints existed when it was built?
If you just copied the design of a bridge from a different town, even one only a few miles up the road, you would more than likely end up with a design that either won't stand up in your local context or simply can't be built. Maybe the other town had plenty of space next to the location of the bridge, making it trivial to bring in heavy equipment and use cranes to move huge pre-fabbed blocks of concrete, but your town doesn't. Or maybe the local ground conditions aren't as stable, and the other town's design has the wrong type of foundation, resulting in your new bridge collapsing after a few years.
Engineers in other disciplines don't have the luxury of building for a very uniform, tightly controlled target environment where it's safe to assume that common building blocks will "just work" without issue. As a result, engineering there is entirely a design problem, i.e. how do you design something that can actually be built? The building part is easy; there's a reason construction contractors get paid comparatively little compared to the engineers and architects who design what they're building.
Software packages are more complicated than you make them out to be. Off the top of my head:
- license restrictions, relicensing
- patches, especially to fix CVEs, that break assumptions you made in your consumption of the package
- supply chain attacks
- sunsetting
There’s no real “set it and forget it” with software reuse. For that matter, there’s no “set it and forget it” in civil engineering either, it also requires monitoring and maintenance.
I have talked to colleagues who wrote software running on microcontrollers a decade ago; that software still runs fine. So yes, there is set-and-forget software. And it is all around us, mostly in microcontrollers. And microcontrollers far outnumber classical computers (trivially: each classical computer or phone contains many microcontrollers such as SSD controllers, power management, wifi, ethernet, cellular... and then you can add appliances, cars, etc. to that).
If something in software works and isn't internet connected it really is set and forget. And far too many things are being connected needlessly these days. I don't need or want an online washing machine or car.
Ignoring the actual useful reasons to connect something to the internet, the subscription business model is just too damn tempting.
>I actually believe that software “could” be an engineering discipline but we have a long way to go
In certain mission-critical applications, it is treated as engineering. One example - https://en.wikipedia.org/wiki/DO-178B
The way the authors of the book on material strengths got those numbers, was through testing. If you're using mature technologies, that testing has been done by others and you can rely on it for your design, at least in a general way. Otherwise you have to do the testing yourself, which is something a structural engineering project might do also, if it's unusual in some way.
We have a long way to go but large software companies have gotten really, really good at scaling to handle larger and larger traffic loads. It's not like there are no materials to consult to learn current best practices, even if there are still more improvements to be made.
There are also fundamentally different acceptance criteria for a bridge vs a website. Failure modes differ. Consequences of failure are nowhere near the same, so risk tolerance is adjusted accordingly. Perhaps true "engineering" really boils down to risk management... is what you're building so potentially destructive that it requires extremely careful thought and risk management? Engineering. If what you're building can fail, and really cause no harm, that's just building.
I think it is in certain very limited circumstances. The Space Shuttle's software seems like it was actually engineered. More generally, there are systems where all the inputs and outputs are well understood along with the entire state space of the software. Redundancy can be achieved by running different software on different computers such that any one is capable of keeping essential functions running on its own. Often there are rigorous requirements around test coverage and formal verification.
This is tremendously expensive (writing two or more independent copies of the core functionality!) and rapidly becomes intractable if the interaction with the world is not pretty strictly limited. It's rarely worth it, so the vast majority of software isn't what I'd call engineered.
Maybe back in the beginning, but I don't think it's an engineering discipline now. I don't think that's bad though. I always thought we tagged on the word "engineer" so that we could make more money. I'm ok with not being one. The engineers I've known are very strict in their approach, which is good since I don't want my deck to fall down. Most of us are too risky with our approach. We love to try new things and patterns, not just use established ones over time. This is fine with me, and when we apply the term "engineer" to our work, I get a little uneasy, because I think it implies us doing something that most of us really don't want to do. That is, absolutely prove our approach works and will work for years to come. Just my opinion though.
I’ve had jobs where my title was “software engineer”, but I never refer to myself as such outside of work. When I tell others what I do, I say I am a software developer. It may seem a pointless distinction, but to me there is a distinction.
Neither myself nor the vast majority of other “software engineers” in our field are living up to what it should mean to be an “engineer”.
The people that make bridges and buildings, those are the engineers. Software engineers, for the very very most part, are not.
I was won over by this distinction from another senior some years ago. I think he said…
“Developers build things. Engineers build them and keep them running.”
I like the linguistic point from a standpoint of emphasizing a long term responsibility.
I was just reading "How the World Became Rich" and they made an interesting distinction between economic "development" and plain "growth". Amusingly, "development" to them means exactly what you're saying "engineer" should mean. It's sustainable, structural, not ephemeral. Development in the abstract hints at foundational work: building something up to last. It seems like this meaning degradation is common in software. It still blows my mind how the "full-stack" naming stuck, for example.
https://www.howtheworldbecamerich.com/
Edit: on a related note, are there any studies on the all-in long-term cost between companies that "develop" vs. "engineer"? I doubt there would be clean data, since the managers who ignored all of the warnings about "tech debt" would probably have the say on both compiling and releasing such data.
Does the cost of "tech debt" decrease as the cost of "coding" decreases, or is there a phase transition in the quality of the code? I bet there would be an inflection point if you plotted companies' adoption time of AI coding. Late adopters who timed it for after the models, harnesses, and practices were good enough (probably still some time in the near future) would have a lower all-in cost for the same codebase quality.
When your bridge falls down, you don't call an incident and ask your engineer to fix it, you sue them.
In software there's a lot more emphasis on post-hoc fixes rather than up front validation, in my experience.
I like this one from Russ Cox:
"Software engineering is what happens to programming when you add time and other programmers."
I'm similar, except for me the reason is no degree. So some jobs say engineer, others just developer... although at my current job I'm a "technology specialist", which is funny. But I'm getting paid, so whatever.
Most recently I wrote CloudFormation templates to bring up infra for AWS-based agents. I don't use AI-assisted coding, except Googling, which I acknowledge now comes with an AI summary.
A friend of mine is in a toxic company where everyone has to use AI and they're looked down upon if they don't use it. Every minute of their day has to be logged doing something. They're also going to lay off a bunch of people soon since "AI has replaced them". This is in the context of an agency.
It’s a bit of a misclassification. In my mind we tend to be more like architects where there are a fair amount of innovative ideas that don’t work all that well in practice. Train stations with beautiful roofs that leak and slippery marble floors, airports with smoke ventilation systems in the floor, etc.
Of course, we use that term for something else in the software world, but architecture really has two tiers, the starchitects building super fancy stuff (equivalent to what we’d call software architects) and the much more normal ones working on sundry things like townhomes and strip malls.
That being said I don’t think people want the architecture pay grades in the software fields.
It's an understandable mistake to make; culturally an engineer is defined by the building of physical objects that have extremely high reliability expectations. But "engineer" originally referred to someone who used their ingenuity to build or do things in a manner not routine or primarily physical [1]. Basically an inventor who produced. The main engineering accreditation body in the United States adds the requirement of a professional education, but it is more or less the same [2].
We're engineers.
1. https://en.wikipedia.org/wiki/Engineer#Definition
2. https://www.abet.org/accreditation/accreditation-criteria/cr...
At the same time, if you remove "engineer", informatics should fall under the faculty of Science, so scientists, who are even more rigorous than engineers ;)
Maybe software tinkerer?
Computer Science (kind of a misnomer) should be in the faculty of Mathematics. Software Development should be in the faculty of Performing Arts. Informatics should be in the faculty of Business Administration.
> scientists, which are even more rigorous than engineers ;)
You should see the code that scientists write...
This x1000000
Software craftsman seems to strike a good balance.
It's a Systems Engineering job. You provide context, define interfaces for people, write tests for critical failure modes affecting the customer, describe system behavior, and translate for other people.
classic ... https://www.hillelwayne.com/post/are-we-really-engineers/
> A number of these phenomena have been bundled under the name "Software Engineering". As economics is known as "The Miserable Science", software engineering should be known as "The Doomed Discipline", doomed because it cannot even approach its goal since its goal is self-contradictory. Software engineering, of course, presents itself as another worthy cause, but that is eyewash: if you carefully read its literature and analyse what its devotees actually do, you will discover that software engineering has accepted as its charter "How to program if you cannot.".
- Edsger Dijkstra, 1988
I think, unfortunately, he may have had us all dead to rights on this one.
One would as sensibly dismiss the concept of an assembly line as "how to build a car if you cannot."
Dijkstra was a mathematician. It is a necessary discipline. If it alone were sufficient, then the "program correctness" fans would have simply and inarguably outdone everyone else forty years ago at the peak of their efforts, instead of having resorted to eloquently whiny, but still whiny, thinkpieces (such as the 1988 example [1] quoted here above) about how and why they would like history to understand them as having failed.
[1] https://www.cs.utexas.edu/~EWD/ewd10xx/EWD1036.PDF [2]
[2] I will freely grant that the man both wrote and lettered with rare beauty, which shames me even in this photocopier-burned example when I compare it to the cheerful but largely unrefined loops and scrawls of my own daily hand.
The formal methods people may yet have the last laugh. I did not have Lean becoming a hyped programming language / proof assistant on my bingo card for 2025-26 and yet here we are, because these tools help us close the validation loop for LLM agents. That is not dead which can eternal lie...
But yes, I think the best rebuttal to Dijkstra-style griping is Perlis' "one can't proceed from the informal to the formal by formal means". That said I also believe kind of like Chesterton's quote about Christianity, they've also mostly not been tried and found wanting but rather found hard and left untried. By myself included, although I do enjoy a spot of the old dependent types (or at least their approximations). There's an economic argument lurking there about how robust most software really needs to be.
Certainly, and it's at that economic argument that I strive to get, I think.
Every so often an article makes the rounds on the correctness and verification methods used for Space Shuttle avionics software and applications of similar import, or if not that then Nancy Leveson's comprehensive 1995 review of the Therac-25 accidents. [1]
Most software doesn't need to be nearly so robust, but Dijkstra constructs his argument as though all did, hinging the inversion on the obvious and frankly shocking cheat across the gap between his pages 14 and 15, ie, that paragraph beginning "But before a computer is ready to perform..." Here he casually, and without direct acknowledgement much less justification, assumes as rhetorically axiomatic that a program, not the machine that executes it, is the original artifact of computing, of which any reification merely constitutes less than perfect instantiation, which he is then free to criticize on the wholly theoretical grounds of mathematical beauty; that is, on the grounds he prefers to inhabit in all cases, whether to do so in any given example makes any sense or not.
If that's his preferred ground, fair enough; after all, he was a mathematician. But his hypocrisy in concealing the insistence by means of subtle rhetoric - mere pages after inveighing against "medieval thinking" by way of an example, his "reasoning by analogy," faulting specifically that argument made by way of specious rhetoric! - casts suspicion on all that both precedes and follows. From a layperson, I could regard it as honest error, but I have known and loved academic mathematicians, and I really can't conceive of any of them leaving intact so consequential a mistake.
Perhaps Dijkstra was different, or merely becoming old, but for someone so heavily invested in pushing a paradigm of programming with mathematical rigor at its core, it seems a remarkable flaw in what should be a crucial argument (especially in advance of a solution for the halting problem). I regret that flaw, because he isn't all wrong about what an engineering paradigm can do to the agency and optionality of programmers especially in industry - not that his one extremely privileged position therein, parallel with Feynman's time at Thinking Machines, would much acquaint him with our desiderata or our constraints - and I would like to find that point made in better company than he was able to give it.
But then, his conception never offered much in preference, did it? The labor of mathematicians is scarce and expensive: what good is a proof assistant to anyone who can't understand its output, much less give it input? And Dijkstra himself, not less strange a bird than any other mathematician, famously did all he could to avoid actually using the machines on whose correct use he here wrote. (Hence his hand, which I complimented so highly before. I also use a fountain pen, but as I said, not so beautifully - and I'm glad I know how to use a keyboard well, instead.)
There would not be more programmers or more software in a world run on such principles, I think, than in this one - on the contrary, less by far. Maybe that would be preferable, but mostly not for the reasons Dijkstra claimed.
[1] http://sunnyday.mit.edu/papers/therac.pdf
I think the real tragedy here is that we can spend *all* of our time trying to improve the quality of our output, but it simply doesn't matter, because as long as the button is where the boss wants it to be and is the right color, all is right with the world.
Literally nothing else matters, and we (or at least I) have wasted a ton of time getting good at writing software.
> One would as sensibly dismiss the concept of an assembly line as "how to build a car if you cannot."
I agree, but I'm not sure this says what you think it does.
The people on the car assembly line may know nothing of engineering, and the assembly line has theoretically been set up where that is OK.
The people on the software assembly line may also (and arguably often do) know nothing of engineering, but it's not clear that it is possible to set up the assembly line in such a way so as to make this OK.
Arguably, the use of LLMs will at least have some utility in helping us to figure this out, because a lot of LLMs are now being used on the assembly line.
Exactly! I've noticed a resounding number of people are writing the same pieces recently; it's almost like everyone's sounding their alarm for the upcoming tsunami. Who's listening? Here's my piece: https://humantodo.dev
> What are you building? Does the tool help or hurt?
> People answered this wrong in the Ruby era, they answered it wrong in the PHP era, they answered it wrong in the Lotus Notes and Visual BASIC era.
I'm assuming you're saying these tools hurt more than help?
In that case I disagree so much that I'm struggling to reply. It's like trying to convince someone that the Earth is not flat, to my mental model.
PHP, Ruby and VB have more successful code written in them than all current academic or disproportionately hyped languages will ever have combined.
And there's STILL software being written in them. I did Visual Basic consulting for a greenfield project last week, despite my current expertise being more with Go, Python, C# and C. And there's RoR work lined up next. So the presence gap between these helpful tools and other minor but over-indexed tools is still increasing.
It's easy to think that the languages one sees more often on HN are the prevalent ones, but they are just the tip of the iceberg.
People built a lot of great stuff with Ruby, PHP, Notes and VB. I don't know what the problem really is.
Personally I think that whole Karpathy thing is the slowest thing in the world. I mean you can spin the wheels on a dragster all you like and it is really loud and you can smell the fumes but at some point you realize you're not going anywhere.
My own frustration with the general slowness of computing (iOS 26, file pickers, build systems, build systems, build systems, ...) has been peaking lately and frankly the lack of responsiveness is driving me up the wall. If I wasn't busy at work and loaded with a few years worth of side projects I'd be tearing the whole GUI stack down to the bottom and rebuilding it all to respect hard real time requirements.
Hey, Visual Basic is still there, and last time I checked it was still the go-to option for OLE Automation.
RoR is no longer at its peak, but it still has its marginal, stable share of the web, while PHP gets the lion's share [1].
Ok, Lotus Notes is really a relic from another era now. But it's not a PL, so not the same kind of beast.
Well, LLMs are also a different beast compared to a PL. They really are the thing that most evokes the expression "taming the beast" when you need to deal with them. So it is indeed about as far from engineering as you can get while still using a computer to build automation. To stay in scientific realms, maybe ethology would be a better starting point than a background in informatics/CS for handling these things.
[1] https://w3techs.com/technologies/comparison/pl-php
Absolutely agree.
I'm watching a team which is producing insane amounts of code for their team size, but the thought needed to make their product a fit predator - one that can run at scale and solve the underlying business problem - has been neglected.
Moving really fast in the wrong direction is no help to anyone.
Engineering is two things:
1. Applied physics - Software is immediately disqualified. Symbols have no physics.
2. Ethics - Lives and livelihoods depend on you getting it right. Software people want to be disqualified because that stuff is so boring, but this is becoming a more serious issue with every passing day.
That might vary by country, but in France we have an official "engineering degree" (diplôme d'ingénieur), which is also a master's degree, and most software developers have one.
So most software developers in France are absolutely software engineers.
Software is applied mathematics, though
And still not applied physics
> Software is immediately disqualified. Symbols have no physics.
Many physical processes are controlled by software.
> After five or six cycles it does become a bit fatiguing. Use the tool sanely.
That's increasingly not possible. This is the first time in 20 years that I've had a programming tool rammed down my throat.
There's a crisis of software developer autonomy and it's actually hurting software productivity. We're making worse software, slower, because the C-levels have bought this fairy tale that you can replace 5 development resources with 1 development resource + some tokens.
That lucky?
In 18 years, AI is the third or fourth tool that has been forced upon a shop/team I've been on. I will say that of those, it is the first one that is genuinely able to make me more productive overall, even with the drawbacks.
Software was an engineering discipline... at some places. And it still is, at some places.
Other places were "hack it until we don't know of any major bugs, then ship it before someone finds one". And now they're "hey, AI agents - we can use that as a hack-o-matic!" But they were having trouble with sustainability before, and they're going to still, except much faster.
All (not some) of the most successful devs I've known in the sense of building something that found market fit and making money off it were terrible engineers. They were fairly productive at building features. That's it. And they were productive - until they weren't. Their work ultimately led to outages, lost data, and sensitive data being leaked (to what extent, I don't even know).
The ones who got acquired - never really had to stand up to any due diligence scrutiny on the technical side. Other sides of the businesses did for sure, but not that side.
Many of you here work for "real" tech companies with the budget and proper skin in the game to actually have real engineers and sane practices. But many of you do not, and I am sure many have seen what I have seen and can attest to this. If someone like the person I mentioned above asks you to join them to help fix their problems, make sure the compensation is tremendous. Slop clean-up is a real profession, but beware.
There used to be a saying along the lines of “while you’re designing your application to scale to 1m requests/min, someone out there is making $1m ARR with php and duct tape”
It feels like this takes on a whole new meaning now we have agents - which I think is the same point you were making
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It's a craft.
Software reminds me more of construction or home contracting work than engineering.
We do the actual building of things
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
Just another reason we should cut software jobs and replace them with A(G)I.
If the human "engineers" were never doing anything precisely, why would the robot engineers need to?
As far as I can tell, the only reason agents exist is that large contexts increase the probability of context poisoning, purely because these models are unable to actually make conceptual decisions about the context.
I was interested in making a semi-autonomous skill improvement program for open code, and I wired up systemd to watch my skills directory; when a new skill appeared, it'd run a command prompt to improve it and cohere it to a skill specification.
It was told to make a lock file before making a skill, then remove the lock file afterwards. Multiple times it'd ignore that, make the skill, then lock and unlock on the same line. I also wanted to lock a skill against future improvements, but that context overrode the skill locking, so instead I used the concept of marking the skills as read-only.
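(If you want to reproduce that setup, the locking and read-only marking can be enforced outside the model entirely; here's a rough Python sketch - the paths and the agent command are hypothetical, not the code I actually ran:)

    # watch_skills.py - rough sketch: watch a skills dir, lock, improve, mark read-only.
    # The skills path and the "agent improve-skill" command are made up for illustration;
    # the point is that locking is enforced by the harness, not left to the agent's context.
    import os
    import stat
    import subprocess
    import time
    from pathlib import Path

    SKILLS_DIR = Path.home() / "skills"   # hypothetical skills directory
    LOCK_SUFFIX = ".lock"

    def improve(skill: Path) -> None:
        lock = skill.with_name(skill.name + LOCK_SUFFIX)
        if lock.exists():
            return  # another run is already improving this skill
        lock.touch()
        try:
            # hypothetical agent invocation; swap in your own prompt/command
            subprocess.run(["agent", "improve-skill", str(skill)], check=True)
        finally:
            lock.unlink(missing_ok=True)
        # mark read-only so later runs leave the skill alone
        skill.chmod(skill.stat().st_mode & ~stat.S_IWUSR & ~stat.S_IWGRP & ~stat.S_IWOTH)

    def main() -> None:
        seen = set()
        while True:  # a systemd path unit could trigger this instead of polling
            for skill in SKILLS_DIR.glob("*.md"):
                if skill not in seen and os.access(skill, os.W_OK):
                    improve(skill)
                    seen.add(skill)
            time.sleep(5)

    if __name__ == "__main__":
        main()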
So in reality, agents only exist because of context poisoning and overlap; they're not some magical balm for improving the speed of work or multiplying effort, they simply prevent context poisoning from what are essentially subprocesses.
Once you realize that, you really have to scale back your expectations, because not only are they just dumb, they're not integrating any real information about what they're doing.
Software engineering is real engineering because we rigorously engineer software the way real engineers engineer real things.
Software engineering is not real engineering because we do not rigorously engineer software the way "real" engineers engineer real things. <--- YOU ARE HERE
Software engineering is real engineering because we "rigorously" engineer software the way "real" engineers engineer real things.
Edit: quotes imply sarcasm.
Largely a problem of VCs and shareholders. After my 12th year of "we'll get around to bug fixes" and "this is an emergency" I realize I am absolutely not doing anything related to engineering. My job means less than the moron PM who graduated bottom of their class in <field>. The lack of trust in me, despite my having spent almost a lifetime in software, is actually so insulting it's hard to quantify.
Now I barely look at ticket requirements, feed it to an LLM, have it do the work, spend an hour reviewing it, then ship it 3 days later. Plenty of fuck off time, which is time well spent when I know nothing will change anyway. If I'm gonna lose my career to LLMs I may as well enjoy burning shareholder capital. I've optimized my life completely to maximize fuck off time.
At the end of the day they created the environment. It would be criminal to not take advantage of their stupidity.
Same experience here. Trust deficits so rampant I question if I've ever been right once in my career. Don't forget the lack of the word "iterate" in the decision makers' vocabulary. And as soon as the word "sunset" is uttered, you know you're in for a bumpy ride once again.
> People answered this wrong in the Ruby era, they answered it wrong in the PHP era
Aren't you conveniently ignoring the fact that there were people who saw through that and didn't go down those routes?
Change it to "Some people" if your pedanticism won't let you follow the flow.
Or better yet point out the better paths they chose instead. Were they wrestling with Java and "Joda Time"? Talking to AWS via a Python library named after a dolphin? Running .NET code on Linux servers under Mono that never actually worked? Jamming apps into a browser via JQuery? Abstracting it up a level and making 1,400 database calls via ActiveRecord to render a ten item to-do list and writing blog posts about the N+1 problem? Rewriting grep in Rust to keep the ruskies out of our precious LLCs?
Asking the wrong questions, using the wrong tools, then writing dumb blog posts about it is what we do. It's what makes us us.
There's this interesting issue that we've never had occupational licensing for software developers despite the sheer incompetence that we see all the time.
On one hand, there's an approach to computing where it is a branch of mathematics that is universal. There are creatures living under the ice on a moon circling a gas giant around another star, and if they have computers, they are going to understand the halting problem (even if they formulate it differently), know bubble sort is O(N^2), and know about algorithms that sort in O(N log N).
On the other hand, we are divided by communities of practice that don't like one another. For instance, there is the "OO sux" brigade which thinks I suck because I like Java. There are still shops where everything is done in a stored procedure (oddly like the fashionable architecture where you build an API server just because... you have to have an API) and other shops where people would think you were brain damaged to go anywhere near stored procs, triggers or any of that. It used to be that Linux enthusiasts thought anybody involved with Windows was stupid, and you'd meet Windows admins, click-click-click-click-clicking over and over again to get IIS somewhat working, who thought IIS was the only web server good enough for "the enterprise".
Now, apart from the instinctual hate for the tools, there really are chronic conceptual problems, for which datetime is the poster child. I think every major language has been through multiple datetime libraries, in and out of the standard lib, in the last 20 years, because dates and times just aren't the simple things we wish they were, and the school of hard knocks keeps knocking us into accepting a complicated reality.
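To make that concrete, here is the classic trap, sketched in Python (3.9+, zoneinfo; tzdata needed on Windows): "add a day" has two defensible answers across a DST boundary.

    # dst_example.py - "add a day" is ambiguous across a DST transition (US DST ends 2025-11-02).
    from datetime import datetime, timedelta, timezone
    from zoneinfo import ZoneInfo

    NY = ZoneInfo("America/New_York")
    start = datetime(2025, 11, 1, 12, 0, tzinfo=NY)   # noon, the day before clocks fall back

    wall = start + timedelta(days=1)                   # wall-clock arithmetic, tz carried along
    elapsed = (start.astimezone(timezone.utc) + timedelta(hours=24)).astimezone(NY)

    print(wall.isoformat())     # 2025-11-02T12:00:00-05:00  (25 real hours after start)
    print(elapsed.isoformat())  # 2025-11-02T11:00:00-05:00  (exactly 24 hours after start)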
> There's this interesting issue that we've never had occupational licensing for software developers despite the sheer incompetence that we see all the time.
I'm laughing over the current Delve/SOC2 situation right now. Everyone pulls for 'licenses' as the first card, but we all know that is equally fraught with trauma. https://xkcd.com/927/
> pedanticism
I don't think this had anything to do with minor details at all. You're trying to convey a point while ignoring the half of the population who didn't go down that route.
> I'm not even sure building software is an engineering discipline at this point. Maybe it never was.
It isn't. Show me the licensing requirements to be a "software engineer." There are none. A 12 year old can call himself a software engineer and there are probably some who have managed to get remote work on major projects.
> It isn't. Show me the licensing requirements
That's assuming the axiom that "engineer" must require licensing requirements. That may be true in some jurisdictions, but it's not axiomatically or definitionally true.
Some kinds of building software may be "engineering", some kinds may not be, but anyone seeking to argue that "licensing requirements" should come into play will have to actually argue that rather than treat it as an unstated axiom.
Depends on the country. In some countries, it is a legal axiom (or at least identity).
For the other countries, though, arguing "some countries do it that way" is as persuasive as "some countries drive on the other side of the road." It's true, but so what? Why should we change to do it their way?
> Depends on the country. In some countries, it is a legal axiom (or at least identity).
As I said, "That may be true in some jurisdictions, but it's not axiomatically or definitionally true.". The law is emphatically not an axiom, nor is it definitionally right or wrong, or correct or incorrect; it only defines what's legal or illegal.
When the article raised the question of whether "building software is an engineering discipline", it was very obviously not asking a question about whether the term 'engineering' is legally restricted in any particular jurisdiction.
To my mind, the term "engineering discipline" implies something roughly analogous to Electrical Engineering, Civil Engineering, Mechanical Engineering, Chemical Engineering.
There is no such rigorous definition for "software engineer" which normally is just a self-granted title meaning "I write code."
In Europe they are. Call yourself an engineer without a degree and you and your company can be sued and hit with a big fine, because here you must be legally accountable for disasters, and of course there are hard constraints.
> In Europe they are
Where specifically? I've been working as a "software engineer" for multiple decades, across three countries in Europe and 2-3 countries outside of Europe, and I've never been sued or received a "big fine" for this. I've even given presentations for government teams and similar, and not a single person has reacted to me (or others) calling ourselves "software engineers" this whole time.
In Germany. I have a degree in mechanical engineering and am thus allowed to call myself an engineer, even though I write software professionally. Colleagues who have studied computer science cannot, as it is not considered an engineering degree but a science degree. This is why most people talk about "software developers" and not "software engineers" (in German), to avoid this problem. That being said, most people would not actually care.
Canada also (at least some provinces). I have quite a few Canadian software engineer colleagues with their iron rings to prove it.
An iron ring does not technically make you an engineer in Canada. It just says you graduated from an engineering program. A P.Eng, which is a professional engineer's license is something you acquire after multiple years of experience and testing.
No, that's plain wrong (I am from Czech Republic). You can even get an "engineering degree" (Ing.) by studying economics.
What the article doesn't touch on is the vendor lock-in that is currently underway. Many corps are now moving to an AI-based development process that is reliant on the big AI providers.
Once the codebase has become fully agentic, i.e., only agents fundamentally understand it and can modify it, the prices will start rising. After all, these loss making AI companies will eventually need to recoup on their investments.
Sure it will be - perhaps - possible to swap the underlying AI used for developing the codebase, but will the alternatives be significantly cheaper? Of course, the invisible hand of the market will solve that problem - something that OPEC has successfully done for the oil market.
Another issue here: once the codebase is agentic, and the price of developers falls enough that it becomes significantly cheaper to hire humans again, will those humans be able to understand the agentic codebase? Is this a one-way transition?
I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices and the global economy, fundamentally everything is getting better.
I have similar concerns.
We will miss SaaS dearly. I think history is repeating, just like with DVDs and streaming - we simply bought the same movie twice.
AI more and more feels the same. Half a year ago Claude Opus was Anthropic's most expensive model - boy, using Claude Opus 4.6 in the 500k version is like paying 1 dollar per minute now. My once decent budgets get hit not after weeks but after days (!) now.
And I am not even using agents or subagents, which would only multiply the costs - for what?
So what we arrive at, more and more, is the same as always: low, medium, and luxury tiers. A boring service with different quality and payment structures.
Proof: you cannot compensate with prompt engineering anymore. Months ago you could fix any model discrepancies by being more clever and elaborate with your prompts, etc.
Not anymore. There is a hidden factor now that accounts for exactly that. It seems that the reliance on skills and different tiers simply moves us away from prompt engineering, which is treated more and more as jailbreaking rather than guidance.
Prompt engineering lately became so mundane that I wonder what vendors were really doing by analyzing the usage data. It seems like vendors tied certain inquiries to certain outcomes modeled by multistep prompting, which was reduced internally to certain trigger sentences to create the illusion of having prompted your result when in fact you haven't.
All you did was ask for the same result thousands of users did before, and the LLM took a statistical approach to deliver it.
This is a great point, and I routinely use it as an argument for why seasoned professionals should work hard to keep their skills and why new professionals should build them in the first place. I would never be comfortable leasing my ability to perform detailed knowledge work from one of these companies.
Sometimes the argument lands, very often it doesn't. As you said, a common refrain is, "but prices won't go up, cost to serve is the highest it will ever be." Or, "inference is already massively profitable and will become more so in the future--I read so on a news site."
And that remark, for me, is unfortunately a discussion-ender. I just haven't ever had a productive conversation with somebody about this after they make these remarks. Somebody saying these things has placed their bets already and is about to throw the dice.
No one ever asks how much it costs Facebook or Uber to serve requests because it is irrelevant, they set prices to maximize their profit like any good monopolist. Similarly the future cartel of big providers will charge their captive users whatever they can get away with, not the cost of inference.
The current discourse around "AI", swarms of agents producing mountains of inscrutable spaghetti, is a tell that this is the future the big players are looking for. They want to create a captive market of token tokers who have no hope of untangling the mess they made when tokens were cheap without buying even more at full price.
Code is so low entropy that smaller and more economical models will be up to the task the same as gigantic models from big providers are today.
No worries there: the huge improvements we see today from GPT and Claude are at their heart just reinforcement learning (CoT, chain of thought, and thinking tokens are just one example of many). RL is the cheapest kind of training one can perform, as far as I understand. Please correct me if that's not the case.
In the economy the invisible hand manages to produce everything cheaper and better all the time, but in the digital space the open source invisible hand makes everything completely free.
> the open source invisible hand makes everything completely free.
In this case the limitation is the compute. Very few people have the compute required to run AI/LLMs locally or for free (at a quality comparable to Claude). So yes, there are plenty of open source models that can be used locally, but you need to invest in hardware to make that happen, especially if you want the quality that is available from the commercial offerings.
Not to speak of the training of those models. It's all there to make it possible to do this locally; however, where's the hardware? AWS? Google? There are hidden costs to the open source model in this case.
>In this case the limitation is the compute.
I agree with most of your points, but computation can be transferred from a place where energy is cheap to a place where it is expensive. Energy for cooking cannot be transferred that way.
See for example Amazon and Google datacenters in the Gulf region. We've also got a whole continent, Australia, on which to put as many solar panels as we desire. Australia goes dark for half a day, every day? Put solar panels on the opposite side of the planet.
Energy is a concern, for cooking, transportation etc. Energy for computation is not.
> the prices will start rising. After all, these loss making AI companies will eventually need to recoup on their investments.
I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.
Every genre-defining startup seems to go through this same cycle where the naysayers tell us that it's all going to collapse once the investment money runs out. This was definitely true for technologies without use cases (remember the blockchain-all-the-things era?) but it is not true for businesses that have actual users.
Some early players may go bust by chasing market share without a real business plan, like the infamous Webvan grocery delivery service. But even Webvan was directionally correct, with delivery services now a booming business sector.
Uber is another good example. We heard for years that ridesharing was a fad that would go away as soon as the VC money ran out. Instead, Uber became a profitable company and almost nobody noticed because the naysayers moved on to something else.
AI is different because the hardware is always getting faster and cheaper to operate. Even if LLM progress stalled at Opus 4.6 levels today, it would still be very useful and it would get cheaper with each passing year as hardware improved.
> I'm sure the pro-AIs will explain that technology will only get cheaper and better and that fundamentally it ain't an issue. Just like oil prices
Comparing compute costs to oil prices is apples to oranges. Oil is a finite resource that comes out of the ground and the technology to extract it doesn't improve much over decades. AI compute gets better and cheaper every year because the technology advances rapidly. GPU servers that were as expensive as cars a few years ago are now deprecated and available for cheap because the new technology is vastly faster. The next generation will be faster still.
If you're mentally comparing this to things like oil, you're not on the right track
> almost nobody noticed
Rideshare costs are much higher than they have been in years past. Everyone noticed
> Oil is a finite resource that comes out of the ground
Yes but the chips, hardware, copper cables, silicon and all the rest of the components that make up a server are finite. Unless these magically appear from outer space, we'll face the same resource constraints as everything else that is pulled out of the ground.
These components are also far more fragile to source; see COVID and the collapse of global supply chains. The factories that create these components are also expensive to build and fragile to maintain. See the Dutch company that seems to be the sole supplier of certain manufacturing capabilities. [1]
> I would bet a lot of money that the price of LLM assistance will go down, not up, as the hardware and software advance.
My bet would be that it would fuel the profits of AI companies rather than make the price of AI come down. Oversupply makes prices come down, but if supply is kept artificially low, then prices stay high.
That's the comparison to OPEC and oil. There is plenty of oil to go around yet the supply is capped and thereby prices kept high. There is no guarantee that savings in hardware or supply will be passed on by AI corps.
Indeed, there is no guarantee that there will be serious competition in the market. OPEC is a cartel, so why not an AI cartel? At the moment, all major players in AI are based in the same geopolitical sphere, making that more likely, IMHO.
In the end, it's all speculation about what will happen. It just depends on which fairy tale one believes in.
[1]: https://en.wikipedia.org/wiki/ASML_Holding
> Yes but the chips, hardware, copper cables, silicon and all the rest of the components that make up a server are finite. Unless these magically appear from outer space, we'll face the same resource constraints as everything else that is pulled out of the ground.
Raw material cost is not a driver of datacenter GPU costs.
> Over supply makes price come down but if supply is kept artificially low, then prices stay high.
Where are you getting "supply kept artificially low" when we're in the middle of an explosion of datacenter buildouts and AI companies?
We're in a race to the bottom on pricing. I haven't seen a realistic argument for why you think prices are going to go up. You're starting with a conclusion and trying to find reasons it might be true.
While I fundamentally agree with the premise of compute getting cheaper by the year, I think a missed consideration here is that these models also require exponentially more compute to train with each iteration, in a way that has arguably outpaced the advances in compute.
Whether a generalized and broadly usable model can be trained within some N multiple of our current compute availability, allowing the price to come down with iterative compute advances, is yet to be seen. With the current race to the top in SOTA models and the increasingly small improvements over previous generations, I have a feeling the scaling need for compute will outpace the improvements in our hardware architecture, and that's if Moore's law even holds as we start to reach the bounds of physics rather than engineering.
However, as it stands today, essentially none of these providers are profitable, so it's really a question of whether that disconnect resolves within their current runway or whether they'll be required to raise their price point to stay alive and/or raise more capital. It's pure conjecture either way.
This is a good point. Some of the AI companies are trying to hook CS students so they'll only know "dev" as a function of their products. First one's free, as they say (the drug dealers).
I agree, that is the great danger: that CS students aren't even taught the fundamentals of computer science any longer. It would be the equivalent of physics students not learning Newton's laws or E=mc^2.
Probably there is an issue with how much there is in CS - each programming language basically represents a different fundamental approach to coding machines. Each paradigm has its application, even COBOL ;)
Perhaps CS has not - yet - found its fundamental rules and approaches. Unlike other sciences that have hard rules and well trodden approaches - the speed of light is fixed but not the speed of a bit.
Useful context here is that the author wrote Pi, which is the coding agent framework used by OpenClaw and is one of the most popular open source coding agent frameworks generally.
> “Heard joke once: Man goes to doctor. Says he's depressed. Says life seems harsh and cruel. Says he feels all alone in a threatening world where what lies ahead is vague and uncertain. Doctor says, "Treatment is simple. Great clown Pagliacci is in town tonight. Go and see him. That should pick you up." Man bursts into tears. Says, "But doctor...I am Pagliacci.”
https://www.goodreads.com/quotes/141645-heard-joke-once-man-...
you get me
Good joke.
That's hilarious. I've been following Mario since his work on libGDX and RoboVM.
His blog post on pi is here: https://mariozechner.at/posts/2025-11-30-pi-coding-agent/
For reference, the creator of OpenClaw has roughly the opposite philosophy:
https://steipete.me/posts/2025/shipping-at-inference-speed
That's a great shout, because I'm sure a lot of people would otherwise just dismiss this take as coming from another anti-AI skeptic. But he probably has more experience working with LLMs and agents than most of us on this site, so his opinion holds more weight than most.
If you were going to dismiss an argument because of who it comes from rather than its content, that is a flaw in your thinking. The argument is correct, or it isn't, no matter who said it.
Your ability to evaluate whether the argument is correct is limited. In theory, the author and the correctness of the argument are unrelated; in practice, the degree of experience the author has with the topic they’re making an argument on does indeed have some correlation with the argument and should influence the attention you give to arguments, especially counterintuitive ones.
That doesn't work for me. Knowing who is making the argument is important for understanding how credible the parts of their argument that derive from their personal experience are.
If someone anonymous says "Using coding agents carelessly produces junk results over time" that's a whole lot less interesting to me than someone with a proven track record of designing and implementing coding agents that other people extensively use.
Appeal to authority, the logical fallacy, is not attempting to claim that authority is irrelevant or has zero signal whatsoever.
Someone making an argument needs relevant experience/context to substantiate their argument. Just because the end opinion is "correct", doesn't mean they arrived there in a reasonable way.
> The argument is correct, or it isn't, no matter who said it.
Yes, but we all have insufficient intelligence and knowledge to fully evaluate all arguments in a reasonable timeframe.
Argument from authority is, indeed, a logical fallacy.
But that is not what is happening here. There is a huge difference between someone saying "Trust me, I'm an expert" and a third party saying "Oh, by the way, that guy has a metric shitton of relevant experience."
The former is used in lieu of a valid argument. The latter is used as a sanity check on all the things that you don't have time to verify yourself.
I think it's kind of like technical indicators. Obviously they mean nothing, but because other people believe them you have to take them into account. So when someone with authority says something assertively, many critical thinking faculties go out the window for many people.
... people like that have a way of writing articles that don't seem to say anything at all.
> Companies claiming 100% of their product's code is now written by AI consistently put out the worst garbage you can imagine. Not pointing fingers, but memory leaks in the gigabytes, UI glitches, broken-ass features, crashes
One thing about the old days of DOS and original MacOS: you couldn't get away with nearly as much of this. The whole computer would crash hard and need to be rebooted, all unsaved work lost. You also could not easily push out an update or patch --- stuff had to work out of the box.
Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
Not that I want to go back to DOS but Wordperfect 5.1 was pretty damn rock solid as I recall.
> Modern OSes with virtual memory and multitasking and user isolation are a lot more tolerant of shit code, so we are getting more of it.
It's not the glut of compute resources; we've already accepted bloat in modern software. The new crutch is treating every device as "always online", paired with the mantra of "ship now! push fixes later." It's easier to set up a big, complex CI pipeline you push fixes into so it OTA-patches the user's system. This way you can justify pushing broken, unfinished products to beat your competitors doing the same.
I think you're just recalling the few software products that were actually good. There was plenty of crap software that would crash and lose your work in the old days.
I always found it funny how Word on Windows 3.1/95 would have a daydream moment and just completely lock up, usually when you were about to save the document.
I still save stuff every few minutes out of habits formed in the 90s.
Old DOS stuff could be either a total nightmare or some of the most brilliant code you had ever seen. That's just the way having no guard rails goes.
Lol right!
Remember when OS uptime was super duper important? Now it's a given that you can basically never restart your computer and be fine.
Another factor at work is the use of rolling updates to fix things that should better have been caught with rigorous testing before release. Before the days of 'always on' internet it was far too costly to fix something shipped on physical media. Not that everything was always perfect, but on the whole it was pretty well stress-tested before shipping.
The sad truth is that now, because of the ease of pushing your fix to everything while requiring little more from the user than that their machine be more or less permanently connected to a network, even an OS is dealt with as casually as an application or game.
> it sure feels like software has become a brittle mess, with 98% uptime becoming the norm instead of the exception, including for big services
As somebody who has been running systems like these for two decades: the software has not changed. What's changed is that before, nobody trusted anything, so a human had to manually do everything. That slowed down the process, which made flaws happen less frequently. But it was all still crap. Just very slow-moving crap, with more manual testing and visual validation. Still plenty of failures, but it doesn't feel like it fails a lot if they're spaced far apart on the status page. The "uptime" is time-driven, not bugs-per-lines-of-code driven.
DevOps' purpose is to teach you that you can move quickly without breaking stuff, but it requires a particular way of working, that emphasizes building trust. You can't just ship random stuff 100x faster and assume it will work. This is what the "move fast and break stuff" people learned the hard way years ago.
And breaking stuff isn't inherently bad - if you learn from your mistakes and make the system better afterward. The problem is, that's extra work that people don't want to do. If you don't have an adult in the room forcing people to improve, you get the disasters of the past month. An example: Google SREs give teams error budgets; the SREs are acting as the adult in the room, forcing the team to stop shipping and fix their quality issues.
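(Roughly, and with made-up numbers, the error budget mechanism is just arithmetic on the SLO:)

    # error_budget.py - rough sketch of the "adult in the room" arithmetic.
    # A 99.9% monthly availability SLO leaves a budget of 0.1% of the month.
    SLO = 0.999
    minutes_per_month = 30 * 24 * 60            # 43,200 minutes in a 30-day month

    budget_minutes = (1 - SLO) * minutes_per_month
    print(budget_minutes)                       # ~43.2 minutes of allowed downtime

    # If incidents have already burned, say, 50 minutes this month, the budget is
    # spent: the team stops shipping features and works on reliability instead.
    burned = 50
    print("freeze releases" if burned > budget_minutes else "keep shipping")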
One way to deal with this in DevOps/Lean/TPS is the Andon cord. Famously a cord introduced at Toyota that allows any assembly worker to stop the production line until a problem is identified and a fix worked on (not just the immediate defect, but the root cause). This is insane to most business people because nobody wants to stop everything to fix one problem, they want to quickly patch it up and keep working, or ignore it and fix it later. But as Ford/GM found out, that just leads to a mountain of backlogged problems that makes everything worse. Toyota discovered that if you take the long, painful time to fix it immediately, that has the opposite effect, creating more and more efficiency, better quality, fewer defects, and faster shipping. The difference is cultural.
This is real DevOps. If you want your AI work to be both high quality and fast, I recommend following its suggestions. Keep in mind, none of this is a technical issue; it's a business process issue.
It's a systems engineering job. You need to provide context, acceptable failure modes, and tests at each level for validation. Identify false coupling, poor interfaces, and things that don't match the business context during the agent planning phase. Then communicate / translate to others so their decisions improve instead of destroying the system by optimizing only for their local situation.
It also seems like massive consolidation has caused issues too. Everyone is on GitHub. Everyone is on AWS. Everyone is behind Cloudflare. Whenever an issue happens here, it affects everyone and everyone sees it.
In the past with smaller services those services did break all the time, but the outage was limited to a much smaller area. Also systems were typically less integrated with each other so one service being down rarely took out everything.
The power company is massively consolidated, as is the water supply, telephone service. These are monolithic, monopolistic entities. But they are also very reliable (failures are usually isolated by region, or a result of natural disaster).
What leads to more failure is when you don't engineer those consolidated entities to be reliable. Tech companies have none of the legal requirements or incentives to be reliable, the way physical infrastructure companies do. I agree that the tighter integration is an issue, but the root cause is tech companies have no incentive other than profits. If they're making profits, everything's fine.
I mean, recommend professional software engineering licenses here on HN and it goes over like a turd in a punch bowl. Everyone knows where the search for more profit was going; no one wanted to get off the ride, though.
Super good take - the Andon cord is needed everywhere.
> One way to deal with this in DevOps/Lean/TPS is the Andon cord.
Many years ago, I started working for chip companies. It was like a breath of fresh air. Successful chip companies know the costs (both direct money and opportunity) of a failed tapeout, so the metaphorical equivalent of this cord was there.
Find a bug the morning of tapeout? It will be carefully considered and triaged, and maybe delay tapeout. And, as you point out, the cultural aspect is incredibly important, which means that the messenger won't be shot.
I understand your pain; we're just at peak hype. I think people will learn to backtrack and use the tool in a more sensible way. It always happens. I remember when MongoDB and other NoSQL databases came out: people went as far as to say that "SQL is dead" and refused to use a normal SQL database for anything, not even for the most obviously relational application. People would store everything as key-value pairs with no schema and do all the joins in the application layer. Fast forward 10 years and we're back to using SQL for most of our applications. NoSQL hasn't disappeared, it has just been reduced to the niche where it's useful.
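(For anyone who missed that era, "joins in the application layer" looked roughly like this - table and key names made up for illustration:)

    # app_layer_join.py - what "store key-value pairs and join in the application" meant.
    # With SQL the database does this in one query, roughly:
    #   SELECT users.name, orders.total FROM orders JOIN users ON users.id = orders.user_id;
    # With a schemaless key-value store, the application fetches and stitches by hand:

    kv = {
        "user:1": {"name": "Ada"},
        "user:2": {"name": "Linus"},
        "order:100": {"user_id": 1, "total": 42},
        "order:101": {"user_id": 2, "total": 7},
    }

    report = []
    for key, order in kv.items():
        if not key.startswith("order:"):
            continue
        user = kv[f"user:{order['user_id']}"]   # one extra lookup per order, by hand
        report.append((user["name"], order["total"]))

    print(report)   # [('Ada', 42), ('Linus', 7)]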
Just yesterday I was discussing many of the ideas presented here with a coworker. I had just walked out of a workshop led by $BIGTECHCOMPANY where someone presented the following toy example:
A service goes down. He tells the agent to debug it and fix it. The agent pulls some logs from $CLOUDPROVIDER, inspects the logs, produces a fix and then automatically updates a shared document with the postmortem.
This got me thinking that it's very hard to internalize both issue and solution - updating your model of the system involved - because there is not enough friction for you to spend time dealing with the problem (coming up with hypotheses, modifying the code, writing the doc). I thought about my very human limitation of having to write things down on paper so that I can better recall them.
Then I recalled something I read years ago: "Cars have brakes so they can go fast."
Even assuming it is now feasible to produce thousands of lines of quality code, there is a limitation on how much a human can absorb and internalize about the changes introduced to a system. This is why we will need brakes -- so we can go faster.
The gap in your example is that a human had to realize the system is broken so that he could nudge the agent into fixing it. He can fix that gap by updating the agent to recognize when the system breaks. This now becomes the level at which he debugs… did the agent recognize the failure and self-heal, or not?
And at that point, if the autonomous system breaks, realized it’s broken, and fixes itself before you even notice… then do you need to care whether you learn from it? I suppose this could obfuscate some shared root cause that gets worse and worse, but if your system is robust and fault-tolerant _and_ self-heals, then what is there to complain about? Probably plenty, but now you can complain about one higher level of abstraction.
This aligns with my observation from product design point as well.
Product design has a slightly different problem than engineering, because the speed of development is so high we cannot dogfood and play with new product decisions and features. By the time I've realized we made a stupid design choice and it doesn't really work in the real world, we've already built 4 features on top of it. Everyone makes bad product decisions, but it used to be easy and natural to back out of them.
It's all about how we utilize these things; if we focus on sheer speed it just doesn't work. You need to own architecture and product decisions. You need to use and test your products with humans (and automate that as regression testing). You need to be able to hold all of the product or architecture in your mind and help agents make the right decisions with all the best practices you've learned.
Agree. The issue was never "how can we get our engineers to squirt out more lines of code in a day?" It has always been "how can we effectively iterate using customer feedback to deliver the highest quality product?" That type of thing needs time to bake.
It occurred to me on my walk today that a program is not the only output of programming.
The other, arguably far more important output, is the programmer.
The mental model that you, the programmer, build by writing the program.
And -- here's the million dollar question -- can we get away with removing our hands from the equation? You may know that knowledge lives deeper than "thought-level" -- much of it lives in muscle memory. You can't glance at a paragraph of a textbook, say "yeah that makes sense" and expect to do well on the exam. You need to be able to produce it.
(Many of you will remember the experience of having forgotten a phone number, i.e. not being able to speak or write it, but finding that you are able to punch it into the dialpad, because the muscle memory was still there!)
The recent trend is to increase the output called programs, but decrease the output called programmers. That doesn't exactly bode well.
See also: Preventing the Collapse of Civilization / Jonathan Blow (Thekla, Inc)
https://www.youtube.com/watch?v=ZSRHeXYDLko
Peter Naur had that realization back in 1985: https://pages.cs.wisc.edu/~remzi/Naur.pdf
Nature will handle this in time. Just expect to see a "Bear Stearns moment" in the software world if this spirals completely out of control (and companies don't take a hint from recent outages).
I’m worried we end up with an AIG moment, and we all end up on the hook.
That's a valid fear imo.
> You realize you can no longer trust the codebase.
This cuts to the problem and is excellent framing. A rogue employee can achieve the same, but probably less quickly, and we've designed systems to help catch them early.
> You installed Beads, completely oblivious to the fact that it's basically uninstallable malware.
Did I miss something? I haven't used it in a minute, but why is the author claiming that it's "uninstallable malware"?
Have a read through everything that's needed for a full uninstall: https://gist.github.com/banteg/1a539b88b3c8945cd71e4b958f319...
Minimalist alternative with no hooks or dependencies for the curious: https://github.com/wedow/ticket
Malware might be a bit of stretch but could refer to this issue?
https://github.com/steveyegge/beads/issues/1857
It's not really malware, but it's a mess. It installed so much shit and interfered with your git hooks and other things. I kind of gave up on it and just went back to using the built-in Claude Code TodoWrite tasks.
It managed to throw itself into a global file that Claude used, which caused beads to appear in random projects on my machine. Because of how it got there, the agent attempted to re-install beads after I had already removed it, because the git hook errored.
Haven't tried it, but this rewrite might be better?
https://github.com/Dicklesworthstone/beads_rust
Try https://github.com/hmans/beans - I find it a refreshingly pragmatic take that works great with my agents use.
Maybe they meant un-uninstallable?
I only have so long on earth. (I have no idea how long) I need things to be faster for me. Sometimes that means I need to take extra time now so they don't come back to me later.
I'm capturing videos of all the bugs I am seeing as of late. The folder is filling fast. I'll write a compilation post but I'm thinking a techno remix video could be fitting too.
If there are any common apps which are unhinged please do share your experiences. LinkedIn was never great quality but it's off the charts. Also catching some on Spotify.
I think the core idea here is a good one.
But in many agent-skeptical pieces, I keep seeing this specific sentiment that “agent-written code is not production-ready,” and that just feels… wrong!
It’s just completely insane to me to look at the output of Claude code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.”
Yes, there are still issues, and yes, keeping mental context of your codebase’s architecture is critical, but I’m sorry, it just feels borderline archaic to pretend we’re gonna live in a world where these agents have to have a human poring over every single line they commit.
Were you not reviewing every line when a human wrote it before it went to prod? I think the output of these tools is about as good as a human would write - which means it needs thorough review if I’m going to be on the hook to resolve its issues at 2AM.
Yeah in many places we had two humans with context on every line, and now we're advocating going to zero?
Maybe that's the distinction. If I write it, you can call me at 2AM. If an AI wrote it, call the AI at 2AM.
Oh, it can't take the phone call and fix the issue? Then I'm reviewing its output before it goes into prod.
This is a weird analogy. You can ask the A.I. to fix the issue at any time of day (assuming the person asking is someone with enough technical knowledge to evaluate the fix, at least).
You won't always be able to get ahold of someone at 2am. You won't be able to get ahold of me at 2am, for example. It'll throw some notification on my screen and I won't see it until I wake up.
Maybe in the future humans won't need to pore over every line. However, I quickly learn which interns I can trust and whose code I need to pore over - I don't trust AI because it has been wrong too often. I'm not saying AI is useless - I do most of my coding with an agent, but I don't trust it until I verify every line.
I did this for a while… and until Opus 4.5, I couldn't fully trust the model. But at this point, while it does make the occasional mistake, I don't need to scrutinize every line. Unit and integration tests catch the bugs we can imagine, and the bugs we can't imagine take us by surprise, which is how it has always been.
Even with 4.6 I find there are a lot of mistakes it makes that I won't allow. Though it is also really good at finding complex thread issues that would take me forever...
We live in a world where every line of code written by a human should be reviewed by another human. We can't even do that! Nothing should go straight to prod ever, ever ever, ever.
> Nothing should go straight to prod ever, ever ever, ever.
I'm one-shotting AI code for my website without even looking at it. Straight to prod (well, github->cf worker). It is glorious.
Prod in this context doesn't refer to one person's website for their personal project. It refers to an environment where downtime has consequences, generally one that multiple people work on and that many people rely on.
It is not a personal project.
This is a bit of a no true Scotsman take but I agree with it anyway.
There's a middle ground here. Code for your website? Sure, whatever, I assume you're not Dell and the cost of your website being unavailable to some subset of users for a minute doesn't have 5 zeroes on the end of it. If you're writing code being used by something that matters though you better be getting that stuff reviewed because LLMs can and will make absolutely ridiculous mistakes.
> There's a middle ground here.
I'm responding to this statement: "Nothing should go straight to prod ever, ever ever, ever."
It's tough to not interpret this as "I don't care about my website". Do you not check the copy? Or what if AI one-shots something that will harm your reputation in the metadata?
Then I'll read the diffs after the fact and have fix AI it. ¯\_(ツ)_/¯
That sounds better. I assume the stakes are low enough that you are happy reviewing after the fact, but setting up a workflow to check the diffs before pushing to production shouldn't be too difficult
Of course. I could do a PR review process, but what's the point. It is just a static website.
That a personal website? Prod means different things in different contexts. Even then, I'd be a bit worried about prompt injection unless you control your context closely (no web access etc).
Prompt injection?! Give me an example.
Were people reviewing your hobby projects previously? Were you on-call for your hobby website? If not - then it sounds like nothing changed?
This is my business website.
[Note: It may be very risky to submit anything to this user's site]
I'm not sure doing silly things, then advertising it, is a great way to do business, but to each their own.
So many assumptions.
It is a static website hosted on CF workers.
> Nothing should go straight to prod ever, ever ever, ever
Air Traffic Controller software - sure. For the 99% of other software out there that is not mission-critical (like Facebook), just punch it to production - "move fast and break shit" has been cool since way before "AI"
There's a lot of software in between Air Traffic Controller and Facebook. And honestly would Meta be okay with Instagram or Facebook going down even for just a few minutes? I'd think at this point that'd be considered a fairly severe incident.
Even if we ignore criticality, things just get really messy and confusing if you push a bunch of broken stuff and only try to start understanding what's actually going on after it's already causing issues.
> And honestly would Meta be okay with Instagram or Facebook going down even for just a few minutes?
sure, they coined the term “move fast and break things”
and not every “bug” brings the system down, there are bugs after bugs after bugs in both facebook and insta being pushed to production daily, it is fine… it is (almost) always fine. if you are at a place where “deploying to production” is a “thing” you better be at some super mission-critical-lives-at-stake project or you should find another project to work on.
> there are bugs after bugs after bugs
These are the bugs after bugs after bugs after bugs after bugs.
Simply put, they go through dev, QA, and UAT first before they become the bugs that we see. When you're running an organization on software of any size, writing bugs that take the software down is extremely easy, and data corruption is even easier.
I wholeheartedly agree. I just don't agree with:
> We live in a world where every line of code written by a human should be reviewed by another human. We can't even do that! Nothing should go straight to prod ever, ever ever, ever
Things should 100% go to prod whenever they need to go to prod. While this in theory makes sense, there is an insane amount of ceremony in a large number of places I have seen personally, where it takes an act of congress to deploy to production. All the while it is just ceremony: people are hunting other people with links to PRs sent to various Slack channels - "hey, anyone available to take a look at this" - and then someone is like "I know nothing about that service/system but I'll take a look and approve." I would wager heavily that this "we must review every line of code" - where actually implemented - is largely a ceremony. Today I deployed three services to production without anyone looking at what I did. Deploying to production should absolutely be a non-event in places that are run well and where the right people are doing their jobs.
I'm sure some companies do this poorly but there's lots of places where code review happens on every PR and there's processes and systems in place to make sure it's an easy process (or at least, as easy as it should be). Many large tech companies have things pushed to prod automatically many, many times per day and still have code review for all changes going out.
Even with code review, a well configured CI/CD system is going to include a wealth of automated unit and integration tests, and then also a complex deploy system involving canaries and ramp-up and blue/green deployment and flags and monitoring and alerts that's backed by a pager and on-call rotation with runbooks. Code review simply will never be perfect and catch 100% of issues, so systems are designed with that in mind.
So then the question is: what's actually reasonable given today's code-generating tools? 0% review seems foolish, but 100% seems similarly unreal. Automated code review systems like CodeRabbit are, dare I even say, reasonable as a first line of defense these days. It all comes down to developer velocity balanced against system stability. Error budgets like those Google's SRE org is able to enforce against (some of) the services they support are one way of accomplishing that, but those are hard to put into practice.
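For anyone unfamiliar with the mechanics, here's a rough sketch of the error-budget arithmetic. The SLO and the observed downtime are made-up numbers; a real setup would feed them in from monitoring rather than hard-coding them:

```python
# Rough sketch of the error-budget idea, with made-up numbers.

SLO = 0.999                      # 99.9% availability target for the window
PERIOD_MINUTES = 30 * 24 * 60    # a rolling 30-day window

budget_minutes = (1 - SLO) * PERIOD_MINUTES   # ~43.2 minutes of "allowed" downtime
observed_bad_minutes = 37.5                    # downtime accrued so far this window

remaining = budget_minutes - observed_bad_minutes
print(f"Remaining error budget: {remaining:.1f} minutes")

# The policy part: once the budget is spent, risky deploys pause until it recovers.
if remaining <= 0:
    print("Budget exhausted: freeze feature deploys, ship reliability work only.")
else:
    print("Budget available: keep the normal deploy cadence.")
```

The hard part isn't the arithmetic, of course; it's getting an organization to actually honor the freeze when the budget runs out.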
So then, as you say, it takes an act of Congress to get anything deployed.
So in the abstract, imo it all comes down to the quality of the automated CI/CD system, and developers being on call for their service so they feel the pain of service unreliability and don't just throw code over the wall. But it's all talk at this level of abstraction. The reality of a given company's office politics and the amount of leverage the platform teams and whatever passes for SRE there have vs the rest of the company make all the difference.
>sure, they coined the term “move fast and break things”
Yeah I'm aware, but as any company gets larger and has more and more traffic (and money) dependent on their existing systems working, keeping those systems working becomes more and more important.
There's lots of things worth protecting, to ensure that people keep using your product, that fall short of "lives are at stake". Of course it's a spectrum, but there are lots of large enterprises that aren't saving lives but still care a lot about making sure their software keeps running.
How do you know which lines you need to review and which you don't?
Does it feel archaic because LLMs are clearly producing output of a quality that doesn't require any review, or because having to review all the code LLMs produce clips the productivity gains we can squeeze out of them?
It’s not archaic, it’s due diligence, until we can expect AI to reliably apply the same level of diligence — which we’re still pretty far off from.
You sound like you are working on unimportant stuff. Sure, go ahead, push.
Honestly a lot of useful software is ‘unimportant’ in the sense that the consequences of introducing a bug or bad code smell aren’t that significant, and can be addressed if needed. It might well be for many projects the time saved not reviewing is worth dealing with bugs that escape testing. Also, it’s entirely possible for software to be both well engineered and useless.
The article didn't say to read every line though. Just the interesting ones. If you don't know where the interesting ones are, you have already lost.
It's a conversation I've had many times in my career and I'm sure I'll have many more. We've got code that seems plausible on a surface level, at a glance it solves the problem it's meant to solve - why can't we just send it to prod and address whatever problems we find with it later?
The answer is that it's very easy for bad code to cause more problems than it solves. This:
> Then one day you turn around and want to add a new feature. But the architecture, which is largely booboos at this point, doesn't allow your army of agents to make the change in a functioning way.
is not a hypothetical, but a common failure mode which routinely happens today to teams who don't think carefully enough about what they're merging. I know a team of a half-dozen people that's been working for years to dig themselves out of that hole; because of bad code they shipped in the past, changes that should have taken a couple hours without agentic support take days or weeks even with agentic support.
> It’s just completely insane to me to look at the output of Claude code or Codex with frontier models and say “no, nothing that comes out of this can go straight to prod — I need to review every line.”
It's insane to me that someone can arrive at any other conclusion. LLMs very obviously put out bad code, and you have no idea where it is in their output. So you have to review it all.
You say it's borderline archaic. I say trusting agents enough to not look at every single line is an abdication of ethics, safety, and engineering. You're just absolving yourself of any problems. I hope you aren't working in medical devices or else we're going to get another Therac-25. Please have some sort of ethics. You are going to kill people with your attitude.
Almost nobody works on medical devices... And some of you lucky folks might be working with mega minds every day, but the rest of us are but shadows and dust. I trust 5.4 or 4.6 more than most developers. Through applying specific pressure using tests and prompts, I force it to build better code for my silly hobby game than I ever saw in real production software. Before those models I was still on the other side of the line, but the writing is on the wall.
Depends on your prod.
For an early startup validating their idea, that prod can take it.
For a platform as a service used by millions, nope.
Not having a code review process is archaic engineering practice at this point (at any point in history, really), be it for human-written or AI-written code.
If you keep the scope small enough it can be production ready ootb, and with some stuff (eg. a throwaway React component) who really cares. But I think it's insane to look at the output of Claude Code or Codex with frontier models and say "yep, that looks good to me".
Fwiw OP isn't an agent skeptic, he wrote one of the most popular agent frameworks.
This assumes that only (AI/agentic) stupidity comes into play, with no malice in sight. But if things go wrong because you didn't notice the stupidity, malice will pass through too. And there is a big profit opportunity, and a broad vulnerable market, for malice. It's not just correctness or uptime that comes into play, but bigger risks of vulnerabilities or other maliciously injected content.
> And I would like to suggest that slowing the fuck down is the way to go. Give yourself time to think about what you're actually building and why. Give yourself an opportunity to say, fuck no, we don't need this. Set yourself limits on how much code you let the clanker generate per day, in line with your ability to actually review the code.
This is a great point.
I have been avoiding LLMs for a while now, but realized that I might want to try working on a small PDF-book-to-Markdown conversion project[0]. I like Claude Code because it's command line. I'm realizing you really need to architect with very precise language to avoid mistakes.
I didn't try to have a single prompt do everything at once. I prompted Claude Code to do the conversion process section by section of the document. That seemed to reduce the mistakes the agent would make.
[0]: https://www.scottrlarson.com/publications/publication-my-fir...
> There were precursors like Aider and early Cursor, but they were more assistant than agent.
I use Aider on my private computers and Copilot at work. Both feel equally powerful when configured with a decent frontier model. Are they really generations apart? What am I missing?
I think before even being able to entertain the thought of slowing the fuck down, we need to seriously consider divorcing ourselves from productivity. Or at least asking for a break, so you can go for a walk in the park, meet some friends, and reflect on how you are approaching development.
I think this is very good take on AI adoption: https://mitchellh.com/writing/my-ai-adoption-journey. I've had tremendous success with roughly following the ideas there.
> The point is: let the agent do the boring stuff, the stuff that won't teach you anything new, or try out different things you'd otherwise not have time for. Then you evaluate what it came up with, take the ideas that are actually reasonable and correct, and finalize the implementation.
That's partially true. I've also had instances where I could have very well done a simple change by myself, but by running it through an agent first I became aware of complexities I wasn't considering and I gained documentation updates for free.
Oh, and the best part: if in three months I'm asked to compile a list of things I did, I can just look at my session history, cross-reference it with my development history in my repositories, and paint a very good picture of what I've achieved. I can even rebuild the decision process behind designing the solution.
It's always a win to run things through an agent.
Once again I appeal: who is shipping code they don't understand? Those who do so are creating the problem, not the coding agent.
I use agents all day, every single day. But I also push back, understand what was written, and ensure I read and understand everything I ship.
Does it slow me down? Uh, yup. You bet.
Yes, this article literally advocates for slowing the fuck down, but it also makes the coding agents out to be the problem, and they're not.
The problem is not the AI users who frequent this board and are shipping code they don't understand. It is the moronic MBA trained executives who can only think about speed, more speed, more revenue for less cost. Quality is an optional expense. A race where the finish line is the current fiscal quarter, to hell with everything after that. The "we can fix it later" Band-Aid over a tumor.
Sensible engineers who look at AI as another (potentially powerful) tool in the toolbox "aren't forward-looking enough". I watched this happen in real time at my previous company, where every discussion about quality was interpreted as slowing down progress, and the only thing that was looked on favorably was the idea of replacing developers with machines - because they are "cheaper and faster".
The logical minds here on HN are less prone to believing in magic and AI fairies, but they are often not the ones setting the rules. And the number of companies being run by people with critical thinking skills is getting smaller by the day.
It's a matter of affordances. The path of least resistance with agents is to let it commit whatever it wants. That's a natural outcome of the design and implementation of agents.
Yes, humans are accountable for the ultimate output. But so are the people who design and build these automation tools. As the saying goes, the purpose of a system is what it does.
Great take, spot on. Very similar to Armin's post the other day about things taking time. The need for speed and its ill effects are being rediscovered (again).
Reminds me of Carson Gross' very thoughtful post on AI also: https://htmx.org/essays/yes-and/
[Y]ou are going to fall into The Sorcerer’s Apprentice Trap, creating systems you don’t understand and can’t control.
This is what I call content based on 'garbage', because garbage is the random collection of people's stuff. You can try to make sense of, and commentary on, a society through the garbage dump, but it's pretty superficial. It doesn't tell you a lot about any real person's motivations, so it's not a great basis for commenting on real people. OP's comments are on the collection of things that they happen to come across through news and social media. Sure, it looks like a lot is happening, but look at any one person's or business's approach and it will make a lot more sense. Yes, I realize people are producing content that appeals to the 'garbage' mindset, but it's obviously theater. A system that writes 10,000 lines of code for you a week is headline theater.
I think this post should be directed to every Typescript developer.
I think a lot of this is just Typescript developers. I bet if you removed them from the equation, most of the problems he's writing about go away. Typescript developers didn't even understand what React was doing without an agent; now they are just one-shot prompting features, web apps, CLIs, and desktop apps and spitting them out to the world.
The prime example of this is literally Anthropic. They are pumping out features, apps, and CLIs, and EVERY single one of them is released broken.
I like the article and what it says, but I'm not sure why the cursing was necessary.
I am "playing" with both pi and Claude (in docker containers) with local llama.cpp and as an exercise, I asked both the same question and the results are in this gist:
https://gist.github.com/ontouchstart/d43591213e0d3087369298f...
(Note: pi was written by the author of the post.)
Now it is time to read them carefully without AI.
What I have learned from the exercise above is that we paid more attention and spent more resources on "metadata" than on real data. They are the rabbit holes that lead us to more metadata and make us forget what we really want.
We are all rabbits.
I for one look forward to rewriting the entirety of software after the chatbot era
I don't understand why we seem to always try to make things do more than what they were built for in the first place. Rather than waiting for modifications, we try to make the square fit the circle and then become disgusted when it doesn't work. I'm not in the 'slow down to be cautious' camp. I'm more in the 'slow down and find ways to work with what we actually have.' When you use the tools the way they were meant to be used, life does become easier, or at least mine has anyway.
Just looking at the LiteLLM disaster from yesterday and so much slop flowing around, I couldn’t agree more.
It’s time to slow the fuck down!
It's always been this way - the people that rise to the top are the people who never had to deeply understand something, so they can't even comprehend what that would look like or why it should be important. They're trying to automate the "understanding" part, with predictably disastrous consequences that those of us who aren't the "rise to the top" type could see coming. Agentic AI is just another symptom.
I really don't get the author's conclusion here. I agree with his premises: organizations using LLMs to churn out software are turning out terrible quality software. But the conclusion from that shouldn't be "slow down", it should be "this tool isn't currently fit for use, don't use it". It feels like the author starts from the premise of "I want to use AI" and is trying to figure out how to make that work, rather than "I want to make good software" and trying to figure out how to do that.
I just wish someone would explain why I prefer Cline to Claude Code so much.
It's not even the complexity. You have to realize: many managers and business types think it's just fine to have code no one understands, because AI will do it.
I don't agree, but the bigger issue to me is that many/most companies don't even know what they want, or think about what the purpose is. So whereas in the past devs coding something gave some throttle or sanity check, now we just throw shit over the wall even faster.
I'm seeing some LinkedIn lunatics brag about "my idea to production in an hour" and all I can think is: that is probably a terrible feature. No one I've worked with is that good or visionary where that speed even matters.
Eh, I think it's a self-correcting problem.
Companies will face the maintenance and availability consequences of these tools but it may take a while for the feedback loop to close
Every problem is self-correcting in that some new normal will emerge. Either through acceptance or because something is changed.
It’s very hard to say right now what happens on the other side of this change.
All these new growing pains are happening in many companies simultaneously, and they are happening at elevated speed. While that change is taking place it can be quite disorienting, and if you want to take a forward-looking view it can be quite unclear how you should behave.
Unfortunately, I think the lesson from recent history seems to be that outside of highly-regulated industries, customers and businesses will accept terrible quality as long as it's cheap.
Yes, every bit of slack is optimized out of systems. If something has an ounce more quality than would suffice to obtain the same profit, it must be cut out. It's an inefficiency. A quality overhang. If people buy it even if it's crap, then the conclusion is that it has to be crap, else money is left on the table. It's a large-scale coordination issue. This gives us a world where everything balances exactly near the border where it just barely works, for just barely enough time.
True, but there is a limit; there are still levels of quality.
Levels of enshittification, more often than not.
Nah, there is a quality floor that consumers are willing to accept. Once you get below that, where it's actually affecting their lives in a meaningful way, it will self-correct as companies exploit the new market created for quality products.
I keep returning to this thought: Assuming our abstraction architecture is missing something fundamental, what is it?
My gut says something simple is missing that makes all of the difference.
One thought I had was that our problem lives between all the things taking something in and spitting something out. Perhaps 90% of the work of writing a "function" should be to formally register it as taking in data types foo 1.54.32 and bar 4.5.2 and returning baz 42.0. The register will then tell you all the things you can make from baz 42.0 and the other data you have. A comment(?) above the function has a checksum that prevents anyone from changing it.
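Purely to make that concrete, here's a toy sketch in Python of what such a register might look like; everything in it (the versioned type names, REGISTRY, the @register decorator) is invented for illustration, not an existing tool:

```python
# Toy sketch only: versioned type names, REGISTRY, and @register are all made up.
from dataclasses import dataclass
from hashlib import sha256

@dataclass(frozen=True)
class Signature:
    inputs: tuple   # e.g. ("foo@1.54.32", "bar@4.5.2")
    output: str     # e.g. "baz@42.0"

REGISTRY = {}  # function name -> Signature

def register(inputs, output):
    """Record a function's typed inputs/output and stamp its body with a checksum."""
    def wrap(fn):
        REGISTRY[fn.__name__] = Signature(tuple(inputs), output)
        # stands in for the "comment above the function" that flags later edits
        fn.checksum = sha256(fn.__code__.co_code).hexdigest()
        return fn
    return wrap

def reachable(available):
    """Which registered functions could run, given the data types you already have?"""
    return [name for name, sig in REGISTRY.items() if set(sig.inputs) <= set(available)]

@register(inputs=("foo@1.54.32", "bar@4.5.2"), output="baz@42.0")
def make_baz(foo, bar):
    return {"foo": foo, "bar": bar}

print(reachable({"foo@1.54.32", "bar@4.5.2"}))  # -> ['make_baz']
```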
But perhaps the solution is something entirely different. Maybe we just need a good set of opcodes and have abstractions represent small groups of instructions that can be combined into larger groups until you have decent higher-level languages, with the only difference being that one can read what the abstraction actually does. The compiler can figure lots of things out, but it won't do architecture.
There's more to a function than just types. It's not sufficient to know that the function outputs a baz 42.0. You have to understand which one. The oldest? The latest? The one that matches the foo and bar input parameters?
I think that's the part where it remains difficult. Someone has to convey clearly what the semantics and side effects of the function are. Consumers have to read and understand it. Failing that, you get breakage.
You seem to be describing a type system.
> My gut says something simple is missing that makes all of the difference.
We have too much code - languages to program machines.
We need a new different language now.
A plan.md, written in what... legalese English? Really? Am I back in 1897? People committing that to vcs, sheesh...
Nice to read a fellow countryman on HN :) "Dere!" I have disabled my coding agent by default. I first try to think, plan, and code something myself, and only when I get stuck or the code gets repetitive do I tell it to do the stuff. But I get what you are saying, and I agree ... I am clearly pro-human in this debate, and the low-effort, bloated trash everywhere is annoying. I have come to the conclusion that if you find docs on something and they are plain HTML, they will probably be of high quality. If you find docs with a flashy, dynamic, effectful and unnecessary 100mb JS booboo, then you know what you are about to read ...
I expected this to be yet another anti-AI rant, but the guy is actually right. You should guide the agents, and this is a full-time job where you have to think hard.
> While all of this is anecdotal, it sure feels like software has become a brittle mess
That may be the case for software that AI leaks into, but not every software developer uses or depends on AI. So not all software has become more brittle.
Personally I try to avoid any contact with software developers using AI. This may not be possible, but I don't want to waste my own time "interacting" with people who aren't really the ones writing code anymore.
It's 2026, the "fuck" modifier for post titles by "thought leaders" has been done already ad nauseam. Time to retire it and give us all a break.
If we're on the subject of tropes: https://theonion.com/report-stating-current-year-still-leadi...
hope my boss can see this
Oh look another anti AI article.
Oh they even swore in the title.
Oh and of course it's anti-economics and is probably going to hurt whoever actually follows it.
Three for three. It's not logical it's emotional.
If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent -- you are literally helping remove as many jobs as possible, from your colleagues and from yourselves, not in the long term but in the short term.
Integration is the key to the agents. Individual usages don't help AI much because it is confined within the domain of that individual.
We reduce jobs every time we e.g. fix a bug. Where do you stop?
I think there is a line somewhere people need to draw, when a technology such as AI invades ALL areas, threatening to reduce a percentage of jobs so quickly, without the potential of creating new TYPES of jobs that can feed many. It is different from computers, and it is different from trains.
> If there is anyone who absolutely should slow down, it's the folks who are actively integrating company data with an agent -- you are literally helping remove as many jobs as possible, from your colleagues and from yourselves, not in the long term but in the short term.
I'm one of those people and I'm not going to slow down. I want to move on from bullshit jobs.
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
If you don't want to slow down, maybe accelerating is the second-best option for ordinary people.
> I want to move on from bullshit jobs.
So are you aiming for death and poverty? Once those bullshit jobs go, we’re going to find a lot of people incapable of producing anything of value while still costing quite a bit in upkeep. These people will have to be gotten rid of somehow.
> and think we are going to run out of things to do, or run out of problems to create and solve.
There will be plenty of problems to solve. Like who will wipe the ass of the very people that hate you and want to subjugate you.
Name a single time doomers were right about anything. Doomers consistently overstate their expected outcome in every single domain and consistently fail to predict how society evolves and adapts.
Again:
The only people that fear what is coming are those that lack imagination and think we are going to run out of things to do, or run out of problems to create and solve.
Climate change would be a big one.
Also, there have been plenty of awful things caused by technological progress. Tons of death and poverty were created by the transition to factories and mechanization 150 years ago.
Did we come out the other end with higher living standards? Yes, but that doesn't make the decades of brutal transition period any less awful for those affected.
> Climate change would be a big one.
That's generous. Climate scientists were right, climate doomers were definitely wrong.
Society is mostly unchanged due to climate change. That's not to say climate has no effect, but it is certainly still not some doomer scenario that's played out. New York and Florida are most certainly not underwater as predicted by the famous "Inconvenient Truth". People still live in deserts just as they always have. Human lifespan is still increasing. We have less hunger worldwide than ever before, etc.
Climate change doomers conveniently leave out the part where climate has ALWAYS affected society and is one of the main inputs to our existence, therefore we are extremely adaptable to it.
Before "climate change" ever entered the general consciousness, climate wiped out civilizations MORE FREQUENTLY than it does now. All signs point to doomers being wrong and yet they all hold onto it stubbornly.
Doomers were never impressive because they got anything right, they are impressive because they have the unique skill of moving the goalpost when they are wrong. Any time you think the goalpost can't be moved further out, they prove it's possible.
The effects of climate change are just starting to happen. Ecosystems are dying. Very few "climate doomers" thought the world would be like the Day after Tomorrow.
The earth is becoming more hostile to its inhabitants. There are famines caused by climate change. We will undoubtedly, within the next 20 years, see mass migration from the areas hardest hit.
Climate scientists, and climate reporting, often UNDERSTATED the worst of these effects.
I think it'd be worth stating what your definition of doomerism is. For me, seeing the increases in forest fires, seeing the sky reddened and the air quality diminish and floods and hurricanes increase... being able to buy a Big Mac doesn't make any of that less grim.
> The earth is becoming more hostile to it's inhabitants. There are famines caused by climate change. We will undoubtedly within the next 20 years see mass migration from the areas hardest hit.
If this is true, then how are there more people than ever and fewer famines than ever? Migration due to climate has been a part of human history since the beginning of all human, and animal, history. It's almost as if that's the default state of being. Are people migrating more than ever? Yes, but not just because of climate change - because it's so god damn easy to do so in modern times.
We aren't walking across a sheet of ice to try to survive a drought. We are on boats with motors and a life vest at our worst, in first class getting wined and dined at our most hedonistic. Entire (illegal) migration pipelines have been made and turned into a black-market economy. There are government-funded apps created to support these migration pipelines.
Again, you're a doomer that has failed to predict the impact of what you're observing, and it mostly comes down to the fact that you underestimate human creativeness and ingenuity, and human drive for progress.
You frame every scenario as if humans will just stare at impending doom like deer in headlights and let it wash over them, while at the same time arguing that mass amounts of people are so adaptable that they would be willing to traverse the entire globe to find a better life. Your model of reality contradicts itself from the very start.
The CO2 concentration continues to climb year after year, at an accelerating rate. The world hasn't ended yet because it's still 2026 but it doesn't mean it won't.
We're on a hothouse earth trajectory. All signs point to you not being aware of serious climate research and hanging on to a naive Steven Pinker "everything is always improving" outlook.
> The world hasn't ended yet because it's still 2026 but it doesn't mean it won't.
All signs point to you being a doomer that is excellent at moving the goal post. "If it doesn't happen tomorrow surely it will happen the next day."
You can do this until the end of time. A waste of brain cycles for anybody with a real job. This is the exact same pattern for every single kind of doomer and they are all wrong in the exact same way over and over. You still can't name a single doomer point of view that has played out to some kind of catastrophic society collapsing event accurately.
It's always "it's coming" eventually.
Running out of oil, overpopulation, financial system collapse that sends us back to the dark ages, climate change that causes everybody to move migrate to Colorado, a coronavirus that permanently makes us board up indoors. None of it ever plays out the way you doomers fantasize about it playing out.
When some kind of catastrophic society collapsing event happens it's most likely going to be because of something that is not in the mainstream consciousness.
If doomers were good at predicting these events and how it will play out they'd all be rich as hell, but no, they are for the most part a bunch of broke whiners. (Except for those doomers that have made their wealth off of scaring people)
> All signs point to you being a doomer that is excellent at moving the goal post.
All signs point to it being really easy for you to dismiss "doomers" as wrong and "scientists" as right retroactively. If someone was wrong about the direction of the climate crisis 20 years ago they were a doomer. If they were right they were a scientist. Easy!
You can apply this to anything that went to shit with the world in the past, not just the climate. If someone predicted the financial crisis of 2008, they were not a doomer, they were a particularly savvy financial analyst. All the others who keep predicting crises are wrong, until they're right, and then they're not a doomer, so your point always stands no matter what. Super convenient!
> If someone predicted the financial crisis of 2008, they were not a doomer, they were a particularly savvy financial analyst.
Zoom out buddy, the 2008 financial crisis is a blip. The world's financial system is almost exactly the same as it was pre-2008. Hardly the collapse that made the world stop spinning that doomers have a fetish for. That's not a good example to support your argument.
You fundamentally cannot grasp the concept of doomerism. Doomerism isn't simply observing some first order effect "The oceans will increase by 2 degrees".
Doomerism is observing that first order effect and trying to assert that we should change behavior at a societal level because they above everybody else, can predict what the secondary or tertiary+ effects are for society. "The oceans will increase by 2 degrees, all marine life will perish, hurricanes will make vast swaths of the world uninhabitable. Therefore we should stop eating beef!"
And they are wrong about it every - single - time. Do you need examples?
Society has a long history of ignoring doomers, and the impact? Society is right and Doomers are consistently wrong.
Society keeps going. We have all of history up until the current moment, but from that we understand so far, Doomers have never been right about how disruptive their observations are for society at large. If you want to provide a contradiction to this statement please do so.
Nuclear power doomers -> completely wrong. Fukushima was the latest event that proved this.
Covid doomers -> wrong. in 50 years covid will be as forgotten as the spanish flu was.
Climate doomers -> wrong. famines are down across the globe and population still growing, still no clear example of the disruption to society or world in a way that is new. For any disruption we can find historical disruptions of the same category with more impact to humans and the world. Floods? More people killed in historical floods and more societies extinguished from them >100 years ago. Fires? More people killed in fires and more cities completely burned down from them >100 years ago.
Overpopulation doomers -> wrong, population still growing, but leveling off and not collapsing
AI doomers -> wrong on both sides so far. no bubble pop, capabilities still advancing, humans are also still relevant
Peak oil doomers -> completely wrong, more oil being discovered, didn't account for technology, didn't account for other forms of energy
With this kind of track record, you'd think that doomers would have enough self reflection to realize that their model of reality is insufficient at predicting outcomes and shut the fuck up, but nope - they just keep on trying to force a square peg into a round hole while annoying everybody around them who are trying to do something to move the needle towards a better life that doesn't involve becoming a vegetable so the earth can heal or whatever.
Compare this against another model of reality: Whatever challenges humans face, when it's dire enough, we will adapt and overcome.
You can backtest this model against all of human history. It would be dishonest to say that this model isn't more accurate so far than whatever model you're using as a doomer.
No need for doomers to virtue signal and lecture everybody about their shitty model of reality that fails to backtest
>If doomers were good at predicting these events and how it will play out they'd all be rich as hell, but no, they are for the most part a bunch of broke whiners.
Oh, the classic "if you're so smart then why aren't you rich" non-argument. I'm sure Carl Sagan was just a whiny loser because he didn't figure out how to become a billionaire from knowing how physics works. His prediction that the planet would warm several degrees by the mid-to-late 21st century failed to reward him what he was owed. By the way, we haven't even gotten halfway there yet, so your "shifting goalposts" thesis is null.
People who push dangerous neoliberal propaganda like carbon capture or "infinite growth on a finite planet is possible" on the other hand do get very rich, and they don't even need to make good predictions. Such is the planet governed by pedophiles.
https://www.youtube.com/watch?v=Wp-WiNXH6hI
> People who push dangerous neoliberal propaganda like carbon capture or "infinite growth on a finite planet is possible"
Good thing we are not confined to a closed system in any practical sense. You act like we haven't already used space for economic growth. It's also a good thing that the concept of "growth" in this context is not limited by physical constraints. You're talking about growth of value, not growth in a physical sense. Did you think the valuation of every company was based on something physical 1:1? Do you live somewhere whose financial system is based on a gold standard or something? There are multiple levels where your idea falls apart.
Crazy to so confidently assert an idea which is conceptually flawed on a surface level.
You actually think the economy has reached the point of maximum growth due to the laws of thermodynamics? Please tell me you didn't formulate your entire worldview on this idea because it's unlikely that you can function in this society in a way that makes your life better or those around you better with this flawed model of reality.
Doomers are always hurting themselves first and foremost and then dragging everybody else around them down with them.
>You actually think the economy has reached the point of maximum growth due to the laws of thermodynamics?
Of course it hasn't. The real problem is that the atmosphere is being poisoned beyond repair, at an increasing pace, and that is tied to economic growth. That will eventually un-terraform the planet into a place hostile to agriculture, be it in 50 or 100 years. We're nowhere near being able to reverse this in any way, and there are no signs of it slowing down.
Are actuaries stupid doomers whose worldviews make them unable to function in society? You decide: https://actuaries.org.uk/media/ni4erlna/planetary-solvency.p...
>Good thing we are not confined to a closed system in any practical sense. You act like we haven't already used space for economic growth.
Oh, am I to believe space mining fantasies maybe? I'm sure we'll get there, just after AGI solves nuclear fusion for us in the next 5 years. Then we can have star trek replicators to go with them. I just wish it would happen sooner, that sea floor mining stuff is starting to gain traction and it isn't looking pretty.
>It's also a good thing that the concept of "growth" in this context is not limited by physical constraints
It actually is. The concept of "decoupling" of the economy from material resources has been debunked for a while now. Theoretically there can be efficiency gains that generate further growth, but those are usually quickly cannibalized by increasing demand, plus we're deep in the diminishing-returns phase in a lot of fields.
I recommend this resource: https://eeb.org/wp-content/uploads/2019/07/Decoupling-Debunk...
> That will eventually un-terraform the planet into a place hostile to agriculture, be it in 50 or 100 years. We're nowhere near being able to reverse this in any way, and there are no signs of it slowing down.
yet another classic bullshit doomer prediction that never plays out, and that you'll conveniently not be around to admit you were wrong about.
> Oh, am I to believe space mining fantasies maybe?
You don't need to, we're already at the point where we are using space for economic growth, so it's not some kind of fantasy. We also have this celestial body that is used throughout the planet called "the Sun", which already makes your closed-system argument fall apart, unless you're worried about us being close to peak utilization of the sun too.
> It actually is. The concept of "decoupling" of the economy from material resources has been debunked for a while now.
Hilarious how something can be "debunked" yet it's exactly how the metric for "growth" that you're talking about functions today. Again if you add up the combined valuation for every company today did you think it's based on material resources? It's obviously not. Your first clue is that the valuation of companies is calculated and expressed as a dollar which is not backed by ANYTHING material. If the thing that you're using to measure growth in an economy is already an abstract concept with no basis on material resource, then it follows that "growth" is NOT CONSTRAINED by material resource.
Or did you think every time nvidia announces their quarterly results and the market puts a valuation on nvidia that we are allocating materials to nvidia? Again, your model of reality sucks. It doesn't fit.
Your closest peer is that guy that's a diehard fan of a sports team from their hometown that keeps losing but it doesn't matter because being a fan and supporting the hometown is more important than performing the sport well.
That's you. Your model of reality fails every single day but it doesn't matter because you're a fan of your bad model. The worst part? You keep telling everybody around you to place a bet on your model with their life savings despite never being able to produce an example where your model was right
> Name a single time doomers were right about anything.
- NFTs
- Surveillance schizos
- Global Pedophile Cabal schizos
- Anyone who didn’t believe we were a year out from Star Trek living when LLMs first started picking up steam
- People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
- Anyone amongst the sea of delusional democrats who did indeed believe Trump could win a second term.
All of those doomers were vindicated, and that’s just recently.
- NFT doomers? I mean, I appreciate the humor here.
- Surveillance schizos - Society still works
- Global Pedophile Cabal schizos - Again, funny use of 'doomers' but that's what the current society seems to be run by so I wouldn't say it's fitting for doomerism.
- People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
None of these things are that disruptive to our society at large. You will still be able to walk down the street and grab a Big Mac pretty much any day of the week. A large portion of society is going to look at all of what you're worried about and say "it's not that serious" while consuming their 20-second videos. What do you think is a valid doomer warning that came true? Or do you think literally everything that is pessimistic is doomerism?
You're asking the wrong person. I haven't seen a single example of a doomer warning that came true. Can you provide one? It seems like society still exists when I look out the window and the impact that doomers assert are greatly exaggerated in every instance.
So are you disingenuous or just stupid? Of course society still exists, but what society?
Only the very dumbest think “doom” is some apocalyptic scene from a Hollywood film in which humans are nearly wiped out.
“Doom” is instead when swaths of Roman citizens with rights amidst a powerful, civically and technologically impressive hegemony, over time find themselves reduced to unfree serfs. They and their descendants would remain in that position for centuries until a horrific disease came through and killed so many of them that the serfdom became untenable.
> Only the very dumbest think “doom” is some apocalyptic scene from a Hollywood film in which humans are nearly wiped out.
So you're all just out here telling everybody they should stop what they are doing because of the doom, but the doom isn't that impactful in the grand scheme of things?
That checks out with my understanding of doomers. Just a bunch of useless whiners that produce a bunch of meaningless noise for everybody else.
> “Doom” is instead when swaths of Roman citizens with rights amidst a powerful, civically and technologically impressive hegemony, over time find themselves reduced to unfree serfs. They and their descendants would remain in that position for centuries until a horrific disease came through and killed so many of them that the serfdom became untenable.
And look at where we are now. Rome has been surpassed many times over. The quality of life for the average living person FAR SURPASSES anything that anybody in Rome could dream of. Seems like it wasn't worth worrying about what happened in Rome. If you make "doom" some kind of local event that affects a small group of people in a short window of time, while trying to tell everybody they should hit the brakes and pause - maybe you should reflect on how these two things contradict each other.
In other words, if the doom isn't that doomful in the grand scheme of things then your argument is just again, moving goalposts. There are clear examples for every doom scenario you're talking about where the world moved on and built bigger and better. I guess it's on you to wait until that's no longer true but until then the ball is in your court. Just realize that you should at some point reflect and realize that every swing and miss is just more evidence that doomers are consistently wrong about the impact of their observations.
> You will still be able to walk down the street and grab a Big Mac pretty much any day of the week.
Yeah while you’re on your shift break there.
> People who predicted the flood of people entering Software via bootcamps, etc. would never cause any problems because their god of software is consuming the world too quickly for supply and demand to ever be a real concern.
How was this group vindicated? It absolutely has caused problems at orgs and in the industry.
Just look at all the linkedin/twitter/youtube garbage of influencers trying to post boot camp tier advice and a sizable portion of new developers latching on to often questionable advice/viewpoints.
> How was this group vindicated? It absolutely has caused problems at orgs and in the industry.
I think you misread. In fairness, I arranged the sentence awkwardly, as I do often. I think my mind was conjuring the various dooms and then trying to rephrase the doom into the doomer.
What I mean is the people who warned against it were vindicated.
Of course, vindicated may not be the best word to use. If I say the world blows up tomorrow and you say it never can, and then it blows up, perhaps I’m not necessarily vindicated. But I certainly get a brief moment of schadenfreude.
I was thinking the other day about why a "global pedophile cabal" would be a thing. I still think that phrase overstates it a bit, but not that much.
Committing a crime with someone bonds you to them.
First, it's a kind of shared social behavior, and it's one that is exclusive to you and your friends who commit the same kinds of crimes. Any shared experience bonds people, crimes included. Having a shared secret also bonds people.
Second, it creates an implied pact of mutually assured destruction. Everyone knows the skeletons in everyone else's closet, so it creates a web of trust. Anyone defecting could possibly be punished by selectively revealing their crimes, and vice versa. Game theoretically it overcomes tit-for-tat and enables all-cooperate interactions, at least to some extent, and even among people who otherwise don't like each other or don't have a lot in common.
Third, it separates the serious from the unserious. If you want to be a member of the club, do the bad thing. It's a form of high cost membership gating.
This works for other kinds of crimes too. It's not that unusual for criminal gangs to demand that initiates commit a crime and provide evidence, or commit a crime in front of existing members. These can be things like robbery, murder, and so on. Anyone not willing to do this probably isn't serious and can't be trusted. Once someone does do it, you know they're really in.
It naturally creates cabals. The crime comes first, the cabal second, but then the cabal can realize this and start using the crime as a gateway to admission.
Every mutual interest creates a community, but a secret criminal mutual interest creates a special kind of tight knit community. In a world that's increasingly atomized and divided, that's power. I think it neatly explains how the Epstein network could be so powerful and effective.
That's a mighty high horse you are riding there
Ah yes, me on a high horse. Not the person whose entire worldview depends on defying Nash equilibrium. You're all wasting brain cycles discussing some unrealistic cooperative agreement to slow down and sing 'kumbaya', and telling us that if we don't get to this state we will be on the streets, homeless. If this is me on a horse, then you are on top of an ivory tower managing my beast of burden.
Exactly. The amount of bs bloatwork anywhere I've ever worked is insane and growing. We need to move on.
> you are literally helping removing as many jobs as possible, from your colleagues, and from yourselves, not in the long term, but in the short term
Pull the bandaid off quickly, it hurts less.