I'm not trying to post this as a "gotcha", but did this shift in opinion also affect your stance on LLM-assisted-writing[1]?
I personally get the same enjoyment from the process of blogging as from that of coding, so I tend to avoid running anything I write through any kind of destructive transformation. So while I totally understand why the language barrier would make LLMs appealing for writing, I'm wondering if you still treat it as desirable after what you said in this post (not that there's anything wrong if you do).
[1] https://meysam.io/blog/hn-viral-flagged-submission/
AI-assisted development has helped me fall in love with coding again. This year, I’ve created more than 200 small applications and utilities that are delivering real value to the business, and I’m grateful that this work has been recognized with both a raise and a promotion.
You’re averaging a new application every business day? How does that even work for deployment and maintenance? What happens if you leave and your 200 vibe-coded apps become tech debt?
I’m including everything in that count: scripts, Python-compiled executables, development tooling, and custom software. On some days, I’ll build five or more small tools or scripts just to automate a single process. Working for a small business gives me the flexibility and freedom to explore new technologies and experiment.
That doesn't answer any of the concerns raised by the parent comment, only reinforces them.
Would you be willing to give an example of a typical app/tool like this that you made?
https://feelinggoodbot.com/tools/rapiddev-html/
https://feelinggoodbot.com/tools/textcompare/
“claude fix this tech debt”
I have years old projects that have languished that I have resurrected due to AI-assisted coding.
- "embedded" (rpi) controller for a boxfan that runs in my lab
- VSR distributed consistency protocol library
- dead simple CQRS library
- OT library
I now have the CQRS library deployed to do accounting for a small SaaS that might generate revenue for me...
On the docket is:
- yard watering "embedded" (rpi) device
- fully personalized home thermostat
etc.
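To give a flavor of the first item on that list, here's roughly what such a controller boils down to. This is only a sketch: the GPIO pin, the temperature thresholds, and the use of gpiozero are my assumptions, not details of the actual project.

    # Hypothetical sketch of an rpi fan controller: poll a temperature
    # and set a PWM duty cycle. Pin number, thresholds, and the choice
    # of the gpiozero library are illustrative assumptions.
    import time
    from gpiozero import PWMOutputDevice

    fan = PWMOutputDevice(18)  # assumes a PWM-capable fan driver on GPIO 18

    def read_temp_c() -> float:
        # Standard Linux thermal zone; on a Pi this reports the SoC
        # temperature in millidegrees Celsius.
        with open("/sys/class/thermal/thermal_zone0/temp") as f:
            return int(f.read().strip()) / 1000.0

    while True:
        t = read_temp_c()
        # Map 40..70 C onto a 0..100% duty cycle, clamped at both ends.
        fan.value = min(1.0, max(0.0, (t - 40.0) / 30.0))
        time.sleep(5)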
> at least the piano doesn’t autocomplete my scales.
Oh just you wait!
---
You can get the challenge back by designing something instead of coding it. Lots of wonderfully designed things are not actually that remarkable from the implementation / manufacturing standpoint.
Create a new board game. Completely unchallenging from a coding standpoint, so vibe away. But the fast coding steps open up the ability to actually explore and adjust gameplay in real time. Start by replicating a favorite game.
Create your own organizational software tools: whatever you would use yourself where other tools have disappointed.
Those are just examples. Get creative about what the thing does, how it looks, etc.
Nintendo’s generations of game hardware are a repeated lesson in great design despite, even because of, modest internals.
Yeah, I never get these types of "AI killed the joy of [insert hobby]" arguments. By virtue of it being a hobby, I can make the conscious choice not to use AI for it. Really, there should be very few technological advances that can ever kill something that is truly a hobby (for example, people still knit, do metalworking, glassblowing, etc.). Now, if you want to get paid for working inefficiently compared to others, then yes, that will never happen.
There are some people who feel the hobby is meaningless if they know the machine is better at it.
And well, I entered the field professionally because I liked it, and I feel sort of like the rug was pulled out from under me. Sucks to be one of us, I guess.
Those people aren’t pursuing a hobby, they’ve mentally signed themselves up for a competition with one entrant.
Where does it say anything about a hobby? The author is an entrepreneur. They're complaining that they're no longer enjoying a part of their job they used to enjoy, and your contribution is "it's a job, not a hobby, if you don't like it then boo hoo".
> > at least the piano doesn’t autocomplete my scales.
> Oh just you wait!
Yeah, most digital keyboards or MIDI controllers can “autocomplete” scales or arpeggios lol. Press one or two notes and it’ll play chords based on those notes, sequenced however you like.
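The core trick is tiny. Here's a toy sketch; the chord shape and pattern are my own illustrative choices, not any particular keyboard's feature:

    # Toy arpeggiator: "autocomplete" one held MIDI note into a sequenced
    # major-triad arpeggio. Hardware does this with far more options; the
    # chord quality and pattern here are illustrative assumptions.
    MAJOR_TRIAD = (0, 4, 7)   # root, major third, perfect fifth, in semitones
    PATTERN = (0, 1, 2, 1)    # up-down order through the chord tones

    def arpeggiate(root: int, steps: int = 8) -> list[int]:
        chord = [root + interval for interval in MAJOR_TRIAD]
        return [chord[PATTERN[i % len(PATTERN)]] for i in range(steps)]

    print(arpeggiate(60))  # middle C -> [60, 64, 67, 64, 60, 64, 67, 64]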
Anyways, I somewhat agree with the author. AI can generate a pretty solid solution to certain tasks. Then my work is how to review it for correctness and code quality, make sure it can be “operated” reasonably well, test it, etc. Those are not exactly the fun parts of programming. (The fun part for me is problem solving.)
But you make a solid point! Just hard to connect that to work sometimes.
I've seen a lot of these posts, and my comment has often been this: coding used to be difficult, and when the challenge matched your skill, that put you in the psychological state of "flow". Too much challenge and not enough skill creates stress. Too much skill and not enough challenge (which is what AI now creates, by increasing your "skill") produces boredom.
So you're "bored" now, and you need to increase the challenge to match the new "skill" AI has given you. So if before maybe you worked on a singular app that took a long time, now you might work on more apps (plural) that you complete quicker with AI.
Maybe an analogy exists with walking versus riding a bicycle, although it isn't exact. You walked a mile somewhere and back and it felt like a good walk, but now with a bicycle two miles doesn't feel like much. You need to bike the equivalent of that walk, maybe five miles each way, to feel like you got real leisurely exercise in. Riding a bike is also a different skill than walking, so you have to learn how to ride the bicycle.
It's totally valid to feel unhappy about the change, but I think if you find the right challenges you may go back to feeling the joy you had before.
It reinspired me. The things that I can now pull off with AI would have died a slow death previously: I would have needed to do so much learning, research, and debugging that I would not have had the patience to complete them. Now I can finally build the things I never had the time or patience for. I'm currently building my own language, and Claude Code does an excellent job.
So... Mindless coding is it? The best part of coding is doing research and learning. Coding, for the pure sake of finishing the project as soon as possible with the least involvement, sounds like doing unpleasant chores.
Far from mindless, just at a different abstraction level. What should the software do, what capabilities and features, how do you slice it into small increments (product-management work)? What user journeys, what should the screens look like (UX work)? What architecture and components, which libraries, what communication protocols, what layering, what kinds of tests, what kinds of data structures and persistence (architect/tech-lead work)? For example, for the DSL syntax I'll have Claude Code suggest different options, then detail out the proposal I like best.
Nice, but what does that mean? Are you going to replace a group of specialists in their fields with prompts to an LLM?
Some people love the act, which is fine. A good many other people just want the result.
Something something quarter-inch hole.
I agree. Vibe coding eventually just becomes repeatedly QA-testing the work of a bad engineer.
You can have AI write automated tests before writing the code, so it can QA itself.
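A minimal sketch of what that looks like, assuming pytest and a hypothetical slugify module that the agent is then asked to implement:

    # test_slugify.py -- written (and human-reviewed) before any
    # implementation exists, so the agent has an objective target.
    # The slugify module imported here is hypothetical; it is the
    # thing the AI will be asked to write next.
    from slugify import slugify

    def test_lowercases_and_hyphenates():
        assert slugify("Hello World") == "hello-world"

    def test_strips_punctuation():
        assert slugify("Rock & Roll!") == "rock-roll"

    def test_collapses_whitespace():
        assert slugify("  a   b  ") == "a-b"

The main discipline is refusing to let the model touch the test file afterwards.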
You can try. What happens is it cheats you at every turn and finally admits it wasn’t testing anything when you ask why it’s still broken.
Yes, this actually happens.
So much this. I've written countless shell scripts and CLIs that do small things I would not have bothered with before.
I think the solution here is to just code stuff where AI is not useful. Go write embedded code (not arduino), write a compiler, create a network protocol, write a game that runs on an actual gameboy etc. There are a lot of projects where AI is still of limited use (both silly and actually useful). Obviously, the downside is that all these projects are going to be much harder than "build a todo app" (and possibly require you having experience from doing easier projects first).
I don't think that "just don't use AI" is really a solution here. It can feel really pointless doing something the hard way when you know there is an easy way even if you prefer the hard way.
Working on projects outside of work (personal blogs are cool again!) helps me enjoy some time away from the LLM.
If you don't have any content for your personal blog, work on that first: find some niche thing to obsess over so you have something to say about it.
Clojure brought back my joy of programming after years in the Java framework driven world and the JS churn driven world.
It feels like gardening: slowly your art takes shape, and every single line of code has a visible impact (no magic) that you can immediately see via the REPL.
That said, Clojure done with AI feels like any other language done by AI. They are interchangeable, and thus the language has become irrelevant.
This article makes sense, it really does, but it's not the full picture. I think there are different modalities to enjoying programming. I wrote a long post about this a couple of months ago that goes way more into detail than I could ever fit in an HN comment. Article: https://handmadeoasis.com/ai-and-software-engineering-the-co...
> I am no longer solving any mentally-stimulating problems.. I am just copy-pasting code from an AI assistant.
I'm using AI plenty but looking at my use with a different lens. I like to code. It's fun. It's rewarding. I produce things with it. But it is also practically a means to an end for me. My job isn't purely code but also analysis, strategy, etc.
So, having lots of fun zooming through code problems that slowed me down in the past. I have more time for the analysis/strategy/etc.
I'm not a professional dev, but I would encourage the author to find a similar lens in their work, if possible. Not saying it's easy! And if that solution isn't helping or attainable, maybe it's time to move on?
This article makes no sense. Just don't use AI and code by hand if that makes you happy.
When I want to stimulate my brain with coding, I do things like AoC or Project Euler; when I want to test out a quick prototype app, I have AI do all the grunt work of setting it up and getting it to a point where I can see whether I even think it's a good idea or not.
The step you describe is the main use case I’ve found where vibe coding actually makes sense. Going from there to actually make a good version of the thing then becomes more hands on.
coding was never supposed to be fun, it was supposed to be instrumental
you interface with a machine to get the machine to do what you, a human, want it to do relevant to your human purposes
but out of necessity - turns out we need to control a lot of machines - we made the act ergonomic
this fit the aesthetic of some people. All to do with them and little to do with the act. Akin to alchemists or other purveyors of exotic knowledge, the relevance of their skill always had a countdown on it
all that's left is to circle back. Coding is instrumental. Now our alchemy is one more level abstracted from the artifice of the machine. Good. It's closer to people - like management - now. That's bad for the current generation of alchemists but good for the world
earnest RIP. On the upside, there's always a next generation
I’ve done a lot of vibe coding and I just can’t understand these takes. Pure vibe coding is not going to get you to a good result, so the alchemist you describe is still very much essential, and as far as I can see will be for a very long time.
Not really. You have to pivot to a systems designer role and articulate in detail what you want to build, but the building of it is now effectively automated. Most programmers are not ready for this shift in mindset. Their perceptions of their own competence (hence value to organization, hence self-worth) are tied to writing the code. Hence why AI-assisted coding seems so unfun.
Rigorously designing and understanding the system does not sound like what you were referring to in the original post.
All of my heroes always said the difficult part is not in the writing of the code, but in the reading/understanding of it.
Check again, m8. I did not write the post you replied to. I'm offering my own thoughts.
I write almost all of the code, but I love AI for getting boilerplate out of the way and fetching docs. ChatGPT is way faster at giving me a switch statement for all the TIFF tags than I could write myself; that's not mentally stimulating code.
Maintaining a switch for all possible TIFF tags doesn't sound like a good programming practice, though.
I kind of have to disagree. I have really learned to love the explicitness of a big ole' switch statement. It's fast, there's no misdirection, it's all available in one place, and it's easily readable where you need it. All of the "clean code" alternatives I've seen that used more abstraction ultimately just split up the fact that you have to keep a list of a hundred-something hex codes and their associated values.
Can't tell what would be the best case (pun intended) for you; it depends on the language you use. Yes, it will eventually end up as some sort of look-up table, though a switch statement is the last option I'd use (assuming you're going to have more than 20 cases or so).
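In Python, for instance, the whole thing collapses into a dict. The tag codes below are real baseline TIFF tags; the function around them is just a sketch:

    # Baseline TIFF tag codes -> names (a small real subset; the full
    # registry runs to hundreds of entries, which is exactly why it's
    # tedious to hand-write and why a table beats a giant switch).
    TIFF_TAGS = {
        0x0100: "ImageWidth",
        0x0101: "ImageLength",
        0x0102: "BitsPerSample",
        0x0103: "Compression",
        0x0106: "PhotometricInterpretation",
        0x0111: "StripOffsets",
        0x0115: "SamplesPerPixel",
        0x0116: "RowsPerStrip",
        0x0117: "StripByteCounts",
        0x011A: "XResolution",
        0x011B: "YResolution",
    }

    def tag_name(tag: int) -> str:
        # Unknown tags fall back to a hex label instead of raising.
        return TIFF_TAGS.get(tag, f"Unknown(0x{tag:04X})")

    assert tag_name(0x011A) == "XResolution"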
The good thing about programming without generative assistants is that it makes you think about how to make the algorithm and the code better to avoid too much manual work. The laziness of engineers is crucial for automation and optimisation.
Sad. For me, AI-assisted coding has brought overwhelming new enthusiasm for creating new applications.
It's been a weird experience for sure. Last week I spent 20 minutes sorting wires while the AI did a refactor that would have taken 2-6 hours by hand.
I'm not sure how I feel about it. On one hand, in certain situations it speeds up my work 10x. On the other, it's way, way too easy to just stop thinking and have the AI iterate on a problem for an hour or two only to end up with utter gibberish, revert everything, and realize the fix was trivial with the slightest application of critical thinking.
I'm certainly a faster programmer with AI, but I'm not sure if I'm any more productive or producing better code instead of just more.
The one thing it's utterly amazing at is my most hated part: going from a blank page to something. Once there's some code and a hint of structure, it's much easier for me to get started.
I will say that I was shocked at how well Codex handled "transform this React Native TypeScript into a C++ library". I estimated a week or two of manual refactoring. Codex did it in half an hour (and used 25% of my weekly token budget).
Not mine
I've had the exact opposite response: I only enjoy coding now that the AI writes ~100% of the code.
I really enjoy doing the dishes now that the dishwasher does ~100% of the work.
Why did you become a developer if you hate writing code?
Writing code can often just be the means to an end which is delivering value.
I’m guessing the money. I hate working with people with no passion for the craft.
I highly, highly recommend a reread of Tim Bryce's 20-year-old essay "Theory P: A Philosophy of Managing Programmers": https://web.archive.org/web/20160407111718fw_/http://phmains...
Mr. Bryce died of cancer a couple of years back, and the website he had hosting all his materials has since bit-rotted away, so I'm giving you a Wayback link.
Anyhoo, back when it came out, it fomented much discussion and anger among programmers, including here on Hacker News:
https://news.ycombinator.com/item?id=990185
The problem is, Tim Bryce was absolutely, 100% correct. He stood by those words until his dying breath, and he related how programmers would be angry or offended, but management at the companies that employed programmers found the essay to be accurate.
You have to put the "Theory P" essay in broader context. Tim was a management consultant and salesperson for his father Milt, who developed the first software methodology for general commercial use in 1971: PRIDE (Profitable Information by Design). At the time, Milt already had about 20 years of experience in the software field, starting in the UNIVAC days. He was there since the very beginning of commercial computing.

Milt discovered that one of the things we learned about LLMs applies to human programmers as well: unless you give them careful guidance, structured and detailed specifications, and rigorous standards for how the code should be written, and hold them accountable to those specifications and standards, programmers will go off and develop the wrong thing, wasting the organization's time and money. Unless the programmers know what to build, they're worse than useless.

Theory P is more about correcting the overvaluation of programmers from an organizational standpoint; in 2005, organizations, particularly in the United States, were still under the sway of the myth of the "genius programmer" who could solve the company's problems and make them lots of money if you just left them alone to hack code. This is explicitly not the case; building information systems needs standards, specifications, and process control just like any factory assembly line. (We are still struggling to learn the lessons Milt figured out in 1971. If you have heard of a data dictionary or a "software bill of materials", those were concepts introduced by PRIDE. They are just two elements of a complete IRM solution.)
In light of this, PRIDE is incredibly comprehensive. The purpose of a methodology is to define WHO is to perform WHAT task, WHERE, WHEN, WHY, and HOW, and most modern "methodologies" fail in that regard; PRIDE defines these essentials organization wide, for everyone involved with an information system (including its users). PRIDE is actually an information systems methodology, not just a software methodology; under PRIDE, writing code is just one of the last technical steps of implementing an information system, which includes software and computers but is not coterminous with them. Business information systems also encompass telecommunications links, pieces of paper such as forms and correspondence, and the most important element: people, and the information they need or provide.
The hard part of building an information system for an organization, as the Bryces found out, is not programming but systems analysis and design. Consequently, and unsurprisingly, PRIDE is a Big Design Up Front methodology, which makes programmers tetchy—done properly, a PRIDE project spends about 60% of its time in systems analysis and design and only about 15% actually coding. There is some, but not a lot of, room for iteration, but by the time the first line of code has been written, the major decisions for how the software—or rather, the information system—should function have been made, and not by a programmer. That information is encoded in the extensive documents and flowcharts that have been prepared by the systems analysts and approved by management.
Now enter AI. Tim Bryce also wrote about the maturity of an organization's information systems (https://web.archive.org/web/20160407164521fw_/http://phmains...) as a progression from the Before Times, when organizations mainly used paper and ink, through the introduction of the computer, when organizations took a "tool-oriented approach": figuring out what the computer could do and applying it to various tasks without concern for when and how it is properly applied at the organizational level. Mature organizations, by contrast, take a "management-oriented approach", in which the information needs of the business are defined and codified by management, and computers and software are applied specifically to address those needs. The problem is that programmers, not systems people, have dominated the information-systems departments of organizations since about the late 1960s, meaning that many of these departments are stuck in a "tool-oriented approach"!

With AI, that last step, the final 15%—actually translating the specifications for the information system into code—can be automated. Most programmers are now obsolete. Nobody cares if you have fun coding; the good times for coders, when they were overvalued to the point of being made linchpins for billion-dollar-plus businesses, are now over. The relevant skill now becomes understanding what you want to build and laying it out for the AI in sufficient detail—exactly the role of the systems analyst in PRIDE! So programmers who do not take a "management-oriented approach" and are unwilling to develop the high-level systems thinking and people skills it takes to work at an organizational level and communicate with users and management are going to have a very, very bad time indeed! Unless you have fun designing systems and seeing them rolled out to production, your job is going to be extremely unfun, if not eliminated entirely!
And that would be just fine with Tim Bryce. He hated programmers, and said as much. I'm kind of curious what he would say if he had lived just a few years more into the current era.
>https://news.ycombinator.com/item?id=990185
well worth a visit for the comments lol
I remember, early on in my last term of employment, I merely made a jest about the lack of specs, and the level of butthurt from my boss was really just an early signal of my dismal future in that position.