> it depends on what you enjoy: the journey or the destination
This has been 100% my experience. I enjoy the puzzle solving and the general joy of organizing and pulling things together. I couldn't really care less about the end result meeting some business need. The fun part is in the building, it's in the understanding, the growth of me.
I have coworkers who get itchy when they don't see their work in production and get super defensive in code review, but I've never really cared. The goal is to solve the puzzle. If there's a better way to solve the puzzle, I want to know. If it takes a week to get through code review, what do I care? I'm already off to the next puzzle.
Being forced to use Claude at work, it really just took away everything that was enjoyable. Instead of solving puzzles I'm wrangling a digital junior dev that doesn't really learn from its mistakes, and lies all the time.
I've been coding since I was about 15 and still love it. These days I mostly build tailored applications for small and medium companies, often alone and sometimes with small ad-hoc teams. I also do the sales myself, in person. For me, not using LLMs would mean giving up a lot of productivity. But the way I use them is very structured. Work on an application starts with requirements appraisal: identifying actors, defining use cases, and understanding the business constraints. Then I design the objects and flows. When possible, I formalize the system with fairly strict axioms and constraints.
Only after that do LLMs come in, mostly to help with the mechanical parts of implementation. In my experience it's still humans all the way down. The thinking, modeling, and responsibility for the system are human. The LLM just helps move the implementation faster.
I also suspect the segment I work in will be among the last affected by LLM-driven job displacement. My clients are small to medium companies that need tailored internal systems. They're not going to suddenly start vibe-coding their own software. What they actually need is someone to understand the business, define the model, and take responsibility for the system. LLMs help with the implementation, but that part was never the hard part of the job.
I’m doing the same as you, and even though I was producing a lot of the actual code, I estimated the coding part to be only about 20% of the work. The rest is figuring out what to build and how, what stakeholders really need, and solving production issues in live event-driven systems. Agentic coding is just faster at the 20% part, and I can always sit down and code the really hard stuff if I want to, or feel I need to when the LLM gets stuck. If it produces something I can't understand, I either learn from it until I do, or make it use a pattern I know instead. So all in all, not so worried.
> This has been 100% my experience. I enjoy the puzzle solving and the general joy of organizing and pulling things together. I couldn't really care less about the end result meeting some business need. The fun part is in the building, it's in the understanding, the growth of me.
Quite a few of the projects I always wanted to do have components or dependencies I really didn't want to build. And as a result, I never did them, unless they eventually became viable in a commercial setting where I then had some junior developer to make the annoying stuff go away.
Now with LLMs I have my own junior developer to handle the annoying stuff - and as a result, a lot of my fun stuff I was thinking about in the last 3 decades finally got done.
One example from just last week: I had a large C codebase from the 90s I always wanted to reuse, but modern compilers have a different idea of what C should look like. It's pretty obvious from the compiler errors what you need to do in each case, but I wasn't really in the mood to go through hundreds of source files manually. So I stuck a locally running qwen coder in yolo mode into a container, forgot about it for a week, and came back to a compiling codebase. The diff was quick to review; only a handful of cases needed manual intervention.
Note that you are able to choose freely what parts of the work get done by Claude, and what parts you do yourself. At work, many of us have no such luxury because bosses drunk on FOMO are forcing agent use.
Yeah, I've noticed at several customers that they're just trying to cram LLMs into everything, instead of maybe first thinking if it's sensible for that specific usecase.
I'm also doing some things where I don't think LLMs are a good fit, but I'm doing them because I want to see things like failure behaviour, how to identify when the model is looping (which can sometimes be hard to spot when using huge-context models), and similar issues. That results in more knowledge about when it makes sense to use LLMs. No such learning is visible at many customers, even when LLMs do something stupid.
> The fun part is in the building, it's in the understanding, the growth of me.
I agree with this sentiment as well. Without a doubt, my favorite part of the job is coming up with a solution that just 'feels right', especially when said solution is much cleaner than the brute-force/naive approach. It sounds cheesy, but it truly is one of my favorite sensations.
I'm the senior-most engineer on my team of about 15. I try to emphasize software craftsmanship, which resonates with some but not all. We have a few engineers who have seemingly become reliant on AI tooling, and I struggle with them. Some of them are trying to push code that they clearly don't understand and aren't reviewing, and I think they're setting themselves up for failure due to lack of growth.
Adapting the workflow to this new paradigm is a different sort of puzzle. I think that the folks who enjoy agentic pair programming have found various satisfactory solutions. As a puzzle enthusiast myself, I have been particularly enjoying this pivotal moment in technology because of how many opportunities there are to create novel approaches to combat the lies and mistakes.
I came back into tech professionally over the last decade. Always been into computers, but the first decade or so of my career was in humanitarian admin. Super interesting sector, super boring day-to-day.
Getting back into code felt like coming home. I'm good at it, I really enjoy it, the problem-solving aspect totally lights up my brain in this amazing way.
I feel exactly the same way. Totally robbed of pleasure at work, with the added kicker of mass layoffs hanging over the sector.
At least OP is sixty, I've got 25 years of work left and I really don't know what to do. I hate it all so much.
Oh wow, that's exactly the opposite of how I feel, and conversely, I am that developer who gets itchy when his work doesn't go to prod quickly enough and gets defensive on code reviews.
Sure, part of the fun of programming is understanding how things work, mentally taking them apart and rebuilding them in the particular way that meets your needs. But this is usually reserved for small parts of the code, self-contained libraries or architectural backbones. And at that level I think human input and direction are still important. Then there is the grunt work of gluing all the parts together, or writing some obvious logic, often for the umpteenth time: these are things I can happily delegate. And finally there are the choices you make because you think of the final product and of the experience of those who will use it: this is not a puzzle to solve at all, this is creative work and there is no predefined result to reach. I'm happy to have tools that allow me to get there faster.
You still care about the end result though: in your case, the end result being the puzzle you solved.
AI can make that process still enjoyable. For instance, I had to build a very intricate cache handler for Next.js from scratch that worked in a very specific way, serializing JSON in chunks instead of JSON.parse-ing it all into memory at once. I knew the theory, but the API details and the other annoyances always made it daunting for me.
With AI I was able to tinker more with the theory of the problem and less with the technical implementation, which made the process much more fun and doable.
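For what it's worth, the chunked-serialization idea is just an incremental writer that never materializes the whole document at once. A minimal sketch in Python (not the commenter's actual Next.js/TypeScript code, and with made-up data):

```python
import json

def iter_json_array(items):
    """Yield a JSON array as small text chunks, so the full document
    never has to exist in memory at once (the streaming counterpart
    of a single json.dumps / JSON.parse round-trip)."""
    yield "["
    for i, item in enumerate(items):
        if i:
            yield ","          # separator between elements
        yield json.dumps(item)  # each element serialized on its own
    yield "]"

# Chunks can be written to a file or response stream one at a time;
# joined together they form a normal, parseable JSON document.
doc = "".join(iter_json_array([{"a": 1}, {"b": 2}]))
```

The same shape works for reading: consume the stream element by element instead of loading and parsing the entire payload.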
Perhaps we're just climbing the ladder of abstraction: in the early days people were building their own garbage collection mechanisms, their own binary search algorithms, etc. Once we started using libraries, we had to find the fun in some higher level.
Perhaps in the future the fun will be about solving puzzles within the realm of requirement definitions and all the intricacies that stem from that.
I feel like I still get to solve the puzzles I like because I like the higher-level architecture/design parts. I just don’t have to type as much, because I can provide a stubbed-out solution and tell it to fill in the rest.
I agree with you. I try to remember though that this is just the same situation that artists, musicians and (more recently) writers have been in for a long time. Unless you’re one of a very lucky few you’ll only get fulfillment in those pursuits if you enjoy the process rather than the output since it’s hard to get money or recognition for output anymore. Pure coding and lots of areas of code problem solving are going to end up in the same position.
From my understanding, you can instead use Claude in the following manner: understand and solve the problem, put up pseudo code, and then tell Claude to generate real code and maybe restructure it a bit. So you don't have to write the actual code but have solved the problem all by yourself.
> This has been 100% my experience. I enjoy the puzzle solving and the general joy of organizing and pulling things together. I couldn't really care less about the end result meeting some business need.
I don't really see where he/she said that explicitly. My understanding is that they like to solve problems but don't care about the final implementation. But I'm happy to stand corrected.
If you worked in an office and your boss asked for 100 copies of their memo, they want you to use the copy machine.
If they saw you typing it out 100 times they’d tell you that you’re wasting time. It doesn’t matter that you like to type or that you went to school to get a degree in typing.
Your company isn’t paying you to solve puzzles. If you aren’t putting things into production, what good are you as an employee?
> Being forced to use Claude at work, it really just took away everything that was enjoyable. Instead of solving puzzles I'm wrangling a digital junior dev that doesn't really learn from its mistakes, and lies all the time.
Claude very much learns if you teach it and tell it to note the things you want it to remember in the CLAUDE.md files. Claude is much better than any junior and most mid-level ticket takers.
> Your company isn’t paying you to solve puzzles. If you aren’t putting things into production, what good are you as an employee?
No, the company is paying its employees exactly to solve puzzles; it just labels them problems or requirements.
And when an employee focuses on solving puzzles and enjoys it, the code naturally ends up in production, and gets forgotten because the puzzle is solved well.
I care about exchanging labor for money to support my addiction to food and shelter.
My employer just like any other employer cares about keeping up with the competition and maximizing profit.
Customers don’t care about the “craftsmanship” of your code - aside from maybe the UI. But if you are a B2B company where the user is not the customer, they probably don’t even care about that.
I bet you most developers here are using the same set of Electron apps.
Now I work in consulting (AWS + app dev) as a staff consultant leading projects, and unless you work in the internal consulting department at AWS (been there, done that as a blue-badge, RSU-earning employee) or GCP, it’s almost impossible to get a job as an American as anything but sales or a lead. It’s a race to the bottom, with everyone hiring in LatAm if you are lucky (same time zone, more willing to push back against bad ideas, more able to handle ambiguity) or India.
Everything is a race to the bottom. The only way I can justify not being in presales is because I can now do the work of 3 people with AI.
There still is. In most enterprises, the tasks are usually to take some data from somewhere and transform it into the input of another process, or to make a tweak to an existing process. Most of the other types of problems (information storage, communication, accounting, ...) have been solved already (and were solved before the digital world).
People can see it as a grind. But the pleasure comes from solving the meta problem instead of the one in front of you (the latter always creates brittle systems). But I agree that it can become hell if no care went into building the current systems.
And they are tasks with standardized best practices. I knew that I wanted to write an internal web app that let users upload a file to S3 via Lambda and store the CSV rows in Postgres.
I just told it to do it.
It got the “create an S3 pre-signed URL to upload to” part right. But then it chose the naive implementation, downloading the file and doing a bulk upsert, instead of using the aws_s3 extension for Postgres to import the file directly from S3. Once I told it to do that, it knew what to do.
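For reference, the server-side import the commenter describes is (I believe) the RDS `aws_s3` extension's `table_import_from_s3` call, which has Postgres pull the CSV from S3 itself. A rough sketch of building that statement, with made-up table, column, and bucket names:

```python
def build_s3_import_sql(table, columns, bucket, key, region="us-east-1"):
    """Build the aws_s3 server-side import statement so Postgres loads
    the CSV straight from S3, instead of the app downloading the file
    and re-inserting rows itself. Identifiers here are illustrative;
    real code should not splice untrusted input into SQL."""
    return (
        "SELECT aws_s3.table_import_from_s3("
        f"'{table}', '{columns}', '(format csv, header true)', "
        f"aws_commons.create_s3_uri('{bucket}', '{key}', '{region}'))"
    )

# Example: import data/rows.csv from my-bucket into the uploads table.
sql = build_s3_import_sql("uploads", "id,name", "my-bucket", "data/rows.csv")
```

The exact options string (CSV format, header handling) depends on the file; the point is that the data path is S3 → Postgres directly, not S3 → app → Postgres.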
But still, I cared about the systems and architecture, not whether it decided to use a for loop or a while loop.
Knowing that, or knowing how best to upload files to Redshift, or other data-engineering practices, isn’t new or novel.
They aren’t. But there are a lot of mistakes that can happen, and until an AI workflow is proven to prevent them, it’s best to monitor it, and then the speed increase is moot. Humans can make the same kinds of mistakes, but they are incentivized not to (losing their reputation and their jobs). AI tools don’t have that lever against them.
And so are mid level developers. A mid level developer who didn’t have 8 years of experience with AWS would have made the same mistake without my advice.
I would have been just as responsible for their inefficient implementation in front of the customer (consulting) as I would be with Claude and it would have been on me to guide the mid level developer just like it was on me to guide Claude.
The mid level developer would never have been called out for it in front of my customer (consulting) or in front of my CTO when I was working for a product company. Either way I was the responsible individual
The reason your login is taking 45 seconds and your database is locking up with 10 concurrent users isn’t because developers didn’t write good code following the correct GOF pattern.
If companies cared about bloat and performance you wouldn’t see web apps with dozens of dependencies, cross platform mobile apps and Electron apps.
Putting solutions into production. Not "things". Honestly, I'm sick of dogshit companies wanting something done yesterday but being happy to spend the next two years having engineers debug the consequences.
I've just written the fifth from-scratch version of a component at work. The requirements have never changed (it's a client library for a proprietary server, which has barely ever changed). I'm the 5th developer at the company to write a version of it.
All because nobody gave engineers the breathing room to factor the solution into well-thought-out, testable, reusable components. Every version before it is a spaghetti soup of code, mixing unrelated functionality into a handful of files.
No well thought out interfaces. No automated end-to-end testing, and no automated regression testing. The whole thing is dire and no managers give a fuck.
AI cannot solve for a lack of engineering culture. It can however produce trash faster than ever at these toxic shops.
And this has nothing to do with AI, like you said. On another project, a vibe-coded API that I designed, I also didn’t look at a line of code besides the shell script I had Claude create to run the integration tests with curl.
On the other hand, AI doesn’t care about sloppy code. I haven’t done any serious web development since 2002, yet I created two decently featureful internal websites, authenticated with Amazon Cognito, without looking at a line of code. I doubt that for the lifetime of this app anyone will ever look at a line of code; they’ll make any changes using AI.
I enjoy the journey too. The journey is building systems, not coding. Coding was always the most tedious and least interesting part of it. Thinking about the system, thinking about its implementation details, iterating and making it better and better. Nothing has changed with AI. My ambition grew with the technology. Now I don't waste time on simple systems. I can get to work doing what I've always thought would be impossible, or take years. I can fail faster than ever and pivot sooner.
It's the best thing to happen to systems engineering.
My experience was exactly the opposite—I came from the other side entirely.
I had absolutely no programming knowledge, and until three weeks ago, I didn’t even know what a Parquet file was.
While reviewing a deep research project I had started, I stumbled upon an inefficiency: the USDA’s phytochemical database is publicly accessible, but it’s spread across 16 CSV files with unclear links. I had the idea to create a single flat table, enriched with data from PubMed, ChEMBL, and patents. Normally, a project like this would have been completely impossible for someone like me; the programming hurdle is far too high.
With Claude Opus 4.6, I was actually able to focus entirely on the problem architecture: which data, from where, in what form, for which target audience. Every decision about the system was mine. Claude Opus took care of the implementation.
I’m probably the person your “journey vs. destination” debate wasn’t meant for. For me, the destination was previously unattainable. My journey became possible because the AI took over the part that I could never have implemented anyway.
I hear everyone say "the LLM lets me focus on the broader context and architecture", but in my experience the architecture is made of the small decisions in the individual components. If I'm writing a complex system, part of getting the primitives and interfaces right is experiencing the friction of using them. If code is "free", I can build a bad system because I never experience using it; the LLM abstracts away the rough edges.
I'm working with a team that was an early adopter of LLMs and their architecture is full of unknown-unknowns that they would have thought through if they actually wrote the code themselves. There are impedance mismatches everywhere but they can just produce more code to wrap the old code. It makes the system brittle and hard-to-maintain.
It's not a new problem, I've worked at places where people made these mistakes before. But as time goes on it seems like _most_ systems will accumulate multiple layers of slop because it's increasingly cheap to just add more mud to the ball of mud.
This matches my experience when building my first real project with Claude. The architectural decisions were entirely up to me: I researched which data sources, schema, and enrichment logic were suitable and which to use. But I had no way of verifying whether these decisions were actually good (no programming knowledge) until Claude Opus had implemented them.
The feedback loop is different when you don’t write the code yourself. You describe a system to the AI, after a few lines of code the result appears, and then you find out whether your own mental model was actually sound. In my first attempts, it definitely wasn’t. This friction, however, proved to be useful; it just wasn’t the friction I had expected at the beginning.
Maybe it is just my experience, because I'm not a systems programmer but am learning to be one. I find that the concepts in systems programming are not really very hard to understand (e.g. the log-structured file system is the one I'm reading about today), but the implementation, the actual coding, the actual weaving of the system, is most of the fun/grit. Maybe it is just me, but I find that for systems programming, I have to implement every part of it before claiming that I understand it.
So much agreed. I'm constraining my AI, which always wants to add more dependencies, create unnecessary code, and broaden tests to the point they become useless. I have in mind what I want it to build, and now I have workflows to make sure it does so effectively.
I also ask it a lot of questions about my assumptions, and so "we" (me and the AI) find better solutions than either of us could come up with on our own.
14 years ago hearing Dan Pink talk on motivation (https://youtu.be/u6XAPnuFjJc) catalyzed the decision to change jobs.
One of the three motivators he mentions is mastery. And cites examples of why people waste hours with no pay learning to play instruments and other hobbies in their discretionary time. This has been very true for me as a coder.
That said, I enjoy the pursuit of mastery as a programmer less than I used to. Mastering a “simple” thing is rewarding. Trying to master much of modern software is not. Web programming rots your brain. Modern languages and software product motivations are all about gaining more money and mindshare. There is no mastering any stack; it changes too swiftly to matter. I view the necessity of using LLMs as an indictment of what working in and with information technology has become.
I wonder if the hope of mastering the agentic process is what is rejuvenating some programmers. It’s a new challenge to get good at. I wonder what Pink would say today about the role of AI in “what motivates us”.
It would have been worth it if the frontier models were open weight. Right now, if you invest time in mastering tools like Claude Code or Google’s Antigravity, there is no guarantee that you won’t be removed from their ecosystems for any reason, which would make your efforts and skills useless.
If there is a skill to using LLM coding agents, I think it is mostly just developing an intuitive sense for how to prompt and the “jagged frontier” of LLMs.
IME, the tools are largely interchangeable. They are all slightly different, but the basics of prompting and the jaggedness of the frontier is more or less the same across all of them.
Switching from codex to claude code is orders of magnitude simpler than switching from c# to java or emacs to vim.
I'm not sure I understand... why not simply ignore AI and keep coding the way you always have? It's a bit like saying motorboats killed your passion for rowing.
But to push the analogy a bit: if you are rowing on a lake with motorboats, it is a totally different experience. Noisy, constant wake. We are part of an ecosystem, not isolated.
Growing up, the lakes in New England were filled with sailboats. There were sailing races. Now it's entirely pontoon boats. Not a sailboat to be found.
The lake is not, however, yours to dictate how others move along it. Imagine if the horse owners in such an analogy had decided not to allow cars on the road because they're noisy and a "totally different experience".
You want a pre-AI experience? Feel free to code without it. It's definitely still doable.
nah. tell him, you're in a race. others are using motorboats. the last to reach the finish line loses their salary.
that's a better analogy, or at least what a lot of people think the analogy is.
My hypothesis about this, and about the sentiments of other people who dislike AI while citing reasons similar to the post's, is that the issue is not simply that they enjoyed the journey over arriving at the destination.
Rather, the issue is that they believe they are GOOD at the "journey" of getting to the destination and could compare their journey to others'. Another take is that they could readily share their journey or help their peers. Some really like that part.
Now those you are comparing yourself to are not other people going through the same journey, so there is less camaraderie. Others no longer enjoy that same journey, so it feels more "lonely" in a way.
There's nothing stopping someone from still writing their own code for fun by hand, but the element of sharing the journey with others is diminishing.
He's not getting customers by rowing them across the river when the motorboats do it faster and cheaper. You compared a hobby to doing something "for a living".
I turned 59 this week. I am excited to go to work again. I use Claude every day. I check Claude. I learn new things from Claude.
I no longer need a "UI person" to get something demonstrable quickly. (I've never been a "UI guy"). I've also never been a guy coding during every waking moment of my life as that would have been disastrous for my mental health.
I am retiring in <=2 years, so I am having fun with this new associate of mine.
One pitfall I've managed to avoid all these 36 years I've been at it is falling in love with the solution. I fall in love with the problems. Claude solves those problems far quicker than I ever could.
I turn 52 in a couple of months and I’ve only lasted this long from starting out as a hobbyist in 1986 by not being the old guy yelling at the clouds.
I got into “cloud” at 44, got my first job (and hopefully last) at BigTech at 46 and now I work in cloud consulting specializing in app dev leading projects at 51.
Every project I’ve done since late 2023 has involved integrating with LLMs and I usually have three terminal sessions up - one with Claude, one with Codex and one where I do command line stuff and testing.
I am motivated by the result, the design and on the system level.
I suppose in a way it's like saying diesel engines killed passions for sailing.
A career sailor on a sailing ship who finds meaning in rigging a ship just so with a team of shipmates in order to undertake a useful journey may find his love of sailing diminished somewhat when his life's skills and passions are abruptly reduced to a historical curiosity.
Other sailors may prefer their new "easier" jobs now they don't have to climb rigging all day or caulk decking (but now they have other problems, you need far fewer of them per tonne of cargo).
And the diesel engine mechanics are presumably cock-a-hoop at their new market.
(This analogy makes no claim as to the relative utility of AI compared to diesel ships over sailing vessels).
I agree, I’m an old dude too. For personal projects I do what I like. I also like carving stone and wood the hard way, just because.
At work though the hype sucks the life out of the last part of the job that some people found enjoyable, because complete control is enjoyable. Personally I think work is just doing what someone else wants, rather than pleasing yourself.
because my company is mandating that we use motorboats instead of rowboats.
i can continue to row as a hobby, but i've been very lucky in that my work has always been something i genuinely enjoyed. now that it's become something that's actively burning me out, it's far harder to find time for hobbies and interests.
Well, I do.
I do not dislike AI at all: I even find it fun, although different.
The thing is that I am not only enjoying the journey, although it is what I enjoy most by far. But when everyone is able to reach the destination, the interest in the journey decreases (if that makes sense).
It is not a rant against AI: I use AI daily, it IS useful, it is just less fun since AI is around, and the only way I can explain this is the journey vs the destination.
It's just change. Yeah, I also miss coding, but I also like the new things that AI allows me to access. And in times of change we can feel uncomfortable, because what we were used to is no more, and we cannot yet see what is to come. But if you're open to it, a new passion will arise in you that you could never have imagined before.
>It's a bit like saying motorboats killed your passion for rowing.
This is a real thing that happens and the analogy is clearly working against you! If you paddle a canoe or rowboat on a river or lake, your experience is made MARKEDLY worse by a motorboat zooming by and scaring the fish, rocking you with wake, smelling up the place with 2-stroke fumes, etc. Even when the motorboats aren't there, the built environment that supports them is bigger and more intrusive.
I was like this a few months back. You want to code and solve problems, but the AI can do all that for you. I got over it by moving the problem solving further down the chain: treat the AI as the team you are directing to solve the issue.
If I wanted to become a project manager I would have become one. AI has just exposed that many "engineers" are "temporarily embarrassed project managers", which is fine in the sense that it makes it clearer who actually enjoys making things and who just wants the end result regardless of how it's made.
> If I wanted to become a project manager I would have become one.
It's not so much a project manager. I have something I want to build, I create the plan and work slowly through it with Claude. Stopping at every piece and reviewing as I go.
I can confirm the code is good, but also when it takes a different approach I question why it took that approach. Occasionally I learn something.
> who just wants the end result regardless of how it's made.
Sometimes you want the app but don't care how it gets created, because it's helping you focus on what you really want to do. For example, I created a mind-map app in Xcode on par with XMind. Not every feature, but everything I use.
>AI has just exposed that many "engineers" are "temporarily embarrassed project managers", which is fine in the sense that it makes it clearer who actually enjoys making things and who just wants the end result regardless of how it's made.
AI has also exposed that many "engineers" are just "people who like fiddling with code" and that's fine in the sense that it makes it clear who are the actual engineers who are engineering solutions to real human problems and who just want to tinker with code.
Like imagine slandering a civil engineer "you just want a bridge that is safe and lasts for a century, you don't care about enjoying the journey of construction".
Haha! Your analogy doesn't work on multiple levels. Firstly, if you're outsourcing your work to AI you're not the engineer anymore. A civil engineer is different from a manager of a civil engineering project. Just like I wouldn't call myself an artist if I got AI to generate me some art, I wouldn't call myself a software engineer if I got AI to write all the code for me.
Secondly, it's not just about "enjoying the journey of construction", it's also about caring about the quality of the end results. Getting vibe coded software that is as stable as a "bridge that is safe and lasts for a century" is not a matter of careful engineering decisions, it's mostly a matter of luck, because you don't have the necessary oversight in the quality of the output unless you're doing extensive reviews of the generated code, at which point you greatly diminish the time you're supposedly saving.
False. If you "outsource your code" to a compiler and just write a higher-level language, you're not an engineer. You literally don't own any of your own code, just an abstraction of it written in human language. See how that works? An engineer can delegate, period.
- "I wouldn't call myself a software engineer if I got AI to write all the code for me"
If all you do is write code you're not an engineer. I think you fundamentally don't know what engineering is. In a very real sense engineering is what you do when you're not coding. The civil engineer doesn't construct the bridge personally.
- "Secondly, it's not just about "enjoying the journey of construction", it's also about caring about the quality of the end results".
Codemonkeys DON'T CARE about the quality of the end result. They only care about their little corner of the zen garden. Writing real software for real users is by far the worst part of a codemonkey's job.
- "Getting vibe coded software that is as stable as a "bridge that is safe and lasts for a century" is not a matter of careful engineering decisions, it's mostly a matter of luck"
Nonsense. The engineer who spends 90% of his time architecting systems and testing them at a high level is making safer and more stable software than the codemonkey who spends 90% of his time tinkering with the details. Forest for the trees.
- "unless you're doing extensive reviews of the generated code, at which point you greatly diminish the time you're supposedly saving."
Who said anything about "saving time"? We're engineering high quality systems. Some of us spend our time at a higher level, thinking holistically about the system, testing multiple concepts and rapidly iterating. Others demand bespoke handwritten code and in the time allowed can barely finish a single concept with a questionable amount of polish. Whatever their first idea is will ship, and they'll have no real ability to justify the architecture other than vibes.
> Like imagine slandering a civil engineer "you just want a bridge that is safe and lasts for a century, you don't care about enjoying the journey of construction".
Would you currently trust a bridge designed by a civil engineer using AI for all of their calculations ?
> Would you currently trust a bridge designed by a civil engineer using AI for all of their calculations ?
Not a great comparison. I'd agree with you if it was straight up vibe coding.
But co-creating (which is what I do) is different. I create a plan, then step through it with Claude. Claude creates a small part of what I want; I review, tweak, or ask Claude why it took that approach if it's different.
I know the subject matter of what it is creating, so in this sense it is safe, as long as I am reviewing everything.
It gets dangerous if you just let it create something without any interaction or understanding of what is being created.
>Would you currently trust a bridge designed by a civil engineer using AI for all of their calculations ?
Of course. I've seen how sloppy and lazy humans are, and I already use the bridge, and if the safety truly came down to the output of a single person, then the risk is already significant.
I must say, I got a chuckle at "using AI to do their calculations". Oh no, my agent is going to write a python script to do basic maths and check its work against a series of automated tests, the sky is falling!
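For what it's worth, the pattern that comment describes (have the agent write a small script, then verify it against automated checks) might look something like this sketch in Python. The function and the load values are invented for illustration; this is not real bridge engineering:

```python
# Hypothetical sketch: a calculation script plus automated checks.
# Values and the formula's use here are illustrative only.

def beam_max_stress(load_n: float, length_m: float, section_modulus_m3: float) -> float:
    """Max bending stress (Pa) of a simply supported beam under a
    central point load: sigma = M / S, with moment M = F * L / 4."""
    moment = load_n * length_m / 4.0
    return moment / section_modulus_m3

# Automated checks against hand-worked values, the part that catches
# the agent when it gets the "basic maths" wrong.
assert beam_max_stress(1000.0, 4.0, 0.001) == 1_000_000.0  # 1 MPa by hand
assert beam_max_stress(2000.0, 4.0, 0.001) == 2 * beam_max_stress(1000.0, 4.0, 0.001)
```

The point isn't the formula; it's that the checks encode values worked out independently of the script, so the script can't grade its own homework.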
I'm 55 years old and been programming since I was 10.
Yes, programming has forever changed - especially since Opus 4.5 was released in late November. Programmers who don't use AI models and agents are obsolete in a professional context. It's not a question of journey versus destination. It's that the nature of the journey has changed and productive velocity has significantly increased. Embrace it just like you have presumably embraced every other productivity improvement over the decades. Most of us aren't coding assembly any more because pre-AI tools and languages accelerated our productivity. Now it's time to do that again, recognizing that the journey may be a different experience from the coding process you love, but we're not yet going from "Hello, World" to large complex production-ready systems in a single prompt.
I point out that if you're programming for personal satisfaction rather than professionally, then nothing has changed. Use AI or don't use AI; whatever works for you. You have the luxury of being a hobbyist.
I found my peace with AI aided coding during the last three months. I started development of an environment for programming games and agent simulations that has its own S-expression based DSL, as a private project. Think somewhere between Processing and StarLogo, with a functional style and a unique programming model.
I am having long design sessions with Claude Code and let it implement the resulting features and changes in version controlled increments.
But I am the one who writes the example games and simulations in the DSL to get a feel for where its design needs to change to improve the user experience. This way I can work on the fun and creative parts and let Claude do the footwork.
I let Claude simultaneously write code, tests and documentation for each increment, and I read it and suggest changes or ask for clarification. I find it a lot easier to dismiss an earlier design for a better idea than when I would have implemented every detail of the system myself, and I think so far the resulting product has largely benefited from this.
To me, now more than ever it is important to keep the love for programming alive by having a side project as a creative outlet, with no time pressure and my own acceptance criteria (like beautiful code or clever solutions) that would not be acceptable in a commercial environment.
Maybe this could work for some as a general recipe for how to collaborate with AI:
- Split up the work so that you write the high-level client code, and have AI write the library/framework code.
- Write some parts of your (client) code first.
- Write a first iteration of the library/framework so that your code runs, along with tests and documentation. This gives the AI information on the desired implementation style.
- Spend time designing/defining the interface (API, DSL or some other module boundary). Discuss the design with the AI and iterate until it feels good.
- For each design increment, let AI implement, test and document its part, then adapt your client code. Or, change your code first and have AI change its interface/implementation to make it work.
- Between iterations, study at least the generated tests, and discuss the implementation.
- Keep iterations small and commit completed features before you advance to the next change.
- Keep a TODO list and don't be afraid to dismiss an earlier design if it is no longer consistent with newer decisions. (This is a variation of the one-off program, but as a design tool.)
That way, there is a clear separation of the client code and the libraries/framework layer, and you own the former and the interface to the latter, just not the low-level implementation (which is true for all 3rd party code, or all code you did not write).
Of course this will not work for you if what you prefer is writing low-level code. But in a business context, where you have the detailed domain knowledge and communicate with the product department, it is a sensible division of labour. (And you keep designing the interface to the low-level code.)
At least for me this workflow works, as I like spending time on getting the design and the boundaries right, as it results in readable and intentional (sometimes even beautiful) client code. It also keeps the creative element in the process and does not reduce the engineer to a mere manager of AI coding agents.
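The division of labour in the recipe above can be sketched in a few lines of Python. All names here are invented for illustration (a toy grid API in the spirit of the StarLogo-like project mentioned earlier); the point is only the boundary: the human owns the interface and the client code, and the implementation behind the interface is what gets delegated:

```python
from typing import Protocol

# The interface (the module boundary) that the human designs and owns.
class Grid(Protocol):
    def neighbors(self, x: int, y: int) -> list[tuple[int, int]]: ...

# Client code written first: it pins down how the API should feel.
def count_neighbors(grid: Grid, x: int, y: int) -> int:
    return len(grid.neighbors(x, y))

# The implementation the AI would fill in (with tests and docs),
# which the human then reviews and iterates on.
class WrappingGrid:
    """Toroidal grid: edges wrap around."""
    def __init__(self, width: int, height: int) -> None:
        self.width, self.height = width, height

    def neighbors(self, x: int, y: int) -> list[tuple[int, int]]:
        return [((x + dx) % self.width, (y + dy) % self.height)
                for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                if (dx, dy) != (0, 0)]

assert count_neighbors(WrappingGrid(10, 10), 0, 0) == 8
```

Because the client code depends only on the `Grid` protocol, an implementation can be dismissed and rewritten (by human or AI) without touching the code you own, which is what makes the "don't be afraid to dismiss an earlier design" step cheap.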
As I commented in the other post, it killed mine at work, because my boss is pushing "AI" really hard on the devs. Fortunately, he's now seeing enough evidence to counteract the hype, but it's still going to be present and dragging down my work. But in my off time, I only experiment with LLMs to see if they're getting better. Spoiler alert: they aren't, at least not for the kind of things I want to do.
I agree here. It certainly has burned me out as of recently. The expectation to deliver faster and faster results purely by the use of AI.
AI poses many challenges, from security to ensuring code safety. Held to the same expectations we had before the hype, you could consider it good enough to shave off 8 hours of work. But that's just the first 8 hours of getting some code ready.
A savvy dev could easily just grab an existing template they made prior and stitch things together better than AI.
Now it's just massaging the prompt and hoping it adheres.
If you enjoyed coding for the sake of coding it hasn't gone anywhere. People still knit for themselves when they can go buy clothes off the rack. People still enjoy chess and Go even though none of them can beat a machine.
If you enjoyed that you could do something the rest of the world can't - well yeah some of that is somewhat gone. The "real programmers" who could time the execution of assembly instructions to the rotation speed of an early hard drive prob felt the same when compilers came around.
It has rekindled my joy however. Agentic development is so powerful but also so painful and it's the painful parts I love. The painful parts mean there is still so much to create and make better. We get to live in a world now where all of this power is on our home computers, where we can draw on all the world's resources to build in realtime, and where if we make anything cool it can propagate virally and instantly, and where there are blank spaces in every direction for individuals to innovate. Pretty cool in my view.
I’ve given AI a try and found the destination felt empty.
I’ve made the choice to not go full bore into AI as a result. I still use it to aid search or ask questions, but full on agentic coding isn’t for me, at least not for the projects I actually care about and want/need to support long-term.
I am also almost 60, and from my perspective, Claude Code solves a lifelong problem for me. I have always found coding to be quite tedious, requiring a savant level of syntax recall. It's not that I can't do it — I can — but I am better suited to managing the release, setting up the support, etc. With Claude Code, I am free to plan and create. I am able to envision a product and ship it all by myself, requiring 90% less time. Claude Code allows me to get to market. Working with Claude Code genuinely feels like having the creative partner I've waited 30 years for. Claude has made a dream come true for me.
I have decided that I will only write artisanal code. I’m even thinking of creating a consultancy agency where people can hire me to replace AI generated code.
Yes, it changes the nature of the work. Back when you started coding, there were people experiencing the same thing about the shift to higher level languages. What some of them liked was the efficiency and aesthetic of using just the right assembly language trick, and good compilers with high-quality instruction selection took that away. I'm sure there were programmers who missed the days of punching hex into memory.
We start to understand those old fogeys whom we blew past when we were young, once we get to their age. It's the way of the world.
The sad truth of life. This story reminded me of the time when I tried my first MMO - at first it felt like a fairy tale, something unknown, something that could still surprise you. And then you get familiar with all the mechanics, and the magic disappears. Now it’s just a “tool.”
For me (60 too) it's both, the journey and the destination. LLMs not only help me get around the boring stuff so I have more time for the things I really want to design and build, but they also open areas for me in which I always wanted to go but for which it was very time consuming or difficult to get the required knowledge (e.g. tons of academic papers which are neither well written nor complete and sometimes just wrong). The LLMs help me to explore and find the right way through those areas, so these adventures suddenly become affordable for me, without doing a PhD on each subject. And even if the LLM generates some code, it still needs a "guiding hand" and engineering experience. So for me, no, AI doesn't kill my passion, but offers a very attractive symbiosis, which makes me much more efficient and my spare-time work even more interesting. I find myself watching fewer and fewer streaming videos because exploring new areas and collaborating with the LLM is much more entertaining.
Doesn't AI just replace the coding that other people have done many times? That is, we don't have to do repetitive work because of AI. Yes, I don't know how to write a React app even though I can vibe code one quickly, but that's repetitive nonetheless; it's just that another person repeated the code before me. That said, there is a ton of code to write by hand if we push the envelope. The algorithms that no one has built for production. The concurrency library that no one has built in my favorite language. That simulation that Claude Code just can't get right no matter how much prompt/context engineering/harness engineering I do. The list can go on and on.
I’ve made a bunch of tools to help me get around file system limitations on modern Macs (APFS), treating my entire legacy file collection as a CMS challenge, and have cranked out more binaries in 3 months than in the 10 years before the arrival of these tools. If you know how to use these tools and how to think like an architect and not a hobbyist, Claude is truly in the technological lead.
I am a bit, but not much, younger than 60 and have been coding since Apple II days.
These tools are pretty close to HAL 9000 so of course GIGO as always has been the case with computer tech.
Almost everything is in Go except an image fingerprinting API server written in Swift. The most USEFUL thing I’ve written is a Go-based APFS monitor that will help you not overfill your SSD and get painted into a corner by Time Machine.
This is actually quite a refreshing view. I feel like all I read about is how magnificent AI is, and whether it will be for the better or worse. But you are absolutely right that, for people who enjoy the actual work (e.g. coding), it might take away some of the passion and joy, largely because AI has enabled so much more to be done in the same time. Now it almost makes you feel guilty if you don't use it, because everyone else will get so far ahead and the work "should've/could've" been done much faster.
What do you all think about the "Solve It" method by the Answer dot AI folks?
It's more like iterating on the REPL with AI in the loop, meaning the user stays in control while benefitting from AI, so real growth happens.
Interesting thing to consider, in a couple of years, will there be a differentiator between people who are good at driving company-specific AI tools and those who are generally better by having built skills the hard way ground up with benefit of AI?
Have you considered that the people finding passion in vibe coding are enjoying the journey? I think what you are lamenting is that other people aren't on the same journey you took.
Your destination is only a point somewhere on what they perceive as the journey. You're saying, well, if they don't go where I did and stop when I did, it was no proper journey.
Plenty, but few of them can be solved by writing and deploying an app somewhere. Some of them are actually made worse by the latter. And how to make money is mostly orthogonal to solving the problems of the world.
I'm completely the opposite. 100% the opposite. I wrote code because it was the only way to make the lights blink. I saw code as an impediment to completing a project. There was a lot of friction between the design and the final result. AI reduces that friction substantially.
The remaining friction is fundamentally the same as that which existed when writing code manually. The gap between what you envision for your design/solution and the tools for implementing that vision. With code, the friction encountered when implementing your vision is substantial; with AI, that friction is significantly reduced, and what's left is in areas different from what past experience would lead you to expect.
I think this is really honest, and it's the reason we see this big divide. On the one hand, people who never enjoyed coding but enjoy seeing their ideas realized are celebrating. But the group of people who enjoyed coding in itself, and never saw it as just a means to a result, feel cheated.
I think a lot of the polarization you see online revolves around this difference in character.
I am of two minds about this, really, because I've certainly enjoyed the puzzle and the journey in the past. But I also enjoy getting to see ideas I've had for a while, but never had the time for, realized quickly.
I also don't think it's just a distinction between enjoying the journey or the destination; I think it's about enjoying a very specific type of journey, e.g. slow, methodical, etc.
Whereas people with a different temperament find the speed, the ease of experimentation, and the sometimes surprising results more appealing.
I think it's perfectly understandable that some people feel dread while others feel excited. But whatever the outcome, we have to adapt.
For people who feel that AI kills a passion, I'd recommend finding another hobby. Especially at the age of 60, when you don't have to work, you can plan retirement -- the next 20+ years as if it is your second childhood, and do whatever you want. I encourage you to search for greater meanings. After all, programming is just a man-made wonder, and the universe is full of grandeur.
The friction is the feature for journey-focused builders. When AI removes the cognitive resistance—debugging a parser, wrestling with state management, naming things—it also removes the scaffolding that forces you to really understand what you're building. You end up in implementation details faster, which feels productive until you realize you're solving problems you wouldn't have encountered if you'd thought harder upfront.
Those who say they enjoy "building" with AI instead of coding are just outsourcing the coding part (while training the AI for the outsourcing company). It's nothing to enjoy, but yeah, you get the product, which is probably what people enjoy: getting the product.
It's like buying IKEA furniture and thinking you made it by merely assembling it. If you don't know the IKEA effect, it's assigning greater value to something than it actually has because you were partly 'involved' in creating/assembling it.
With coding (by hand), there are two aspects of it. One is the pleasure of figuring out how to do things, and one is the pleasure of making something that didn't exist before. Building with AI gives you the second pleasure, but not the first.
Or maybe it still gives you the first, too. Maybe you get that from figuring out how to get the AI to produce the code you want, just like you got it from trying to get the compiler to produce the code you want.
Or maybe it depends on your personality and/or your view of your craft.
Anyway, the point is, people take pleasure in their work in different ways. Those who enjoy building with AI are probably not all lying. Some do enjoy it. And that is not a defect in them. It's fine for them to enjoy it. It's fine for you not to enjoy it.
In my field, which involves large legacy C++ codebases and complex numerical algorithms implemented by PhDs, LLMs have their place, but the improvements in productivity are not that great, because current LLMs simply make too many mistakes in this context and mistakes are usually very costly.
Everyone "in the know" appreciates this, but equally, in the current environment, has to play along with the AI hype machine.
It is depressing, but the true value of the current wave of LLMs in coding will become more clear over time. I think it's going to take some serious advances in architecture to make the coding assistant reliable, rather than simply scaling what we have now.
I feel you, and at a much younger age. I started programming professionally full time around 2011-2012. Documentation was good, but practical applications where you could see what you want to achieve in action were very limited. At the time I found myself writing drivers for fiscal printers using RS-232. The documentation provided by the manufacturer was absolutely horrible: "0x0b -> init, 0x23 -> page", literally code -> single word. Although I hated having to effectively brute force it, the feeling at the end was amazing. I tried AI code on several occasions and I hate it with a passion: full of bugs, completely ignoring edge cases, and horrible performance. And ultimately I end up spending more time fixing the slop than it takes me to sit down, think it through, and get it done right. And I see many "programmers" just throw a prompt into [insert AI company here] and celebrate the slop, patting themselves on the back.
It's the programming equivalent of those tiktok videos split in half, top half being random stock videos, bottom being temple run and an AI narration of a mildly wtf reddit post.
In a way I am lucky that I work at a place where everyone gets to choose what they want to use and how they use it. So my weapons of choice are a slightly tweaked, almost vanilla zsh, vim and zed with 0 ai features. I have a number of friends/former coworkers working at places where the usage of ai is not just allowed or encouraged but mandated. And is part of their performance score - the more you slop-code, the better your performance.
There is some joy in catching the LLM in a house-of-cards misunderstanding of how something works. They’ll weave really convincing fictions from a bit of debugging. Then the Socratic probing of that can be fun, in part because the LLM tries to use big words to seem smart.
Then you hopefully capture that information somehow in a future prompt, documentation, test, or other agent guardrail.
So I find fun in the knowledge engineering of it all: the meta practice of building up a knowledge base of how to solve problems in a codebase like this.
I think a lot of the push back comes from people who haven’t yet been directly affected. Those later in their careers often have little at stake and may not notice—or care—how it impacts juniors. People who feel it’s helping them now are just lucky that their roles and knowledge haven’t been disrupted yet. Eventually, it will catch up, and the discussion will shift to adapting or moving on—just like every other wave of "AI makes what I enjoy obsolete".
Do you stop gardening because farms exist? Do you stop playing chess because engines have surpassed every human grandmaster? Do you stop driving because cars can drive themselves?
The tool doesn’t invalidate the craft. If anything, what we’re mourning when AI “kills the passion” might be about identity.
Many programmers spent decades defining themselves as the person who knows how to do hard things
And it’s disorienting when that thing becomes easy.
Honestly, this is the end of a long path to losing my passion. As tech became more professional and less experimental, my passion has been smothered. More code reviews and test writing, less figuring out novel solutions. More meetings and security reviews, fewer tech strategy planning meetings. More cloud platforms and API cloud solutions, less built functionality. More rental plans, less open source. AI just gets rid of the final experimentation stages.
Don't take it personally, but those are the worst kind of engineers in any real-world business context. I've watched that type of person ruin projects and companies by overengineering them to death.
On the bright side, such traits can make a positive impact in academic or research context.
I was never a coder, but I can understand this frustration. For some, the efficiency is probably really exciting, but for those who really enjoyed solving the puzzles, something is lost. For me personally, AI and vibe-coding have opened up a whole new world (I built a mobile app), but I get why it's killing the passion for some.
Well, I'm 55 - and have been a pentester for the past 15 years, but I am having a blast. CC is so enabling - I build something new most weekends (this is my best project so far - which is a site which collects and writes stories for all the latest AI security research: https://shortspan.ai/). All sorts of projects I have had on the back burner are now coming to life. I have 4 kids, so wouldn't have time without Codex/Claude Code. Maybe I have an hour here or there, and that is enough to make something or improve something
I'm 62 years old, and LLM coding agents have ignited my passion again. I'll soon unarchive older projects which were too hard to continue. With Opus this will finally be doable.
Adding a jit to perl, inlining, ssa, fixing raku,...
Endless possibilities.
Just fixing glibc or gcc is out, because people.
I just built two projects where one dogfooded the other, and set up a fully working Slack bot, all in 1 hour. If you still want to do things manually, you can. AI can answer questions about common topics way faster than searching docs. Idk why this kills people's passion, especially if you're old enough to not need a salary.
I agree. I wanted a particular tool to support my development. The libraries are well known and understood by people who work in text editors, but this is not my area and I have a busy life. Simply working out what I needed to know produced enough inertia to stop it happening.
I finally made it with Claude. I've been writing code a while so I absolutely didn't let Claude loose and I still refactored stuff by hand as sometimes that was faster than trying to get Claude to do it. I also know what the whole thing does - I read all the diffs it presented.
I wouldn't fully trust it to go off and do its thing unsupervised especially in my areas of speciality. But the scaffolding work like command line arguments - typing all that out was never my passion and I had snippets in my editor for most of it.
Perhaps if your passion is the process of meticulously laying out each file, then I can understand. Although the journey for me is the problem solving; nothing much excites me about any of the boilerplate parts.
Claude can certainly take a stab at the solution too and is best when it has some kind of test case to match or validation step. To me working out what those are was always the core of the job and without them Claude can make plenty of mistakes.
It's just a tool and I use it in ways to support what I enjoy.
My work situation is way more complicated. Bigcorp organizational dynamics nullify any marginal gain anywhere.
Abstractly, it's where log(forecast(assets)) exceeds log(anticipate(desires)) by log(safetymargin).
Molecular biologists are still searching for the pathways that govern the expression of assets, desires, and safetymargin.
Doctors and tax accountants are still arguing over whether the forecast and anticipation functions are learned or innate. And philosophers and used car salesmen can't even agree on where these functions sit on the cause/effect axis.
To me I think it just changed the destination. Now the journey is about the domain - experimenting with how ideas fit together. Yes I can type in a few words and have more working code than I did before, but not a product or suite or game or capability or whatever it is I’m building towards.
I'm even older; I've been coding for over 50 years. Now I just do it as a pastime, and I use AI to complete lines of code I would have written anyway. It's the structure and the approach which interest me, and they're all mine.
Why can you not enjoy both the journey and the destination? Surely you find some sub-disciplines of writing code more pleasant than others - can't you just keep doing those manually and offload/automate the rest to the machine?
Code to me has always been a solution. And I have always found problem definition the more interesting side of that coin. Luckily, it seems problem definition has become more important with Claude et al.
for me the journey is the things i can do through creating various things in various contexts and authoring the code itself is a small step in the journey.
if it the puzzle solving metaphor, i'm taking solved puzzles as pieces to solve a more meta puzzle ... and i enjoying the journey at that level.
i try to practice tracing all the way down the stack and learning about new things added to the stack, but i'm not in it for the sake of the stack or its vagaries and difficulties.
I think AI Coding allows more people to do more things, which is why for most people it "ignited their passion" because now they have the tools to build their ideas
My trick is to let AI help the journey but not hold on it. Use it for discussion, not implementation. It is even surprisingly good for kernel level projects.
I don't buy this journey vs destination binary I keep hearing. I always considered myself lucky to be 44 and still writing code all day. I love the journey - the mental satisfaction of creating something complex yet elegant. The perfectionism that leads you to ask yourself can this be simpler, faster etc. But I also now love creating things that frankly I was never going to find the time for.
Regarding the OP's dilemma. I am split. I enjoy both the process and the destination. With AI, the process is faster and less satisfying, but reaching the destination is satisfying in its own way, and enables certain professional ambitions.
I have always had other outlets for my "process" needs, and I believe I will spend more time on them in the future. Other hobbies. I love "artisanal coding" but that aspect was never really my job.
I don't really feel it's taking anything away, and in fact, it is providing me with something I've never done after 25+ years in IT doing everything BUT - development.
I've always done networking (ISP, datacenter, large enterprise), security (networking plus firewall, VPN, endpoint), unix/linux admin, even windoze stuff with active directory, but never any development or programming directly. I was just never wired for it, though I can do IP/CIDR/BGP/ACL stuff in my head for days. I missed out on higher ed entirely after high school, and I learned what I know along the way, through 45 years of tinkering with PCs from the Apple II on.
Right now I'm taking my network and security knowledge and writing an MCP server to do network and security tasks, giving agents like claude-code, claude-desktop, codex, and openclaw a means of accessing resources indirectly via my MCP server. It's something I could never have done before the advent of AI, as I "don't do code". Now I can tell it intimately everything I need/want it to do, and it just literally does it. It's extremely effective too, if at times aggravating/infuriating.
My biggest gripe is it does everything half ass, but nothing I haven't seen time and again from outsourcing. It feels like the usual contractor slop you might get hiring wipro/infosys or any other offshore development effort, but at least without human idiocy.
AI in general really needs a "don't do half-ass work" option, as it typically feels like I'll get "good enough for government work" sort of results until I kick and slap it at least twice to fix its shortcomings. It invariably feels like it always only gives me half of what I ask for. You can almost tell it's built to not give you everything up front, instead make you work for it.
Yet your passion to spread negativity is thriving. Here you thought negativity is a destination, but turns out it’s a whole new journey for you! Passion reignited indeed.
Earlier today I told my daughter that while AI might be a reason for my 8 year old grandson not to major in cmpsci some day, she should still encourage him to learn coding because (1) it is tremendous fun; and (2) develops problem solving skills.
It's why I spend the majority of my time coding every day at age 73.
This argument doesn't make sense to me. Did airplanes kill road trips? AI lets you go faster, but you don't have to use it at all, or you can use it in select ways to collaborate with. Unless you're somehow bothered by how other people code, you've only got more options now.
I very much enjoy the journeys I take with AI. It helps me undertake journeys I would never dare to without it, because I took many before myself and I know how long they are and how tired they made me feel. It lets me solo the journeys I knew I'd need a team for. AI is a truck for my brain. I still enjoy walking the walk myself, but I take different journeys by walking than by truck.
You do know you don't have to use AI, right? I don't use it for coding (or much at all outside of coding). I'm free to code just like I did before, and so are you.
It's not hard to pick holes in this approach by showing the code being generated is flawed. Also, code quality is different from code velocity; better to write 10 lines that more accurately describe the functionality than 50.
PMs now expect that you can create a Java micro service that does basic REST/CRUD from a database and get it into production in a total of two days.
That is hard if you are working in Notepad and have to write your own class import statements and write your own Maven POM or Gradle file. It’s a lot quicker in an IDE with autocomplete and auto-generated Maven POMs. And with AI it’s even faster but at the risk of lower code maintainability.
> PMs now expect that you can create a Java micro service that does basic REST/CRUD from a database and get it into production in a total of two days.
Have you heard of malicious compliance? Give the PMs what they ask for, then show them how what they've asked for is flawed. Your job as an engineer is not to just take orders blindly, it's to push for a better engineered solution. It's really not hard to show that what these PMs are asking for is stupid.
A new micro-service in two days is easy with an IDE and autocomplete. But now with AI the PM will likely push to have it in production in a day. Which is possible, but quality will be questionable.
> A new micro-service in two days is easy with an IDE and autocomplete.
Is your username accurate? Are you currently retired? I hope you know there's a big difference between something that is functional and something that is production ready.
Somewhat retired last year; looking for something new to do. A basic Java microservice with Spring Boot is three hours of coding to write to and read from a database and expose it over a REST interface. Two hours for tests. The rest of the time is for setting up environments, coupling everything together, and documentation. Two days is doable if you have a good CI/CD template and your Azure/AWS is set up correctly.
> You aren’t exposing those services to the internet.
You aren’t knowingly exposing those services to the internet.
FTFY. Furthermore, internal services can still be abused to get data that shouldn't be shared. For example, imagine if your imaginary API was for an HR system, and could be used to determine salary information for staff.
If you aren't considering API security, you're almost bound to make major mistakes, and I'd bet money that most APIs designed and implemented in 2 days have tons of security holes.
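The HR example above is exactly an object-level authorization gap. A minimal sketch, in Python, of the kind of per-record check a rushed two-day API tends to skip (all names here are hypothetical, not from any real system):

```python
# Hypothetical sketch: object-level authorization for an internal HR endpoint.
# Names (User, SalaryRecord, can_view_salary) are illustrative only.
from dataclasses import dataclass

@dataclass
class User:
    id: int
    role: str            # e.g. "employee", "hr", "manager"
    manages: tuple = ()  # ids of direct reports

@dataclass
class SalaryRecord:
    employee_id: int
    amount: int

def can_view_salary(requester: User, record: SalaryRecord) -> bool:
    """Deny by default; allow only self, HR, or the direct manager."""
    if requester.id == record.employee_id:
        return True
    if requester.role == "hr":
        return True
    return record.employee_id in requester.manages
```

Without a check like this, any authenticated caller who can reach the internal API can enumerate `/salaries/{id}` and read everyone's pay, even though nothing is "exposed to the internet".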
Ever since the dawn of time I've wanted to make my own games but always ended up wasting time trying to make engines and frameworks and shit, because no development environment worked the way I wanted it to, out of the box.
I don't trust AI enough to let it generate code out of thin air yet, and it's often wrong or inefficient when it does, so I just ask it to review my existing code.
I've been using Codex that way for the last couple of months and it's helped me catch a lot of bugs that would have taken me ages on my own. All the code and ideas are still my own, and the AI's made me more productive without making me lazy.
Maybe this time I will manage to finish making an actual game for once :')
> it depends on what you enjoy: the journey or the destination
I thought I enjoyed the journey more, but it turns out the destination is wild! There are still quite a few projects I keep for myself, pieces I want done in a specific way, that I now have time to do properly, while the dull stuff can get done elsewhere.
If it's a personal project, I just get AI to do the boring bits and write the fun bits myself. I enjoy coding in general, but every project has its share of boring boilerplate code, or utility functions that you've already written 100 times. Also, LLMs are pretty good at code review, which is very beneficial when you are working on something solo.
I've been using Claude for a few weeks. It's like the chatbots, but it can directly look up your code. This can be useful if you point it to the right code, but it's using text search, not the sophisticated understanding of structure that an IDE has.
You're not missing much; don't believe the hype and liars.
I'm convinced that most programmers largely hate making software and solving user problems. They just like fiddling with code. And honestly it explains so much.
Hello, I cannot tell if this is true or not, since I have not been able to really test the ability of Claude AI to code.
I am looking for a web API I could use with CURL, and limited "public/testing" API keys. Anyone?
I am very interested in Claude Code, to test its ability to code assembly (x86_64/RISC-V) and to assist with porting C++ code to plain and simple C (I read something on HN about this which seems promising).
I’m doing the same as you, and even though I was producing a lot of the code for the actual products, I estimated the coding part to be only about 20% of the work. The rest is figuring out what to build, how to build it, what stakeholders really need, and solving production issues in live event-driven systems. Agentic coding is just faster at the 20% part, and I can always sit down and code the really hard stuff if I want to, or feel I need to if the LLM gets stuck. If it produces something not understandable, I either learn from it until I understand it or make it use a pattern I know instead. So all in all, not so worried.
> This has been 100% my experience. I enjoy the puzzle solving and the general joy of organizing and pulling things together. I could really care less about the end result to meet some business need. The fun part is in the building, it's in the understanding, the growth of me.
Quite a few of the projects I always wanted to do have components or dependencies I really don't want to do. And as a result, I never did them, unless they eventually became viable to do in a commercial setting where I then had some junior developer to make the annoying stuff go away.
Now with LLMs I have my own junior developer to handle the annoying stuff - and as a result, a lot of my fun stuff I was thinking about in the last 3 decades finally got done.
One example from just last week - I had a large C codebase from the 90s I always wanted to reuse, but modern compilers have a different idea of what C should look like. It's pretty obvious from the compiler errors what you need to do in each case, but I wasn't really in the mood for manually going through hundreds of source files. So I just stuck a locally running Qwen coder in yolo mode into a container, forgot about it for a week, and came back to a compiling codebase. The diff is quick to review; there were only a handful of cases that needed manual intervention.
Note that you are able to choose freely what parts of the work get done by Claude, and what parts you do yourself. At work, many of us have no such luxury because bosses drunk on FOMO are forcing agent use.
You still care about the end result though: in your case, the end result being the puzzle you solved.
AI can still make that process enjoyable. For instance, I had to build a very intricate cache handler for Next.js from scratch that worked in a very specific way, serializing JSON in chunks (instead of JSON.parse-ing it all in memory). I knew the theory, but the API details and the other annoyances always made it daunting for me.
With AI I was able to tinker more with the theory of the problem and less with the technical implementation, which made the process much more fun and doable.
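The chunked-serialization idea described above can be sketched generically. This is illustrative Python, not the actual Next.js handler; the function names are made up. The point is that streaming one record per line keeps memory proportional to the largest record, not the whole payload:

```python
# Illustrative sketch: stream records as JSON Lines instead of parsing
# one giant JSON document into memory. Names are hypothetical.
import io
import json

def write_chunks(records, fp):
    # One self-contained JSON document per line.
    for rec in records:
        fp.write(json.dumps(rec) + "\n")

def read_chunks(fp):
    # Yields one record at a time; peak memory stays proportional to
    # the largest single record, not the whole dataset.
    for line in fp:
        line = line.strip()
        if line:
            yield json.loads(line)
```

Usage: `list(read_chunks(open("cache.jsonl")))` gets everything back, but a consumer can also stop early or process records lazily, which a single `json.loads` of the whole file can't do.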
Perhaps we're just climbing the ladder of abstraction: in the early days people were building their own garbage collection mechanisms, their own binary search algorithms, etc. Once we started using libraries, we had to find the fun in some higher level.
Perhaps in the future the fun will be about solving puzzles within the realm of requirement definitions and all the intricacies that stem from that.
One hundred percent.
I came back into tech professionally over the last decade. Always been into computers, but the first decade or so of my career was in humanitarian admin. Super interesting sector, super boring day-to-day.
Getting back into code felt like coming home. I'm good at it, I really enjoy it, the problem-solving aspect totally lights up my brain in this amazing way.
“So what," the Chelgrian asked, "is the point of me or anybody else writing a symphony, or anything else?"
The avatar raised its brows in surprise. "Well, for one thing, you do it, it's you who gets the feeling of achievement."
"Ignoring the subjective. What would be the point for those listening to it?"
"They'd know it was one of their own species, not a Mind, who created it."
"Ignoring that, too; suppose they weren't told it was by an AI, or didn't care."
"If they hadn't been told then the comparison isn't complete; information is being concealed. If they don't care, then they're unlike any group of humans I've ever encountered."
"But if you can—"
"Ziller, are you concerned that Minds—AIs, if you like—can create, or even just appear to create, original works of art?"
"Frankly, when they're the sort of original works of art that I create, yes."
"Ziller, it doesn't matter. You have to think like a mountain climber."
"Oh, do I?"
"Yes. Some people take days, sweat buckets, endure pain and cold and risk injury and—in some cases—permanent death to achieve the summit of a mountain only to discover there a party of their peers freshly arrived by aircraft and enjoying a light picnic."
"If I was one of those climbers I'd be pretty damned annoyed."
"Well, it is considered rather impolite to land an aircraft on a summit which people are at that moment struggling up to the hard way, but it can and does happen. Good manners indicate that the picnic ought to be shared and that those who arrived by aircraft express awe and respect for the accomplishment of the climbers.
"The point, of course, is that the people who spent days and sweated buckets could also have taken an aircraft to the summit if all they'd wanted was to absorb the view. It is the struggle that they crave. The sense of achievement is produced by the route to and from the peak, not by the peak itself. It is just the fold between the pages." The avatar hesitated. It put its head a little to one side and narrowed its eyes. "How far do I have to take this analogy, Cr. Ziller?”
― Iain M. Banks, Look to Windward
> It is the struggle that they crave
And yet, it's hard to shake the despondent feeling you get looking at the helicopters hovering around the peak.
I'm 42 years old and it has re-ignited mine. I've spent my career troubleshooting and being a generalist, not really interested in writing code outside of systems and networking usage. It's boring to type out (lots of fun to plan though!) and outdated as soon as it is written.
I've made and continue to make things that I've been thinking about for a while, but the juice was never worth the squeeze. Bluetooth troubleshooting, for example: 5 or 6 different programs will log different parts of the stack independently. I've made an app that calls all of these programs and groups their output by MAC address and the system time of the calls, to correlate and pinpoint the exact issue.
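The correlation step in that kind of tool can be sketched roughly like this. This is not the actual app; the entry fields (`mac`, `ts`, `source`, `msg`) and the window size are assumptions for illustration:

```python
# Hedged sketch: group log entries from different tools by MAC address,
# then bucket entries whose timestamps fall within a small window.
# Field names ('mac', 'ts', 'source', 'msg') are hypothetical.
from collections import defaultdict

def correlate(entries, window_seconds=5):
    """entries: iterable of dicts with 'mac', 'ts' (epoch seconds)."""
    by_mac = defaultdict(list)
    for e in entries:
        by_mac[e["mac"].lower()].append(e)  # normalize MAC case

    groups = []
    for mac, items in by_mac.items():
        items.sort(key=lambda e: e["ts"])
        current = [items[0]]
        for e in items[1:]:
            # Same incident if close in time to the previous entry.
            if e["ts"] - current[-1]["ts"] <= window_seconds:
                current.append(e)
            else:
                groups.append((mac, current))
                current = [e]
        groups.append((mac, current))
    return groups
```

Entries from different loggers that land within the window for the same device end up in one group, which is what lets you line up, say, an hcidump connect against a btmon pairing failure.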
Now I hear the neckbeards crack their knuckles, getting ready to bear down on their split keyboards and tell me how the program doesn't work because AI made it, or it isn't artistic enough for their liking, or whatever the current lie they comfort themselves with is. But it does work, and I've already used it to determine that some of my bad devices are really bad.
But there are bugs, you exclaim! Sure, but have you seen human-written code?? I've made my career in understanding these systems, the programming languages, and the people using the systems. Troubleshooting is the fun part, and lucky for me, my favorite part is the thing that will continue to exist.
But what about QA? Humans are better? No. Please, stop lying to yourselves. Even if there was a benefit that humans bring over AI in this arena, that lead is evaporating fast or is already gone. I think a lot of people in our industry treat their knowledge, and their ability to gatekeep with that knowledge, as some sort of good thing. If that was the only thing you were good at, then maybe it is good that the AI is going to do the thing it excels at and leave those folks to theirs.
It can leave humans to figure out how to maybe be more human? It is funny to type that, since I have been on a computer 12h a day since like 1997... but there is a reason why we let calculators crunch large sums, and why manufacturing robots have multiple articulating points in their arms, making incredible items at insane speeds. I guess there were probably people who liked using slide rules and were really good at it, pissed because their job was taken by a device that could do it better and faster. Didn't the slide rule users take the job from people who did not have a tool like that at first but still had to do the job?
Did THEY complain about that change as well? Regardless, all of these people were left behind if all they did was complain. If you only built one skill in your career, and that is writing code and nothing else, that is not the program's fault.
The journey exists for those who desire to build the knowledge that they lack and use these new incredible tools.
For everyone else, there is Hacker News and an overwhelmingly significant crowd that is ready to talk about the good ole days instead of seeing the opportunities in expanding your talents with software that helps you do your thing better than you have ever dreamed of.
I recently wanted to monitor my vehicle batteries with a cheap BLE battery monitor from AliExpress (by getting the data into HomeAssistant). I could have spent days digging through BlueZ on a Raspberry Pi, or I could use AI and have a working solution an hour later.
Yes, I gave up the chance to manually learn every layer of the stack. I’m fine with that. The goal was not to become a Bluetooth archaeologist. The goal was to solve the problem. AI got me there faster - and let me move on to my next fun project.
> I could use AI and have a working solution an hour later.
That sounds really cool. You should share what you used.
> The goal was not to become a Bluetooth archaeologist. The goal was to solve the problem.
I'm sympathetic to this view. It seems very pragmatic. After all, the reason we write software is not to move characters around a repo, but to solve problems, right?
But here's my concern. Like a lot of people, I started programming to solve little problems my friends and I had. Stuff like manipulating game map files and scripting FTP servers. That led me to a career that's meant building big systems that people depend on.
If everything bite-sized and self-contained is automated with llms, are people still going to make the jump to be able to build and maintain larger things?
To use your example of the BLE battery monitor, the AI built some automation on top of bluez, a 20+ year-old project representing thousands of hours of labor. If AI can replace 100% of programming, no-big-deal it can maintain bluez going forward, but what if it can't? In that case we've failed to nurture the cognitive skills we need to maintain the world we've built.
It has also led me to a career in software development.
I find myself chatting through architectural problems with ChatGPT as I drive (using voice mode). I've continued to learn that way. I don't bother learning little things that I know won't do much for me, but I still do deep research and prototyping (which I can do 5x faster now) using AI as a supplement. I still provide AI significant guidance on the architecture/language/etc of what I want built, and that has come from my 20+ years in software.
This is the project I was talking about. I prefer using codex day-to-day.
Yeah, I've noticed at several customers that they're just trying to cram LLMs into everything, instead of maybe first thinking about whether it's sensible for that specific use case.
I'm also doing some things where I don't think LLMs are a good fit - but I'm doing it because I care about things like failure behaviour, how to identify when it is looping (which can sometimes be hard to see when using huge context models), and similar stuff - which results in more knowledge about when it makes sense to use LLMs. No such learnings are visible at many customers, even when LLMs do something stupid.
> The fun part is in the building, it's in the understanding, the growth of me.
I agree with this sentiment as well. Without a doubt, my favorite part of the job is coming up with a solution that just 'feels right', especially when said solution is much cleaner than brute force/naive approach. It sounds cheesy, but it truly is one of my favorite sensations.
I'm the senior-most engineer on my team of about 15. I try to emphasize software craftsmanship, which resonates with some but not all. We have a few engineers who have seemingly become reliant on AI tooling, and I struggle with them. Some of them are trying to push code that they clearly don't understand and aren't reviewing, and I think they're setting themselves up for failure due to lack of growth.
Adapting the workflow to this new paradigm is a different sort of puzzle. I think the folks who enjoy agentic pair programming have found various satisfactory solutions. As a puzzle enthusiast myself, I have been particularly enjoying this pivotal moment in technology because of how many opportunities there are to create novel approaches to combat the lies and mistakes.
I feel exactly the same way. Totally robbed of pleasure at work, with the added kicker of mass layoffs hanging over the sector.
At least OP is sixty, I've got 25 years of work left and I really don't know what to do. I hate it all so much.
Oh wow, that's exactly the opposite of how I feel, and conversely, I am that developer who gets itchy when his work doesn't go to prod quickly enough and gets defensive on code reviews.
Sure, part of the fun of programming is understanding how things work, mentally taking them apart and rebuilding them in the particular way that meets your needs. But this is usually reserved for small parts of the code, self-contained libraries or architectural backbones. And at that level I think human input and direction are still important. Then there is the grunt work of gluing all the parts together, or writing some obvious logic, often for the umpteenth time - these are things I can happily delegate. And finally there are the choices you make because you think of the final product and of the experience of those who will use it - this is not a puzzle to solve at all, this is creative work and there is no predefined result to reach. I'm happy to have tools that allow me to get there faster.
"wrangling a digital junior dev that doesn't really learn from its mistakes, and lies all the time" SO TRUE!
This dev thinks that it knows everything /s
I feel like I still get to solve the puzzles I like, because I like the higher-level architecture/design parts. I just don't have to type as much, because I can provide a stubbed-out solution and tell it to fill in the rest.
I agree with you. I try to remember though that this is just the same situation that artists, musicians and (more recently) writers have been in for a long time. Unless you’re one of a very lucky few you’ll only get fulfillment in those pursuits if you enjoy the process rather than the output since it’s hard to get money or recognition for output anymore. Pure coding and lots of areas of code problem solving are going to end up in the same position.
This is me as well. I am actively seeking out a different line of work, because talking to a chat bot gives me 0 joy.
From my understanding, you can instead use Claude in the following manner: understand and solve the problem yourself, write pseudocode, and then tell Claude to generate the real code and maybe restructure it a bit. So you don't have to write the actual code, but have solved the problem all by yourself.
"So you don't have to" - I suspect the person you are replying to likes to write code.
> This has been 100% my experience. I enjoy the puzzle solving and the general joy of organizing and pulling things together. I could really care less about the end result to meet some business need.
I don't really see where he/she said that explicitly. My understanding is that he likes to solve problems but doesn't care about the final implementation. But I stand corrected.
If you worked in an office and your boss asked for 100 copies of their memo, they want you to use the copy machine.
If they saw you typing it out 100 times they’d tell you that you’re wasting time. It doesn’t matter that you like to type or that you went to school to get a degree in typing.
Your company isn’t paying you to solve puzzles. If you aren’t putting things into production, what good are you as an employee?
> Being forced to use Claude at work, it really just took away everything that was enjoyable. Instead of solving puzzles I'm wrangling a digital junior dev that doesn't really learn from its mistakes, and lies all the time.
Claude very much learns if you teach it and tell it to note things in the CLAUDE.md files you want it to remember. Claude is much better than any junior and most mid level ticket takers.
> Your company isn’t paying you to solve puzzles. If you aren’t putting things into production, what good are you as an employee?
No, the company is exactly paying their employees to solve puzzles, which company labels them as problems or requirements.
And when an employee focuses on solving puzzles and enjoys it, the code naturally ends up in production, and gets forgotten because the puzzle is solved well.
And if they could solve the problem faster with AI?
But there is no “puzzle” to solving most enterprise problems as far as code - just grind.
And code doesn’t magically go from dev to production without a lot of work and coordination in between.
> And if they could solve the problem faster with AI
It's such a shame that everyone only cares about "faster" and not "better"
What a shameful mentality. Absolutely zero respect for quality or craftsmanship, only speed
I care about exchanging labor for money to support my addiction to food and shelter.
My employer just like any other employer cares about keeping up with the competition and maximizing profit.
Customers don’t care about the “craftsmanship” of your code - aside from maybe the UI. But if you are a B2B company where the user is not the customer, they probably don’t even care about that.
I bet you most developers here are using the same set of Electron apps.
Yes, what you are describing is both true and also highlights how bankrupt we are as a society
Just because things are this way doesn't mean they should be or that we should just accept that they must always be this way
Now I work in consulting AWS + app dev as a staff consultant leading ironed and unless you work in the internal consulting department at AWS (been there done that as blue badge RSU earning employee) or GCP, it’s almost impossible to get a job as an American as anything but sales or a lead. It’s a race to the bottom with everyone hiring in LatAm if you are lucky (same time zone more willing to push back against bad idea and more able to handle ambiguity) or India.
Everything is a race to the bottom. The only way I can justify not being in presales is because I can now do the work of 3 people with AI.
A Russian word for this is "пофигизм" -- the cynical belief that everything is fucked, so why bother.
There still is. In most enterprises, the task is usually to take some data from somewhere and transform it to be the intake of another process. Or to make a tweak to an existing process. Most of the other types of problems (information storage, communication, accounting, ...) have been solved already (and were solved before the digital world).
People can see it as a grind. But the pleasure comes in solving the meta problem instead of the one in front of you (the latter always creates brittle systems). But I agree that it can become hell if no care went into building the current systems.
And they are tasks with standardized best practices. I knew that I wanted to write an internal web app that allowed users to upload a file to S3 using Lambda and store the CSV rows in Postgres.
I just told it to do it.
It got the “create an S3 pre-signed URL to upload to” part right. But then it wrongly did the naive implementation (download the file and do a bulk upsert) instead of “use the AWS extension to Postgres and import the file from S3”. Once I told it to do that, it knew what to do.
But still I cared about systems and architecture and not whether it decided to use a for loop or while loop.
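For anyone curious what the non-naive approach looks like: with the aws_s3 extension (available on RDS/Aurora Postgres), you hand Postgres an S3 URI via aws_commons.create_s3_uri and it pulls and COPYs the CSV itself, instead of the app downloading the file and upserting row by row. A minimal sketch that just composes that SQL (table, bucket, and column names are invented for illustration):

```python
# Hypothetical sketch: build the aws_s3.table_import_from_s3 call that
# makes Postgres fetch the CSV from S3 and load it server-side.
# All identifiers (uploads, my-upload-bucket, ...) are made up.

def build_s3_import_sql(table: str, columns: list[str],
                        bucket: str, key: str, region: str) -> str:
    """Compose the server-side CSV import statement."""
    col_list = ", ".join(columns)
    return (
        "SELECT aws_s3.table_import_from_s3("
        f"'{table}', '{col_list}', '(format csv, header true)', "
        f"aws_commons.create_s3_uri('{bucket}', '{key}', '{region}'))"
    )

sql = build_s3_import_sql(
    table="uploads",
    columns=["id", "name", "amount"],
    bucket="my-upload-bucket",
    key="incoming/rows.csv",
    region="us-east-1",
)
print(sql)
```

You would then execute that statement over a normal Postgres connection; the upload path (pre-signed URL to S3) stays exactly as described above.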
Knowing that, or knowing how best to upload files to Redshift, or other data engineering practices aren’t new or novel
They aren’t. But there are a lot of mistakes that can happen, and until an AI workflow is proven incapable of them, it’s best to monitor it, and then the speed increase is moot. Humans can make the same kinds of mistakes, but they are incentivized not to (losing their reputation and their jobs). AI tools don’t have that lever against them.
And so are mid level developers. A mid level developer who didn’t have 8 years of experience with AWS would have made the same mistake without my advice.
I would have been just as responsible for their inefficient implementation in front of the customer (consulting) as I would be with Claude and it would have been on me to guide the mid level developer just like it was on me to guide Claude.
The mid level developer would never have been called out for it in front of my customer (consulting) or in front of my CTO when I was working for a product company. Either way I was the responsible individual
> Your company isn’t paying you to solve puzzles.
Actually they are, but it's also true that you need to put solutions into production.
No, your company is very much paying you to put things in production. That's all they care about.
No, the company wants its problems solved and its needs addressed.
When things are pushed to production as soon as possible without respect for quality, we see the results all the time.
Bloat, performance problems, angry customers, Windows 11...
You get the idea.
The reason your login is taking 45 seconds and your database is locking up with 10 concurrent users isn’t because developers didn’t write good code following the correct GOF pattern.
If companies cared about bloat and performance you wouldn’t see web apps with dozens of dependencies, cross platform mobile apps and Electron apps.
Putting solutions into production. Not "things". Honestly, I'm sick of dogshit companies wanting something done yesterday but happy to spend the next 2 years having engineers debug the consequences.
I've just written the fifth from-scratch version of a component at work. The requirements have never changed (it's a client library for a proprietary server, which has barely ever changed). I'm the 5th developer at the company to write a version of it.
All because nobody gave engineers the breathing room to factor the solution into well thought out, testable, reusable components. Every previous version is a spaghetti soup of code, mixing unrelated functionality into a handful of files.
No well thought out interfaces. No automated end-to-end testing, and no automated regression testing. The whole thing is dire and no managers give a fuck.
AI cannot solve for a lack of engineering culture. It can however produce trash faster than ever at these toxic shops.
And this has nothing to do with AI, like you said. On another project, an API I designed and vibe coded, I also didn’t look at a line of code besides the shell script I had Claude create to run the integration tests with curl.
On the other hand, AI doesn’t care about sloppy code. I haven’t done any serious web development since 2002, yet I created two decently featureful internal websites, authenticated with Amazon Cognito, without looking at a line of code. I doubt that for the lifetime of this app anyone will ever look at a line of code; any changes will be made using AI.
I enjoy the journey too. The journey is building systems, not coding. Coding was always the most tedious and least interesting part of it. Thinking about the system, thinking about its implementation details, iterating and making it better and better. Nothing has changed with AI. My ambition grew with the technology. Now I don't waste time on simple systems. I can get to work doing what I've always thought would be impossible, or take years. I can fail faster than ever and pivot sooner.
It's the best thing to happen to systems engineering.
My experience was exactly the opposite—I came from the other side entirely. I had absolutely no programming knowledge, and until three weeks ago, I didn’t even know what a Parquet file was.
While reviewing a deep research project I had started, I stumbled upon an inefficiency: The USDA’s phytochemical database is publicly accessible, but it’s spread across 16 CSV files with unclear links. I had the idea to create a single flat table, enriched with data from PubMed, ChEMBL, and patents. Normally, a project like this would have been completely impossible for someone like me—the programming hurdle is far too high for me.
With Claude Opus 4.6, I was actually able to focus entirely on the problem architecture: which data, from where, in what form, for which target audience. Every decision about the system was mine. Claude Opus took care of the implementation.
I’m probably the person your debate about “journey vs. destination” wasn’t meant for. For me, the destination was previously unattainable. My journey became possible, because the AI took over the part that I could never have implemented anyway.
I hear everyone say "the LLM lets me focus on the broader context and architecture", but in my experience the architecture is made of the small decisions in the individual components. If I'm writing a complex system part of getting the primitives and interfaces right is experiencing the friction of using them. If code is "free" I can write a bad system because I don't experience using it, the LLM abstracts away the rough edges.
I'm working with a team that was an early adopter of LLMs and their architecture is full of unknown-unknowns that they would have thought through if they actually wrote the code themselves. There are impedance mismatches everywhere but they can just produce more code to wrap the old code. It makes the system brittle and hard-to-maintain.
It's not a new problem, I've worked at places where people made these mistakes before. But as time goes on it seems like _most_ systems will accumulate multiple layers of slop because it's increasingly cheap to just add more mud to the ball of mud.
This matches my experience when building my first real project with Claude. The architectural decisions were entirely up to me: I researched which data sources, schema, and enrichment logic were suitable and which to use. But I had no way of verifying whether these decisions were actually good (no programming knowledge) until Claude Opus had implemented them.
The feedback loop is different when you don’t write the code yourself. You describe a system to the AI, after a few lines of code the result appears, and then you find out whether your own mental model was actually sound. In my first attempts, it definitely wasn’t. This friction, however, proved to be useful; it just wasn’t the friction I had expected at the beginning.
Maybe it is just my experience, because I'm not a systems programmer but am learning to be one. I find that concepts in systems programming are not really very hard to understand (e.g. the log-structured file system is the one I'm reading about today), but the implementation, the actual coding, the actual weaving of the system, is where most of the fun/grit is. Maybe it is just me, but I find that for systems programming, I have to implement every part of something before claiming that I understand it.
So much agreed. I'm constraining my AI, which always wants to add more dependencies, create unnecessary code, and broaden tests to the point they become useless. I have in mind what I want it to build, and now I have workflows to make sure it does so effectively.
I also ask it a lot of questions regarding my assumptions, and so "we" (me and the AI) find better solutions than either of us could make on our own.
14 years ago hearing Dan Pink talk on motivation (https://youtu.be/u6XAPnuFjJc) catalyzed the decision to change jobs.
One of the three motivators he mentions is mastery. He cites examples of why people spend hours, unpaid, learning to play instruments and other hobbies in their discretionary time. This has been very true for me as a coder.
That said, I enjoy the pursuit of mastery as a programmer less than I used to. Mastering a “simple” thing is rewarding. Trying to master much of modern software is not. Web programming rots your brain. Modern languages and software product motivations are all about gaining more money and mindshare. There is no mastering any stack; it changes too swiftly to matter. I view the necessity of using LLMs as an indictment of what working in and with information technology has become.
I wonder if the hope of mastering the agentic process is what is rejuvenating some programmers. It’s a new challenge to get good at. I wonder what Pink would say today about the role of AI in “what motivates us”.
(Edited, author name correction)
> mastering the agentic process
It would have been worth it if the frontier models were open weight. Right now, if you invest time in mastering tools like Claude Code or Google’s Antigravity, there is no guarantee that you won’t be removed from their ecosystems for any reason, which would make your efforts and skills useless.
If there is a skill to using LLM coding agents, I think it is mostly just developing an intuitive sense for how to prompt and the “jagged frontier” of LLMs.
IME, the tools are largely interchangeable. They are all slightly different, but the basics of prompting and the jaggedness of the frontier are more or less the same across all of them.
Switching from codex to claude code is orders of magnitude simpler than switching from c# to java or emacs to vim.
Great, insightful comment!
I'm not sure I understand... why not simply ignore AI and keep coding the way you always have? It's a bit like saying motorboats killed your passion for rowing.
But to push the analogy a bit. If you are rowing on a lake with motorboats, it is a totally different experience. Noisy, constant wake. We are part of an ecosystem, not isolated.
Growing up, the lakes in New England were filled with sailboats. There were sailing races. Now, it's entirely pontoon boats. Not a sailboat to be found.
The lake is not, however, yours to dictate how others move along it. Imagine if the horse owners in such an analogy decided not to allow cars on the road because they are noisy and a "totally different experience".
You want a pre-AI experience? Feel free to code without it. It's definitely still doable.
In the town where I grew up in they banned cars and now you are only allowed to ride a horse. So your analogy is actually happening in real life.
Yes indeed, but when you code in your room, you are free to follow the AI debates - or ignore them.
Nah. Tell him you're in a race: others are using motorboats, and the last to reach the finish line loses their salary. That's a better analogy, or at least what a lot of people think the analogy is.
My hypothesis about this, and about other people who dislike AI while citing reasons similar to the post's, is that it is not simply that they enjoyed the journey over the destination.
Rather, the issue is that they believe they are GOOD at the "journey" of getting to the destination and could compare their journey to others'. Another take is that they could readily share their journey or help their peers. Some really like that part.
Now the people you are comparing yourself to are not going through the same journey, so there is less camaraderie. Others no longer enjoy that same journey, so it feels more "lonely" in a way.
There's nothing stopping someone from still writing their own code for fun by hand, but the element of sharing the journey with others is diminishing.
He's not getting customers by rowing them across the river when the motorboats do it faster and cheaper. You compared a hobby to doing something "for a living".
I turned 59 this week. I am excited to go to work again. I use Claude every day. I check Claude. I learn new things from Claude.
I no longer need a "UI person" to get something demonstrable quickly. (I've never been a "UI guy"). I've also never been a guy coding during every waking moment of my life as that would have been disastrous for my mental health.
I am retiring in <=2 years, so I am having fun with this new associate of mine.
One pitfall I've managed to avoid in all the 36 years I've been at it is falling in love with the solution. I fall in love with the problems. Claude solves those problems far quicker than I ever could.
I turn 52 in a couple of months and I’ve only lasted this long from starting out as a hobbyist in 1986 by not being the old guy yelling at the clouds.
I got into “cloud” at 44, got my first job (and hopefully last) at BigTech at 46 and now I work in cloud consulting specializing in app dev leading projects at 51.
Every project I’ve done since late 2023 has involved integrating with LLMs and I usually have three terminal sessions up - one with Claude, one with Codex and one where I do command line stuff and testing.
I am motivated by the result, the design and on the system level.
I suppose in a way it's like saying diesel engines killed passions for sailing.
A career sailor on a sailing ship who finds meaning in rigging a ship just so with a team of shipmates in order to undertake a useful journey may find his love of sailing diminished somewhat when his life's skills and passions are abruptly reduced to a historical curiosity.
Other sailors may prefer their new "easier" jobs now they don't have to climb rigging all day or caulk decking (but now they have other problems, you need far fewer of them per tonne of cargo).
And the diesel engine mechanics are presumably cock-a-hoop at their new market.
(This analogy makes no claim as to the relative utility of AI compared to diesel ships over sailing vessels).
I agree, I’m an old dude too. For personal projects I do what I like. I also like carving stone and wood the hard way, just because.
At work though the hype sucks the life out of the last part of the job that some people found enjoyable, because complete control is enjoyable. Personally I think work is just doing what someone else wants, rather than pleasing yourself.
because my company is mandating that we use motorboats instead of rowboats.
i can continue to row as a hobby, but i've been very lucky in that my work has always been something i genuinely enjoyed. now that it's become something that's actively burning me out, it's far harder to find time for hobbies and interests.
Well, I do. I do not dislike AI at all: I even find it fun, although different. The thing is that I am not only enjoying the journey, although it is what I enjoy the most by far. But when everyone is able to reach the destination, the interest in the journey decreases (if this makes sense). It is not a rant against AI: I use AI daily, it IS useful; it is just less fun since AI came around, and the only way I can explain this is the journey vs the destination.
It's just change. Yeah, I also miss coding, but I also like the new things that AI allows me to access. And in times of change we can feel uncomfortable, because what we were used to is no more, and we cannot yet see what is to come. But if you're open to it, a new passion will arise in you, one you could never have imagined before.
Can't ride my horse and buggy in the city anymore.
>It's a bit like saying motorboats killed your passion for rowing.
This is a real thing that happens and the analogy is clearly working against you! If you paddle a canoe or rowboat on a river or lake, your experience is made MARKEDLY worse by a motorboat zooming by and scaring the fish, rocking you with wake, smelling up the place with 2-stroke fumes, etc. Even when the motorboats aren't there, the built environment that supports them is bigger and more intrusive.
I was like this a few months back. You want to code and solve problems, but the AI can do all that for you. I got over it by moving the problem solving further down the chain: treat the AI as the team you are directing to solve the issue.
If I wanted to become a project manager I would have become one. AI has just exposed that many "engineers" are "temporarily embarrassed project managers", which is fine in the sense that it makes it clearer who actually enjoys making things and who just wants the end result regardless of how it's made.
> If I wanted to become a project manager I would have become one.
It's not so much being a project manager. I have something I want to build; I create the plan and work slowly through it with Claude, stopping at every piece and reviewing as I go.
I can confirm the code is good, but also when it takes a different approach I question why it took that approach. Occasionally I learn something.
> who just wants the end result regardless of how it's made.
Sometimes you want the app but don't care how it gets created, because it's helping you focus on what you really want to do. For example, I created a mindmap app in Xcode on par with XMind. Not every feature, but everything I use.
>AI has just exposed that many "engineers" are "temporarily embarrassed project managers", which is fine in the sense that it makes it clearer who actually enjoys making things and who just wants the end result regardless of how it's made.
AI has also exposed that many "engineers" are just "people who like fiddling with code" and that's fine in the sense that it makes it clear who are the actual engineers who are engineering solutions to real human problems and who just want to tinker with code.
Like imagine slandering a civil engineer "you just want a bridge that is safe and lasts for a century, you don't care about enjoying the journey of construction".
Haha! Your analogy doesn't work on multiple levels. Firstly, if you're outsourcing your work to AI you're not the engineer anymore. A civil engineer is different from a manager of a civil engineering project. Just like I wouldn't call myself an artist if I got AI to generate me some art, I wouldn't call myself a software engineer if I got AI to write all the code for me.
Secondly, it's not just about "enjoying the journey of construction", it's also about caring about the quality of the end results. Getting vibe coded software that is as stable as a "bridge that is safe and lasts for a century" is not a matter of careful engineering decisions, it's mostly a matter of luck, because you don't have the necessary oversight in the quality of the output unless you're doing extensive reviews of the generated code, at which point you greatly diminish the time you're supposedly saving.
The analogy works fine! You're just being obtuse.
- Outsourcing
False. By that logic, if you "outsource your code" to a compiler and just write a higher-level language, you're not an engineer either. You literally don't own any of your own code, just an abstraction of it written in human language. See how that works? An engineer can delegate -- period.
- "I wouldn't call myself a software engineer if I got AI to write all the code for me"
If all you do is write code you're not an engineer. I think you fundamentally don't know what engineering is. In a very real sense engineering is what you do when you're not coding. The civil engineer doesn't construct the bridge personally.
- "Secondly, it's not just about "enjoying the journey of construction", it's also about caring about the quality of the end results".
Codemonkeys DON'T CARE about the quality of the end result. They only care about their little corner of the zen garden. Writing real software for real users is by far the worst part of a codemonkeys job.
- "Getting vibe coded software that is as stable as a "bridge that is safe and lasts for a century" is not a matter of careful engineering decisions, it's mostly a matter of luck"
Nonsense. The engineer who spends 90% of his time architecting systems and testing them at a high level is making safer and more stable software than the codemonkey who spends 90% of his time tinkering with the details. Forest for the trees.
- "unless you're doing extensive reviews of the generated code, at which point you greatly diminish the time you're supposedly saving."
Who said anything about "saving time"? We're engineering high quality systems. Some of us spend our time at a higher level, thinking holistically about the system, testing multiple concepts and rapidly iterating. Others demand bespoke handwritten code and in the time allowed can barely finish a single concept with a questionable amount of polish. Whatever their first idea is will ship, and they'll have no real ability to justify the architecture other than vibes.
> Like imagine slandering a civil engineer "you just want a bridge that is safe and lasts for a century, you don't care about enjoying the journey of construction".
Would you currently trust a bridge designed by a civil engineer using AI for all of their calculations ?
> Would you currently trust a bridge designed by a civil engineer using AI for all of their calculations ?
Not a great comparison. I'd agree with you if it was straight up vibe coding.
But co-creating (which is what I do): I create a plan, then step through it with Claude. Claude creates a small part of what I want; I review, tweak, or ask Claude why it took that approach if it's different.
I know the subject matter of what it is creating, so in this sense it is safe, as long as I am reviewing everything.
It gets dangerous if you just let it create something without any interaction or understanding of what is being created.
>Would you currently trust a bridge designed by a civil engineer using AI for all of their calculations ?
Of course. I've seen how sloppy and lazy humans are, and I already use the bridge, and if the safety truly came down to the output of a single person, then the risk was already significant.
I must say, I got a chuckle at "using AI to do their calculations". Oh no, my agent is going to write a python script to do basic maths, and check their work against a series of automated tests, the sky is falling!
I'm the cohost of the Practical AI podcast, which is one of the world's most popular AI podcasts, and we regularly discuss this topic in our shows.
https://practicalai.fm
Here's my two cents...
I'm 55 years old and been programming since I was 10.
Yes, programming has forever changed - especially since Opus 4.5 was released in late November. Programmers who don't use AI models and agents are obsolete in a professional context. It's not a question of journey versus destination. It's that the nature of the journey has changed and productive velocity has significantly increased. Embrace it just like you have presumably embraced every other productivity improvement over the decades. Most of us aren't coding assembly any more because pre-AI tools and languages accelerated our productivity. Now it's time to do that again, recognizing that the journey may be a different experience from the coding process you love, but we're not yet going from "Hello, World" to large complex production-ready systems in a single prompt.
I point out that if you're programming for personal satisfaction rather than professionally, then nothing has changed. Use AI or don't use AI; whatever works for you. You have the luxury of being a hobbyist.
I found my peace with AI aided coding during the last three months. I started development of an environment for programming games and agent simulations that has its own S-expression based DSL, as a private project. Think somewhere between Processing and StarLogo, with a functional style and a unique programming model.
I am having long design sessions with Claude Code and let it implement the resulting features and changes in version controlled increments.
But I am the one who writes the example games and simulations in the DSL to get a feel for where its design needs to change to improve the user experience. This way I can work on the fun and creative parts and let Claude do the footwork.
I let Claude simultaneously write code, tests and documentation for each increment, and I read it and suggest changes or ask for clarification. I find it a lot easier to dismiss an earlier design for a better idea than when I would have implemented every detail of the system myself, and I think so far the resulting product has largely benefited from this.
To me, now more than ever it is important to keep the love for programming alive by having a side project as a creative outlet, with no time pressure and my own acceptance criteria (like beautiful code or clever solutions) that would not be acceptable in a commercial environment.
Maybe this could work for some as a general recipe for how to collaborate with AI:
- Split up the work so that you write the high-level client code, and have AI write the library/framework code.
- Write some parts of your (client) code first.
- Write a first iteration of the library/framework so that your code runs, along with tests and documentation. This gives the AI information on the desired implementation style.
- Spend time designing/defining the interface (API, DSL or some other module boundary). Discuss the design with the AI and iterate until it feels good.
- For each design increment, let AI implement, test and document its part, then adapt your client code. Or, change your code first and have AI change its interface/implementation to make it work.
- Between iterations, study at least the generated tests, and discuss the implementation.
- Keep iterations small and commit completed features before you advance to the next change.
- Keep a TODO list and don't be afraid to dismiss an earlier design if it is no longer consistent with newer decisions. (This is a variation of the one-off program, but as a design tool.)
That way, there is a clear separation of the client code and the libraries/framework layer, and you own the former and the interface to the latter, just not the low-level implementation (which is true for all 3rd party code, or all code you did not write).
Of course this will not work for you if what you prefer is writing low-level code. But in a business context, where you have the detailed domain knowledge and communicate with the product department, it is a sensible division of labour. (And you keep designing the interface to the low-level code.)
At least for me this workflow works, as I like spending time on getting the design and the boundaries right, as it results in readable and intentional (sometimes even beautiful) client code. It also keeps the creative element in the process and does not reduce the engineer to a mere manager of AI coding agents.
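The "you own the client code and the interface, the AI owns the implementation behind it" split described above can be made concrete with a toy sketch (all names here are hypothetical): the client code is written first against an interface you design, and the implementation is the part you would delegate and review.

```python
from typing import Protocol

# Interface you own and design up front (the module boundary).
class TileStore(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

# Stand-in for the AI-written implementation behind that interface;
# in the workflow above, this is the part you review and iterate on.
class InMemoryTileStore:
    def __init__(self) -> None:
        self._tiles: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._tiles[key] = data

    def get(self, key: str) -> bytes:
        return self._tiles[key]

# Your client code: written first, driving the interface design.
def roundtrip(store: TileStore) -> bytes:
    store.put("a/0/0", b"\x89PNG")
    return store.get("a/0/0")

print(roundtrip(InMemoryTileStore()))
```

Because the client code only touches the Protocol, a later, better implementation (on disk, over a network) can replace the first one without the client changing, which is exactly what makes dismissing an earlier design cheap.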
As I commented in the other post, it killed mine at work, because my boss is pushing "AI" really hard on the devs. Fortunately, he's now seeing enough evidence to counteract the hype, but it's still going to be present and dragging down my work. In my off time, I only experiment with LLMs to see if they're getting better. Spoiler alert: they aren't, at least not for the kind of things I want to do.
"at least not for the kind of things I want to do."
Can you share?
I agree here. It has certainly burned me out recently: the expectation to deliver faster and faster results purely through the use of AI.
AI poses many challenges, from security to ensuring code safety. Held to the same expectations as before the hype, you could consider it good enough to shave off 8 hours of work. But that is just the first 8 hours of getting some code ready.
A savvy dev could easily just grab an existing template they made earlier and stitch things together to a degree better than AI.
Now it is just: massage the prompt and hope it adheres.
If you enjoyed coding for the sake of coding it hasn't gone anywhere. People still knit for themselves when they can go buy clothes off the rack. People still enjoy chess and Go even though none of them can beat a machine.
If you enjoyed that you could do something the rest of the world can't - well yeah some of that is somewhat gone. The "real programmers" who could time the execution of assembly instructions to the rotation speed of an early hard drive prob felt the same when compilers came around.
It has rekindled my joy however. Agentic development is so powerful but also so painful and it's the painful parts I love. The painful parts mean there is still so much to create and make better. We get to live in a world now where all of this power is on our home computers, where we can draw on all the world's resources to build in realtime, and where if we make anything cool it can propagate virally and instantly, and where there are blank spaces in every direction for individuals to innovate. Pretty cool in my view.
An annoying aspect today is that I can never share my code publicly without some AI company stealing it to train their models, regardless of license.
I’ve given AI a try and found the destination felt empty.
I’ve made the choice to not go full bore into AI as a result. I still use it to aid search or ask questions, but full on agentic coding isn’t for me, at least not for the projects I actually care about and want/need to support long-term.
I am also almost 60, and from my perspective, Claude Code solves a lifelong problem for me. I have always found coding to be quite tedious, requiring a savant level of syntax recall. It's not that I can't do it — I can — but I am better suited to managing the release, setting up the support, etc. With Claude Code, I am free to plan and create. I am able to envision a product and ship it all by myself, requiring 90% less time. Claude Code allows me to get to market. Working with Claude Code genuinely feels like having the creative partner I've waited 30 years for. Claude has made a dream come true for me.
I have decided that I will only write artisanal code. I’m even thinking of creating a consultancy agency where people can hire me to replace AI generated code.
Yes, it changes the nature of the work. Back when you started coding, there were people experiencing the same thing about the shift to higher-level languages. What some of them liked was the efficiency and aesthetic of using just the right assembly-language trick, and good compilers with high-quality instruction selection took that away. I'm sure there were programmers who missed the days of punching hex into memory.
We start to understand those old fogeys we blew past when we were young, once we get to their age. It's the way of the world.
The sad truth of life. This story reminded me of the time when I tried my first MMO - at first it felt like a fairy tale, something unknown, something that could still surprise you. And then you get familiar with all the mechanics, and the magic disappears. Now it’s just a “tool.”
For me (60 too) it's both, the journey and the destination. LLMs not only help me get around the boring stuff so I have more time for the things I really want to design and build, but they also open areas for me in which I always wanted to go but for which it was very time consuming or difficult to get the required knowledge (e.g. tons of academic papers which are neither well written nor complete and sometimes just wrong). The LLMs help me to explore and find the right way through those areas, so these adventures suddenly become affordable for me, without doing a PhD on each subject. And even if the LLM generates some code, it still needs a "guiding hand" and engineering experience. So for me, no, AI doesn't kill my passion, but offers a very attractive symbiosis, which makes me much more efficient and my spare-time work even more interesting. I find myself watching fewer and fewer streaming videos because exploring new areas and collaborating with the LLM is much more entertaining.
Doesn't AI just replace the coding that other people have done many times? That is, we don't have to do repetitive work because of AI. Yes, I don't know how to write a React app even though I can vibe code one quickly, but that work is repetitive nonetheless; it's just that another person has written the code before. That said, there is a ton of code to write by hand if we push the envelope. The 10 algorithms that no one has built for production. The concurrency library that no one has built in my favorite language. The simulation that Claude Code just can't get right no matter how much prompt/context engineering/harness engineering I do. The list can go on and on.
I've made a bunch of tools to help me get around file system limitations on modern Macs (APFS), treating my entire legacy file collection as a CMS challenge, and have cranked out more binaries in 3 months than in the 10 years before the arrival of these tools. If you know how to use these tools and how to think like an architect and not a hobbyist, Claude is truly in the technological lead.
I am a bit, but not much, younger than 60 and have been coding since Apple II days.
These tools are pretty close to HAL 9000 so of course GIGO as always has been the case with computer tech.
Almost everything is in Go, except an image fingerprinting API server written in Swift. The most USEFUL thing I've written is a Go-based APFS monitor that will help you not overfill your SSD and get painted into a corner by Time Machine.
Are your tools open source? They sound kinda cool.
This is actually quite a refreshing view. I feel like all I read about is how magnificent AI is, and whether it will be for the better or worse. But you are absolutely right that, for people who enjoy the actual work (e.g. coding), it might take away some of the passion and joy, in large part because AI has enabled so much more to be done in the same time. It now almost makes you feel guilty if you don't use it, because everyone else will get so far ahead and the work "should've/could've" been done much faster.
What do you all think about the "Solve It" method by the Answer dot AI folks?
It's more like iterating on the REPL with AI in the loop, meaning the user stays in control while benefitting from AI, so real growth happens.
Interesting thing to consider: in a couple of years, will there be a differentiator between people who are good at driving company-specific AI tools and those who are generally better because they built their skills the hard way, from the ground up, without the benefit of AI?
Have you considered that the people finding passion in vibe coding are enjoying the journey? I think what you are lamenting is that other people aren't on the same journey you took.
Your destination is only a point somewhere on what they perceive as the journey. You're saying, well, if they don't go where I did and stop when I did, it was no proper journey.
I feel you. For me it was a love of problem solving. Now that’s been taken away from me and people keep telling me how happy I should be about it.
The world still has problems to solve?
Plenty, but few of them can be solved by writing and deploying an app somewhere. Some of them are actually made worse by the latter. And how to make money is mostly orthogonal to solving the problems of the world.
I'm completely the opposite. 100% the opposite. I wrote code because it was the only way to make the lights blink. I saw code as an impediment to completing a project. There was a lot of friction between the design and the final result. AI reduces that friction substantially.
The remaining friction is fundamentally the same as that which existed when writing code manually. The gap between what you envision for your design/solution and the tools for implementing that vision. With code, the friction encountered when implementing your vision is substantial; with AI, that friction is significantly reduced, and what's left is in areas different from what past experience would lead you to expect.
I think this is really honest, and it's the reason we see this big divide. On one hand, people who never enjoyed coding but enjoy seeing their ideas realized are celebrating. But the group of people who enjoyed coding in itself, and never saw it as just a means to a result, feel cheated. I think a lot of the polarization you see online revolves around this difference in character. I am of two minds about this, really, because I've certainly enjoyed the puzzle and the journey in the past. But I also enjoy getting to see ideas I've had for a while, but never had the time for, realized quickly. I also don't think it's just a distinction between enjoying the journey or the destination; it's about enjoying a very specific type of journey, e.g. slow and methodical. People with a different temperament find the speed, the ease of experimentation, and the sometimes surprising results more appealing.
Very well said. Slow code for me. I fully understand what I build. The AI magic bus will come crashing down at some point.
I think it's perfectly understandable that some people feel dread while others feel excited. But whatever the outcome, we have to adapt.
For people who feel that AI kills a passion, I'd recommend finding another hobby. Especially at the age of 60, when you don't have to work, you can plan retirement -- the next 20+ years as if it is your second childhood, and do whatever you want. I encourage you to search for greater meanings. After all, programming is just a man-made wonder, and the universe is full of grandeur.
The friction is the feature for journey-focused builders. When AI removes the cognitive resistance—debugging a parser, wrestling with state management, naming things—it also removes the scaffolding that forces you to really understand what you're building. You end up in implementation details faster, which feels productive until you realize you're solving problems you wouldn't have encountered if you'd thought harder upfront.
Those who say they enjoy "building" with AI instead of coding are just outsourcing the coding part (while training the AI for the outsourcing company). There's nothing to enjoy in that, but yes, you get the product, which is probably what people enjoy: getting the product. It's like buying IKEA furniture and thinking you made it by merely assembling it. If you don't know the IKEA effect, it's valuing something more highly than it actually deserves because you were partly 'involved' in creating or assembling it.
With coding (by hand), there are two aspects of it. One is the pleasure of figuring out how to do things, and one is the pleasure of making something that didn't exist before. Building with AI gives you the second pleasure, but not the first.
Or maybe it still gives you the first, too. Maybe you get that from figuring out how to get the AI to produce the code you want, just like you got it from trying to get the compiler to produce the code you want.
Or maybe it depends on your personality and/or your view of your craft.
Anyway, the point is, people take pleasure in their work in different ways. Those who enjoy building with AI are probably not all lying. Some do enjoy it. And that is not a defect in them. It's fine for them to enjoy it. It's fine for you not to enjoy it.
My field involves large legacy codebases in C++ and complex numerical algorithms implemented by PhDs. LLMs have their place, but the improvements in productivity are not that great, because current LLMs simply make too many mistakes in this context, and mistakes are usually very costly.
Everyone "in the know" appreciates this, but equally, in the current environment, has to play along with the AI hype machine.
It is depressing, but the true value of the current wave of LLMs in coding will become more clear over time. I think it's going to take some serious advances in architecture to make the coding assistant reliable, rather than simply scaling what we have now.
I feel you, and at a much younger age. I started programming professionally full time around 2011-2012. Documentation was good, but practical applications where you could see what you wanted to achieve in action were very limited. At the time I found myself writing drivers for fiscal printers over RS-232. The documentation provided by the manufacturer was absolutely horrible: "0x0b -> init, 0x23 -> page", literally code -> single word. Although I hated having to effectively brute-force it, the feeling at the end was amazing. I have tried AI code on several occasions and I hate it with a passion: full of bugs, completely ignoring edge cases, and horrible performance. And ultimately I spend more time fixing the slop than it takes me to sit down, think it through, and get it done right. And I see many "programmers" just throw a prompt at [insert AI company here] and celebrate the slop, patting themselves on the back.
It's the programming equivalent of those tiktok videos split in half, top half being random stock videos, bottom being temple run and an AI narration of a mildly wtf reddit post.
In a way I am lucky that I work at a place where everyone gets to choose what they want to use and how they use it. So my weapons of choice are a slightly tweaked, almost vanilla zsh, vim, and Zed with zero AI features. I have a number of friends/former coworkers working at places where the use of AI is not just allowed or encouraged but mandated, and is part of their performance score: the more you slop-code, the better your performance.
There is some joy in catching the LLM in a house-of-cards misunderstanding of how something works. They'll weave really convincing fictions from a bit of debugging. Then the Socratic probing of that can be fun, in part because the LLM tries to use big words to seem smart.
Then you hopefully capture that information somehow in a future prompt, documentation, test, or other agent guardrail.
So I find fun in the knowledge engineering of it all: the meta-practice of building up a knowledge base of how to solve problems in this codebase.
I can totally get what you are feeling so much, that I actually wrote a whole novel about it
https://leanpub.com/we-mourn-our-craft
I think a lot of the push back comes from people who haven’t yet been directly affected. Those later in their careers often have little at stake and may not notice—or care—how it impacts juniors. People who feel it’s helping them now are just lucky that their roles and knowledge haven’t been disrupted yet. Eventually, it will catch up, and the discussion will shift to adapting or moving on—just like every other wave of "AI makes what I enjoy obsolete".
Do you stop gardening because farms exist? Do you stop playing chess because engines have surpassed every human grandmaster? Do you stop driving because cars can drive themselves?
The tool doesn’t invalidate the craft. If anything, what we’re mourning when AI “kills the passion” might be about identity.
Many programmers spent decades defining themselves as the person who knows how to do hard things
And it’s disorienting when that thing becomes easy.
Honestly, this is the end of a long path to losing my passion. As tech became more professional and less experimental, my passion has been smothered. More code reviews and test writing, less figuring out novel solutions. More meetings and security reviews, fewer tech strategy planning meetings. More cloud platforms and API cloud solutions, less built functionality. More rental plans, less open source. AI just gets rid of the final experimentation stage.
> I have always enjoyed the journey
Don't take it personally, but those are the worst kind of engineers in any real-world business context. I've watched that type of person ruin projects and companies by overengineering them to death.
On the bright side, such traits can make a positive impact in academic or research context.
Can't you continue to do what you've been doing? What specifically about the existence of AI is killing the passion out of curiosity?
Reading article after article and anecdote after anecdote proclaiming writing code by hand "artisanal" and "outdated" wears on you though.
I was never a coder, but I can understand this frustration. For some, the efficiency is probably really exciting, but for those who really enjoyed solving the puzzles, something is lost. For me personally, AI and vibe-coding have opened up a whole new world (I built a mobile app), but I get why it's killing the passion for some.
Well, I'm 55, and have been a pentester for the past 15 years, but I am having a blast. CC is so enabling: I build something new most weekends (my best project so far is a site that collects and writes stories about all the latest AI security research: https://shortspan.ai/). All sorts of projects I have had on the back burner are now coming to life. I have 4 kids, so I wouldn't have time without Codex/Claude Code. Maybe I have an hour here or there, and that is enough to make something or improve something.
I'm 62 years old, and LLM coding agents have ignited my passion again. I'll soon unarchive older projects that were too hard to continue. With Opus this will finally be doable.
Adding a JIT to Perl, inlining, SSA, fixing Raku... endless possibilities. Just fixing glibc or GCC is out, because people.
I just built two projects, where one dogfooded the other, and set up a fully working Slack bot, all in 1 hour. If you still want to do things manually, you can. AI can answer questions about common topics way faster than searching docs. I don't know why this kills people's passions, especially if you're old enough to not need a salary.
I agree. I wanted a particular tool to support my development. The libraries are well known and understood by people who work in text editors, but this is not my area and I have a busy life. Simply working out what I needed to know produced enough inertia to stop it happening.
I finally made it with Claude. I've been writing code a while so I absolutely didn't let Claude loose and I still refactored stuff by hand as sometimes that was faster than trying to get Claude to do it. I also know what the whole thing does - I read all the diffs it presented.
I wouldn't fully trust it to go off and do its thing unsupervised especially in my areas of speciality. But the scaffolding work like command line arguments - typing all that out was never my passion and I had snippets in my editor for most of it.
Perhaps if your passion is the process of meticulously laying out each file, then I can understand. For me, though, the journey is the problem solving. Nothing much excites me about any of the boilerplate parts.
Claude can certainly take a stab at the solution too and is best when it has some kind of test case to match or validation step. To me working out what those are was always the core of the job and without them Claude can make plenty of mistakes.
It's just a tool and I use it in ways to support what I enjoy.
My work situation is way more complicated. Bigcorp organizational dynamics nullify any marginal gain anywhere.
>Idk why this kills peoples passions
Corporate management is full on FOMO and pushes agents down onto teams
How old is that exactly?
Abstractly, it's where log(forecast(assets)) exceeds log(anticipate(desires)) by log(safetymargin).
Molecular biologists are still searching for the pathways that govern the expression of assets, desires, and safetymargin.
Doctors and tax accountants are still arguing over whether the forecast and anticipation functions are learned or innate. And philosophers and used car salesmen can't even agree on where these functions sit on the cause/effect axis.
>= 60 apparently.
It depends on what you enjoy more — building the pieces or putting the pieces together in clever ways.
I'm more in the second group, so LLMs let me get to that part faster without getting bogged down in the "small stuff".
But I do get the people that enjoy the craftsmanship of the finer details instead.
To me I think it just changed the destination. Now the journey is about the domain - experimenting with how ideas fit together. Yes I can type in a few words and have more working code than I did before, but not a product or suite or game or capability or whatever it is I’m building towards.
Claude Code made me a much slower coder, for a good reason.
Now I can find the gaps and corner cases, and it motivates me more toward craftsmanship and perfecting the artifacts I deliver.
I'm even older; I've been coding for over 50 years. Now I just do it as a pastime, and I use AI to complete lines of code I would have written anyway. It's the structure and the approach which interest me, and those are all mine.
Why can you not enjoy both the journey and the destination? Surely you find some sub-disciplines of writing code more pleasant than others - can't you just keep doing those manually and offload/automate the rest to the machine?
Code to me has always been a solution. And I have always found problem definition the more interesting side of that coin. Luckily, it seems problem definition has become more important with Claude et al.
For me, the journey is the things I can do through creating various things in various contexts, and authoring the code itself is a small step in that journey.
To use the puzzle-solving metaphor: I'm taking solved puzzles as pieces to solve a more meta puzzle, and I enjoy the journey at that level.
I try to practice tracing all the way down the stack and learning about new things added to the stack, but I'm not in it for the sake of the stack or its vagaries and difficulties.
The other post was obviously astroturf bullshit, you can tell because they always mention CLAUDE CODE specifically like it's fucking Coca Cola.
I think AI Coding allows more people to do more things, which is why for most people it "ignited their passion" because now they have the tools to build their ideas
My trick is to let AI help with the journey but not take it over. Use it for discussion, not implementation. It is even surprisingly good for kernel-level projects.
The original post: https://news.ycombinator.com/item?id=47282777
I don't buy this journey vs destination binary I keep hearing. I always considered myself lucky to be 44 and still writing code all day. I love the journey - the mental satisfaction of creating something complex yet elegant. The perfectionism that leads you to ask yourself can this be simpler, faster etc. But I also now love creating things that frankly I was never going to find the time for.
I will offer an orthogonal take: Spend less time on social media, news, forums etc. They are giving you a skewed view of what's important.
Good advice, thank you for the reminder.
Regarding the OP's dilemma. I am split. I enjoy both the process and the destination. With AI, the process is faster and less satisfying, but reaching the destination is satisfying in its own way, and enables certain professional ambitions.
I have always had other outlets for my "process" needs, and I believe I will spend more time on them in the future. Other hobbies. I love "artisanal coding" but that aspect was never really my job.
I don't really feel it's taking anything away, and in fact, it is providing me with something I've never done after 25+ years in IT doing everything BUT - development.
I've always done networking (ISP, datacenter, large enterprise), security (networking plus firewall, VPN, endpoint), unix/linux admin, even windoze stuff with Active Directory, but never any development or programming directly. I was just never wired for it, though I can do IP/CIDR/BGP/ACL stuff in my head for days. I missed out on higher ed entirely after high school; what I know, I learned along the way, over 45 years of tinkering with PCs from the Apple II on.
Right now I'm taking my network and security knowledge and writing an MCP server to do network and security tasks, giving agents like claude-code, claude-desktop, codex, and openclaw a means of accessing resources indirectly through it. It's something I could never have done before the advent of AI, as I "don't do code". Now I can tell it intimately everything I need and want it to do, and it just literally does it. It's extremely effective too, if at times aggravating/infuriating.
My biggest gripe is that it does everything half-assed, but that's nothing I haven't seen time and again from outsourcing. It feels like the usual contractor slop you might get hiring Wipro/Infosys or any other offshore development effort, but at least without the human idiocy.
AI in general really needs a "don't do half-assed work" option, as it typically gives "good enough for government work" results until I kick and slap it at least twice to fix its shortcomings. It invariably feels like it only gives me half of what I ask for. You can almost tell it's built to not give you everything up front, making you work for it.
It's a trap, as a wise man once said.
Yet your passion to spread negativity is thriving. Here you thought negativity is a destination, but turns out it’s a whole new journey for you! Passion reignited indeed.
It's just a different rate of travel.
Using AI is not mandatory.
Earlier today I told my daughter that while AI might be a reason for my 8 year old grandson not to major in cmpsci some day, she should still encourage him to learn coding because (1) it is tremendous fun; and (2) develops problem solving skills.
It's why I spend the majority of my time coding every day at age 73.
Just… keep coding?
This argument doesn't make sense to me. did airplanes kill road trips? AI lets you go faster, but you don't have to use it at all, or can use it in select ways to collab with. Unless you're somehow bothered by how other people code, you've only got more options now.
I very much enjoy the journeys I take with AI. It helps me undertake journeys I would never dare to without it, because I took many before, myself, and I know how long they are and how tired they made me feel. It lets me solo take journeys I knew I'd need a team for. AI is a truck for my brain. I still enjoy walking the walk myself, but I take different journeys by walking than by truck.
You do know you don't have to use AI, right? I don't use it for coding (or much at all outside of coding). I'm free to code just like I did before, and so are you.
The problem is that, since AI, employers now expect developers to write code faster.
Similar to when IDEs and autocomplete became common.
It's not hard to pick holes in this approach by showing the code being generated is flawed. Also, code quality is different from code velocity; it is better to write 10 lines that accurately describe the functionality than 50.
PMs now expect that you can create a Java micro service that does basic REST/CRUD from a database and get it into production in a total of two days.
That is hard if you are working in Notepad and have to write your own class import statements and write your own Maven POM or Gradle file. It’s a lot quicker in an IDE with autocomplete and auto-generated Maven POMs. And with AI it’s even faster but at the risk of lower code maintainability.
> PMs now expect that you can create a Java micro service that does basic REST/CRUD from a database and get it into production in a total of two days.
Have you heard of malicious compliance? Give the PMs what they ask for, then show them how what they've asked for is flawed. Your job as an engineer is not to just take orders blindly, it's to push for a better engineered solution. It's really not hard to show that what these PMs are asking for is stupid.
A new micro-service in two days is easy with an IDE and autocomplete. But now with AI the PM will likely push to have it in production in a day. Which is possible, but quality will be questionable.
> A new micro-service in two days is easy with an IDE and autocomplete.
Is your username accurate, are you currently retired? I hope you know there's a big difference between something that is functional and something that is production ready.
Somewhat retired last year. Looking for something new to do. A basic Java micro service with Spring Boot is three hours of coding to write to and read from a database and expose it over a REST interface, plus two hours for tests. The rest of the time goes to setting up environments, wiring everything together, and documentation. Two days is doable if you have a good CI/CD template and your Azure/AWS is set up correctly.
I hope the companies you worked for had someone else taking care of security, as what you've described is a ransomware writer's wet dream.
You have a gateway / platform for that. You aren’t exposing those services to the internet.
> You aren’t exposing those services to the internet.
You aren’t knowingly exposing those services to the internet.
FTFY. Furthermore, internal services can still be abused to get data that shouldn't be shared. For example, imagine if your imaginary API was for a HR system, and could be used to determine salary information for staff.
If you aren't considering API security, you're almost bound to make major mistakes, and I'd bet money that most APIs designed and implemented in 2 days have tons of security holes.
AI is a tool like any other.
Ever since the dawn of time I've wanted to make my own games but always ended up wasting time on trying to make engines and frameworks and shit, because no development environment worked the way I wanted to, out of the box.
I don't trust AI enough to let it generate code out of thin air yet, and it's often wrong or inefficient when it does, so I just ask it to review my existing code.
I've been using Codex that way for the last couple of months, and it's helped me catch a lot of bugs that would have taken me ages on my own. All the code and ideas are still my own, and the AI has made me more productive without making me lazy.
Maybe this time I will manage to finish making an actual game for once :')
> it depends on what you enjoy: the journey or the destination
I thought I enjoyed the journey more, but it turns out the destination is wild! There are still quite a few projects I keep for myself, pieces I want done in a specific way, that I now have time to do properly, while the dull stuff can get done elsewhere.
If it's a personal project, I just get AI to do the boring bits and write the fun bits myself. I enjoy coding in general, but every project has its share of boring boilerplate code, or utility functions that you've already written 100 times. Also, LLMs are pretty good at code review, which is very beneficial when you are working on something solo.
I love AI but never used Claude Code.
I just use the chat interface to study and do one-off scripts.
I love having 4-5 bots open and spam the same questions then reading the answers. For everything. Feels like I am doing something, like video games.
It has elevated my wardrobe and music tastes but I still had to have a baseline ofc. They are way too agreeable still.
I've been using Claude for a few weeks. It's like the chatbots, but it can directly look up your code. This can be useful if you point it to the right code, but it's using text search, not a sophisticated understanding of the structure like an IDE has.
You're not missing much; don't believe the hype and liars.
just eat taco bell it help me
I'm convinced that most programmers largely hate making software and solving user problems. They just like fiddling with code. And honestly it explains so much.
Hello, I cannot tell if this is true or not, since I have not been able to really test the ability of Claude AI to code.
I am looking for a web API I could use with CURL, and limited "public/testing" API keys. Anyone?
I am very interested in Claude Code, to test its ability to code assembly (x86_64/RISC-V) and to assist with porting C++ code to plain and simple C (I read something on HN about this which seems promising).
jantb 5 hours ago
I'm doing the same as you, and even though I was producing a lot of the actual product code, I estimated the coding part to be only about 20% of the work. The rest is figuring out what and how to build, what stakeholders really need, and solving production issues in live event-driven systems. Agentic coding is just faster at the 20% part, and I can always sit down and code the really hard stuff if I want to, or feel I need to if the LLM gets stuck. If it produces something not understandable, I either learn from it until I understand it, or make it use a pattern I know instead. So all in all, not so worried.
finaard 9 hours ago
> This has been 100% my experience. I enjoy the puzzle solving and the general joy of organizing and pulling things together. I could really care less about the end result to meet some business need. The fun part is in the building, it's in the understanding, the growth of me.

Quite a few of the projects I always wanted to do have components or dependencies I really don't want to do. And as a result, I never did them, unless they eventually became viable to do in a commercial setting where I then had some junior developer to make the annoying stuff go away.
Now with LLMs I have my own junior developer to handle the annoying stuff - and as a result, a lot of my fun stuff I was thinking about in the last 3 decades finally got done.
One example from just last week: I had a large C codebase from the 90s I always wanted to reuse, but modern compilers have a different idea of what C should look like. It's pretty obvious from the compiler errors what you need to do in each case, but I wasn't really in the mood for manually going through hundreds of source files. So I just stuck a locally running Qwen coder in YOLO mode into a container, forgot about it for a week, and came back to a compiling codebase. The diff is quick to review; there were only a handful of cases that needed manual intervention.
reply
throw-the-towel 8 hours ago | root | parent | next [–]
Note that you are able to choose freely what parts of the work get done by Claude, and what parts you do yourself. At work, many of us have no such luxury because bosses drunk on FOMO are forcing agent use. reply
sktrdie 8 hours ago | parent | prev | next [–]
You still care about the end result though: in your case, the end result being the puzzle you solved. AI can keep that process enjoyable. For instance, I had to build a very intricate cache handler for Next.js from scratch that worked in a very specific way, serializing JSON in chunks (instead of JSON.parse-ing it all in memory). I knew the theory, but the API details and the other annoyances always made it daunting for me.
With AI I was able to tinker more with the theory of the problem and less with the technical implementation, which made the process much more fun and doable.
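The chunked-serialization idea is worth a quick sketch. The commenter's actual handler was for Next.js in JavaScript, so this is only an illustration of the technique in Python, using the stdlib's `json.JSONEncoder.iterencode`, which yields the serialized output piece by piece instead of building one giant string:

```python
import io
import json

def write_json_chunked(obj, stream):
    """Serialize obj to a stream chunk by chunk, so the full JSON
    string is never held in memory at once."""
    encoder = json.JSONEncoder()
    for chunk in encoder.iterencode(obj):
        stream.write(chunk)

# Usage: stream a payload to any writable object (a file, a socket
# wrapper, or here a StringIO for demonstration).
buf = io.StringIO()
write_json_chunked({"items": list(range(5))}, buf)
```

The concatenated chunks are byte-for-byte identical to what `json.dumps` would produce; the difference is purely in memory behavior, which is the point of a streaming cache handler.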
Perhaps we're just climbing the ladder of abstraction: in the early days people were building their own garbage collection mechanisms, their own binary search algorithms, etc. Once we started using libraries, we had to find the fun in some higher level.
Perhaps in the future the fun will be about solving puzzles within the realm of requirement definitions and all the intricacies that stem from that.
reply
specproc 10 hours ago | parent | prev | next [–]
One hundred percent. I came back into tech professionally over the last decade. Always been into computers, but the first decade or so of my career was in humanitarian admin. Super interesting sector, super boring day-to-day.
"So what," the Chelgrian asked, "is the point of me or anybody else writing a symphony, or anything else?"
The avatar raised its brows in surprise. "Well, for one thing, you do it, it's you who gets the feeling of achievement."
"Ignoring the subjective. What would be the point for those listening to it?"
"They'd know it was one of their own species, not a Mind, who created it."
"Ignoring that, too; suppose they weren't told it was by an AI, or didn't care."
"If they hadn't been told then the comparison isn't complete; information is being concealed. If they don't care, then they're unlike any group of humans I've ever encountered."
"But if you can—"
"Ziller, are you concerned that Minds—AIs, if you like—can create, or even just appear to create, original works of art?"
"Frankly, when they're the sort of original works of art that I create, yes."
"Ziller, it doesn't matter. You have to think like a mountain climber."
"Oh, do I?"
"Yes. Some people take days, sweat buckets, endure pain and cold and risk injury and—in some cases—permanent death to achieve the summit of a mountain only to discover there a party of their peers freshly arrived by aircraft and enjoying a light picnic."
"If I was one of those climbers I'd be pretty damned annoyed."
"Well, it is considered rather impolite to land an aircraft on a summit which people are at that moment struggling up to the hard way, but it can and does happen. Good manners indicate that the picnic ought to be shared and that those who arrived by aircraft express awe and respect for the accomplishment of the climbers.
"The point, of course, is that the people who spent days and sweated buckets could also have taken an aircraft to the summit if all they'd wanted was to absorb the view. It is the struggle that they crave. The sense of achievement is produced by the route to and from the peak, not by the peak itself. It is just the fold between the pages." The avatar hesitated. It put its head a little to one side and narrowed its eyes. "How far do I have to take this analogy, Cr. Ziller?"
― Iain M. Banks, Look to Windward
> It is the struggle that they crave
And yet, it's hard to shake the despondent feeling you get watching the helicopters hovering around the peak.
I'm 42 years old and AI has re-ignited mine. I've spent my career troubleshooting and being a generalist, not really interested in writing code outside of systems and networking use. It's boring to type out (lots of fun to plan, though!) and outdated as soon as it's written.
I've made, and continue to make, things I'd been thinking about for a while where the juice was never worth the squeeze. Bluetooth troubleshooting, for example: 5 or 6 different programs will log different parts of the stack independently. I've made an app that calls all of these programs and groups their output by MAC address and system timestamp to correlate and pinpoint the exact issue.
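The grouping step described above can be sketched roughly like this. Everything here is hypothetical, not the commenter's actual app: the record format `(source, mac, timestamp, message)`, the tool names, and the 2-second window are all assumptions for illustration:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def correlate(records, window=timedelta(seconds=2)):
    """Group log records by MAC address, then cluster each device's
    records into time windows, so events that different tools logged
    around the same moment end up side by side.

    records: iterable of (source, mac, timestamp, message) tuples
    (a hypothetical pre-parsed format)."""
    by_mac = defaultdict(list)
    for rec in records:
        by_mac[rec[1]].append(rec)

    clusters = []
    for mac, recs in by_mac.items():
        recs.sort(key=lambda r: r[2])  # chronological order per device
        current = [recs[0]]
        for rec in recs[1:]:
            if rec[2] - current[-1][2] <= window:
                current.append(rec)      # same burst of activity
            else:
                clusters.append((mac, current))
                current = [rec]          # start a new time window
        clusters.append((mac, current))
    return clusters

# Usage: two tools log within a second of each other, then a third
# event arrives 30 seconds later and lands in its own cluster.
t0 = datetime(2024, 1, 1, 12, 0, 0)
records = [
    ("btmon", "AA:BB:CC:DD:EE:FF", t0, "connect attempt"),
    ("hcidump", "AA:BB:CC:DD:EE:FF", t0 + timedelta(seconds=1), "L2CAP error"),
    ("btmon", "AA:BB:CC:DD:EE:FF", t0 + timedelta(seconds=30), "disconnect"),
]
clusters = correlate(records)
```

The real app presumably also has to parse each tool's log format into a common shape first; that parsing, not the grouping, is usually the tedious part such tools automate.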
Now I can hear the neckbeards cracking their knuckles, getting ready to bear down on their split keyboards and tell me the program doesn't work because AI made it, or it isn't artistic enough for their liking, or whatever the current comforting lie is. But it does work, and I've already used it to determine that some of my bad devices really are bad.
But there are bugs, you exclaim! Sure, but have you seen human-written code? I've made my career in understanding these systems, programming languages, and the people using the systems. Troubleshooting is the fun part, and lucky for me, my favorite part is the thing that will continue to exist.
But what about QA? Humans are better? No. Please, stop lying to yourselves. Even if there were some benefit humans bring over AI in this arena, that lead is evaporating fast or is already gone. I think a lot of people in our industry treat their knowledge, and the ability to gatekeep with it, as some sort of virtue. If that was the only thing you were good at, then maybe it's good that AI will do the thing it excels at and leave those folks to theirs.
It can leave humans to figure out how to be more human? It's funny to type that, since I've been on a computer 12 hours a day since about 1997... but there's a reason we let calculators crunch large sums, and manufacturing robots with multiple articulating arm joints make incredible items at insane speeds. I guess there were probably people who liked using slide rules and were really good at it, pissed because their job was taken by a device that could do it better and faster. Didn't the slide-rule users take the job from people who didn't have such a tool at first but still had to do the work?
Did THEY complain about that change as well? Regardless, all of these people were left behind if all they did was complain. If you only built one skill in your career, and that is writing code and nothing else, that is not the program's fault.
The journey exists for those who desire to build the knowledge that they lack and use these new incredible tools.
For everyone else, there is Hacker News and an overwhelming crowd ready to talk about the good ole days instead of seeing the opportunities in expanding your talents with software that helps you do your thing better than you ever dreamed.
I agree with this.
I recently wanted to monitor my vehicle batteries with a cheap BLE battery monitor from AliExpress (by getting the data into HomeAssistant). I could have spent days digging through BlueZ on a Raspberry Pi, or I could use AI and have a working solution an hour later.
Yes, I gave up the chance to manually learn every layer of the stack. I’m fine with that. The goal was not to become a Bluetooth archaeologist. The goal was to solve the problem. AI got me there faster - and let me move on to my next fun project.
> I could use AI and have a working solution an hour later.
That sounds really cool. You should share what you used.
> The goal was not to become a Bluetooth archaeologist. The goal was to solve the problem.
I'm sympathetic to this view. It seems very pragmatic. After all, the reason we write software is not to move characters around a repo, but to solve problems, right?
But here's my concern. Like a lot of people, I started programming to solve little problems my friends and I had. Stuff like manipulating game map files and scripting FTP servers. That led me to a career that's meant building big systems that people depend on.
If everything bite-sized and self-contained is automated with llms, are people still going to make the jump to be able to build and maintain larger things?
To use your example of the BLE battery monitor: the AI built some automation on top of BlueZ, a 20+ year-old project representing thousands of hours of labor. If AI can replace 100% of programming, no big deal, it can maintain BlueZ going forward. But what if it can't? In that case we've failed to nurture the cognitive skills we need to maintain the world we've built.
It has also led me to a career in software development.
I find myself chatting through architectural problems with ChatGPT as I drive (using voice mode). I've continued to learn that way. I don't bother learning little things that I know won't do much for me, but I still do deep research and prototyping (which I can do 5x faster now) using AI as a supplement. I still provide AI significant guidance on the architecture/language/etc of what I want built, and that has come from my 20+ years in software.
This is the project I was talking about. I prefer using Codex day-to-day.
https://github.com/klinquist/HomeAssistant-Vehicle-Battery-M...
This is another fun project I recently built using AI:
https://github.com/klinquist/machinemon
Yup, they are so mad that sysadmin + CC/AI is perfect, just like how they were going to take over with "devops"