Don't really agree; in my experience the context switching is extremely costly. I personally have trouble keeping even a couple of sessions running in parallel, especially when I'm tackling difficult, hard-to-solve problems. Of course it's easy for trivial jobs, but that's not always the case. I have been much more successful in making my time worthwhile by looking at the model's output and actively participating. It gives me time to think as well. When I have a list of simple tasks I just tell them to the model and it executes one after another.
True. Sometimes I'll run front-end and backend work in two different Claude instances, but always on the same project/product. I'll have "reviewer" instances in opencode using a different (non-Claude) model doing reviews; that's about as much as I can handle. You've got to supervise it while it works. I do have to stop Claude from time to time when I catch it doing something naive or unnecessarily complex.
There's a lot more "telling" than "showing" going on.
By that I mean - the people claiming hyper-productivity from their GasTown setup never have actual products to demo.
Perhaps they earn $500k and worry that spending any less than $250k in tokens may raise suspicion.
Something would be deeply wrong!
So far the only company that is really outspoken about the scale of their vibe coding has been Anthropic. However, their uptime and bug count are atrocious.
There's also a concern I don't hear folks talk about: the potential for all of this multitasking to harm your wellbeing, or even your brain.
Eg: "For example, functional magnetic resonance imaging (fMRI) studies have shown that multitasking reduces activation in brain regions involved with cognitive control while increasing activation in areas associated with stress and arousal" - from https://pmc.ncbi.nlm.nih.gov/articles/PMC11543232/
I've tried hard to stay away from Instagram, TikTok, etc - for this very reason. Now my day job is going to be attacking me in much the same way? Great.
I can do two or three at a time. I treat them a bit like queues: Last in first out, sort of like we do with our human peers.
We delegate work, we tend to some other work, and we code review much later in the day.
The secret to this mindset is that it doesn't always have to line up. Let your agent wait for you; you'll get to their output next.
I don’t know about you but I’m not constantly round robin delegating work to peers and reviewing it on a 10-20 minute cadence. No one works like that. I don’t know if anyone is even capable of working like that day in and day out long term for any meaningful definition of review.
“Don’t pay attention to what Claude is doing, just spam your way through code and commands and hope nothing went wrong and you catch any code issues in review afterwards” is what this sounds like.
I will run parallel Claude sessions when I have a related cluster of bugs which can be fixed in parallel and all share similar context / mental state (yet are sufficiently distinct not to just do in one session with subagents).
Beyond that, I might run parallel sessions to explore some stuff, but only one that is writing code or running commands that need checking (for trust / safety / security reasons).
Any waiting time is spent planning next steps (e.g. writing text files with prompts for future tasks) or reviewing what Claude previously did and writing up lists (usually long ones) of stuff to improve (sometimes with draft prompts, or notes on gotchas that Claude tripped up on the first time, which I can prompt around in the future).
Spend time thinking, not just motoring your way through tokens.
I don't know how, or whether, people really manage to run many tasks in parallel without checking the output. Very recently I had two items that wouldn't be very complex for a reasonably intelligent engineer, but would take time to implement.
One of them was vibe-coding an Electron app for myself that ran a Llama server. Claude couldn't figure out why it wasn't running on Windows while it worked fine on Linux and Mac. I obviously didn't check all its output, but after several hours I had a feeling it was running in circles. Eventually we managed to debug it cooperatively after I gave it several hints, but it wasted a lot of time on a rather simple issue, which was also a challenge for me because I didn't know well how the vibe-coded app worked.
The second one (can't go into details) was also something reasonably simple, but I was finding an awful lot of bugs because, unlike the first app, this one was for my job and I review everything. So we had to go back and forth for multiple hours.
How can someone just switch to another task while the current one requires constant handholding?
If you are letting Claude run for seven minutes at a time, you aren't thinking hard enough about what you're building.
If you start trying to juggle multiple agents, you are doubling down on the wrong strategy.
https://hbr.org/2010/12/you-cant-multi-task-so-stop-tr
Why should Claude finish complex tasks in less than seven minutes?
The need for "complex tasks" should be exceptional enough that you're not building your workflow around them. A good example of such an exception would be kickstarting a port of a project for which you have a great test suite from one language to another. This is rare in most professional settings.
Computers are fast. If a physics engine can compute a game world in 1/60 of a second, the majority of tasks should be doable in less than 7 minutes.
Whenever I see a transcript of a long-running task, I see a lot of drifting by the agent due to not having any context (or the codebase not being organized), and it trying various ways to gather information. Then it settles on the wrong info and produces bad results.
Greppability of the codebase helps. So do following patterns and good naming. A quick overview of the codebase and a description of its conventions also shorten the reflection steps. Adding helper tools (scripts) helps too.
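A helper script for this can be tiny. Here's a rough Node/TypeScript sketch of the kind of thing I mean; the CONVENTIONS.md name and the repo layout are placeholders, not something any particular tool expects:

    // overview.ts - illustrative sketch; CONVENTIONS.md and the layout are assumptions
    import { readdirSync, readFileSync, existsSync } from 'node:fs';

    // Print the top-level layout so the agent doesn't burn turns rediscovering it
    for (const entry of readdirSync('.', { withFileTypes: true })) {
      if (entry.isDirectory() && !entry.name.startsWith('.')) {
        console.log(`${entry.name}/  (${readdirSync(entry.name).length} entries)`);
      }
    }

    // Surface the conventions doc, if the repo keeps one (hypothetical file name)
    if (existsSync('CONVENTIONS.md')) {
      console.log('\n' + readFileSync('CONVENTIONS.md', 'utf8'));
    }

Point the agent at something like that at the start of a session and it spends far fewer turns wandering.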
Just show us the prompt you used to produce this post instead of the output
I disagree with this take. I get that LLM-produced text is filled with crappy, over-the-top writing in pretty much all cases, but if a prompter/writer/blogger is using it iteratively, the LLM output is going to be way better than their own writing. Also, if a person is using LLMs to write articles, do you really want to see their likely even worse writing?
Nice catch. Look at this at the end:
> jc is open source. If you have improvements, have your Claude open a PR against mine. I don’t accept human-authored code.
So it seems not only does the author reject human-authored PRs, they also refuse human-authored blog posts.
I wonder if they also only want agents to read it, not people.
When the computer works, it's sword fighting time. I don't make the rules.
For younger people, just in case: https://xkcd.com/303/
And for those scoffing
https://xkcd.com/1053/
And for those who are feeling smug, that last one (which I still consider fairly recent) was 14 years ago
I'm not slacking off, Claude is shmorgalizing.
I'd offer a different approach: think about how you're going to validate. An only-slightly-paraphrased Claude conversation I had yesterday:
> me: I want our agent to know how to invoke skills.
> Claude: [...]
> Claude: Done. That's the whole change. No MCP config, no new env vars, no caller changes needed.
> me: ok, test it.
> Claude: This is a big undertaking.
That's the hard part, right? Maybe Claude will come back with questions, or you'll have to kick it a few times. But eventually, it'll declare "I fixed the bug!" or summarize that the feature is implemented. Then what?
I get a ton of leverage from figuring out what I need to see to trust the code. I work on that. Figure out if there's a script you can write that'll exercise everything and give you feedback (2nd Claude session!). Set up your dev env so Playwright will Just Work and you can ask Claude to click around and give you screenshots of it all working. Grep a bunch and make yourself a list of stuff to review, to make sure it didn't miss anything.
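The Playwright half of that can be a very small script. A rough TypeScript sketch; the URL, button labels, and selectors here are made up for illustration and would need to match your app:

    // smoke.ts - rough sketch; URL, labels and selectors are hypothetical
    import { chromium } from 'playwright';

    (async () => {
      const browser = await chromium.launch();
      const page = await browser.newPage();

      // Walk the happy path, leaving screenshots for a human (or a second Claude) to review
      await page.goto('http://localhost:3000');              // assumes a local dev server
      await page.screenshot({ path: 'shots/01-home.png' });

      await page.click('text=New item');                     // hypothetical button label
      await page.fill('#title', 'smoke test');               // hypothetical input id
      await page.click('text=Save');
      await page.screenshot({ path: 'shots/02-saved.png' });

      await browser.close();
    })();

Once something like that exists, "test it" stops being a big undertaking: Claude runs the script, you look at the screenshots.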
Very painful to read.
Yes. I agree with the problem statement, but have no idea what the solution is BUT it does involve lots of key bindings.
was this written using a LinkedIn skill
Wasn't there some recent discovery that context switching is harmful to your brain?
The old saying is "don't multitask" but apparently that time is gone.
I wonder what people think about this. I know there is a class of SWE/dev who now consider themselves "the manager of agents". Good luck to them; articles like this will work for those people.
I'm not there yet, and I hope I don't have to be. I'm not an LLM, and my mental model is (I believe) more than a markdown file. But I haven't figured out the mental model that works for me; I'm still staring at the terminal with Claude's cursor blinking, sticking to the "don't multitask" dogma.
This looks absolutely wonderful. Is it possible to run against Claude remotely (e.g. on a VM)? Or should I ask Claude to add that?
> The fix is obvious: work on something else while Claude runs.
Disagree. The fix is actually counter-intuitive: give Claude smaller tasks so that it completes them in less time and you remain in the driver's seat.
I'm not sure I'm understanding this workflow. Perhaps a small tutorial / walkthrough hosted on YouTube or asciinema might help people understand.
It's just a process of looping over a number of cycles, prompting, with each thing taking minutes to run. It's a recipe for a massive headache, as context switching costs more than 7 minutes (that arbitrary number the article came up with).
> jc is open source. If you have improvements, have your Claude open a PR against mine. I don’t accept human-authored code.
Is this sarcasm? If not, I wonder why.
Or, wait and take a little break so you don't burn out. I miss the days when you had to wait for code to compile or for your "big data" job to run, so you could give yourself a little mini break.
Of course there is a relevant XKCD: https://xkcd.com/303/
Hackernews needs to nominate an elite crew of individuals who can tell when an article is AI slop and flag it.
This advice will be very dated when inference gets an order of magnitude faster. And it will happen; it's classic tech. It will probably even follow Moore's law or something.
Wait until that 8-minute inference is only a handful of seconds; that is when things get really wild and crazy. Because if the time inference takes isn't a bottleneck… then iteration is cheap.
Wtf is this LLM slop
Lots of LLM-isms in the article from a very casual scan so going to assume nothing interesting here