The real loss isn't discipline—it's the feedback loop. When you write code by hand, the friction in the process forces you to internalize it. You sense when something is wrong before you can say why. AI eliminates that friction, so you read the result instead of writing it, and reading is always easier than it should be. The specification/architectural solution works, but it's an external constraint replacing an internal one. A more profound solution might be to proactively find ways to bring the friction back: writing tests before reading the AI's output (maybe with AI support), retyping critical sections (with AI or not), anything that forces the writer to engage in the loop.
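A minimal sketch of that test-first habit (`slugify` is an invented example, not anything from the thread): the test is written by hand from the spec, before you ever read the generated implementation, so the friction of stating expectations comes back.

```python
import re

def test_slugify():
    # Written by hand, from the spec, before looking at the AI's code.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaced  ") == "spaced"

# The AI-generated candidate gets pasted in afterwards and has to pass
# the test you already committed to.
def slugify(text: str) -> str:
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```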
Have you tried using Ralph Wiggum Loops to reach convergence? I use the Ralph-O-Matic on Github to run automated, iterative refinement loops on my codebases. It works very well!
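For anyone unfamiliar, the general shape of this kind of refinement loop can be sketched generically — `check` and `refine` here stand in for "run the test suite" and "re-prompt the model"; nothing below is the actual Ralph-O-Matic interface, which I haven't verified.

```python
from typing import Callable

def refinement_loop(check: Callable[[], bool],
                    refine: Callable[[], None],
                    max_iters: int = 5) -> int:
    """Run `check`; while it fails, call `refine` and try again.

    Returns the iteration on which it converged, or -1 if it never did.
    """
    for i in range(1, max_iters + 1):
        if check():
            return i  # tests pass: converged
        refine()      # e.g. feed the failure output back to the model
    return -1

# Toy usage: each "refinement" fixes one of two simulated bugs,
# so convergence is detected on the third check.
state = {"bugs": 2}
result = refinement_loop(lambda: state["bugs"] == 0,
                         lambda: state.update(bugs=state["bugs"] - 1))
```

The cap on iterations matters: without it, a loop that never converges just burns tokens, which is exactly the "expansion without convergence" failure the author describes.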
The only thing that’s helped me is adding hard constraints: spec, architecture, small verifiable steps, and explicit decision logs.
Me too! I'm really interested in seeing an LLM coding platform treat those documents (spec, architecture, decision logs) as first-class objects: collaborative, persistent, and outside the codebase itself.
Right now they live in repos, where they're easy to lose and hard to share.
Author here: I wrote this piece after trying to use AI to build an internal tool and running into a failure mode I didn't expect.
Curious if others have hit the same "expansion without convergence" issue when using LLMs for coding.
Agreed, I also use a Ralph loop (umputun/ralphex on GitHub).
How has the Ralph loop helped you?