i mostly use LLMs inside a reasoning shell i built, like a lightweight semantic OS where every input gets recorded as a logic node (with ΔS and λ_observe vectors; a rough sketch follows the list below) and stitched into a persistent memory tree.
it solved a bunch of silent failures i kept running into with tools like RAG and longform chaining:
drift across hops (multi-step collapse)
hallucination on high-similarity chunks
forgetting prior semantic commitments across calls
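to make the node idea concrete, here's a minimal sketch in C (C chosen just for illustration). ΔS and λ_observe are the terms from above, but the field layout, the LAMBDA_DIM size, and everything else here are hypothetical, not the tool's actual format:

```c
#include <stddef.h>

/* hypothetical sketch of one "logic node" in the memory tree.
   delta_s (ΔS) and lambda_observe (λ_observe) are the terms used above;
   the rest of the layout is an assumption, not the real format. */
#define LAMBDA_DIM 4  /* assumed vector size */

typedef struct LogicNode {
    char              *input_text;                  /* the recorded input */
    double             delta_s;                     /* ΔS: drift score vs. parent */
    double             lambda_observe[LAMBDA_DIM];  /* λ_observe vector */
    struct LogicNode  *parent;                      /* link back up the tree */
    struct LogicNode **children;                    /* persistent memory tree edges */
    size_t             n_children;
} LogicNode;
```

keeping the parent link explicit is what lets a later call walk back up and re-check prior semantic commitments instead of silently dropping them.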
the shell is plain-text only (no install), MIT licensed, and backed by tesseract.js’s creator.
i’ll drop the link if anyone’s curious — not pushing, just realized most people don’t know this class of tools exists yet.
LLMs are best when you know exactly how to implement something and can describe it fully, but it would take longer to write everything yourself. They're also good at rigorous attention to detail in well-established domains where the rules are deterministic and not subtle.
I use the terms "LLM" or "AI" (as in, "I used an LLM/AI to write a <insert task> helper") as a quick hint to ignore articles/links/etc., the same way I've previously used "You won't believe what happened next" or "they hate this one trick" to avoid spam-bait article links, or "shocked face overlay" to avoid bullshit YouTube videos.
So, thank you for that, AI techbros. Keep telling us loudly and proudly that you're using "AI" to write your slop; it makes it much easier to know what to avoid when skimming titles.
I use LLMs mostly for learning and understanding.
When a book doesn’t explain something clearly, I ask for a deeper explanation — with examples, and sometimes exercises.
It’s like having a quiet teacher nearby who never gets frustrated if I don’t get it right away. No magic. Just thinking.
I also started building my own terminal-based GPT client (in C, of course). That’s a whole journey in itself — and it’s only just begun.
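For a sense of what the core of such a client looks like, here's a minimal sketch of one chat request using libcurl. The endpoint, model name, and OPENAI_API_KEY convention are assumptions for an OpenAI-compatible API; this is just the skeleton, not the actual client:

```c
/* minimal sketch of one chat request with libcurl; the endpoint, model
   name, and OPENAI_API_KEY convention are assumed, not from the thread. */
#include <stdio.h>
#include <stdlib.h>
#include <curl/curl.h>

/* write callback: stream the raw JSON response to stdout */
static size_t on_body(char *data, size_t size, size_t nmemb, void *userdata) {
    (void)userdata;
    return fwrite(data, size, nmemb, stdout);
}

int main(void) {
    const char *key = getenv("OPENAI_API_KEY");
    if (!key) { fprintf(stderr, "set OPENAI_API_KEY\n"); return 1; }

    char auth[512];
    snprintf(auth, sizeof auth, "Authorization: Bearer %s", key);

    /* hardcoded prompt just for the sketch; a real client reads stdin */
    const char *body =
        "{\"model\":\"gpt-4o-mini\","
        "\"messages\":[{\"role\":\"user\",\"content\":\"hello\"}]}";

    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (!curl) return 1;

    struct curl_slist *hdrs = NULL;
    hdrs = curl_slist_append(hdrs, "Content-Type: application/json");
    hdrs = curl_slist_append(hdrs, auth);

    curl_easy_setopt(curl, CURLOPT_URL,
                     "https://api.openai.com/v1/chat/completions");
    curl_easy_setopt(curl, CURLOPT_HTTPHEADER, hdrs);
    curl_easy_setopt(curl, CURLOPT_POSTFIELDS, body);
    curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, on_body);

    CURLcode rc = curl_easy_perform(curl);
    if (rc != CURLE_OK)
        fprintf(stderr, "request failed: %s\n", curl_easy_strerror(rc));

    curl_slist_free_all(hdrs);
    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return rc == CURLE_OK ? 0 : 1;
}
```

Build with `cc chat.c -lcurl`.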
If the problem is simple enough for top-down design, I write the signatures along with comments containing i/o examples.
But if I need to explore, I first ask for some examples and then re-prompt top-down.
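As a concrete illustration, this is the kind of skeleton I hand the model. The task, function name, and examples are made up just to show the shape:

```c
#include <stddef.h>

/* skeleton handed to the LLM: a signature plus i/o examples in a comment.
   the task and examples below are hypothetical, just to show the shape.

   rle_encode("aaabcc", out, 16) -> writes "3a1b2c", returns 6
   rle_encode("",       out, 16) -> writes "",       returns 0
   returns -1 if out_cap is too small for the encoded result. */
int rle_encode(const char *src, char *out, size_t out_cap);
```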