When you have a well-defined flow, like transaction processing, putting an AI agent in the loop for every run simply doesn't scale. But AI can be used very nicely for analysis, alerts, and investigating failures in such processes. Agents can also prepare a transaction package that needs more human input, like a customer service case, but again with clearly defined outcomes. At least that's what I've seen in my limited experience consulting for a local online retailer.
That's exactly the process I follow now.
I look at the traces of agent execution and use them as feedback to extract common patterns. The common patterns are extracted out as Scripts, or Skills.
So the agent doesn't have to figure out how to do things from scratch, saving a considerable amount of tokens and latency.
I also came across this paper recently: https://arxiv.org/abs/2603.25158
It does exactly the same thing: extracts traces and converts them into skills for agents to use.
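A minimal sketch of what that extraction step can look like: count repeated tool-call sequences in trace logs and stub each frequent one out as a script. The trace format (JSON lines, each with a "tool" field) and the skills/ layout are illustrative assumptions, not anything standard.

```python
# Rough sketch of turning recurring tool-call sequences from agent traces
# into reusable "skill" stubs. Trace format and output layout are assumed.
import json
from collections import Counter
from pathlib import Path

def load_call_sequences(trace_dir: str) -> list[tuple[str, ...]]:
    """Reduce each trace file to the ordered sequence of tool names it used."""
    sequences = []
    for path in Path(trace_dir).glob("*.jsonl"):
        calls = [json.loads(line)["tool"]
                 for line in path.read_text().splitlines() if line.strip()]
        sequences.append(tuple(calls))
    return sequences

def extract_skills(trace_dir: str, min_count: int = 3) -> None:
    """Write a stub skill script for every sequence seen at least min_count times."""
    counts = Counter(load_call_sequences(trace_dir))
    Path("skills").mkdir(exist_ok=True)
    for seq, n in counts.items():
        if n < min_count or not seq:
            continue
        name = "_".join(seq)[:60]
        # The body is only a scaffold; the agent (or a human) fleshes the
        # steps out into a real script once, and the agent reuses it after.
        body = "\n".join(f"# step {i + 1}: {tool}" for i, tool in enumerate(seq))
        Path("skills", f"{name}.py").write_text(f"# seen {n} times in traces\n{body}\n")

if __name__ == "__main__":
    extract_skills("traces")
```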
Everything is based on the requirements and available resources. One of our clients decided that calling the AI so often takes too much time and money, and that doesn't work for him.
AI can give suggestions, not decisions. If you want decisions made and responsibility taken, use real people.
I tend to draw the line at automating the LLM to respond to things. If it's responding to some sort of external source, that source is usually consistent enough that I'd rather have the LLM create a script to parse the data and do that automatically. I've got a job search tool that I built recently using Claude Code. CC created scripts to scrape certain websites and scheduled them using native OS schedulers. The results get parsed and dropped into a SQLite database. No LLM is involved in the automated portion of this process. I've also got some general status scripts which push details about the current health of my servers and apps and alert me when job listings reach a defined threshold. At that point I use the LLM to look through the new jobs and categorize them based on work I'd find interesting, giving me a prioritized list.
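For what it's worth, the scheduled, LLM-free part of a pipeline like that can be tiny. A rough sketch, with a hypothetical JSON feed, table schema, and alert threshold standing in for the real scrapers:

```python
# Minimal sketch of the automated (non-LLM) half of such a pipeline:
# a scheduled script pulls listings from an external source into SQLite and
# fires an alert once enough unreviewed rows pile up. Feed URL, schema, and
# threshold are illustrative stand-ins, not the actual tool described above.
import json
import sqlite3
import urllib.request

DB = "jobs.db"
FEED_URL = "https://example.com/jobs.json"  # hypothetical structured feed
ALERT_THRESHOLD = 10                        # notify once this many listings await review

def fetch_listings() -> list[dict]:
    """Pull the current listings as a list of dicts with id/title/url keys."""
    with urllib.request.urlopen(FEED_URL, timeout=30) as resp:
        return json.load(resp)

def store_and_count_unreviewed(listings: list[dict]) -> int:
    """Insert anything new, then report how many rows haven't been reviewed yet."""
    conn = sqlite3.connect(DB)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS jobs "
        "(id TEXT PRIMARY KEY, title TEXT, url TEXT, reviewed INTEGER DEFAULT 0)"
    )
    conn.executemany(
        "INSERT OR IGNORE INTO jobs (id, title, url) VALUES (?, ?, ?)",
        [(job["id"], job["title"], job["url"]) for job in listings],
    )
    conn.commit()
    unreviewed = conn.execute("SELECT COUNT(*) FROM jobs WHERE reviewed = 0").fetchone()[0]
    conn.close()
    return unreviewed

def main() -> None:
    unreviewed = store_and_count_unreviewed(fetch_listings())
    if unreviewed >= ALERT_THRESHOLD:
        # Placeholder for whatever alert channel you use (email, push, etc.);
        # this is the point where the LLM categorization pass happens.
        print(f"{unreviewed} unreviewed listings waiting")

if __name__ == "__main__":
    main()  # run from cron / Task Scheduler; no LLM in this loop
```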
If all LLM tools disappeared tomorrow, all of my scripts and processes developed with an LLM will continue to work without hiccup. If Anthropic went out of business tomorrow, I'd lose nothing switching to another provider because I don't have to "trust" agentic operations in automated processes. They are always overseen by me and they are rarely creating things I couldn't have created myself. It's just much faster to iterate on it with these tools.
> If all LLM tools disappeared tomorrow, all of my scripts and processes developed with an LLM will continue to work without hiccup.
This is a really pragmatic philosophy and I think it's underappreciated. Using the LLM as a development accelerator rather than a runtime dependency gives the best of both worlds.