You still need to connect to Anthropic and obtain an authorization token.
The isolation here refers to the workspace. Since you run the CLI in a container, the process can only access what you have mapped inside. This is helpful if you want to avoid issues like this: https://hackaday.com/2025/07/23/vibe-coding-goes-wrong-as-ai...
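A minimal sketch of that workspace isolation, assuming a Linux host with Docker installed; the image name `claude-cli-sandbox` is a placeholder, not the project's actual published image:

```shell
# Map ONLY the current project directory into the container.
# The CLI process can read and write /workspace but nothing else
# on the host filesystem.
docker run --rm -it \
  -v "$PWD":/workspace \
  -w /workspace \
  claude-cli-sandbox   # assumed image name, for illustration only
```

Anything not explicitly mounted with `-v` stays invisible to the process, which is what limits the blast radius of a misbehaving agent.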
Ok. Thanks for the clarification. Still a good project, and many people like to use online services.
I prefer local models. Everything I use, or have used, with the local model could just as well be done online; there are no secrets here. The speed is more than acceptable on a low-end CPU+GPU.
I still use Perplexity sometimes for more complex questions.
Suggestion: use --network=host so you can control where it connects.
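A hedged sketch of what that suggestion might look like, assuming a Linux host with iptables; the image name is a placeholder. With `--network=host` the container shares the host's network namespace, so egress can be controlled by host-level firewall rules rather than Docker's own networking:

```shell
# Sketch only: image name and firewall policy are illustrative assumptions.
# --network=host makes the container use the host's network stack,
# so host firewall rules apply to the CLI's outbound traffic.
docker run --rm -it --network=host \
  -v "$PWD":/workspace -w /workspace \
  claude-cli-sandbox   # assumed image name

# Example host-side egress rule (illustrative): permit outbound HTTPS.
# sudo iptables -A OUTPUT -p tcp --dport 443 -j ACCEPT
```

Note the trade-off: host networking removes the container's network isolation entirely, which is exactly why the host firewall becomes the control point.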
But if it is "a completely isolated environment" why does it need to login and get a token? It defeats isolation.
This should work like any other model, the way we do it with Ollama: download a model and it runs strictly locally, with no network connections or tokens.
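For comparison, the Ollama workflow being described looks roughly like this (the model name is just an example; any locally available model works):

```shell
# Pull a model once; after this, inference needs no account,
# token, or network connection.
ollama pull llama3   # example model name

# Run it entirely on the local machine.
ollama run llama3 "Explain what this shell script does"
```

After the initial pull, the whole loop stays on the local machine, which is the property the comment is asking for.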
This would be ideal if Claude Code supported local models, but that is not the case right now.
This is a great way for security-conscious developers to start using Claude.