I can attest to everything. Using Tidewave MCP to give your agent access to the runtime via a REPL is a superpower, especially with Elixir being functional. It's able to proactively debug and get runtime feedback on your modular code as it's being written. It can also access the DB via your Ecto (ORM-like) modules. It's a perfect fit and an incredibly productive workflow.
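To make that concrete, here's the sort of call an agent with runtime access can run from IEx; MyApp.Repo and MyApp.Accounts.User are hypothetical stand-ins for whatever repo and schema a project actually defines:

    # Inside IEx, the agent can inspect the database through the project's
    # own Ecto modules rather than raw SQL.
    import Ecto.Query

    MyApp.Repo.aggregate(MyApp.Accounts.User, :count)
    MyApp.Repo.all(from u in MyApp.Accounts.User, where: u.inserted_at > ago(1, "day"))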
Which models are you using? I’ve had mixed luck with GPT 5.2.
I've been using Opus 4.5 via Claude Code
Great article that concretizes a lot of intuitions I've had while vibe coding in Elixir.
We don't 100% AI it but this very much matches our experience, especially the bits about defensiveness.
Going to do some testing this week to see if a better agents file can't improve some of the author's testing struggles.
> In Elixir tests, each test runs in a database transaction that rolls back at the end. Tests run async without hitting each other. No test data persists.
And it confuses Claude.
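For readers outside the ecosystem: the behaviour described in the quote comes from the Ecto SQL sandbox, where each test checks out a connection wrapped in a transaction that is rolled back when the test exits. A minimal sketch, assuming an app with a repo named MyApp.Repo:

    # config/test.exs — the sandbox pool wraps every checkout in a transaction
    import Config
    config :my_app, MyApp.Repo, pool: Ecto.Adapters.SQL.Sandbox

    # test/support/data_case.ex — every test (async or not) gets its own
    # sandboxed connection, so inserted data never leaks between tests
    defmodule MyApp.DataCase do
      use ExUnit.CaseTemplate

      setup tags do
        pid = Ecto.Adapters.SQL.Sandbox.start_owner!(MyApp.Repo, shared: not tags[:async])
        on_exit(fn -> Ecto.Adapters.SQL.Sandbox.stop_owner(pid) end)
        :ok
      end
    end

A test module can then use MyApp.DataCase with async: true and insert whatever data it likes; nothing persists past the test.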
This way of running tests is also what Rails does, and AFAIK Django too. Tests are isolated and can be run in random order. Actually, Rails randomizes the order, so if there are tests that for any reason depend on the order of execution, they will eventually fail. To help debug those cases, it prints the seed, which can be used to rerun those tests deterministically, including the calls to methods returning random values.
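ExUnit behaves the same way: test order is shuffled on every run, the seed is printed at the end, and passing it back reproduces the exact order. For reference:

    # test/test_helper.exs
    ExUnit.start()
    # Each run ends with a line like "Randomized with seed 843912".
    # Reproduce that exact order with:  mix test --seed 843912
    # or pin it in this file with:      ExUnit.configure(seed: 843912)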
I thought that this is how all test frameworks work in 2026.
I did too, and I've had a challenging time convincing people outside of those ecosystems that this is possible and reasonable, and that we've been doing it for over a decade.
It seems like the "100% vibe coded" claim is an exaggeration, given that Claude fails at certain tasks.
The new generation of code assistants are great. But when I dogmatically try to only let the AI work on a project, it usually fails and shoots itself in its proverbial foot.
If this is indeed 100% vibe coded, then there is some magic I would love to learn!
It's interesting that Claude is able to effectively write Elixir, even if it isn't super idiomatic without established styles in the codebase, considering Elixir is a pretty niche and relatively recent language.
What I'd really like to see, though, is experiments on whether you can few-shot prompt an AI to in-context-learn a new language with any level of success.
I've tried different LLMs with various languages so far: Python, C++, Julia, Elixir, and JavaScript.
The SOTA models do a great job with all of them, but if I had to rank the capabilities for each language, it would look like this:
JavaScript, Julia > Elixir > Python > C++
That's just a sample size of one, but I suspect that for all but the most esoteric programming languages there is more than enough code in the training data.
I would argue with the "effectively" point.
It's certainly helpful, but it has a tendency to go for very non-idiomatic patterns (like using exceptions for control flow).
Plus, it has issues which I assume are an effect of reinforcement learning: it struggles with letting things crash and tends to silence errors that should never fail silently.
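To illustrate the gap, here is a hedged sketch contrasting the two styles; the module, function, and schema names are made up:

    # The defensive style the model tends to produce: rescue everything and
    # quietly return nil, hiding the failure from callers and from the logs.
    defmodule Defensive do
      def fetch_user(id) do
        try do
          MyApp.Repo.get!(MyApp.User, id)
        rescue
          _ -> nil
        end
      end
    end

    # Idiomatic Elixir: return tagged tuples and let callers pattern-match,
    # while anything truly unexpected is allowed to crash and be restarted
    # by a supervisor instead of being swallowed.
    defmodule Idiomatic do
      def fetch_user(id) do
        case MyApp.Repo.get(MyApp.User, id) do
          nil -> {:error, :not_found}
          user -> {:ok, user}
        end
      end
    end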
You can accurately describe Elixir syntax in a few paragraphs, and the semantics are pretty straightforward. I'd imagine doing complex supervision trees falls flat.
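For anyone unfamiliar with the term: a supervision tree is just a process that starts its children and restarts them when they crash. Even a small one looks roughly like this (the worker names are hypothetical):

    defmodule MyApp.Application do
      use Application

      @impl true
      def start(_type, _args) do
        children = [
          MyApp.Repo,                             # the Ecto repo process
          MyApp.Worker,                           # a hypothetical GenServer
          {Task.Supervisor, name: MyApp.TaskSup}  # supervised ad-hoc tasks
        ]

        # :one_for_one restarts only the child that crashed
        Supervisor.start_link(children, strategy: :one_for_one, name: MyApp.Supervisor)
      end
    end

The hard part the comment alludes to is choosing process boundaries and restart strategies across nested supervisors, which is design work rather than syntax.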
Unless that new language has truly esoteric concepts, it's trivial to pattern-match it to regular programming constructs (loops, functions, ...)