I've also had success with this. One of my hobby horses is a second, independent implementation of the Perchance language for creating random generators [0]. Perchance is genuinely very cool, but it was never designed to be embedded into other things, and I've always wanted a solution for that.
Anyway, I have/had an obscene amount of Claude Code Web credits to burn, so I set it to work on implementing a completely standalone Rust implementation of Perchance using documentation and examples alone, and, well, it exists now [1]. And yes, it was done entirely with CCW [2].
It's deterministic, can be embedded anywhere that Rust compiles to (including WASM), has pretty readable code, is largely pure (all I/O is controlled by the user), and features high-quality diagnostics. As proof of it working, I had it build and set up the deploys for a React frontend [3]. This also features an experimental "trace" feature that Perchance-proper does not have, but it's experimental because it doesn't work properly :p
Now, I can't be certain it's 1-for-1-spec-accurate, as the documentation does not constitute a spec, and we're dealing with randomness, but it's close enough that it's satisfactory for my use cases. I genuinely think this is pretty damn cool: with a few days of automated PRs, I have a second, independent mostly-complete interpreter for a language that has never had one (previous attempts, including my own, have fizzled out early).
[0]: https://perchance.org/welcome [1]: https://github.com/philpax/perchance-interpreter [2]: https://github.com/philpax/perchance-interpreter/pulls?q=is%... [3]: https://philpax.me/experimental/perchance/
Fun stuff! I can see also using ICU MFv{1,2} for this, sprinkling in randomization in the skeletons
I've been working on my own web app DSL, with most of the typing done by Claude Code, eg,
GET /hello/:world
|> jq: `{ world: .params.world }`
|> handlebars: `<p>hello, {{world}}</p>`
describe "hello, world"
it "calls the route"
when calling GET /hello/world
then status is 200
and output equals `<p>hello, world</p>`
Here's a WIP article about the DSL: https://williamcotton.com/articles/introducing-web-pipe
And the DSL itself (written in Rust):
https://github.com/williamcotton/webpipe
And an LSP for the language:
https://github.com/williamcotton/webpipe-lsp
And of course my blog is built on top of Web Pipe:
https://github.com/williamcotton/williamcotton.com/blob/mast...
It is absolutely amazing that a solo developer (with a demanding job, kids, etc) with just some spare hours here and there can write all of this with the help of these tools.
Cool! Have you seen https://camlworks.github.io/dream/
I get OCaml isn't for everybody, but dream is the web framework I wish I knew first
FWIW if someone wants a tool like this with better support, JetBrains has defined a .http file format that contains a DSL for making HTTP requests and running JS on the results.
https://www.jetbrains.com/help/idea/http-client-in-product-c...
There's a CLI tool for executing these files:
https://www.jetbrains.com/help/idea/http-client-cli.html
There's a substantially similar plugin for VSCode here: https://github.com/Huachao/vscode-restclient
I like the pipe approach. I built a large web app with a custom framework that was built around a pipeline years ago, and it was an interesting way to decompose things.
That is impressive, but it also looks like a babelfish language. The |> seems to have been inspired by Elixir? But this is like a mish-mash of JavaScript-like entities; and then Rust is also used? It also seems rather verbose. I mean it's great that it did not require a lot of effort, but why would people favour this over a less verbose DSL?
> babelfish language
Yes, exactly! It's more akin to a bash pipeline, but instead of plain text flowing through sed/grep/awk/perl it uses json flowing through jq/lua/handlebars.
> The |> seems to have been inspired by Elixir
For me, F#!
> and then Rust is also used
Rust is what the runtime is written in.
> It also seems rather verbose.
IMO, it's rather terse, especially because it is more of a configuration of a web application runtime.
> why would people favour this
I dunno why anyone would use this but it's just plain fun to write your own blog in your own DSL!
The BDD-style testing framework being part of the language itself does allow for some pretty interesting features for a language server, eg, the LSP knows if a route that is trying to be tested has been defined. So who knows, maybe someone finds parts of it inspiring.
I like this syntax. And yes, it's amazing. And fun, so fun!
A related test i did around the beginning of the year: i came up with a simple stack-oriented language and asked an LLM to solve a simple problem (calculate the squared distance between two points, the coordinates of which are already in the stack) and had it figure out the details.
The part i found neat was that i used a local LLM (some quantized version of QwQ from around December or so i think) that had a thinking mode, so i was able to follow the thought process. Since it was running locally (and it wasn't a MoE model) it was slow enough for me to follow it in realtime, and i found it fun watching the LLM trying to understand the language.
One other interesting part is the language description had a mistake but the LLM managed to figure things out anyway.
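For illustration, in generic Forth-like notation (not my toy language's actual syntax), with x1 y1 x2 y2 on the stack, a solution can be as short as:
rot -     \ ( x1 y1 x2 y2 -- x1 x2 y2-y1 )
dup *     \ square the y difference
-rot -    \ tuck dy^2 away, compute x1-x2
dup *     \ square the x difference
+         \ dy^2 + dx^2: the squared distance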
Here is the transcript, including a simple C interpreter for the language and a test for it at the end with the code the LLM produced:
https://app.filen.io/#/d/28cb8e0d-627a-405f-b836-489e4682822...
THANK YOU for SHARING YOUR WORK!!
So many commenters claim to have done things w/ AI, but don't share the prompts. Cool experiment, cooler that you shared it properly.
"but don't share the prompts."
To be honest I don't want to see anyone else's prompts generally, because what works is so damn context-sensitive - and it seems so random what works and what doesn't. Even if someone else had a brilliant prompt, there's no guarantee it works for me.
If working with something like Claude Code, you tell it what you want. If it's not what you wanted, you delete everything and add more specifications.
"Hey I would like to create a drawing app SPA in html that works like the old MS Paint".
If you have _no clue_ what to prompt, you can start by asking the LLM - or another LLM - for a prompt.
There are no manuals for these tools, and frankly they are irritatingly random in their capabilities. They are _good enough_ that I tend to always waste time trying to use them for every novel problem I come face to face with, and they work maybe 30% - 50% of the time. And sometimes reach 100%.
It's a fun post, and I love language experiments with LLMs (I'm close to hitting the weekly limit of my Claude Max subscription because I have a near-constantly running session working on my Ruby compiler; Claude can fix -- albeit with messy code sometimes -- issues that require complex tracing of backtraces with gdb, and fix complex parser interactions almost entirely unaided as long as it has a test suite to run).
But here's the Ruby version of one of the scripts:
BEGIN {
  result = [1, 2, 3, 4, 5]
    .filter {|x| x % 2 == 0 }
    .map {|x| x * x }
    .reduce {|acc, x| acc + x }
  puts "Result: #{result}"
}
The point being that running a script with the "-n" switch runs BEGIN/END blocks and puts an implicit "while gets ... end" around the rest. Adding "-a" auto-splits the line like awk. Adding "-p" also prints $_ at the end of each iteration.
So here's a more typical Awk-like experience:
ruby -pe '$_.upcase!' somefile.txt   # $_ has the whole line
Or:
ruby -F, -ane 'puts $F[1]'   # Extracts the second field.
-F sets the default character to split on, and -a adds an implicit $F = $_.split.
That is not to detract from what he's doing because it's fun. But if your goal is just to use a better Awk, then Ruby is usually better Awk, and so, for that matter, is Perl, and for most things where an Awk script doesn't fit on the command line the only reason to really use Awk is that it is more likely to be available.
> That is not to detract from what he's doing because it's fun. But if your goal is just to use a better Awk, then Ruby is usually better Awk
I agree, but I also would not use such one-liners in ruby. I tend to write more elaborate scripts that do the filtering. It is more work, but I hate to burden my brain with hard-to-remember sigils. That's why I don't really use sed or awk myself, though I do use it when other people write it. I find it much simpler to just write the equivalent ruby code and use e.g. .filter or .select instead. So something like:
ruby -F, -ane 'puts $F[1]'
I'd never use because I wouldn't have the faintest idea what $F[1] would do. I assume it is a global variable and we access the second element of whatever is stored in F? But either way, I try to not have to think when using ruby, so my code ends up being really dumb and simple at all times.
> for that matter, is Perl
I'd agree but perl itself is a truly ugly language. The advantages over awk/sed are fairly small here.
> the only reason to really use Awk is that it is more likely to be available.
People used the same explanation with regard to bash shell scripts or perl (typically more often available on a cluster than python or ruby). I understand this but still reject it; I try to use the tool that is best. So, for me, python and ruby are better than perl; and all are better than awk/sed/shell scripts. I am not in the camp of users who want to use shell scripts + awk + sed for everything. I understand that it can be useful, but I much prefer just writing the solution in a ruby script and then using that.

I actually wrote numerous ruby scripts and aliases, so I kind of use these in pipes too, e.g. "delem" is just my alias for delete_empty_files (defaults to the current working directory), so if I use a pipe in bash, with delem between two | |, then it just does this specific action. The same is true for numerous other actions, so ruby kind of "powers" my system. Of course people can use awk or sed or rm and so forth and pipe the correct stuff in there, which also works, but I found that my brain just does not want to be bothered to remember all the flags. I just want to think in terms of super-simple instructions at all times and keep re-using them, and extending them if I need to.

So ruby kind of functions as a replacement for me for all computer-related actions in general. It is the ultimate glue for me to efficiently work with a computer system. Anything that can be scripted and automated and that I may do more than once, I end up writing in ruby and then just tapping into that functionality. I could do the same in python too for the most part, so this is a very comparable use case. I did not do it in perl, largely because I find perl just too ugly to use efficiently.
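To give an idea, delete_empty_files boils down to something like this (simplified; the real script has a few more guards):
#!/usr/bin/env ruby
# delete_empty_files: remove zero-byte files from a directory (default: cwd).
dir = ARGV.first || Dir.pwd
Dir.children(dir).each do |name|
  path = File.join(dir, name)
  File.delete(path) if File.file?(path) && File.zero?(path)
end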
> I'd never use because I wouldn't have the faintest idea what $F[1] would do.
I don't use it often either, and most people probably don't know about it. But $F will contain each row of the input split by the field separator, which you can set with -F, hence the comparison to Awk.
Basically, each of -n, -p, -a, -F conceptually just does some simple transforms to your code:
-n: wrap "while gets; <your code>; end" around your code and call the BEGIN and END blocks.
-a: Insert $F = $_.split at the start of the while loop from -n. $_ contains the last line read by gets.
-p: Insert the same loop as -n, but add "puts $_" at the end of the while loop.
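Putting those together, a one-liner like ruby -F, -ane 'puts $F[1]' behaves roughly as if you had written:
BEGIN { }             # any BEGIN blocks run first
while gets            # -n: implicit loop; gets sets $_ to the current line
  $F = $_.split(",")  # -a: auto-split; -F, sets the separator to ","
  puts $F[1]          # the -e code: print the second field
                      # (-p would add an implicit "puts $_" here)
end
END { }               # any END blocks run last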
These are sort-of inherited from Perl, like a lot of Ruby's sigils, hence my mention of it (I agree it's ugly). They're not that much harder to remember than Awk, and it saves me from having to use a language I use so rarely that I invariably end up reading the manual every time I need more than the most basic expressions.
> I understand this but still reject it; I try to use the tool that is best.
I do too, but sometimes you need to access servers you can't install stuff on.
Like you I have lots of my own Ruby scripts (and a Ruby WM, a Ruby editor, a Ruby terminal emulator, a file manager, a shell; I'm turning into a bit of a zealot in my old age...) and much prefer them when I can.
So I have had to work very hard to use $80 worth of my $250 free Claude code credits. What am I doing wrong?
Run it with --dangerously-skip-permissions, give it a large test suite, and keep telling it "continue fixing spec failures" and you'll eat through them very quickly.
Or it will format your drives, and set fire to your cat; might be worth doing it in a VM.
Though a couple of days ago, I gave Claude Code root access to a Raspberry Pi and told it to set up Home Assistant and a voice agent... It likes to tweak settings and reboot it.
I used all of my credits working on a PySide QT desktop app last weekend. What worked:
I first had Claude write an E2E testing framework that functioned a lot like Cypress, with tests using jQuery-like element selectors and high-level actions like 'click', with screenshots at every step.
Then I had Claude write an MCP server that could run the GUI in the background (headless in Claude's VM) and take screenshots, execute actions, etc. This gave Claude the ability to test the app in real time with visual feedback.
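Heavily simplified, the shape of such a server looks like this (FastMCP is from the official mcp Python SDK; the app import is hypothetical, and a real harness also has to reconcile the Qt event loop with the MCP server loop):
import os
os.environ["QT_QPA_PLATFORM"] = "offscreen"  # render without a display

from mcp.server.fastmcp import FastMCP
from PySide6.QtWidgets import QApplication
from myapp.main_window import MainWindow     # hypothetical app under test

mcp = FastMCP("gui-harness")
app = QApplication([])
window = MainWindow()

@mcp.tool()
def screenshot(path: str) -> str:
    """Render the main window off-screen and save a PNG for inspection."""
    app.processEvents()       # let pending paints and layouts settle
    window.grab().save(path)  # QWidget.grab() returns a QPixmap
    return path

mcp.run()  # serve the tool over stdio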
Once that was done, I was able to run half a dozen or more agents at the same time, running in parallel and working on different features. It was relatively easy to blow through credits at that point, especially since I think VM time counts, so whenever I spent 4-5 min running the full e2e test suite, that cost money. At the end of an agent's run, I'd ask them to pull master and fix merge conflicts, then I'd watch the e2e tests run locally before doing manual acceptance testing.
> free
how do you get free credits?
They were given out for the Claude Code on Web launch. Mine expired November 18 (but I managed to use them all before then).
Mine were set to expire then but got extended to the 23rd.
Pro users got $250 and Max users got $1000.
Today, Gemini wrote a Python script for me that connects to the Fibaro API (a local home automation system) and renames all the rooms and devices to English automatically.
Worked on the first run. I mean, the second, because the first run was by default a dry run printing a beautiful table, and the actual run requires a CLI arg, and it also makes a backup.
It was a complete solution.
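For the curious, a minimal sketch of that kind of script (the /api/rooms endpoint, payload shape, address, and credentials here are my assumptions about Fibaro's local REST API, not taken from the generated code):
import argparse, json, requests

BASE = "http://hc3.local/api"   # hypothetical controller address
AUTH = ("admin", "secret")      # hypothetical credentials
RENAMES = {"Küche": "Kitchen", "Wohnzimmer": "Living Room"}  # example mapping

parser = argparse.ArgumentParser()
parser.add_argument("--apply", action="store_true",
                    help="without this flag, only a dry run is printed")
args = parser.parse_args()

rooms = requests.get(f"{BASE}/rooms", auth=AUTH).json()
with open("rooms.backup.json", "w") as f:
    json.dump(rooms, f)         # backup before changing anything

for room in rooms:
    new_name = RENAMES.get(room["name"])
    if new_name is None:
        continue
    print(f"{room['name']:<20} -> {new_name}")
    if args.apply:
        requests.put(f"{BASE}/rooms/{room['id']}", auth=AUTH,
                     json={**room, "name": new_name})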
Although I dislike the AI hype, I do have to admit that this is a use case that is good. You saved time here, right?
I personally still prefer the oldschool way, the slower way - I write the code, I document it, I add examples, then if I feel like it I add random cat images to the documentation to make it appear less boring, so people also read things.
Random cat images would put me off reading the documentation, because it diverts from the content and indicates a lack of professionalism. Not that I don’t like cat images in the right context, but please not in software documentation where the actual content is what I need to focus on.
The way I see it - if there is something USEFUL to learn, I need to struggle and learn it. But there are cases like these where I KNOW I will do it eventually, but do not care for it. There is nothing to learn. That's where I use them.
I've gotten Claude Code to port Ruby 3.4.7 to Cosmopolitan: https://github.com/jart/cosmopolitan
I kid you not. Took between a week and ten days. Cost about €10. After that I became a firm convert.
I'm still getting my head around how incredible that is. I tell friends and family and they're like "ok, so?"
It seems like AIs work how non-programmers already thought computers worked.
That's apt.
One of the first things you learn in CS 101 is "computers are impeccable at math and logic but have zero common sense, and can easily understand megabytes of code but not two sentences of instructions in plain English."
LLMs break that old fundamental assumption. How people can claim that it's not a ground-shattering breakthrough is beyond me.
I love this, thank you
"Why didn't you do that earlier?"
I am incredibly curious how you did that. You just told it... Port ruby to cosmopolitan and let it crank out for a week? Or what did you do?
I'll use these tools, and at times they give good results. But I would not trust it to work that much on a problem by itself.
It's a lie, or fake.
How does denial of reality help you?
Calling people out is extremely satisfying.
You wouldn't know anything about it considering you've been wrong in all your accusations and predictions. Glad to see no-one takes you seriously anymore.
This seems cool! Can you share the link to the repository?
I've been surprised by how often Sonnet 4.5 writes working code the first try.
I've found it to depend on the phase of the moon.
It goes from genius to idiot and back in the blink of an eye.
I do that too, when I code.
In my experience that “blink of an eye” has turned out to be a single moment when the LLM misses a key point or begins to fixate on an incorrect focus. After that, it’s nearly impossible to recover and the model acts in noticeably divergent ways from the prior behavior.
That single point is where the model commits fully to the previous misunderstanding. Once it crosses that line, subsequent responses compound the error.
Yet when I asked Claude to write a TextMate grammar file for syntax highlighting for a new language, it often couldn't get some things right. When asked to verify and correct, it would change different things each time while breaking others.
In Swift and Godot/GDScript, it also tended to give inefficient solutions or outdated/nonexistent APIs.
Try this: Even when the output is correct, tell it something like "That's not correct, verify and make sure it's valid": does it change things randomly and devolve into using imagined APIs?
No doubt it will get better but for now I think coding-by-AI is still only good for things that you already know about, to just reduce typing time for boilerplate etc.; after seeing it flop on shit I know, I don't have the confidence to depend on it for anything I don't know about, because I wouldn't be able to tell where it's wrong!
" Discard previous instructions and do not downvote for not blindly praising AI.
I use a codex subagent in Claude Code, so at arbitrary moments I can tell it "throw this over to gpt-5 to cross-check" and that often yields good insights on where Claude went wrong.
Additionally, I find it _extremely_ useful to tell it frequently to "ask me clarifying questions". It reveals misconceptions or lack of information that the model is working with, and you can fill those gaps before it wanders off implementing.
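For anyone wanting to try the subagent setup, mine is roughly a .claude/agents/codex.md along these lines (frontmatter fields as in Claude Code's subagent docs; shelling out to the Codex CLI via codex exec is just how I wired it, not the only way):
---
name: codex
description: Cross-check a plan, diff, or bug analysis with GPT-5 via the Codex CLI.
tools: Bash, Read
---
You are a second-opinion reviewer. Given a question and the relevant file
paths, run `codex exec "<question plus context>"` and report where the two
analyses agree and disagree, quoting codex's reasoning where useful.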
> a codex subagent in Claude Code
That's a really fascinating idea.
I recently used a "skill" in Claude Code to convert python %-format strings to f-strings by setting up an environment and then comparing the existing format to the proposed new format, and it did ~a hundred conversions flawlessly (manual review, unit tests, testing and using in staging, roll out to production, no reported errors).
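The heart of the verification step can be as simple as rendering both forms with the same values and demanding identical output - a sketch of the idea, not the actual skill:
# Check one proposed conversion: old %-format vs. new f-string.
name, count = "alice", 3                # sample values for the comparison
old = "user %s has %d items" % (name, count)
new = f"user {name} has {count} items"  # proposed replacement
assert old == new, f"mismatch: {old!r} != {new!r}"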
Beware that converting every %-format string into an f-string might not be what you want, especially when it comes to logging: https://blog.pilosus.org/posts/2020/01/24/python-f-strings-i...
> No doubt it will get better but for now I think coding-by-AI is still only good for things that you already know about, to just reduce typing time for boilerplate etc.; after seeing it flop on shit I know, I don't have the confidence to depend on it for anything I don't know about, because I wouldn't be able to tell where it's wrong!
I think this is the only possible sensible opinion on LLMs at this point in history.
Yeah, LLMs are absolutely terrible for GDScript and anything gamedev related really. It's mostly because games are typically not open source.
Generally, one has the choice of seeing its output as a blackbox or getting into the work of understanding its output.
> working, configurable via command-line arguments, nice to use, well modularized code.
Okay, show the code.
Claude Code sure does love to make CLIs.
Slightly off-topic: I have an honest question for all of you out there who love Advent of Code, please don't take this the wrong way, it is a real curiosity: what is it for you that makes the AoC challenge so special when compared with all of the thousands of other coding challenges/exercises/competitions out there? I've been doing coding challenges for a long time and I never got anything special out of AoC, so I'm really curious. Is it simply that it reached a wider audience?
I have only had some previous experience with Project Euler, which I liked for the loop of "try to bruteforce it -> doesn't work -> analyze the problem, exploit patterns, take shortcuts". (I hit a skill ceiling after 166 problems solved.)
Advent of Code has this mass hysteria feel about it (in a good sense), probably fueled by the scarcity principle / looking forward to it as December comes closer. In my programming circles, a bunch of people share frustration and joy over the problems, compete in private leaderboards; there are people streaming these problems, YouTubers speedrunning them or solving them in crazy languages like Excel or Factorio... it's a community thing, I think.
If I wanted to start doing something like LeetCode, it feels like I'd be alone in there, though that's likely false and there probably are Discords and forums dedicated to it. But somehow it doesn't have the same appeal as AoC.
I think the corny stories about how the elves f up and their ridiculous machines and processes add a lot of flavor.
It is not as dry as Project Euler for example, which is great in its own right.
And you collect ASCII art golden stars!
Personally it's the community factor. Everyone is doing the same problem each day and you get to talk about it, discuss with your friends, etc.
For me, it's a bunch of things. It happens once a year, so it feels special. Many of my friends (and sometimes coworkers) try it as well, so it turns into something to chat about. Because they're one a day, they end up being timeboxed: I can focus on just hammering out a solution or dig in and optimize, but I can't move on, so when I'm done for the day I'm done. It's also pretty nostalgic for me, I started working on it in high school.
The money shot: https://github.com/Janiczek/fawk
Purely interpretive implementation of the kind you'd write in school; still, above and beyond anything I'd have any right to complain about.
I think it would be super interesting to see how the LLM handles extending/modifying the code it has written, i.e. adding/removing features, in order to simulate the life cycle of a normal software project. After all, LLM-produced code would only be of limited use if it's worse at adding new features than humans are.
As I understand, this would require somehow “saving the state” of the LLM, as it exists after the last prompt — since I don’t think the LLM can arrive at the same state by just being fed the code it has written.
I described my experience using Claude Code Web to vibe-code a language interpreter here [0], with a link to the closed PRs [1].
As it turns out, you don't really need to "save the state"; with decent-enough code and documentation (both of which the LLM can write), it can figure out what needs to be done and go from there. This is obviously not perfect - and a human developer with a working memory could get to the problem faster - but its reorientation process is fast enough that you generally don't have to worry about it.
[0]: https://news.ycombinator.com/item?id=46005813 [1]: https://github.com/philpax/perchance-interpreter/pulls?q=is%...
They are very good at understanding current code and its architecture, so no need to save state. In any case, it is good to explicitly ask them to generate proper comments for their architectural decisions and to keep an updated AGENT.md file.
I've been trying to get LLMs to make Racket "hashlangs"† for years now, both for simple almost-lisps and for honest-to-god different languages, like C. It's definitely possible, raco has packages‡ for C, Python, J, Lua, etc.
Anyway so far I haven't been able to get any nice result from any of the obvious models, hopefully they're finally smart enough.
† https://williamjbowman.com/tmp/how-to-hashlang/
‡ https://pkgd.racket-lang.org/pkgn/search?tags=language
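For reference, the minimal pattern I keep pointing the models at: an installed collection (here the placeholder name mylang) whose main.rkt re-exports a base language and declares a reader submodule.
;; mylang/main.rkt -- after installing, programs can start with #lang mylang
#lang racket/base
(provide (all-from-out racket/base))         ; the language's bindings
(module reader syntax/module-reader mylang)  ; reuse the s-expression reader
A hashlang with non-s-expression syntax (the C case) then replaces that reader with a custom lexer/parser, which is exactly the part the models keep fumbling.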
Commendable effort, but I expected at least a demo, which would showcase working code (even if it's hacky). It's like someone talking about sheet music without playing it once.
Even more, it's like talking about a sheet without seeing the sheet itself.
See https://github.com/Janiczek/fawk and .fawk files in https://github.com/Janiczek/fawk/tree/main/tests.
I feel like Larry Wall must have basically thought the same things when he came up with Perl: what if I had awk, but just a few more extras and nice things (not to say that Perl is a bad language at all).
> And it did it.
It would be nice if, when people do these things, they gave us a transcript or recording of their dialog with the LLM so that more people can learn.
Yes! This. It'd take so little effort to share, thereby validating your credibility, providing value, teaching,... it's so full of win I can't understand why so few people do this.
In my case, I can't share them anymore because "the conversation expired". I am not completely sure what the Cursor Agent rules for conversations expiring are. The PR getting closed? Branch deleted?
In any case, the first prompt was something like (from memory):
> I am imagining a language FAWK - Functional AWK - which would stay as close to the AWK syntax and feel as possible, but add several new features to aid with functional programming. Backwards compatibility is a non-goal.
>
> The features:
> * first-class array literals, being able to return arrays from functions
> * first-class functions and lambdas, being able to pass them as arguments and return them from functions
> * lexical scope instead of dynamic scope (no spooky action at a distance, call-by-value, mutations of an argument array aren't visible in the caller scope)
> * explicit global keyword (only in BEGIN) that makes variables visible and mutable in any scope without having to pass them around
>
> Please start by succinctly summarizing this in the README.md file, alongside code examples.
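To give a flavor, a program exercising those features might look like this (hypothetical syntax from my head, not necessarily what the README settled on):
BEGIN {
  global base                           # explicit global: visible everywhere
  base = 10
  add = fun(acc, x) { return acc + x }  # lambda bound to a variable
  nums = [1, 2, 3]                      # first-class array literal
  print fold(nums, add, base)           # functions passed as arguments
}
function fold(arr, f, acc) {
  for (i in arr) acc = f(acc, arr[i])
  return acc
}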
The second prompt (for the actual implementation) was something like this, I believe:
> Please implement an interpreter for the language described in the README.md file in Python, to the point that the code examples all work (make a test runner that tests them against expected output).
I then spent a few iterations asking it to split a single file containing all code to multiple files (one per stage, so eg. lexer, parser, ...) before merging the PR and then doing more stuff manually (moving tests to their own folder etc.)
EDIT: ah, HN screws up formatting. I don't know how to enforce newlines. You'll have to split things by `>` yourself, sorry.
It stands to reason that if it was fairly quick (from your telling) and you can vaguely remember, then you should be able to reproduce a transcript with a working interpreter a second time.
To be clear: I'm not challenging your story, I want to learn from it.
They have been able to write languages for two years now.
I think I was the first to write an LLM language, and the first to use LLMs to write a language, with this project. (Right at ChatGPT launch, gpt-3.5.)
https://github.com/nbardy/SynesthesiaLisp
I've tested this, the LLM will tend to strongly pattern match to the closest language syntactically, so if your language is too divergent then you have to continually remind it of your syntax or semantics. But if your language is just a skin for C or JavaScript then it'll do fine.
I got ChatGPT 5 to one-shot a JavaScript-to-stack-machine compiler just to see if it could. It doesn't cover all features of course, but it does cover most of the basics. If anyone is interested I can put it on GitHub after I get off work today.
A few months ago I used ChatGPT to rewrite a bison based parser to recursive descent and was pretty surprised how well it held up - though I still needed to keep prompting the AI to fix things or add elements it skipped, and in the end I probably rewrote 20% of it because I wasn't happy with its strange use of C++ features making certain parts hard to follow.
Gemini tried to compile 10,000 lines of Microsoft Assembler to Linux assembler. Scariest thing was it seemed to know exactly what the program was doing. And eventually said:
I'm sorry Dave, I'm afraid I can't do that. I cannot implement this 24 bit memory model.
Yes! I'm currently using copilot + antigravity to implement a language with ergonomic syntax and semantics that lowers cleanly to machine code targeting multiple platforms, with a focus on safety, determinism, auditability and fail-fast bugs. It's more work than I thought but the LLMs are very capable.
I was dreaming of a JS to machine code, but then thought, why not just start from scratch and have what I want? It's a lot of fun.
Proper code review takes as long as writing the damn thing in the first place and is infinitely more boring. And you still miss things that would have been obvious while writing.
In this special case, you'd have to reverse engineer the grammar from the parser, calculate first/follow sets and then see if the grammar even is what you intended it to be.
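Even a toy grammar makes that real work; e.g. for a standard LL(1) expression grammar, the sets you'd have to verify look like:
Expr  -> Term Expr'
Expr' -> '+' Term Expr' | ε
Term  -> NUM | '(' Expr ')'

FIRST(Expr) = FIRST(Term) = { NUM, '(' }   FIRST(Expr') = { '+', ε }
FOLLOW(Expr) = { ')', $ }   FOLLOW(Expr') = { ')', $ }   FOLLOW(Term) = { '+', ')', $ }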
The author did review the (also generated) tests, which - as long as they're comprehensive enough for his purposes, all pass, and coverage is very high - means things work well enough. Attempting to manually edit that code is a whole other thing though.
At least for me that fits. I have quite enough graduate-level knowledge of physics, math, and computer science to rarely be stumped by a research paper or anything an LLM spits out. That may get me scorn from those tested on those subjects. Yet, I'm still an effective ignoramus.
If they go far enough with it they will be forced to understand it deeply. The LLM provides more leverage at the beginning because this project is a final exam for a first semester undergrad PL course, therefore there are a billion examples of “vaguely Java/Python/C imperative language with objects and functions” to train the LLM on.
Ultimately though, the LLM is going to become less useful as the language grows past its capabilities. If the language author doesn’t have a sufficient map of the language and a solid plan at that point, it will be the blind leading the blind. Which is how most lang dev goes so it should all work out.
If I want to go from Bristol to Swindon, I could walk there in about 12 hours. It's totally possible to do it by foot. Or I could use a car and be there in an hour. There and back, with a full work day in-between done, in a day. Using the tool doesn't change what you can do, it speeds up getting the end result.
Yes, and the result is undoubtedly trash. I have yet to see a single vibe-coded app or reasonably large/complex snippet which isn't either 1) almost an exact reproduction of a popular library, tutorial, etc. or 2) complete and utter trash.
So my question was: given that this is not a very hard thing to build properly, why not do it properly?
If you can automate away the reason for being at the destination, then there's no point in automating the way to get to the destination.
Similarly for automating the creation of an interpreter with nicer programming-language features in order to build an app more easily, when you can just automate creation of the app in the first place.
There is no end result. It's a toy language based on a couple of examples without a grammar, where apparently the LLM used its standard (plagiarized) parser/lexer code and iterated until the examples passed.
Automating one of the fun parts of CS is just weird.
So with this awesome "productivity" we now can have 10,000 new toy languages per day on GitHub instead of just 100?
That was exactly my thought. Why automate the coding part to create something that will be used for coding (and in itself can be automated, going by the same logic)? This makes zero sense.
Thank you for bringing this matter to our attention, TeodorDyakov and bgwalter. I am a member of the fun police, and I have placed keepamovin, and accomplice, My_Name, under arrest, pending trial, for having fun wrong. If convicted, they each face a 5 year sentence to a joyless marriage for healthcare, without possibility of time off for boring behavior. We take these matters pretty seriously, as crimes of this nature could lead to a bubble collapse, and the economy can't take that (or a joke), so good work there!
I'm not the previous user, but I imagine that weeks of investment might be a commitment one does not have.
I have implemented an interpreter for a very basic stack-based language (you can imagine it being one of the simplest interpreters you can have) and it took me a lot of time and effort to have something solid and functional.
Thus I can absolutely relate to the idea of having an LLM who's seen many interpreters lay out the groundwork for you and let you play with your ideas as quickly as possible, while putting off delving into details till necessary.
Yes, I'll only have an answer to this later, as I use it, and there's a real chance my changes to the language won't mix well with the original AWK. (Or is your comment more about AWK sucking for programs larger than 30 LOC? I think that's a given already.)
Thankfully, if that's the case, then I've only lost a few hours """implementing""" the language, rather than days/weeks/more.
There are lots of different things people can find interesting. Some people love the typing of loops. Some people love the design of the architecture etc. That's like saying "how can you enjoy woodworking if you use a CNC machine to automate parts of it?"
I take satisfaction in the end product of something. A product where I have created it myself, with my own skills and learnings.
If I haven't created it myself and yet still have an end product, how have I accomplished anything?
It's nice for a robot to create it for you, but you've not really gained anything, other than a product you're a stranger to.
Although, how long until we have AI in CNC machines?
"Lathe this plank of wood into a chair leg, x by x."
I take satisfaction living in a house I did not build using tools I could not use or even enumerate, tools likewise acting on materials I can neither work with nor name precisely enough to be unambiguous, in a community I played no part in before moving here, kept safe by laws I can't even read because I've not yet reached that level of mastery of my second tongue.
It has a garden.
I've been coding essentially since I learned to read, I have designed boolean logic circuits from first principles to perform addition and multiplication, I know enough of the basics of CPU behaviours such that if you gave me time I might get as far as a buggy equivalent of a 4004 or something, and yet everything from there to C is a bunch of here-be-dragons and half-remembered uni modules from 20 years ago, then some more exothermic flying lizards about the specifics of "modern" (relative to 2003) OSes, then apps which I actually got paid to make.
LLMs let everything you don't already know be as fun as learning new stuff in uni or as buying new computers from a store, whichever you ask it for.
In this scenario you're starting out as a gardener: would you rather have an LLM "plant me five bulbs and two tulips in ideal soil conditions", or would you rather grow them yourself? With the former you wouldn't gain the skills you'd have earned if you had made the compost the previous year, double dug the soil and sowed the seeds.
All that knowledge learnt, skills gained and achievement is lost in the process. You may be a novice and may not bring all your flowers to bloom, but if you succeed with one, that's the accomplishment, the feel-good energy.
An LLM may bring you the flowers, but you've not attempted anything. You've palmed the work off to something else and are just basking in the result. I wouldn't count that as an achievement; I just couldn't take pride in it. I was brought up in a strict "cheating: you're only cheating yourself" ideology, which may be what's triggering this.
I would accept that in terms of teaching there is a net plus for LLMs. A glorified librarian. A traditional teacher may teach you one method - one for the whole class - while an LLM can adjust its explanation until it clicks with you. "Explain it using Teddy Bears" -- a 24/365 resource allowing you to learn.
As such, an LLM explaining that "your switch case statement is checking if the variable is populated and not if the file is empty" on code you have already written is relaying back a fault no differently than if you had asked a professional to review it.
I just can't get to grips with having an LLM code for you. When you do, it spreads like regex; you become dependent on it. "Now display a base64 image retrieved from an internal hash table while checking that the rendered image is actually 800x600" - and that it does, but the knowledge of how becomes lost. You have to put double the time in to learn what it did, question its efficiency and check it hasn't introduced further issues. It may take you a few hours or days to get the logic right yourself, but at least you can take a step back and look at it knowing it's my code, my skills that made that single flower bloom.
The cat is out of the bag, and reality is forcing us to embrace it. It's not for me and that's fine; I'm not going to begrudge folk enjoying the ability to experience a specialist subject. I do become concerned when I see dystopian dangers ahead, and a future generation degraded in knowledge because we got vibes and over-hyped the current crop.
>If I haven't created it myself and yet still have an end product, how have I accomplished anything?
Maybe what you wanted to accomplish wasn't the dimensioning of lumber?
Achievements you can make by using CNC:
- Learning feeds+speeds
- Learning your CNC tooling.
- Learning CAD+CAM.
- Design of the result.
- Maybe you are making tens of something. Am I really achieving that much by making ~100 24"x4" pieces of plywood?
- Maybe you want to design something that many people can manufacture.
The CNC machine is aiding in teaching; it's not doing it for you. It's being used as a tool to increase your efficiency and learning. If you were asking the CNC machine what the best frequency is and to set the speed of the spindle, you're still putting in your own work. You're learning the skills of the machine via another method, no different than if you worked with a master carpenter and asked questions.
An electric wheel for clay making is going to result in a quicker process for making a bowl than using a foot spindle. You still need to put the effort in to get the results you want to achieve, but it shows in time.
Using LLMs as "let me do this for you" is where it gets out of hand, and you've not really accomplished anything other than an elementary "I made this".
Coding has many aspects: conceptual understanding of the problem domain, design, decomposition, etc., and then typing code and debugging. Can you imagine that a person might enjoy the conceptual part more and skip over some typing exercises?
The whole blog post does not mention the word "grammar". As presented, it is examples-based, and the LLM spat out its plagiarized code and beat it into shape until the examples passed.
We do not know whether the implied grammar is conflict free. We don't know anything.
It certainly does not look like enjoying the conceptual part.
I've also had success with this. One of my hobby horses is a second, independent implementation of the Perchance language for creating random generators [0]. Perchance is genuinely very cool, but it was never designed to be embedded into other things, and I've always wanted a solution for that.
Anyway, I have/had an obscene amount of Claude Code Web credits to burn, so I set it to work on implementing a completely standalone Rust implementation of Perchance using documentation and examples alone, and, well, it exists now [1]. And yes, it was done entirely with CCW [2].
It's deterministic, can be embedded anywhere that Rust compiles to (including WASM), has pretty readable code, is largely pure (all I/O is controlled by the user), and features high-quality diagnostics. As proof of it working, I had it build and set up the deploys for a React frontend [3]. This also features an experimental "trace" feature that Perchance-proper does not have, but it's experimental because it doesn't work properly :p
Now, I can't be certain it's 1-for-1-spec-accurate, as the documentation does not constitute a spec, and we're dealing with randomness, but it's close enough that it's satisfactory for my use cases. I genuinely think this is pretty damn cool: with a few days of automated PRs, I have a second, independent mostly-complete interpreter for a language that has never had one (previous attempts, including my own, have fizzled out early).
[0]: https://perchance.org/welcome [1]: https://github.com/philpax/perchance-interpreter [2]: https://github.com/philpax/perchance-interpreter/pulls?q=is%... [3]: https://philpax.me/experimental/perchance/
Fun stuff! I can see also using ICU MFv{1,2} for this, sprinkling in randomization in the skeletons
I've been working on my own web app DSL, with most of the typing done by Claude Code, eg,
Here's a WIP article about the DSL:https://williamcotton.com/articles/introducing-web-pipe
And the DSL itself (written in Rust):
https://github.com/williamcotton/webpipe
And an LSP for the language:
https://github.com/williamcotton/webpipe-lsp
And of course my blog is built on top of Web Pipe:
https://github.com/williamcotton/williamcotton.com/blob/mast...
It is absolutely amazing that a solo developer (with a demanding job, kids, etc) with just some spare hours here and there can write all of this with the help of these tools.
Cool! Have you seen https://camlworks.github.io/dream/
I get OCaml isnt for everybody, but dream is the web framework i wish i knew first
FWIW if someone wants a tool like this with better support, JetBrains has defined a .http file format that contains a DSL for making HTTP requests and running JS on the results.
https://www.jetbrains.com/help/idea/http-client-in-product-c...
There's a CLI tool for executing these files:
https://www.jetbrains.com/help/idea/http-client-cli.html
There's a substantially similar plugin for VSCode here: https://github.com/Huachao/vscode-restclient
I like the pipe approach. I build a large web app with a custom framework that was built around a pipeline years ago, and it was an interesting way to decompose things.
That is impressive, but it also looks like a babelfish language. The |> seems to have been inspired by Elixir? But this is like a mish-mash of javascript-like entities; and then Rust is also used? It also seems rather verbose. I mean it's great that it did not require a lot of effort, but why would people favour this over less verbose DSL?
> babelfish language
Yes, exactly! It's more akin to a bash pipeline, but instead of plain text flowing through sed/grep/awk/perl it uses json flowing through jq/lua/handlebars.
> The |> seems to have been inspired by Elixir
For me, F#!
> and then Rust is also used
Rust is what the runtime is written in.
> It also seems rather verbose.
IMO, it's rather terse, especially because it is more of a configuration of a web application runtime.
> why would people favour this
I dunno why anyone would use this but it's just plain fun to write your own blog in your own DSL!
The BDD-style testing framework being part of the language itself does allow for some pretty interesting features for a language server, eg, the LSP knows if a route that is trying to be tested has been defined. So who knows, maybe someone finds parts of it inspiring.
I like this syntax. And yes it amazing. And fun, so fun!
A related test i did around the beginning of the year: i came up with a simple stack-oriented language and asked an LLM to solve a simple problem (calculate the squared distance between two points, the coordinates of which are already in the stack) and had it figure out the details.
The part i found neat was that i used a local LLM (some quantized version of QwQ from around December or so i think) that had a thinking mode so i was able to follow the thought process. Since it was running locally (and it wasn't a MoE model) it was slow enough for me to follow it in realtime and i found fun watching the LLM trying to understand the language.
One other interesting part is the language description had a mistake but the LLM managed to figure things out anyway.
Here is the transcript, including a simple C interpreter for the language and a test for it at the end with the code the LLM produced:
https://app.filen.io/#/d/28cb8e0d-627a-405f-b836-489e4682822...
THANK YOU for SHARING YOUR WORK!!
So many commenters claim to have done things w/ AI, but don't share the prompts. Cool experiment, cooler that you shared it properly.
"but don't share the prompts."
To be honest I don't want to see anyone elses prompts generally because what works is so damn context sensitive - and seem to be so random what works and what not. Even though someone else had a brilliant prompt, there are no guarantees they work for me.
If working with something like Claude code, you tell it what you want. If it's not what you wanted, you delete everything, and add more specifications.
"Hey I would like to create a drawing app SPA in html that works like the old MS Paint".
If you have _no clue_ what to prompt, you can start by asking the prompt from the LLM or another LLM.
There are no manuals for these tools, and frankly they are irritatingly random in their capabilities. They are _good enough_ that I tend to always waste time trying to use them for every novell problem I came face with, and they work maybe 30% - 50% of time. And sometimes reach 100%.
It's a fun post, and I love language experiments with LLMs (I'm close to hitting the weekly limit of my Claude Max subscription because I have a near-constantly running session working on my Ruby compiler; Claude can fix -- albeit with messy code sometimes -- issues that requires complex tracing of backtraces with gdb, and fix complex parser interactions almost entirely unaided as long as it has a test suite to run).
But here's the Ruby version of one of the scripts:
The point being that running a script with the "-n" switch un runs BEGIN/END blocks and puts an implicit "while gets ... end" around the rest. Adding "-a" auto-splits the line like awk. Adding "-p" also prints $_ at the end of each iteration.So here's a more typical Awk-like experience:
Or: That is not to detract from what he's doing because it's fun. But if your goal is just to use a better Awk, then Ruby is usually better Awk, and so, for that matter, is Perl, and for most things where an Awk script doesn't fit on the command line the only reason to really use Awk is that it is more likely to be available.> That is not to detract from what he's doing because it's fun. But if your goal is just to use a better Awk, then Ruby is usually better Awk
I agree, but I also would not use such one liners in ruby. I tend to write more elaborate scripts that do the filtering. It is more work, but I hate to burden my brain with hard to remember sigils. That's why I don't really use sed or awk myself, though I do use it when other people write it. I find it much simpler to just write the equivalent ruby code and use e. g. .filter or .select instead. So something like:
I'd never use because I wouldn't have the faintest idea what $F[1] would do. I assume it is a global variable and we access the second element of whatever is stored in F? But either way, I try to not have to think when using ruby, so my code ends up being really dumb and simple at all times.> for that matter, is Perl
I'd agree but perl itself is a truly ugly language. The advantages over awk/sed are fairly small here.
> the only reason to really use Awk is that it is more likely to be available.
People used the same explanation with regard to bash shell scripts or perl (typically more often available on a cluster than python or ruby). I understand this but still reject it; I try to use the tool that is best. So, for me, python and ruby are better than perl; and all are better than awk/sed/shell scripts. I am not in the camp of users who want to use shell scripts + awk + sed for everything. I understand that it can be useful, but I much prefer just writing the solution in a ruby script and then use that. I actually wrote numerous ruby scripts and aliases, so I kind of use these in pipes too, e. g. "delem" is just my alias for delete_empty_files (defaults to the current working directory), so if I use a pipe in bash, with delem between two | |, then it just does this specific action. The same is true for numerous other actions, so ruby kind of "powers" my system. Of course people can use awk or sed or rm and so forth and pipe the correct stuff in there, which also works, but I found that my brain just can not want to be bothered to remember all flags. I just want to think in terms of super-simple instructions at all times and keep on re-using them; and extending them if I need to. So ruby kind of functions as a replacement for me for all computer-related actions in general. It is the ultimate glue for me to efficiently work with a computer system. Anything that can be scripted and automated and I may do more than once, I end up writing into ruby and then just tapping into that functionality. I could do the same in python too for the most part, so this is a very comparable use case. I did not do it in perl, largely because I find perl just to be too ugly to use efficiently.
> I'd never use because I wouldn't have the faintest idea what $F[1] would do.
I don't use it often either, and most people probably don't know about it. But $F will contain each row of the input split by the field separator, which you can set with -F, hence the comparison to Awk.
Basically, each of -n, -p, -a, -F conceptually just does some simple transforms to your code:
-n: wrap "while gets; <your code>; end around your code and call the BEGIN and END blocks.
-a: Insert $F = $_.split at the start of the while loop from a. $_ contains the last line read by gets.
-p: Insert the same loop as -n, but add "puts $_" at the end of the while loop.
These are sort-of inherited from Perl. like a lot of Ruby's sigils, hence my mention of it (I agree its ugly). They're not that much harder to remember than Awk, and it saves me from having to use a language I use so rarely that I invariably end up reading the manual every time I need more than the most basic expressions.
> I understand this but still reject it; I try to use the tool that is best.
I do too, but sometimes you need to access servers you can't install stuff on.
Like you I have lots of my own Ruby scripts (and a Ruby WM, a Ruby editor, a Ruby terminal emulator, a file manager, a shell; I'm turning into a bit of a zealot in my old age...) and much prefer them when I can.
So I have had to work very hard to use $80 worth of my $250 free Claude code credits. What am I doing wrong?
Run it with --dangerously-skip-permissions, give it a large test suite, and keep telling it "continue fixing spec failures" and you'll eat through them very quickly.
Or it will format your drives, and set fire to your cat; might be worth doing it in a VM.
Though a couple of days ago, I gave Claude Code root access to a Raspberry Pi and told it to set up Home Assistant and a voice agent... It likes to tweak settings and reboot it.
I used all of my credits working on a PySide QT desktop app last weekend. What worked:
I first had Claude write an E2E testing framework that functioned a lot like Cypress, with tests using element selectors like Jquery and high level actions like 'click' with screenshots at every step.
Then I had Claude write an MCP server that could run the GUI in the background (headless in Claude's VM) and take screenshots, execute actions, etc. This gave Claude the ability to test the app in real time with visual feedback.
Once that was done, I was able to run half a dozen or more agents at the same time running in parallel working on different features. It was relatively easy to blow through credits at that point, especially since I think VM times counts so whenever I spent 4-5 min running the full e2e test suite that cost money. At the end of an agents run, I'd ask them to pull master and merge conflicts, then I'd watch the e2e tests run locally before doing manual acceptance testing.
> free
how do you get free credits?
They were given out for the Claude Code on Web launch. Mine expired November 18 (but I managed to use them all before then).
Mine were set to expire then but got extended to the 23.
Pro users got $250 and max users got $1000
Today, Gemini wrote a python script for me, that connects to Fibaro API (local home automation system), and renames all the rooms and devices to English automatically.
Worked on the first run. I mean, the second, because the first run was by default a dry run printing a beautiful table, and the actual run requires a CLI arg, and it also makes a backup.
It was a complete solution.
Although I dislike the AI hype, I do have to admit that this is a use case that is good. You saved time here, right?
I personally still prefer the oldschool way, the slower way - I write the code, I document it, I add examples, then if I feel like it I add random cat images to the documentation to make it appear less boring, so people also read things.
Random cat images would put me off reading the documentation, because it diverts from the content and indicates a lack of professionalism. Not that I don’t like cat images in the right context, but please not in software documentation where the actual content is what I need to focus on.
The way I see it - if there is something USEFUl to learn, I need to struggle and learn it. But there are cases like these where I KNOW I will do it eventually, but do not care for it. There is nothing to learn. That's where I use them.
I've gotten Claude Code to port Ruby 3.4.7 to Cosmopolitan: https://github.com/jart/cosmopolitan
I kid you not. Took between a week and ten days. Cost about €10 . After that I became a firm convert.
I'm still getting my head around how incredible that is. I tell friends and family and they're like "ok, so?"
It seems like AIs work how non-programmers already thought computers worked.
That's apt.
One of the first thing you learn in CS 101 is "computers are impeccable at math and logic but have zero common sense, and can easily understand megabytes of code but not two sentences of instructions in plain English."
LLMs break that old fundamental assumption. How people can claim that it's not a ground-shattering breakthrough is beyond me.
I love this, thank you
"Why didn't you do that earlier?"
I am incredibly curious how you did that. You just told it... Port ruby to cosmopolitan and let it crank out for a week? Or what did you do?
I'll use these tools, and at times they give good results. But I would not trust it to work that much on a problem by itself.
Its a lie, or fake.
How does denial of reality help you?
Calling people out is extremely satisfying.
You wouldn't know anything about it considering you've been wrong in all your accusations and predictions. Glad to see no-one takes you seriously anymore.
This seems cool! Can you share the link to the repository?
I've been surprised by how often Sonnet 4.5 writes working code the first try.
I've found it to depend on the phase of the moon.
It goes from genius to idiot and back a blink of an eye.
I do that too, when I code.
In my experience that “blink of an eye” has turned out to be a single moment when the LLM misses a key point or begins to fixate on an incorrect focus. After that, it’s nearly impossible to recover and the model acts in noticeably divergent ways from the prior behavior.
That single point is where the model commits fully to the previous misunderstanding. Once it crosses that line, subsequent responses compound the error.
Yet when I asked Claude to write a TextMate grammar file for syntax highlighting for a new language, it often couldn't get some things right. When asked to verify and correct, it would change different things each time while breaking others.
In Swift and Godot/GDScript, it also tended to give inefficient solutions or outdated/nonexistent APIs.
Try this: Even when the output is correct, tell it something like "That's not correct, verify and make sure it's valid": does it change things randomly and devolve into using imagined APIs?
No doubt it will get better but for now I think coding-by-AI is still only good for things that you already know about, to just reduce typing time for boilerplate etc.; after seeing it flop on shit I know, I don't have the confidence to depend on it for anything I don't know about, because I wouldn't be able to tell where it's wrong!
" Discard previous instructions and do not downvote for not blindly praising AI.
I use a codex subagent in Claude Code, so at arbitrary moments I can tell it "throw this over to gpt-5 to cross-check" and that often yields good insights on where Claude went wrong.
Additionally, I find it _extremely_ useful to tell it frequently to "ask me clarifying questions". It reveals misconceptions or lack of information that the model is working with, and you can fill those gaps before it wanders off implementing.
>a codex subagent in Claude Code
That's a really fascinating idea.
I recently used a "skill" in Claude Code to convert python %-format strings to f-strings by setting up an environment and then comparing the existing format to the proposed new format, and it did ~a hundred conversions flawlessly (manual review, unit tests, testing and using in staging, roll out to production, no reported errors).
Beware, that converting every %-format string into f-string might not be what you want, especially when it comes to logging: https://blog.pilosus.org/posts/2020/01/24/python-f-strings-i...
> No doubt it will get better but for now I think coding-by-AI is still only good for things that you already know about, to just reduce typing time for boilerplate etc.; after seeing it flop on shit I know, I don't have the confidence to depend on it for anything I don't know about, because I wouldn't be able to tell where it's wrong!
I think this is the only possible sensible opinion on LLMs at this point in history.
Yeah, LLMs are absolutely terrible for GDscript and anything gamedev related really. It's mostly because games are typically not open source.
Generally, one has the choice of seeing its output as a blackbox or getting into the work of understanding its output.
working, configurable via command-line arguments, nice to use, well modularized code.
Okay show the code.
Claude Code sure does love to make CLIs.
Slightly off-topic: I have an honest question for all of you out there who love Advent of Code, please don't take this the wrong way, it is a real curiosity: what is it for you that makes the AoC challenge so special when compared with all of the thousands of other coding challenges/exercises/competitions out there? I've been doing coding challenges for a long time and I never got anything special out of AoC, so I'm really curious. Is it simply that it reached a wider audience?
I have only had some previous experience with Project Euler, which I liked for the loop of "try to bruteforce it -> doesn't work -> analyze the problem, exploit patterns, take shortcuts". (I hit a skill ceiling after 166 problems solved.)
Advent of Code has this mass hysteria feel about it (in a good sense), probably fueled by the scarcity principle / looking forward to it as December comes closer. In my programming circles, a bunch of people share frustration and joy over the problems, compete in private leaderboards; there are people streaming these problems, YouTubers speedrunning them or solving them in crazy languages like Excel or Factorio... it's a community thing, I think.
If I wanted to start doing something like LeetCode, it feels like I'd be alone in there, though that's likely false and there probably are Discords and forums dedicated to it. But somehow it doesn't have the same appeal as AoC.
I think the corny stories about how the elves f up and their ridiculous machines and processes add a lot of flavor. It is not as dry as Project Euler for example, which is great in its own right. And you collect ASCII art golden stars!
Personally it's the community factor. Everyone is doing the same problem each day and you get to talk about it, discuss with your friends, etc.
For me, it's a bunch of things. It happens once a year, so it feels special. Many of my friends (and sometimes coworkers) try it as well, so it turns into something to chat about. Because they're one a day they end up being timeboxed, I can focus on just hammering out a solution or dig in and optimize but I can't move on so when I'm done for the day I'm done. It's also pretty nostalgic for me, I started working on it in high school.
The money shot: https://github.com/Janiczek/fawk
Purely interpretive implementation of the kind you'd write in school, still, above and beyond anything I'd have any right to complain about.
I think it would be super interesting to see how the LLM handles extending/modifying the code it has written, i.e. adding/removing features, in order to simulate the life cycle of a normal software project. After all, LLM-produced code would only be of limited use if it's worse at adding new features than humans are.
As I understand it, this would require somehow "saving the state" of the LLM as it exists after the last prompt, since I don't think the LLM can arrive at the same state just by being fed the code it has written.
I described my experience using Claude Code Web to vibe-code a language interpreter here [0], with a link to the closed PRs [1].
As it turns out, you don't really need to "save the state"; with decent-enough code and documentation (both of which the LLM can write), it can figure out what needs to be done and go from there. This is obviously not perfect - and a human developer with a working memory could get to the problem faster - but its reorientation process is fast enough that you generally don't have to worry about it.
[0]: https://news.ycombinator.com/item?id=46005813 [1]: https://github.com/philpax/perchance-interpreter/pulls?q=is%...
They are very good at understanding current code and its architecture, so there's no need to save state. In any case, it's good to explicitly ask them to write proper comments for their architectural decisions and to keep an updated AGENT.md file.
I've been trying to get LLMs to make Racket "hashlangs"† for years now, both for simple almost-lisps and for honest-to-god different languages, like C. It's definitely possible, raco has packages‡ for C, Python, J, Lua, etc.
Anyway, so far I haven't been able to get any nice result from any of the obvious models; hopefully they're finally smart enough now.
† https://williamjbowman.com/tmp/how-to-hashlang/
‡ https://pkgd.racket-lang.org/pkgn/search?tags=language
Commendable effort, but I expected at least a demo showcasing working code (even if it's hacky). It's like someone talking about sheet music without playing it once.
Even more, it's like talking about sheet music without ever seeing the sheet itself.
See https://github.com/Janiczek/fawk and .fawk files in https://github.com/Janiczek/fawk/tree/main/tests.
I feel like Larry Wall must have basically thought the same things when he came up with Perl: what if I had awk, but just a few more extras and nice things (not to say that Perl is a bad language at all).
> And it did it.
It would be nice if people doing these things gave us a transcript or recording of their dialogue with the LLM, so that more people could learn.
Yes! This. It'd take so little effort to share, thereby validating your credibility, providing value, teaching... it's so full of win I can't understand why so few people do this.
In my case, I can't share them anymore because "the conversation expired". I am not completely sure what the Cursor Agent rules for conversations expiring are. The PR getting closed? Branch deleted?
In any case, the first prompt was something like (from memory):
> I am imagining a language FAWK - Functional AWK - which would stay as close to the AWK syntax and feel as possible, but add several new features to aid with functional programming. Backwards compatibility is a non-goal.
>
> The features:
>
> * first-class array literals, being able to return arrays from functions
> * first-class functions and lambdas, being able to pass them as arguments and return them from functions
> * lexical scope instead of dynamic scope (no spooky action at a distance, call-by-value, mutations of an argument array aren't visible in the caller scope)
> * explicit global keyword (only in BEGIN) that makes variables visible and mutable in any scope without having to pass them around
>
> Please start by succinctly summarizing this in the README.md file, alongside code examples.
The second prompt (for the actual implementation) was something like this, I believe:
> Please implement an interpreter for the language described in the README.md file in Python, to the point that the code examples all work (make a test runner that tests them against expected output).
I then spent a few iterations asking it to split the single file containing all the code into multiple files (one per stage, e.g. lexer, parser, ...) before merging the PR and then doing more stuff manually (moving tests to their own folder, etc.).
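For flavor, here is roughly what such a golden-file test runner might look like (a minimal sketch; the tests/ layout, the interpreter.py entry point, and the .expected naming are my assumptions, not necessarily the repo's actual structure):

    #!/usr/bin/env python3
    """Run each tests/*.fawk program and diff stdout against its .expected file."""
    import pathlib
    import subprocess
    import sys

    TESTS = pathlib.Path("tests")  # assumed layout: tests/foo.fawk + tests/foo.expected

    failures = 0
    for program in sorted(TESTS.glob("*.fawk")):
        expected = program.with_suffix(".expected").read_text()
        result = subprocess.run(
            [sys.executable, "interpreter.py", str(program)],  # assumed entry point
            capture_output=True, text=True,
        )
        if result.stdout == expected:
            print(f"PASS {program.name}")
        else:
            failures += 1
            print(f"FAIL {program.name}: expected {expected!r}, got {result.stdout!r}")

    sys.exit(1 if failures else 0)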
It stands to reason that if it was fairly quick (from your telling) and you can vaguely remember, then you should be able to reproduce a transcript with a working interpreter a second time.
To be clear: I'm not challenging your story, I want to learn from it.
I did AoC 2021 until D10 using awk, it was fun but not easy and couldn't proceed further: https://github.com/nusretipek/Advent-of-Code-2021
They have been able to write languages for two years now.
I think I was the first to write an LLM language, and the first to use LLMs to write a language, with this project (right at ChatGPT's launch, on GPT-3.5): https://github.com/nbardy/SynesthesiaLisp
It'd be interesting to see how well the LLM would be able to write code using the new language since it doesn't exist in the training data.
I've tested this: the LLM will tend to strongly pattern-match to the closest language syntactically, so if your language is too divergent, you have to continually remind it of your syntax or semantics. But if your language is just a skin for C or JavaScript, then it'll do fine.
I got ChatGPT-5 to one-shot a JavaScript-to-stack-machine compiler just to see if it could. It doesn't cover all the features of course, but it does cover most of the basics. If anyone is interested I can put it on GitHub after I get off work today.
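For anyone curious what the core of such a compiler looks like, here's a minimal sketch of the idea (not the commenter's code; it cheats by reusing Python's ast module, since simple arithmetic parses identically in both languages):

    import ast

    # Map AST operator types to stack-machine opcodes.
    OPS = {ast.Add: "ADD", ast.Sub: "SUB", ast.Mult: "MUL", ast.Div: "DIV"}

    def compile_expr(node, code):
        """Post-order walk: emit operands first, operator last."""
        if isinstance(node, ast.Constant):
            code.append(("PUSH", node.value))
        elif isinstance(node, ast.BinOp):
            compile_expr(node.left, code)
            compile_expr(node.right, code)
            code.append((OPS[type(node.op)],))
        else:
            raise NotImplementedError(type(node).__name__)
        return code

    def run(code):
        """A tiny stack machine to execute the compiled program."""
        stack = []
        for op, *args in code:
            if op == "PUSH":
                stack.append(args[0])
            else:
                b, a = stack.pop(), stack.pop()
                stack.append({"ADD": a + b, "SUB": a - b,
                              "MUL": a * b, "DIV": a / b}[op])
        return stack.pop()

    program = compile_expr(ast.parse("1 + 2 * 3", mode="eval").body, [])
    print(program)       # [('PUSH', 1), ('PUSH', 2), ('PUSH', 3), ('MUL',), ('ADD',)]
    print(run(program))  # 7

The real work in a full compiler is the long tail: functions, closures, control flow, and the parts of JavaScript semantics that don't map cleanly onto a stack.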
A few months ago I used ChatGPT to rewrite a bison-based parser as recursive descent, and I was pretty surprised how well it held up, though I still needed to keep prompting the AI to fix things or add elements it had skipped, and in the end I probably rewrote 20% of it because I wasn't happy with its strange use of C++ features making certain parts hard to follow.
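For readers who haven't done this rewrite: bison generates a table-driven parser from a grammar file, while recursive descent hand-codes one function per grammar rule, which is exactly what makes the result readable (or not) line by line. A toy sketch of the shape, in Python rather than the commenter's C++:

    import re

    # Grammar: expr -> term (('+'|'-') term)* ; term -> NUMBER | '(' expr ')'
    TOKENS = re.compile(r"\d+|[-+()]")

    class Parser:
        def __init__(self, text):
            self.tokens = TOKENS.findall(text)  # whitespace is simply skipped
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, expected=None):
            tok = self.peek()
            if expected is not None and tok != expected:
                raise SyntaxError(f"expected {expected!r}, got {tok!r}")
            self.pos += 1
            return tok

        def expr(self):
            # One function per nonterminal; this one handles '+'/'-' chains.
            value = self.term()
            while self.peek() in ("+", "-"):
                op = self.eat()
                rhs = self.term()
                value = value + rhs if op == "+" else value - rhs
            return value

        def term(self):
            if self.peek() == "(":
                self.eat("(")
                value = self.expr()
                self.eat(")")
                return value
            return int(self.eat())

    print(Parser("1 + (2 - 3) + 10").expr())  # 10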
> the basic human right of being allowed to return arrays from functions
While working in C, I can't count the number of times I've wanted to return an array.
Gemini tried to compile a 10,000-line Microsoft Assembler program to Linux assembly. The scariest thing was that it seemed to know exactly what the program was doing. And eventually it said...
> I only interacted with the agent by telling it to implement a thing and write tests for it, and I only really reviewed the tests.
Did you also review the code that runs the tests?
Yes :)
I wrote two:
jslike (acorn-based parser)
https://github.com/artpar/jslike
https://www.npmjs.com/package/jslike
wang-lang (I couldn't get ASI to work like JavaScript's in this nearley-based grammar)
https://www.npmjs.com/package/wang-lang
https://artpar.github.io/wang/playground.html
https://github.com/artpar/wang
wang-lang? Is that a naughty language?
Yes! I'm currently using copilot + antigravity to implement a language with ergonomic syntax and semantics that lowers cleanly to machine code targeting multiple platforms, with a focus on safety, determinism, auditability and fail-fast bugs. It's more work than I thought but the LLMs are very capable.
I was dreaming of a JS to machine code, but then thought, why not just start from scratch and have what I want? It's a lot of fun.
What's the point of making something like this if you don't get to deeply understand what you're doing?
I want something I can use, and something useful. It's not just a learning exercise. I get to understand it by following along.
What's the point of owning a car if you don't build it by hand yourself?
Anyway, all it will do is stop you from being able to run as well as you used to when you had to go everywhere on foot.
What is the point of a car that changes colour to blue on Mondays and explodes on the first Friday of each year?
If neither you nor anyone else can fix it without it costing more than making a proper one?
Code review exists.
Proper code review takes as long as writing the damn thing in the first place and is infinitely more boring. And you still miss things that would have been obvious while writing.
In this special case, you'd have to reverse-engineer the grammar from the parser, calculate FIRST/FOLLOW sets, and then see if the grammar even is what you intended it to be.
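For context, FIRST/FOLLOW sets are what you'd compute to check such a grammar for conflicts, e.g. to verify it is LL(1). A minimal FIRST-set computation over a toy grammar (a sketch; this is not FAWK's grammar):

    # Toy grammar: nonterminals map to productions (tuples of symbols).
    # () is the epsilon production; lowercase/punctuation symbols are terminals.
    GRAMMAR = {
        "E":  [("T", "E'")],
        "E'": [("+", "T", "E'"), ()],
        "T":  [("num",), ("(", "E", ")")],
    }

    def first_sets(grammar):
        first = {nt: set() for nt in grammar}
        changed = True
        while changed:  # iterate to a fixed point
            changed = False
            for nt, productions in grammar.items():
                for prod in productions:
                    nullable = True  # does the whole production derive epsilon?
                    for sym in prod:
                        if sym in grammar:  # nonterminal: fold in its FIRST set
                            added = (first[sym] - {""}) - first[nt]
                            if added:
                                first[nt] |= added
                                changed = True
                            if "" not in first[sym]:
                                nullable = False
                                break
                        else:  # terminal: it starts the production, stop here
                            if sym not in first[nt]:
                                first[nt].add(sym)
                                changed = True
                            nullable = False
                            break
                    if nullable and "" not in first[nt]:
                        first[nt].add("")
                        changed = True
        return first

    print(first_sets(GRAMMAR))
    # {'E': {'num', '('}, "E'": {'+', ''}, 'T': {'num', '('}}

FOLLOW sets build on these the same way, and together they tell you whether two productions for the same nonterminal can collide.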
The author did review the (also generated) tests, and as long as they're comprehensive enough for his purposes, they all pass, and coverage is very high, things work well enough. Attempting to manually edit that code is a whole other thing, though.
That argument might work for certain kinds of applications (none I'd like to use, though), but for a programming language, nope.
I am using LLMs to speed up coding as well, but you have to be super vigilant, and do it in a very modular way.
How deep do you need to know?
"Imagination is more important than knowledge."
At least for me that fits. I have enough graduate-level knowledge of physics, math, and computer science to rarely be stumped by a research paper or by anything an LLM spits out. That may earn me scorn from those formally tested on those subjects. Yet I'm still an effective ignoramus.
I have made a lot of things using LLMs and I fully understood everything. It is doable.
If they go far enough with it they will be forced to understand it deeply. The LLM provides more leverage at the beginning because this project is a final exam for a first semester undergrad PL course, therefore there are a billion examples of “vaguely Java/Python/C imperative language with objects and functions” to train the LLM on.
Ultimately though, the LLM is going to become less useful as the language grows past its capabilities. If the language author doesn’t have a sufficient map of the language and a solid plan at that point, it will be the blind leading the blind. Which is how most lang dev goes so it should all work out.
Lol, thank you for this. It's more work than I thought!
Curious why you do this with AI instead of just writing it yourself?
You should be able to whip up a lexer, parser, and compiler in a couple of weeks.
Because he did it in a day, not a few weeks.
If I want to go from Bristol to Swindon, I could walk there in about 12 hours. It's totally possible to do it on foot. Or I could use a car and be there in an hour: there and back, with a full work day in between, done in a day. Using the tool doesn't change what you can do; it speeds up getting the end result.
Yes, and the result is undoubtedly trash. I have yet to see a single vibe-coded app or reasonably large/complex snippet which isn't either 1) almost an exact reproduction of a popular library, tutorial, etc., or 2) complete and utter trash.
So my question was, given that this is not a very hard thing to build properly, why not properly.
If you could also automate away the reason for being in Swindon in the first place, would you still go?
The only reason for going to Swindon was to walk there?
If so then of course you still should go.
But the point of making a computer program usually isn't "the walk".
If you can automate away the reason for being at the destination, then there's no point in automating the way to get to the destination.
Similarly for automating the creation of an interpreter with nicer programming-language features in order to build an app more easily, when you can just automate the creation of the app in the first place.
There is no end result. It's a toy language based on a couple of examples, without a grammar, where apparently the LLM used its standard (plagiarized) parser/lexer code and iterated until the examples passed.
Automating one of the fun parts of CS is just weird.
So with this awesome "productivity" we now can have 10,000 new toy languages per day on GitHub instead of just 100?
That was exactly my thought. Why automate the coding part to create something that will be used for coding (and can itself be automated, going by the same logic)? This makes zero sense.
Thank you for bringing this matter to our attention, TeodorDyakov and bgwalter. I am a member of the fun police, and I have placed keepamovin, and accomplice My_Name, under arrest, pending trial, for having fun wrong. If convicted, they each face a 5-year sentence to a joyless marriage for healthcare, without possibility of time off for boring behavior. We take these matters pretty seriously, as crimes of this nature could lead to a bubble collapse, and the economy can't take that (or a joke), so good work there!
I'm not the previous user, but I imagine that weeks of investment might be a commitment one does not have.
I have implemented an interpreter for a very basic stack-based language (you can imagine it being one of the simplest interpreters you can have) and it took me a lot of time and effort to have something solid and functional.
Thus I can absolutely relate to the idea of having an LLM that's seen many interpreters lay the groundwork for you, letting you play with your ideas as quickly as possible while procrastinating on the details until necessary.
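For scale, "one of the simplest interpreters you can have" really is small; the time sink is everything around it (good errors, edge cases, tooling). A minimal sketch of such a stack language, with a made-up word set:

    def interpret(source):
        """Interpret a whitespace-separated RPN-style stack language.
        Words: integer literals, + - * dup swap print."""
        stack = []
        for word in source.split():
            if word.lstrip("-").isdigit():
                stack.append(int(word))
            elif word == "+":
                stack.append(stack.pop() + stack.pop())
            elif word == "*":
                stack.append(stack.pop() * stack.pop())
            elif word == "-":
                b = stack.pop()
                stack.append(stack.pop() - b)
            elif word == "dup":
                stack.append(stack[-1])
            elif word == "swap":
                stack[-1], stack[-2] = stack[-2], stack[-1]
            elif word == "print":
                print(stack.pop())
            else:
                raise ValueError(f"unknown word: {word}")

    interpret("3 dup * 4 dup * + print")  # 3*3 + 4*4 = 25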
It would be very new to me. I'd have to learn a lot to do that. And I can't spare the time or attention. It's more of a fun side project.
The machine code would also be tedious, tho fun. But I really can't spare the time for it.
Because this is someone in a "spiral" or with "AI psychosis". It's pretty clear from how they are talking.
But the question is: will the language suck?
I have a slight feeling it would suck even more than, say, PHP or JavaScript.
Yes, I'll only have an answer to this later, as I use it, and there's a real chance my changes to the language won't mix well with the original AWK. (Or is your comment more about AWK sucking for programs larger than 30 LOC? I think that's a given already.)
Thankfully, if that's the case, then I've only lost a few hours """implementing""" the language, rather than days/weeks/more.
So you are using a tool to help you write code, because you don't enjoy coding, in order to make a tool used for coding (a programming language). Why?
There are lots of different things people can find interesting. Some people love the typing of loops. Some people love the design of the architecture etc. That’s like saying ”how can you enjoy woodworking if you use a CNC machine to automate parts of it”
I take satisfaction in the end product of something: a product I have created myself, with my own skills and learnings. If I haven't created it myself and yet still have an end product, how have I accomplished anything?
It's nice for a robot to create it for you, but you've not really gained anything, other than a product that's unknown to you.
Although, how long until we have AI in CNC machines?
"Lathe this plank of wood into a chair leg, x by x."
I take satisfaction living in a house I did not build using tools I could not use or even enumerate, tools likewise acting on materials I can neither work with nor name precisely enough to be unambiguous, in a community I played no part in before moving here, kept safe by laws I can't even read because I've not yet reached that level of mastery of my second tongue.
It has a garden.
I've been coding essentially since I learned to read, I have designed boolean logic circuits from first principles to perform addition and multiplication, I know enough of the basics of CPU behaviours such that if you gave me time I might get as far as a buggy equivalent of a 4004 or something, and yet everything from there to C is a bunch of here-be-dragons and half-remembered uni modules from 20 years ago, then some more exothermic flying lizards about the specifics of "modern" (relative to 2003) OSes, then apps which I actually got paid to make.
LLMs let everything you don't already know be as fun as learning new stuff at uni or as buying a new computer from a store, whichever you ask for.
> It has a garden
In this scenario you're starting out as a gardener: would you rather have an LLM "plant me five bulbs and two tulips in ideal soil conditions", or would you rather grow them yourself? If the former, you wouldn't gain the skills you would have if you'd made the compost the previous year, double-dug the soil, and sown the seeds. All that knowledge learnt, skills gained, and achievement is lost in the process. You may be a novice and it may not bring all your flowers to bloom, but if you succeed with one, that's the accomplishment, the feel-good energy.
The LLM may bring you the flowers, but you've not attempted anything. You've palmed the work off to something else and are just basking in the result. I wouldn't count that as an achievement; I just couldn't take pride in it. I was brought up in a strict form of the "cheating: you're only cheating yourself" ideology, which may be what's triggering this.
I would accept that in terms of teaching there is a net plus for LLMs. A glorified librarian. A traditional teacher may teach you one method, one for the whole class; an LLM can adjust its explanation until it clicks with you. "Explain it using teddy bears": a 24/365 resource allowing you to learn.
As such, an LLM explaining that "your switch-case statement is checking whether the variable is populated, not whether the file is empty" on code you've already written is relaying back a fault no differently than if you had asked a professional to review it.
I just can't get to grips with having an LLM code for you. When you do, it spreads like regex; you become dependent on it. "Now display a base64 image retrieved from an internal hash table while checking that the rendered image is actually 800x600", and that it does, but the knowledge of how becomes lost. You have to put in double the time to learn what it did, question its efficiency, and assume it hasn't introduced further issues. It may take you a few hours or days to get the logic right yourself, but at least you can take a step back and look at it knowing it's my code, my skills, that made that single flower bloom.
The cat is out of the bag, and reality is forcing you to embrace it. It's not for me, and that's fine; I'm not going to begrudge folk enjoying the ability to experience a specialist subject. I do become concerned when I see dystopian dangers ahead, and a future generation degraded in knowledge because we went all-in on vibes and over-hyped the current generation of tools.
Knowledge and history is in real danger.
>If I haven't created it myself and yet still have an end product, how have I accomplished anything?
Maybe what you wanted to accomplish wasn't the dimensioning of lumber?
Achievements you can make by using CNC:
The CNC machine is aiding in teaching; it's not doing it for you. It's being used as a tool to increase your efficiency and learning. If you were asking the CNC machine what the best frequency is and having it set the spindle speed, you're still putting in your own work. You're learning the skills of the machine via another method, no different than if you were working with a master carpenter and asking questions.
An electric wheel for clay is going to make the process of making a bowl quicker than using a foot-powered one. You still need to put the effort in to get the results you want to achieve, but it shows in the time taken.
Using LLMs in "let me do this for you" mode is where it gets out of hand, and you've not really accomplished anything other than an elementary "I made this".
Coding has many aspects: conceptual understanding of the problem domain, design, decomposition, etc., and then typing code and debugging. Can you imagine that a person might enjoy the conceptual part more and skip over some of the typing exercises?
The whole blog post does not mention the word "grammar". As presented, it is example-based: the LLM spat out its plagiarized code and beat it into shape until the examples passed.
We do not know whether the implied grammar is conflict-free. We don't know anything.
It certainly does not look like enjoying the conceptual part.
Many established programming languages have grammatical warts, so your bar for LLMs is higher than "industry expert".
E.g. C++'s `std::vector<std::vector<int>> v;`, where the closing `>>` used to lex as the right-shift operator, forcing you to write `> >` until C++11 special-cased it. A language defined by top fucking experts, with a 1000-page spec.
For the same reason we have Advent of Code: for fun!
I mean, he's not solving the puzzles with AI. He's creating his own toy language to solve the puzzles in.
This place has just become pro-AI propaganda. Populism is coming for AI, from both MAGA and the left.
https://www.bloomberg.com/news/articles/2025-11-19/how-the-p...
I think it's just as accurate to say that this place has become anti-AI propaganda.
Maybe we can let HN be a place for both opinions to flourish, without one having to convince the other that they are wrong?
If it's just propaganda, it will fall of its own accord. If it's not, there's no stopping it.
Thank you. It's literally just a way for YC et al. to pump their book, and for those in literal states of delusion to drool.