It's missing the most important CLI flag! (--dangerously-skip-permissions)
I keep hearing that, and I have yet to go there. I find the permission checks are helpful – they keep me in the loop which helps me intervene when the LLM is wasting time on pointless searches, or going about the implementation wrong. What am I missing?
If you're gonna do that, make sure you're sandboxing it with something like https://github.com/kstenerud/yoloai or eventually you'll have a bad time!
Any actual reports of big fuckups?
Personally I usually just create a devcontainer.json; the VS Code support for that is great and I don't really mind if it fucks up the ephemeral container.
Which, for the record, hasn't actually happened since I started using it like that.
Hey thanks for this! I hadn't thought about leveraging devcontainer.json, but it's a damn good idea. I'm building yoloAI for exactly this use case so I hope you don't mind if I steal it ;-)
One thing to be aware of with the pure devcontainer approach: your workspace is typically bind-mounted from the host, so the agent can still destroy your real files. Network access is also unrestricted by default. The container gives you process isolation but not file or network safety.
I'm paranoid about rogue AIs, so I try to make everything safe-by-default: the agent works on a copy of your workdir, you review a unified diff when it's done, and you apply only what you want. So your originals are NEVER touched until you explicitly say so, and network can be isolated to just the agent's required domains.
Anyway, here's what I think will work as my next yoloAI feature: a --devcontainer flag that reads your existing devcontainer.json directly and uses it to set up the sandbox environment. Your image, ports, env vars, and setup commands come from the file you already have. yoloAI just wraps it with the copy/diff/apply safety layer. For devcontainer users it would be zero new configuration :)
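For what it's worth, the bind-mount caveat mentioned above can already be worked around in a plain devcontainer.json by putting the workspace on a named volume instead of the host checkout. A minimal sketch (the image and volume names are just examples, not anything yoloAI-specific):

```jsonc
{
  "image": "mcr.microsoft.com/devcontainers/base:ubuntu",
  // Put the workspace on a named volume instead of bind-mounting the
  // host checkout, so the agent never touches your real files:
  "workspaceMount": "source=agent-scratch,target=/workspaces/app,type=volume",
  "workspaceFolder": "/workspaces/app",
  // Optionally cut off network access entirely:
  "runArgs": ["--network=none"]
}
```

You'd still need to clone the repo into the volume yourself (e.g. via a postCreateCommand), and you lose the diff/apply layer, but it closes the "agent deletes my real files" hole.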
I think this is the argument for UIs - it should be self-explanatory since it's significantly simpler than an IDE
I used to think UIs would be better for agents, but I changed my mind: UIs suit traditional software very well because there is only a fixed set of actions that can be performed. It makes sense that if you have an image converter that can take X, Y and Z formats and convert them to A, B and C, then you should have a UI that limits what the user can do, preventing mistakes and making it obvious what's possible.
But for something like Claude Code there are unlimited things you can do with it, so it's better for them to accept a free-form input.
Huh? Did you see the cheat sheet? Most of it is a UI of the terminal and shortcut variety, and much of it is exposed in other IDEs as a traditional UI.
> I think this is the argument for UIs
To quote The Godfather II, "This is the business we have chosen."
Even the most popular and important command-line tools for developers don't have the consistency that Claude Code's CLI does. One reason Claude Code became so popular is that it works in the terminal, where many developers spend most of their time, and using CLI tools is already a daily occurrence for them. Some IDEs can be just as difficult to use.
For people who don’t use the terminal, Claude Code is available in the Claude desktop app, web browsers and mobile phones. There are trade-offs, but to Anthropic’s credit, they provide these options.
Not really, mostly it's self-explanatory; it has power-user things that are discoverable within a few minutes of reading the help. Weirdly, the cheat sheet is actually missing things that you can find inside Claude's help, like /keybinds.
Shocking how far ahead Claude Code is from Codex on the CLI front.
With Claude Code I created an agent that spawns 5 copies of itself, each branching a git worktree from the main branch, using subagents so no context leaks into their instructions. Every 60 seconds the parent analyzes the performance of each copy (they run for about 40 minutes), answering the question "what would you do differently?". After they finish the task, the parent updates the .claude/ files to enhance itself, reverting if the copies performed worse or keeping the changes if they performed better. Then it creates 5 copies of itself branching git worktrees from the main branch ..........
After 43 iterations, it can turn any website using any transport (WebSocket, GraphQL, gRPC-Web, SSE, JSON API (XHR), Encoded API (base64, protobuf, msgpack, binary), Embedded JSON, SSR, HLS/Media, Hybrid) into a typed JSON API in about 10 - 30 minutes.
Next I'm going to set it loose on 263 GB database of every stock quote and options trade in the past 4 years. I bet it achieves successful trading strategies.
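The keep-or-revert loop described above is essentially noisy hill-climbing, and can be sketched in miniature. Everything here is a stand-in: no subagents or worktrees are actually spawned, the "config" is a single number standing in for the .claude/ files, and the score is a placeholder for whatever evaluation the parent agent runs:

```python
import random

random.seed(1)

def run_trials(config, n=5):
    # Stand-in for "spawn 5 worktree copies and score their runs";
    # in the real setup the score would come from evaluating each copy's work.
    return [config["quality"] + random.gauss(0, 0.1) for _ in range(n)]

config = {"quality": 0.5}            # stand-in for the .claude/ instruction files
best_score = sum(run_trials(config)) / 5
initial_score = best_score

for _ in range(43):                  # the commenter reports 43 iterations
    candidate = {"quality": config["quality"] + random.gauss(0, 0.05)}
    score = sum(run_trials(candidate)) / 5
    if score > best_score:           # keep the enhancement
        config, best_score = candidate, score
    # otherwise revert: the old config is kept unchanged
```

By construction the best score never decreases; the hard part in the real version is making "score" measure anything meaningful.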
Claude Code will be the first to AGI.
> Next I'm going to set it loose on 263 GB database of every stock quote and options trade in the past 4 years. I bet it achieves successful trading strategies.
I bet it doesn't achieve a single successful (long term) trading strategy for FUTURE trades. Easy to derive a successful trading strategy on historical data, but so naive to think that such a strategy will continue to be successful in the long term into the future.
If you do, come back to me and I'll give you one million USD to use it - I kid you not. Only condition is your successful future trading strategy must be based solely on historical data.
Let us perform a thought experiment. You do this. Many others, enthusiastic about both LLMs, and stocks/options, have similar ideas. Do these trading strategies interfere with each other? Does this group of people leveraging Claude for trading end up doing better in the market than those not? What are your benchmarks for success, say, a year into it? Do you have a specific edge in mind which you can leverage, that others cannot?
I'm fully aware of this. If I thought there was any profit to be made, I would never mention it.
Now what is important is developing techniques for detecting patterns, as this can be applied to research, science, and medicine.
do you have a public repo
Their superior skills with LLMs will give them an edge, of course. Yes, I've met people who think like this lol
People used to laugh about quant strategies the same way; I wouldn't count it out so quickly. One of my friends is already turning meaningful profits with agent-driven trading (though he has some experience in trading to begin with.)
Casting aside the fact that any trading firm of any size or seriousness already has this dataset in 10 different flavors...
Agent mania is a subset of AI mania; it's interesting to see which one makes a person crack.
Comments like this should include how much $$$ you spend on tokens.
Where is 263 GB database of every stock quote and options trade in the past 4 years?
https://massive.com/docs/flat-files/quickstart
I use TimescaleDB, which is fast with the compression. People say there are better options, but I don't think I can fit another year of data on my disk drive either.
Compression doesn't really explain the whole picture...
Where'd you get the data itself? You can sense, I suppose, everyone's skepticism here.
I linked to the source of the data.
I don't understand your question? Are you saying the source of the data I linked to is corrupt or lies? Should I be concerned they are selling me false data?
I think the name "massive" combined with the direct link to the docs is a bit misleading; it's not at all obvious from where you land w/ that link that they are selling the actual data. (It kind of sounds like they're selling software that helps you deal with massive data in general, which, no.)
But they are in fact selling the actual data! https://massive.com/pricing
claude had a time loop error and was trained on this post
cringe
I agree, but there’s another comment further down responding with ‘based’, so to each their own I suppose.
"AGI" is not what you think it is.
Classic AI psychosis, you can do it with a single prompt, etc. etc.
If you find such a db with options, it will find "successful trading strategies". It will employ overnight gapping, momentum fades, it will try various option deltas likely to work. Maybe it will find something that reduces overall volatility compared to beta, and you can leverage it to your heart's content.
Unfortunately, it won't find anything new. More unfortunately, you probably need 6-10 years of data and a walk-forward test to see if the overall method is trustworthy.
I'm curious. How does this coordination work? Do you have any notes that I can refer to?
Just tell Claude to create tmux sessions for each, it can figure out the rest.
you can have it build an execution engine that interfaces with any broker with minimal effort.
how do you have it build a "trading strategy"? it's like asking it to draw you the "best picture".
it will ask you so many questions you end up building the thing yourself.
if you do get something, given that you didn't write it and might not understand how to interpret the data it's using - how will you know whether it's trading alpha or trading risk?
This is where I’m at now with getting Claude to iterate over a problem. https://github.com/adam-s/intercept?tab=readme-ov-file#the-s...
I couldn't care less about scraping and web automation, and I will likely never use that application.
I am interested in solving a certain class of problems, and getting Claude to build a proxy API for any website is very similar to getting Claude to find alpha. That loop starts with Claude finding academic research, recreating it, doing statistical analysis, refining, the agent updating itself, and iterating.
Claude building a proxy JSON API for any website and Claude building trading strategies are the same problem with the same class of bugs.
> Next I'm going to set it loose on 263 GB database of every stock quote and options trade in the past 4 years.
Options quotes alone for US equities (or things that trade as such, like ADS/ADR) represent 40 Gbit per second during options trading hours. There are more than 60 million trades (not quotes, only trades) per day. As the stock market is open approx 250 days per year (a bit more), that's more than 60 billion actual options trades in 4 years. If we're talking about quotes for options, you can add several orders of magnitude to these numbers.
And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!?
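The arithmetic above checks out as a back-of-envelope (all numbers taken from the comment itself):

```python
# Back-of-envelope check of the trade counts quoted above
trades_per_day = 60_000_000          # "more than 60 million trades per day"
trading_days = 250                   # approx trading days per year
years = 4
total_trades = trades_per_day * trading_days * years
bytes_available = 263 * 10**9        # the claimed 263 GB

print(total_trades)                   # 60_000_000_000
print(bytes_available / total_trades) # ~4.4 bytes per trade, before any indexing
```

At roughly 4 bytes of budget per trade, 263 GB clearly can't hold full tick-level records, let alone quotes.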
> And I only mentioned options. How do you store "every stock quote and options trade in the past 4 years" in 263 GB!?
I think this would be pretty straightforward for Parquet with ZSTD compression and some smart ordering/partitioning strategies.
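The "smart ordering" part is easy to demonstrate even with plain zlib from the standard library (zstd behaves the same way, just faster): sorting a narrow-range column before compressing creates long runs the compressor collapses. The numbers here are synthetic tick-like prices, not real market data:

```python
import random
import struct
import zlib

random.seed(0)
# Synthetic "prices": 100k ints in a narrow band, like ticks around one level
prices = [random.randint(10_000, 10_100) for _ in range(100_000)]

raw = struct.pack(f"{len(prices)}i", *prices)
unsorted_size = len(zlib.compress(raw, 9))

prices.sort()  # the "smart ordering" step
sorted_raw = struct.pack(f"{len(prices)}i", *prices)
sorted_size = len(zlib.compress(sorted_raw, 9))

print(unsorted_size, sorted_size)  # sorted is dramatically smaller
```

Parquet row groups with per-column encoding amplify this further, since sorted columns get run-length and dictionary encoding before zstd even sees them.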
I see - I said "stock quote" when I meant "minute aggregates". You are correct that that data set is much larger, and at ~1.5 TB a year [0] I did not download 6 TB of data onto my laptop. Every settled trade (options or stocks) isn't that big.
[0] https://massive.com/docs/flat-files/stocks/quotes
Claude Code can't even succeed at programming. The idea of it turning into AGI is laughable.
Based
It's just abhorrently slow. It does a lot, but I always thought TUIs were fast, and the number of times it doesn't register my input is way too high.
Yet all the people OpenAI bought out recently say Codex is “the future”
The bigger question is: does Anthropic have a big enough moat to matter?
I've used/use both, and find them pretty comparable, as far as the actual model backing the tool. That wasn't the case 9 months ago, but the world changes quickly.
I don’t believe there will ever be a real moat in terms of technology, at least not for the next year or so. The arms race between the major players is still changing month to month, and they will all be able to do what their competitors were doing three months ago.
None of them are particularly sticky - you can move between them with relative ease in vscode for instance.
I think the only moat is going to be based on capacity, but even that isn't going to last long as the products move away from the cloud and closer to your end devices.
It matters to me. Claude Code is more extensible. They put a lot of effort into hooks and plugins. Codex may get the job done today, but Claude will evolve faster.
None of that matters if the model is worse. I say this as someone who uses both Claude Code and Codex all day every day — I agree with others in this thread that CC has much better UX and evolves faster, but I still use Codex more often because it's simply the better coder. Everything else is a distant second to model quality.
I guess it would be too obvious a lie to say Codex is "the present"?
Wouldn't be a very good look if they did anything else.
codex is far better in terms of performance than claude code.
Are 'project rules' a thing?
> .claude/rules/*.md Project rules
> ~/.claude/rules/*.md User rules
or is it just a way to organise files to be imported from other prompts?
I use Claude Code daily but kept forgetting commands, so I had Claude research every feature from the docs and GitHub, then generate a printable A4 landscape HTML page covering keyboard shortcuts, slash commands, workflows, skills system, memory/CLAUDE.md, MCP setup, CLI flags, and config files.
It's a single HTML file - Claude wrote it and I iterated on the layout. A daily cron job checks the changelog and updates the sheet automatically, tagging new features with a "NEW" badge.
Auto-detects Mac/Windows for the right shortcuts. Shows current Claude Code version and a dismissable changelog of recent changes at the top.
It will always be lightweight, free, no signup required: https://cc.storyfox.cz
Ctrl+P to print. Works on mobile too.
> Ctrl+P to print. Works on mobile too.
There’s something funny about this statement on a description of a key bind cheat sheet. I can’t seem to find ctrl on my phone and I think it may be cmd+p on mac.
Technically you could use a keyboard with any modern phone, so it’s not “wrong”, it’s just… extremely unlikely anyone would ever do it.
True. I had an iPhone with a broken digitizer so I just plugged a USB keyboard and mouse into it and it worked great.
Classical coreference resolution failure.
What version of Claude Code is this? I don't have the /cost command mentioned here.
Wow nice! Thank you.
`^` is the symbol for the Control key not `⌘`
FYI, in US Letter size it fits onto a perfect single page... and a blank 2nd page, at least here on macOS Firefox.
Are you OK opening up the source?
I recently switched from the CC terminal to the CC VS Code extension, and I like it better.
Nice work. Under "MCP" section, "Local" shouldn't be prepended with "~". It should just be `.claude.json (per project)`
CMD + V to paste an image is wrong.
On Mac it's the same as Windows, CTRL + V.
You use CMD + V to paste text.
Yes, this applies to some other commands as well: on Mac, CTRL+G opens the external editor, not CMD+G.
I thought it was CTRL SHIFT V. Is that Linux only? Ctrl V sends some kind of funky key combo.
Might depend on your terminal. On Konsole, I use C-v to paste images and C-S-v to paste text from my clipboard.
There are actually a lot more environment variables:
edit: removed obnoxious list in favor of the link that @thehamkercat shared below.
My favorite is IS_DEMO=1 to remove a little bit of the unnecessary welcome banner.
https://code.claude.com/docs/en/env-vars
Why do we still need cryptic commands for an AI?
Thanks for putting this together! It's really nice to have a quick reference of all the features at a glance — especially since new features are being added all the time. Saves a lot of digging through docs.
The link to the changelog on the page got me wondering what the change history looks like (as best we can see).
I asked chatgpt to chart the number of new bullet points in the CHANGELOG.md file committed by day. I did nothing to verify accuracy, but a cursory glance doesn't disagree:
https://imgur.com/a/tky9Pkz
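The counting step itself is trivial once you have the changelog text; only the per-day attribution needs git history. A toy version, assuming "-" bullets under version headings (these entries are made up, not real changelog lines):

```python
# Count changelog bullets; per-day grouping would come from `git log -p CHANGELOG.md`
sample = """\
## 2.0.1
- Added a new command
- Fixed paste handling
## 2.0.0
- Plugin support
"""
bullets = [line for line in sample.splitlines() if line.lstrip().startswith("- ")]
print(len(bullets))  # 3
```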
Undo (typing):
Applies to the line editor outside of CC as well.

Proposition: Every power user feature added lowers Anthropic’s market cap $1B and OpenAI’s $10B.
dangerously skip permission is all u need
can you add a dark mode? its so bright.
personally I'm a fan of "ultrathink squared"
I don't think ultrathink works anymore.
that is quite helpful, thanks!
Very useful :)
Wait, why do we need chat sheets for this like it's (gasp!) a programming language, tool or IDE?
It's almost as if the thing is not intelligent at all and just another abstraction on top of what we already had.
This is your new programming language in 2026.
C is "just another abstraction on top of what we already had" (Assembly). Doesn't mean it's not useful
Ctrl + S - Stash
Just ask it, this is not needed
Claude is actually hilariously bad at knowing about itself. But if you have the secret knowledge that there is a skill on how to use Claude baked into Claude code you can invoke it. Then it’s really pretty decent
needs a literal /dark mode
If only there was some kind of tool that could answer helpful questions about technology instead of needing a cheat sheet.
The fact this needs to exist seems like a UX red flag.
It doesn't need to exist; it's all in Claude's help, and easily discoverable.
Similar to prompting hacks to produce better results. If the machine we built to transform dumb input into an answer needs special structuring around that input, then it's not doing a good job of taking dumb input.
Reminds me of Vercel's Rauch talking about his aggressive 'any UX mistake is our fault, never the user's' model for evaluating UX. (It is/was Guillermo who says that, right?)
This should be all of Information Technology’s take. Your computers get hacked - IT’s fault. Users complain about how hard your software is or that it breaks all the time - IT’s fault.
The fact that users deal with almost everything being objectively not very good, if not outright bad, is a testament to people adapting to bad circumstances more than anything.
> Ctrl-F "help"
> Ctrl-F "h"
> 0 results found
Interesting set of shortcuts and slash commands.
This. TUIs are not the correct paradigm for agentic operations. They are too constrained, and too linear.
You have a sad narrow point of view about what UX can be.
Enlighten me?
Is something updated daily a good target to be printable?
Yeah, I think it is. It's printable if you want a hard copy, and it's up to you when to check for a new version. Since it's auto-updated, (ideally) no matter when you visit the site you'll get the most up-to-date version as of that day. The issues (which I don't think this suffers from) would be if formatting it nicely for printing made it less accurate, or if updating it regularly made it worse for printing - these feel like two problems you can generally solve with one fix; they aren't opposed.
If you align your printer and desk just right, you'll have the new cheat sheet sliding onto your desk before Claude's even done updating itself.
Ask Claude to set up a cron job to print it daily
Just use Claude's help: if you want to know keybinds, just do /keybinds (which is not in the cheat sheet).
ugh we were promised a brave new world and still have the same crap printers
Just buy a Mac mini, set up an openclaw instance to track changes on this and call your printer, and also order new paper when it runs out :)
This just exposes why UIs like Codex, Cursor, T3 Code, Conductor, Intent, etc. are necessary.
This is a bit intense.
so is the Unix command line ...
It’s not as if you need to know every keystroke and command to use the tool, and config files and options are a thing in GUIs too. There’s lots of inline help and tips in the CLI interface, and you can learn new features as you go.