I'd love to see them point at a target that's not a decades-old C/C++ codebase. Of the targets, only the browsers should be considered hardened, and their biggest lever is sandboxing, which takes a chain of exploits to bypass - we're seeing that LLMs are fast at discovering bugs, which means they can chain more easily. But bug density in these codebases is known to be extremely high - especially in the underlying operating systems, which are always the weak link for sandbox escapes.
I'd love to see them go for a wasm interpreter escape, or a Firecracker escape, etc. They say that these aren't just "stack-smashing" but it's not like heap spray is a novel technique lol
> It autonomously obtained local privilege escalation exploits on Linux and other operating systems by exploiting subtle race conditions and KASLR-bypasses.
I think this, for example, sounds more impressive than it is. KASLR has a terrible track record at preventing LPE, and LPE on Linux is incredibly common. Has anything changed here? I don't pay much attention, but KASLR was considered basically useless for stopping LPE a few years ago.
> Because these codebases are so frequently audited, almost all trivial bugs have been found and patched. What’s left is, almost by definition, the kind of bug that is challenging to find. This makes finding these bugs a good test of capabilities.
This just isn't true. Humans find new bugs in all of this software constantly.
It's all very impressive that an agent can do this stuff, to be clear, but I guess I see this as an obvious implication of "agents can explore program states very well".
edit: To be clear, I stopped about 30% of the way through. Take that as you will.
The majority of vulnerabilities are in newly committed lines of code. This has been shown again and again [1] [2].
From a marketing standpoint, Anthropic is showing that they're able to direct 'compute' to find vulnerabilities where spending human time isn't efficient or cost-effective.
Project Glasswing is attempting to pay down as many of these old vulnerabilities as possible now, so that the low-hanging fruit has already been picked by the time these capabilities are everywhere.
The next generation of Mythos findings and real-world exploits is going to be in newly committed code... (a rough way to sanity-check the "new code" claim on a repo you care about is sketched after the references).
[1]: https://dl.acm.org/doi/epdf/10.1145/2635868.2635880
[2]: https://arxiv.org/abs/2601.22196
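If you want to sanity-check the "new code" claim yourself, something like this rough sketch works (the heuristics and names are mine, not from the papers - it blames whole files touched by a fix rather than the exact lines, which is cruder than the cited methodology):

    # How old were the lines in files touched by a security fix when the fix
    # landed? Pass a commit you already consider a security fix (e.g. one
    # mentioning a CVE). Blaming whole files over-approximates the papers'
    # line-level analysis, but it's enough to eyeball the trend.
    import subprocess

    def git(repo, *args):
        return subprocess.run(["git", "-C", repo, *args],
                              capture_output=True, text=True, check=True).stdout

    def line_ages_at_fix(repo, fix_commit):
        # Committer timestamp of the fix itself
        fix_time = int(git(repo, "show", "-s", "--format=%ct", fix_commit).strip())
        ages_days = []
        files = git(repo, "diff", "--name-only", f"{fix_commit}^", fix_commit).splitlines()
        for path in files:
            try:
                # Blame the pre-fix version of each touched file
                blame = git(repo, "blame", "--line-porcelain", f"{fix_commit}^", "--", path)
            except subprocess.CalledProcessError:
                continue  # file didn't exist before the fix
            for line in blame.splitlines():
                if line.startswith("committer-time "):
                    ages_days.append((fix_time - int(line.split()[1])) / 86400)
        return ages_days

Run it over a set of CVE-fix commits and compare the median line age against the age of the repo.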
> Mythos Preview identified a memory-corruption vulnerability in a production memory-safe VMM. This vulnerability has not been patched, so we neither name the project nor discuss details of the exploit.
Good morning Sir.
> Has anything changed here? I don't pay much attention but KASLR was considered basically useless for preventing LPE a few years ago.
No, it's still like this. Bonus points: there are always free KASLR leaks (prefetch side-channels).
But then, this whole thing is just... I don't have a word for it. Randomly read a paragraph from the post and it's like, what?
The name made me think about Tales of Symphonia :)
A very good outcome for AI safety would be if, when improved models get released, malicious actors used them to break society in very visible ways. Looks like we're getting close to that world.
It would certainly be good news for cybersecurity employment!
Gives me Fight Club vibes.
This is becoming a bit scary. I almost hope we'll reach some kind of plateau in LLM intelligence soon.
A plateau is unlikely, at least for cybersecurity. RL scales well here and is replicable outside of Anthropic (rewards are verifiable, so setting up the training environment doesn't require that much cleverness).
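To make the "verifiable rewards" point concrete: the grader never has to understand the exploit, it only has to check an end state. A toy sketch - every path, image, and script name below is made up:

    # Toy verifiable reward: run the submitted proof-of-concept against a
    # prebuilt target in an isolated container, and reward only if a planted
    # flag was read or a sanitizer tripped. The image and harness names are
    # hypothetical placeholders.
    import subprocess

    def reward(poc_path: str, timeout_s: int = 300) -> float:
        try:
            proc = subprocess.run(
                ["docker", "run", "--rm", "--network=none",
                 "-v", f"{poc_path}:/poc:ro",
                 "vuln-target-image",             # hypothetical target image
                 "/harness/run_poc.sh", "/poc"],  # hypothetical harness script
                capture_output=True, text=True, timeout=timeout_s)
        except subprocess.TimeoutExpired:
            return 0.0
        out = proc.stdout + proc.stderr
        got_flag = "FLAG{" in out            # PoC read the planted secret
        crashed = "AddressSanitizer" in out  # or corrupted memory under ASan
        return 1.0 if (got_flag or crashed) else 0.0

The grading really is that mechanical; the cleverness goes into curating targets, not into the reward.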
The post also points out that the model wasn't trained specifically on cybersecurity, and that it was just a side-effect – so I think there's still a lot of headroom.
It's scary, but there's also some room for cautious non-pessimism. More people than ever can now cause billions of dollars of damage in attacks [1], but the same tools can be put to defensive use. For that reason, I'm more optimistic about mitigations in security than in other risk areas like biosecurity.
[1]: https://www.noahlebovic.com/testing-an-autonomous-hacker/
On a topic like cybersecurity, we never win by not looking: you need top-of-the-line knowledge of how to break a system in order to protect it. We already have that dilemma with human experts: the same government-sponsored unit that tells you to update your encryption can hold on to what it knows and exploit it at its leisure.
Given that it's absolutely impossible to stop people not aligned with us (for any definition of "us") from doing AI research, the most reasonable way forward is to dedicate compute resources to staying at the frontier, and to automatically send reasonable disclosures to major projects. That could in itself be a pretty reasonable product: just like you pay for dubious security scans and publish the fact that you're running them, an LLM company could offer actually expensive security reviews with a preview model, and charge accordingly.
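Roughly the shape I mean - everything interesting (the model review, the severity scoring, the contact lookup) is assumed to already exist, and every name below is made up:

    # Sketch of the "scan, then automatically disclose" loop: take the
    # findings from a (hypothetical) model review, keep the high-severity
    # ones, and mail the project's published security contact.
    import json, smtplib
    from email.message import EmailMessage

    SEVERITY_THRESHOLD = 7.0  # only disclose high-severity findings

    def draft_disclosure(project: dict, finding: dict) -> EmailMessage:
        msg = EmailMessage()
        msg["To"] = project["security_email"]  # e.g. taken from SECURITY.md / security.txt
        msg["From"] = "disclosures@example.com"
        msg["Subject"] = f"[coordinated disclosure] possible {finding['kind']} in {project['name']}"
        msg.set_content(
            f"An automated review flagged a possible {finding['kind']} "
            f"(severity {finding['severity']}/10) in {finding['file']}.\n\n"
            f"{finding['summary']}\n\n"
            "Details will not be published before a 90-day disclosure window ends.")
        return msg

    def disclose(findings_json: str, project: dict, smtp_host: str) -> None:
        findings = json.loads(findings_json)  # output of the model review
        with smtplib.SMTP(smtp_host) as smtp:
            for finding in findings:
                if finding["severity"] >= SEVERITY_THRESHOLD:
                    smtp.send_message(draft_disclosure(project, finding))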
We need to promote alignment and other ethics benchmarks; we can't change what we don't measure. I don't even know any off the top of my head.
If we don't innovate, someone else will. This is the very nature of being a human being. We summit mountains, regardless of the danger or challenge.