What problem is this trying to solve exactly?
If a computer (or “agent” in modern terms) wants to order you a pizza, it can technically already do so.
The reason computers currently can’t order us pizza or book us flights isn’t a technical limitation; it’s that the pizza place doesn’t want to just sell you a pizza and the airline doesn’t want to just sell you a flight. Instead they have an entire payroll of people whose salaries are derived from wasting human time, more commonly known as “engagement”. In fact those people get paid regardless of whether you actually buy anything, so their incentive is often to waste more of your time even if it means trading off an actual purchase.
The “malicious” uses of AI that this very article refers to are mostly just that: computers/AI agents acting on behalf of humans to sidestep the “wasting human time” issue. The fact that agents may issue more requests than a human user is because information is intentionally not presented to them in a concise, structured manner. If Domino’s or Pizza Hut wanted to sell just pizzas tomorrow, they could trivially publish an OpenAPI spec for agents to consume, or even collaborate on an HPOP protocol (Hypertext Pizza Ordering Protocol) to which HPOP clients can connect (no LLMs even needed). But they don’t, because wasting human time is the whole point.
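To make the idea concrete, here is a purely fictional sketch of what a structured, agent-friendly order could look like; the endpoint, field names, and the HPOP protocol itself are all invented for illustration.

```python
# Fictional HPOP-style order: one structured request instead of a
# click-through funnel. Endpoint and schema are made up.
import json
import urllib.request

order = {
    "size": "large",
    "toppings": ["pepperoni"],
    "delivery_address": "123 Example St",
    "payment": {"kind": "card_token", "token": "..."},  # placeholder
}
req = urllib.request.Request(
    "https://pizza.example/hpop/v1/orders",  # hypothetical endpoint
    data=json.dumps(order).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # commented out: the service is fictional
```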
So why would any of these companies suddenly opt into this system? Companies that are after actual money and don’t profit from wasting human time are already set and don’t have to do anything (if an AI agent is already throwing Bitcoin or valid credit card details at you to buy your pizzas, you are fine), while those that do profit from it have zero incentive to opt in, since they’d be trading “engagement” for old-school, boring money (who needs that nowadays, right?).
I understood this as a tool to fight bot-net scraping. I imagined that this would add accountability to clients for how many requests they make.
I know that phrasing it like "large company cloudflare wants to increase internet accountability" will make many people uncomfortable. I think caution is good here. However, I also think that the internet has a real accountability problem that deserves attention. I think the accountability problem is so bad that some solution is going to end up getting implemented. That might mean that the most pro-freedom approach is to help design the solution, rather than avoiding the conversation.
Bad ideas:
You're getting lots of bot requests, so you start demanding clients log in to view your blog. It's anti-user, anti-privacy, very annoying, readership drops, everyone is sad.
Instead, what if your browser included your government id in every request automatically? Anti-user, anti-privacy, no browser would implement it.
This idea:
But ARC is a middle ground. Subsets of the internet band together (in this case, via cloudflare) and strike a compromise with users. Individual users need to register with cloudflare, and then cloudflare gives you a million tokens per month to request websites. Or some scheme like this. I assume that it would be sufficiently pro-social that the IETF and browsers all agree to it and it's transparent & completely privacy-respecting to normal users.
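A minimal sketch of what that bookkeeping might look like on the attester's side; the names, the monthly window, and the allowance are invented, not Cloudflare's actual design (and the real point of ARC is that the spent tokens can't be linked back to the account):

```python
# Hypothetical attester-side accounting for an "N tokens per month" scheme.
import time

MONTHLY_ALLOWANCE = 1_000_000
PERIOD = 30 * 24 * 3600  # crude 30-day window

class Attester:
    def __init__(self):
        self.balances: dict[str, int] = {}  # account -> tokens left this period
        self.period_start = time.time()

    def _roll_period(self):
        if time.time() - self.period_start > PERIOD:
            self.balances.clear()            # everyone's allowance resets
            self.period_start = time.time()

    def withdraw(self, account: str, n: int) -> bool:
        """Logged-in user trades identity-bound credit for n bearer tokens."""
        self._roll_period()
        left = self.balances.setdefault(account, MONTHLY_ALLOWANCE)
        if n > left:
            return False
        self.balances[account] = left - n
        return True  # in the real scheme, n unlinkable tokens are issued here
```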
We already sort of have some accountability: it's "proof of bandwidth" and "proof of multiple unique ip addresses", but that's not well tuned. In fact, IP addresses destroy privacy for most people, while doing very little to stop bot-nets.
> Individual users need to register with cloudflare, and then cloudflare gives you a million tokens per month to request websites. Or some scheme like this.
This seems like it would just cause the tokens to become a commodity.
The premise is that you're giving out enough for the usage of the large majority of people, but how many do you give out? If you give out enough for the 95th percentile of usage then 5% of people -- i.e. hundreds of millions of people in the world -- won't have enough for their normal usage. Which is the first problem.
Meanwhile 95% of people would then have more tokens than they need, and the tokens would be scarce, so then they would sell the ones they're not using. Which is the second problem. The people who are the most strapped for cash sell all their tokens for a couple bucks but then get locked out of the internet.
The third problem is that the AI companies would be the ones buying them, and since the large majority of people would have more than they need, they wouldn't be that expensive, and then that wouldn't prevent scraping. Unless you turn the scarcity way up and make the first and second problems really bad.
Many of us here are old enough to remember the promise of web 2.0, where "APIs will talk with APIs making everything super fast and automated". Then businesses realized that they do not in fact just "sell a product" but have this entire flywheel and side hustle they depend on to extract maximum value from the user.
Oh, and it also turns out that if the data you share is easily collected, it can be analyzed and tracked to prove your crimes, like price gouging, IP infringement, and other unlawful acts. That's not good for business either!
> promise of web 2.0 where "APIs will talk with APIs making everything super fast and automated"
Wait I thought web 2.0 was DHTML / client-side scripting and XmlHttpRequest?
Web 2.0 was sites not having finished loading when you thought they had, buttons having a 1 in 20 chance of doing nothing when you click them, and the advent of "oops, something went wrong" being considered an acceptable error message.
Maybe the company doesn't want to spend the effort to develop an API. They can throw some Cloudflare solution in front and call it done.
Also I wonder if credit card chargebacks are a concern. They might worry that allowing a single user to make a million orders would be a problem, so they might want to rate limit users.
One problem with HPOP is the chicken-egg adoption problem: There is little reason to implement HPOP because nobody will have a client for it, and little reason to build a client because nobody has implemented HPOP.
Part of this is the friction required to implement a client for a bespoke API that only one vendor offers, and the even bigger friction of building a standard.
AI and MCP servers might be able to fix this. In turn, companies will have a motivation to offer AI-compatible interfaces because if the only way to order a pizza is through an engagement farm, the AI agent is just going to order the pizza somewhere else.
If big pizza franchises wanted HPOP they could just make it the API by which their apps talk to their backend. New cross-pizza-place apps and tools would pop up within a month.
Really, they could each do their own bespoke thing as long as they didn't go out of their way to shut out other implementers.
Instant messaging used to work like this, until everyone wanted to own their customer bases and lock them in, for the time-wasting aspect.
You can see it another way: everyone wants to be the one that controls access to services; it is what search and news aggregators have in common.
Even if pizza hut wanted people to order pizza the most efficiently with no time wasted it would still want it to happen on their own platforms.
Because if people went to all-pizzas.com for their pizza needs, then each restaurant and chain would depend on it not to screw them over.
What are you on about? I genuinely don't understand your point. Of course they make money by selling pizzas, not something else. And they've figured out that the way to maximize that is by having a brand and a presence and owning the customer relationship, thus people buy from them.
If they end up as just a pizza-api they have no moat and are trivially replaced by another api and bakery, and will make less money.
Making money by selling pizzas? Maybe. Big chains make money by selling high-profit items like drinks or fries, or getting you to upsize. And a whole sales process and A/B-tested menus and marketing encourage you to choose the profitable options. They lose all that if an agent just makes an order: 'large pepperoni kthx bye'. Probably fantastic from a consumer point of view, but lots of businesses are going to hate it.
They're talking about the sales experience that a pizza API would completely sidestep.
The big national pizza chains don't offer good prices on pizza. They offer bad prices on pizza, and then offer 'deals' that bring prices back down. These deals, generally, exist to steer customers towards buying more or buying higher-margin items (bread sticks, soda, etc).
If you could order pizza through an API, they wouldn't get the chance to upsell you. If it worked for multiple pizza places, it would advantage places who offer better value with their list prices.
The original comment may be phrased clumsily. I believe the idea was that they do not want to make money by competing on a pizza market. That's a contradiction as old as the modern economy: it relies on the free market in principle, yet evolves naturally toward monopolies/walled gardens/fiefdoms.
Another contradiction at play here is that of innovation vs. standardisation. Indeed, you could argue that Domino's website is also a place where they can innovate (bring your own recipes! delivery by drone! pay with tokens! whatever!), whereas a pizza protocol would slow down or prevent some innovation. And that LLMs are used to circumvent and therefore standardize the process of ordering a pizza (like the user-maintained APIs that used to query various incompatible bank websites; these days they probably use LLMs as well).
This is something I’ve been saying for a while[0,1]:
Services need the ability to obtain an identifier that:
- Belongs to exactly one real person.
- That a person cannot own more than one of.
- That is unique per-service.
- That cannot be tied to a real-world identity.
- That can be used by the person to optionally disclose attributes like whether they are an adult or not.
Services generally don't care about knowing your exact identity. But being able to ban a person and not have them simply register a new account, and being able to stop people from registering thousands of accounts, would go a long way towards wiping out inauthentic and abusive behaviour.
[0] https://news.ycombinator.com/item?id=41709792
[1] https://news.ycombinator.com/item?id=44378709
The ability to “reset” your identity is the underlying hole that enables a vast amount of abuse. It’s possible to have persistent, pseudonymous access to the Internet without disclosing real-world identity. Being able to permanently ban abusers from a service would have a hugely positive effect on the Internet.
> - That a person cannot own more than one of.
Exactly one seems hard to implement (some kind of global registry?). I think relaxing this requirement slightly, such that a user could for instance get a small number of different identities by going to different attestors, would be easier to implement while also making for a better balance. That is, I don't want users to be able to trivially make thousands of accounts, but I also don't want websites to be able to entirely prevent privacy throwaway accounts, a false ban from Google's services to be bound to your soul for life, people to be permanently locked out of anything digital because their identifier was compromised by malware and can't be "reset", and so on.
https://en.wikipedia.org/wiki/Sybil_attack
This is generally considered an unsolvable problem when trying to fulfill all of these requirements (cf. sibling post). Most subsets are easy, but not the full list.
An issue, also in crypto, is that people will get their "identifiers" stolen. How do you prevent stealing, or recover stolen identifiers, without compromising anonymity?
Another issue is that people will hire (or enslave) others to effectively lend their identifiers, and it's very hard to distinguish between someone "lending" their identifier vs using it for themselves.
I've been thinking about hierarchical management. Roughly, your identifier is managed by your town, which has its own identifier managed by your state, which has its own identifier managed by your government, which has its own identifier managed by a bloc of governments, which has its own identifier managed by an international organization. When you interact with a foreign website and it requests your identity, you forward the request to your town with your personal identifier, your town forwards the request to your state with the town's identifier, and so on. Town "management" means that towns generate, assign, and revoke stolen personal identifiers, and authenticate requests; state "management" means that states generate, assign, and revoke town identifiers, and authenticate requests (not knowing who in the town sent the request); etc.
The idea is to prevent a much more powerful organization, like a state, from persecuting a much less powerful one, like an individual. In the hierarchical system, your town can persecute you: they can refuse to give you an identifier, give yours to someone else, track what sites you visit, etc. But then, especially if you can convince other town members (which ideally happens if you're unjustly persecuted), it's easier for you to confront the town and convince them to change, than it is to confront and convince a large government. Likewise, states can persecute entire towns, but an entire town is better at resisting than an individual, especially if that town allies with other towns. And governments can persecute entire states, and blocs can persecute entire governments, and the international organization can persecute entire blocs, but not the layer below.
In practice, the hierarchy probably needs many more layers; today's "towns" are sometimes big cities, states are much larger than towns, governments are much more powerful than states, etc., so there must be layers in between for the layer below to effectively challenge the layer above. Assigning layers may be particularly hard because it requires balance: enabling most justified persecutions, e.g. a bloc punishing a government for not taking care of its scam centers, while preventing most unjustified ones. And there will inevitably be towns, states, governments, etc. where the majority of citizens are "unjust", and the layer above can only punish them as a whole. So yes, hierarchical management still has many flaws, but is there a better alternative?
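A rough sketch of the forwarding step described above, with invented names and HMAC keys standing in for whatever real authentication each layer would use. Each layer verifies the requester one level down, then substitutes its own identifier before passing the request upward, so the destination only ever sees the outermost layer:

```python
# Toy version of hierarchical identity forwarding (all names hypothetical).
import hashlib
import hmac

def sign(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def forward(request: bytes,
            inner_id: bytes, inner_sig: bytes, inner_key: bytes,
            own_id: bytes, own_key: bytes) -> tuple[bytes, bytes]:
    # Town verifies person, state verifies town, and so on.
    if not hmac.compare_digest(inner_sig, sign(inner_key, request + inner_id)):
        raise PermissionError("unknown or revoked requester")
    # Replace the inner identity with our own before passing upward;
    # the layer above learns only that *some* member of ours sent it.
    return own_id, sign(own_key, request + own_id)
```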
> - Belongs to exactly one real person.
> - That a person cannot own more than one of.
These are mutually exclusive. Especially if you add 'cannot be tied to a real-world identity'.
The way that this is usually implemented is with some sort of HSM (eg. a smart card, like on an e-id) that holds a private key shared with hundreds (or more) of other HSMs. The HSM part ensures the key can't be copied out to forge infinite amounts of other identities, and the shared private key ensures it's vaguely anonymous.
> - Belongs to exactly one real person.
I don't see how you can prevent multiple people sharing access to one HSM. Also, if the key is the same in hundreds of HSMs, this isn't fulfilled to begin with? Is this assuming the HSM holds multiple keys?
btw: "usually". Can you cite an implementation?
>btw: "usually". Can you cite an implementation?
u2f has it: https://security.stackexchange.com/questions/224692/how-does...
>I don't see how you can prevent multiple people sharing access to one HSM.
Obviously that's out of scope unless the HSM has a retina scanner or whatever, but even then there's nothing preventing someone from consensually using their cousin's government-issued id (ie. HSM) to access an 18+ site.
> Also, if the key is the same in hundreds of HSMs, this isn't fulfilled to begin with? Is this assuming the HSM holds multiple keys?
The idea is that the HSM will sign arbitrary proofs to give to relying parties. The relying parties can validate the key used to sign the proof is valid through some sort of certificate chain that is ultimately rooted at some government CA. However because the key is shared among hundreds/thousands/tens of thousands of HSMs/ids, it's impossible to tie that to a specific person/id/HSM.
> Is this assuming the HSM holds multiple keys?
Yeah, you'd need a separate device-specific key to sign/generate an identifier that's unique per-service. To summarize:
each HSM contains two keys:
1. K1: device-specific key, specific to the given HSM
2. K2: shared across some large number of HSMs
Both keys are resistant to extraction from the HSM, and the HSM will only use them for signing.
To authenticate to a website (relying party):
1. HSM generates id, using something like hmac(site domain name, K1)
2. HSM generates a signed blob containing the above id and whatever additional attributes the user wants to disclose (eg. their name or whether they're 18+), plus a timestamp/anti-replay token (or similar), signs it with K2, and returns it to the site. The HSM also returns a certificate certifying that K2 is issued by some national government.
The site can verify the response comes from a genuine HSM because the certificate chains to some national government's CA. The site can also be sure that users can't create multiple accounts, because each HSM will generate the same id given the same site. However two sites can't correlate identities because the id changes depending on the site, and the signing key/certificate is shared among a large number of users. Governments can still theoretically deanonymize users if they retain K1 and work with site operators.
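A minimal sketch of that two-key flow, with invented names; here an HMAC under K2 stands in for the real batch signature (which would be an asymmetric signature the site verifies against a government CA chain), and nothing runs in tamper-resistant hardware:

```python
# Toy two-key HSM: per-site pseudonym from K1, attestation "signature" from K2.
import hashlib
import hmac
import json

class ToyHSM:
    def __init__(self, k1: bytes, k2: bytes):
        self.k1 = k1  # device-specific key
        self.k2 = k2  # key shared across a large batch of HSMs

    def authenticate(self, site: str, attrs: dict, nonce: str) -> dict:
        # Same device + same site -> same id, so a site can ban you once;
        # different sites get different ids, so they can't correlate you.
        site_id = hmac.new(self.k1, site.encode(), hashlib.sha256).hexdigest()
        blob = json.dumps({"id": site_id, "attrs": attrs, "nonce": nonce})
        # Shared-key MAC as a stand-in for the real K2 signature + certificate.
        sig = hmac.new(self.k2, blob.encode(), hashlib.sha256).hexdigest()
        return {"blob": blob, "sig": sig}

hsm = ToyHSM(k1=b"device-unique-secret", k2=b"shared-batch-secret")
print(hsm.authenticate("pizza.example", {"over_18": True}, nonce="abc123"))
```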
A lot of folks give it flak for being incredibly dystopian, but this: https://world.org/orb
I first thought this was just a crypto play with 1 wallet per real person (wasn't a huge fan), but with the proliferation of AI, it makes sense we'll eventually need safeguards to ensure a user's humanity, ideally without any other identifiers needed.
> A lot of folks give it flak for being incredibly dystopian, but this: https://world.org/orb
The flak should be because it's from Sam Altman. A billionaire tech bro giving us both the disease and the cure, and profiting massively along the way, is what's truly dystopian.
> In order to use Privacy Pass for per-user rate-limiting, it's necessary to limit the number of tokens issued to each user (e.g., 100 tokens per user per hour). To rate limit an AI agent, this role would be fulfilled by the AI platform. To obtain tokens, the user would log in with the platform, and said platform would allow the user to get tokens from the issuer. The AI platform fulfills the attester role in Privacy Pass parlance.
If it's up to the AI platform to issue limited tokens to users, and it's also the AI platform making the web requests, I'm not understanding the purpose of the cryptography/tokens. Couldn't the platform already limit a user to 100 web requests per hour just with an internal counter?
It seems like the tl;dr is:
Cloudflare is helping to develop & eager to implement an open protocol called ARC (Anonymous Rate-Limited Credentials)
What is ARC? You can read the proposal here: https://www.ietf.org/archive/id/draft-yun-cfrg-arc-01.html#n...
But my summary is:
1. You convince a server that you deserve to have 100 tokens (probably by presenting some non-anonymous credentials)
2. You handshake with the server and walk away with 100 untraceable tokens
3. At any time, you can present the server with a token. The server only knows:
a. The token is valid
b. The token has not been previously used
Other details (disclaimer, I am not a cryptographer):
- The server has a private + public key pair for ARC, which is how it knows that it was the one to issue the tokens. It's also how you know that your tokens are in the same pool as everyone else's tokens.
- It seems like there's an option for your 100 tokens to all be 'branded' with some public information. I assume this would be information like "Expires June 2026" or "Token Value: 1 USD", not "User ID 209385"
- The client actually ends up with a key which will generate the 100 tokens in sequence.
- Obviously the number 100 is configurable.
- It seems like there were already schemes to do this, but providing only one token at a time (RFC 9497, RFC 9474); I'm not sure how popular those were.
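Here is a loose sketch of just the redemption bookkeeping implied by step 3, with invented names; the real ARC design uses blind issuance and zero-knowledge presentation rather than a plain MAC, so this only illustrates the two checks the server makes (validity and double-spend):

```python
# Toy token verifier: checks "is it ours?" and "was it already spent?".
import hashlib
import hmac
import os

class TokenVerifier:
    def __init__(self, issuer_secret: bytes):
        self.issuer_secret = issuer_secret
        self.spent: set[bytes] = set()  # log of redeemed tokens

    def issue(self) -> bytes:
        # Stand-in for blind issuance (in ARC the issuer can't link the
        # token it signs to the one later redeemed).
        nonce = os.urandom(16)
        tag = hmac.new(self.issuer_secret, nonce, hashlib.sha256).digest()
        return nonce + tag

    def redeem(self, token: bytes) -> bool:
        nonce, tag = token[:16], token[16:]
        good = hmac.new(self.issuer_secret, nonce, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, good):
            return False          # check a: the token is valid
        if token in self.spent:
            return False          # check b: not previously used
        self.spent.add(token)
        return True

v = TokenVerifier(b"issuer-secret")
t = v.issue()
assert v.redeem(t) and not v.redeem(t)  # second redemption is rejected
```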
Thanks. I'm glad someone made it to the end.
Could this help solve the pesky problem of anonymous age attestation (i.e. "I'm acting on behalf of someone who's over 18") by having some attestor that only issues time-limited tokens (you're not allowed to use more than X in any given span of time, so users are significantly deterred from sharing tokens with others) to persons who are verifiably over 18?
This article was published on Oct 30th, but it refers to IETF draft-yun-cfrg-arc-01, a draft that was superseded by draft-yun-privacypass-crypto-arc-00 <https://datatracker.ietf.org/doc/draft-yun-privacypass-crypt...> on Oct 20th, i.e. more than a week before the article was published.
This blog post is offensive to me on three levels:
1. It is clearly not written with a desire to actually convey information in a concise, helpful way.
2. It is riddled with advertisements for Cloudflare services which bear absolutely no relevance to the topic at hand.
3. The actual point of the article (anonymous rate limiting tokens) is pointlessly obscured by an irrelevant use case (AI agents for some reason)
Of course, the latter two points seem to be heavily related to the first.
This is barely any better -- in terms of respect for the reader's intelligence and savviness -- than those "Apple just gave ten million users a reason to THROW AWAY THEIR IPHONES" trash articles. Just slop meant to get you to click on links to Cloudflare services and vaguely associate Cloudflare with the "Agentic AI future", with no actual intention whatsoever of creating a quality article.
Cloudflare already does rate limiting. They explain in the article why their existing rate limiting solutions (IP, fingerprinting) don't work well with AI. That explains why they need a new solution.
Well, not really. They've explained why their existing solutions don't work well for proxies / gateways, which cloud-based AI agents are an example of.
these Cloudflare blog posts are so verbose it's insane....
perhaps they need rate limiting
Probably not the best example... Without credit card involvement, the case is much, much stronger.
There are a few comments asking for further info on the motivation.
I'll explain my understanding.
Consider what problem CAPTCHA aims to solve (abuse) and how that's ineffective in an age of AI agents: it cannot distinguish "bot that is trying to buy a pizza" vs "bot that is trying to spider my site".
I don't understand Cloudflare's solution enough to explain that part.
I'm glad to see research here, because if we don't have innovative solutions, we might end up with microtransactions for browsing.
They appear to just want to rate limit by having you go through a hassle to set up an account with some service and then be given 100 tokens per hour, which you can use to conduct 100 rate-limited actions per hour.
Think SMS verification but with cryptographic sorcery to make it private.
Depending on the level of hassle, the service may even use SMS verification at setup. SMS verification is typically easy to acquire for as little as a few cents, but if the goal is to prevent millions of rate-limited requests, a few cents can add up.
I don’t understand the problem they are trying to solve, and this article is long, so apologies if they actually get around to explaining.
I have a credit card, and an agent. I want a pizza.
These credentials do what, exactly? Prevent the pizza place from taking my money? Allow me to order anonymously so they don’t know where to deliver it?
Also, they are security professionals, so when they say anonymous, they don’t mean pseudonymous, so my agent can produce an unlimited number of identities, right? How do they keep the website from correlating time and IP addresses to link my anonymous requests to a pseudonym?
My cynical take is that the pizzeria has to pay cloudflare a few pennies to process the transaction. What am I missing?
Although this is clearly the equivalent of Cloudflare propaganda, they are trying to address the issue of connecting a user and an agent in a way that respects the user's privacy.
They effectively use credentials and cryptography to link the two together in a zero-knowledge type of way. Real issue, although clearly no one is dying for this yet.
Real solution too, but it is equally naive to think blind credentials and Chaumian signing address the root issue. Something like Apple will step in to cast a liability shield over all parties and just continue to trap users into the Apple data ecosystem.
The right way to do this is to give the user sovereignty over their identity and usage such that platforms cater to users rather than the middle-men in-between. Harder than what Cloudflare probably wants to truly solve for.
Still, cool article even if a bit lengthy.
But, why do we want to tie the agent to the user’s identity?
The interface the user wants is “I pay for and obtain pizza”. The interface the pizzeria wants is “I obtain payment via credit card, and send a pizza to some physical location”.
It doesn’t matter who the agent that orders the pizza is acting on behalf of, or if there is an agent at all, or if some third party indexed the pizzeria's menu and then some anarcho-crypto syndicate based in the White House decided to run an auction and buy this particular pizza for this particular person.
If a malicious user is attacking a site via an agent, the current solution is to block the agent and everyone else using that agent, because the valid requests are indistinguishable from the malicious requests. If the agent passes on a token identifying the users, you can just block agent requests using the malicious user's token.
I think the idea would be that you ask your credit card to convert $10 into 10 untraceable tokens, and then spend them one at a time. You do a handshake dance with the credit card company so you walk away with tokens that only you know, and you have assurance that the tokens are in the same pool as everyone else who asked for untraceable tokens from that credit card company.
Then you can go and spend them freely. The credit card company (and maybe even third parties?) can verify that the tokens are valid, but they can't associate them with a user. Assuming that the credit card company keeps a log, they can also verify that a token has never been used before.
In some sense, it's a lightweight and anonymous blockchain.
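The "handshake dance" here is essentially Chaum's blind signature. A toy version with RSA, using demo-sized Mersenne primes (vastly too small for real security; modern deployments use RFC 9474-style blind RSA or the algebraic schemes in the ARC draft):

```python
# Chaumian blind signature: issuer signs a token it never sees, so the
# signed token can't be linked back to the issuance session.
import hashlib
from math import gcd

p = 2147483647            # 2^31 - 1 (prime; demo only)
q = 2305843009213693951   # 2^61 - 1 (prime; demo only)
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # issuer's private exponent

def h(msg: bytes) -> int:
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

# Client: blind the token with a secret random factor r.
msg = b"one anonymous request credit"
r = 123456789
assert gcd(r, n) == 1
blinded = (h(msg) * pow(r, e, n)) % n

# Issuer: signs the blinded value without learning h(msg).
blind_sig = pow(blinded, d, n)

# Client: unblind; the result is a normal RSA signature on h(msg).
sig = (blind_sig * pow(r, -1, n)) % n
assert pow(sig, e, n) == h(msg)   # anyone can verify with the public key
```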
The attempt appears to be to rate limit. The acquisition of access tokens is meant to be rate limited.
Similar logic to SMS verification, but actually private.
I don't get this…
Wild future that Cloudflare is making their own crypto to shill.
CF = no thanks.
They have the nickname Crimeflare for a reason. They allow hundreds of thousands of criminals to use their services maliciously, and it's a huge hassle to report them, only to be met with their stance of "we are only routing traffic, not hosting it"; they won't remove even the most blatant phishing and malicious pages.
https://blog.cloudflare.com/how-cloudflare-is-using-automati...
Are you confusing their comments about (paraphrased) "horrible but legal" (up to a point) sites like dailystormer, 8chan, and kiwifarms, with actual blatant phishing sites?
I find it very difficult to believe they won't remove sites involved in clear phishing or malware delivery campaigns, if they can verify it themselves or in cooperation with a security team at a company they trust. That's different from sites that are morally repugnant and whose members spew vitriol, but aren't making any particular threats (and even in cases where there are clear and present threats, CF usually seems to prefer to notify law enforcement, and then follow court orders, rather than inject themselves as a 3rd party judge into the proceedings).
> but aren't making any particular threats
This isn’t true about Daily S. They have been actively working towards and expressly proposing a new holocaust for decades now. In what way are they not an existential threat for Jews, or LGBTQ people?
That makes them almost trustworthy. Ultimately you either have a free internet or an internet free of scams, phishing, and malware. I'd choose the free internet every time.