You can't build your own Cloudflare in any meaningful sense. You can choose to forgo the functionality Cloudflare provides because you judge the risk of a Cloudflare outage to outweigh the benefits it gives you, but that probability tree is going to land in Cloudflare's favor for 99.99% of businesses.
If you can build a system with redundancy to continue working even if Cloudflare is unavailable then you should, but most years that's going to be a waste of time.
I think you'd be better off spending the time building good relationships with your customers and users so that in the event of an outage that's beyond your control they trust your business and continue to be happy customers when you're back up and running.
Exactly, Cloudflare falls squarely in the "Buy" category. This is not a product you just build; you'd overpay massively for global capacity.
In general I think people are overreacting to the Cloudflare outage, and most of these types of articles aren't really thought all the way through.
Also the conclusion on Jurassic Park is wrong. Hammond "spared no expense" yet Nedry was a single point of failure? Seems like they spared at least some expense in the IT department
> Also the conclusion on Jurassic Park is wrong. Hammond "spared no expense" yet Nedry was a single point of failure? Seems like they spared at least some expense in the IT department
Even if they did "spare no expense" they could have wound up in the same situation. I see this a lot: "it would be better if only we spent more money." But the only thing causally related to increasing expense is increased withdrawals from the bank account. Spending more money doesn't guarantee a better outcome; see US public schools, for example.
edit: coming back to this. Was the Cloudflare outage really caused by reading a file that was over 200 lines when the process can only handle a max of 200? That's a good example: I'm sure Cloudflare spared no expense in that part of their infrastructure, yet here they are (or were).
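A toy sketch of that failure mode, going by the commenter's description (all names are hypothetical, not Cloudflare's actual code): a loader with a hard capacity limit that fails outright, next to a variant that degrades to the last known-good state instead.

    import sys

    MAX_FEATURES = 200  # hard-coded capacity, per the description above

    def load_features(path):
        with open(path) as f:
            features = [line.strip() for line in f if line.strip()]
        if len(features) > MAX_FEATURES:
            # A hard failure here takes the process down with the bad file.
            raise RuntimeError(
                f"{len(features)} entries exceeds limit of {MAX_FEATURES}"
            )
        return features

    def load_features_or_keep(path, last_good):
        # The defensive version: reject the bad file, keep serving with
        # the previous set, and alert instead of crashing.
        try:
            return load_features(path)
        except (RuntimeError, OSError) as err:
            print(f"feature reload failed ({err}); keeping last good set",
                  file=sys.stderr)
            return last_good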
> I'm sure Cloudflare spared no expense in that part of their infrastructure yet here they are
Almost everyone developing software spares some expense. That's maybe the main argument for calling it engineering at all: it's a cost-benefit tradeoff.
Cloudflare isn't doing, e.g., super expensive formally verified software up and down its whole stack; practically nobody does that.
Yea agreed. I don't build my own CDNs.
But I don't choose Cloudflare either, because it's too complicated and I don't need that. So I choose the simplest possible thing with as little complexity as possible (for me, that was BunnyCDN). If it goes down, it's usually obvious why. And I didn't rely on anything special about it, so I can move away painlessly.
Your customers are also likely down if they run online services
> Here’s the thing, if your core business function depends on some capability, you should own it if at all possible.
If I'm building something that allows my customers to do X, then yes I will own the software that allows my customers to do X. Makes sense.
> They’ll craft artisanal monitoring solutions while their actual business logic—the thing customers pay for—runs on someone else’s computer.
So instead I should build an artisanal hosting solution on my own hardware that I purchase and maintain? I could drop proxmox on them and go from there, or K8s, or even just bare metal and systemd scripts.
But my business isn't about any of those things, it's about X. How does owning and running my own hardware get me closer to delivering on X?
The OP's point is that if your monitoring solution dies, your customers don't even notice, so you shouldn't build it yourself. But if the service running your actual business logic dies, your customers get cut off, so you should build and maintain that part more directly. (And obviously this is a spectrum — you probably don't need to design your own CPU.)
> if the service running your actual business logic dies
In a modern tech business that's everything from the frontend to the database though, including all the bits to keep that running at scale. That's too much for most companies to handle when they're starting and scaling. You'll need to compromise on that value early on, and you'll probably persuade yourself that it's tech debt you'll pay off later. But you won't, because you can't, and that will lead you to dislike the system you built.
It's much simpler and more motivating to accept that any modern tech business has to rely on third parties, and the fact that you pay them money means they probably won't screw it up. It has to be an accepted risk or you'll be paralysed by having too much to do.
The advice here is contradictory. It suggests you should build and own things your business depends on, wherever possible, but also that you should buy things that aren't a core value of your core business.
There would very typically be a large overlap here.
Probably very few companies should build and run their own CDN and internet-scale firewall, for example. It doesn't have to be Cloudflare, but there aren't any providers that will have zero outages (and a homegrown one is likely to be orders of magnitude worse and more expensive).
Instead we need a startup that builds over every cloud provider. Think of a web server, for example. AWS has EC2, GCP has its own equivalent, Azure has its own, and so on. What if we had a startup that virtualizes a layer on top of these, such that when AWS has an outage you lose a third of your operating capacity, and when Azure has an outage you lose a third of your operating capacity? For your startup's virtual web server to go down, all of AWS, GCP, and Azure would have to go down simultaneously. Basically, build on top of everyone's cloud service into one single unified virtual layer that offers end products to consumers. A 6GB RAM server that the end consumer purchases has 2GB of RAM running on AWS, 2GB on Azure, and 2GB on GCP. I'm sure we could also strategize something along the same lines for a database server, with the added question of the sharding strategy at play.
This is what Fog and other cloud-agnostic libraries promise. The problem is that you get tied to the lowest common feature set, or end up writing different code paths to take advantage of the latest features.
> A 6GB RAM server that the end consumer purchases has 2GB of RAM running on AWS, 2GB on Azure and 2GB on GCP.
That'd be a very inefficient use of compute. Memory access now has network latency, cache locality doesn't exist, and processes don't work. Local RAM access is on the order of 100 nanoseconds, while a round trip between datacenters is milliseconds: four to five orders of magnitude slower. You're basically subverting how computers fundamentally work today. There's no benefit.
I know Kubernetes and containers have everyone thinking servers don't matter, but we should have less virtualization, not more. Redundancy and virtualization are not the same thing.
It's great in theory, it's just relatively expensive. You'll need to pay to run on all the clouds, plus the extra traffic to keep databases synced. Distributed systems are hard.
In practice you're better off with just one cloud, but if you ever reach the point where you care about this, you're better off running some cloud-agnostic platform like Kubernetes in a multi-cloud setup (i.e. one cluster per cloud) and then load-balancing or failing over via DNS.
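A minimal sketch of the failover half of that setup, assuming each cluster exposes a health endpoint (the URLs and ordering are hypothetical); a managed DNS health check does essentially this before repointing a record:

    from urllib.request import urlopen
    from urllib.error import URLError

    # One health endpoint per cloud's cluster (hypothetical URLs).
    CLUSTERS = [
        ("aws", "https://aws.example.com/healthz"),
        ("gcp", "https://gcp.example.com/healthz"),
    ]

    def healthy(url, timeout=3):
        try:
            return urlopen(url, timeout=timeout).status == 200
        except URLError:
            return False

    def pick_target():
        # Prefer clusters in listed order, failing down the list.
        for name, url in CLUSTERS:
            if healthy(url):
                return name
        raise RuntimeError("no healthy cluster; page a human")

    # Point the DNS record (kept at a short TTL) at pick_target()'s cluster.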
Redundancy is a proven way to build resilience into your infrastructure. Ownership does not mean you have to build it. OP is correct that you need to understand it all, but that understanding also allows for solid DR plans that use multiple providers for a resilient infrastructure.
An alternative to multiple providers is to use commoditized providers. By using simple infrastructure rather than cloud platforms, I can redeploy my infrastructure with Ansible onto another provider in hours, rather than rebuilding my platform if I decide the cloud is the wrong fit.
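A minimal sketch of that kind of provider swap, assuming one Ansible inventory per provider and a provider-agnostic playbook (all file names hypothetical):

    import subprocess
    import sys

    # One inventory per commodity provider; site.yml stays provider-agnostic.
    INVENTORIES = {
        "hetzner": "inventories/hetzner.ini",
        "ovh": "inventories/ovh.ini",
    }

    def redeploy(provider):
        # Same playbook, different hosts: on commodity VMs this is a swap,
        # not a rebuild.
        subprocess.run(
            ["ansible-playbook", "-i", INVENTORIES[provider], "site.yml"],
            check=True,
        )

    if __name__ == "__main__":
        redeploy(sys.argv[1])  # e.g. python redeploy.py ovh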
An aside: it looks like there is a certificate error for https://certkit.com/ as it's for *.mscertkit.com (this was on Chromium + Linux)
wow, yea. that's foolish. Fixing.
For data analysis and medium-sized ML jobs, my personal computer is so much faster and more responsive than any cloud solution. Of course you get none of the resiliency or security guarantees of the cloud, but it’s a data point. I genuinely hate using cloud and avoid using it if at all possible. Even a MacBook Pro is faster.
There's no easy answer, but you should definitely model what happens when X goes down if you depend on X.
It may even be a rational decision to take the downtime if the cost of avoiding it exceeds the expected cost of an eventual downtime, but that's a business decision that requires some serious thought.
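A back-of-the-envelope version of that comparison, with every number made up for illustration:

    # Expected annual cost of accepting the downtime...
    outages_per_year = 2
    hours_per_outage = 3
    revenue_lost_per_hour = 5_000
    expected_downtime_cost = (
        outages_per_year * hours_per_outage * revenue_lost_per_hour
    )  # 30,000 / year

    # ...versus the annual cost of engineering it away.
    mitigation_cost = 120_000  # extra infra + engineering time

    if mitigation_cost > expected_downtime_cost:
        print("rational to accept the downtime")  # this branch, here

The hard part is estimating those inputs honestly, which is why it needs serious thought rather than a reflexive "never again."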
> It may even be a rational decision to take the downtime if the cost of avoiding it exceeds the expected cost of an eventual downtime, but that's a business decision that requires some serious thought.
That's at the root of all infrastructure decisions, not just web app tech stacks but even something like utility service. I think it gets lost on a lot of technology people because we love to work on big technical things. No one wants a boring answer like a couple of web servers and Postgres with a backup in a different datacenter when there's a wall of knobs and switches to play with at the hyperscalers.
What this outage teaches you is that when a third party vendor fails and the internet breaks you can point the finger at them with no issues.
If your shit breaks and everyone else's shit is still working that's a problem.
Any company offering services with an SLA that does not have this as a caveat is just crazy to me: "we guarantee our services will be up and running as long as the 3rd-party services we run on top of are running."
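There's also simple math behind that caveat: with serial dependencies, your best-case availability is the product of theirs. A quick illustration with hypothetical uptime figures:

    # If any one serial dependency is down, you're down.
    dependencies = {
        "cdn": 0.9999,
        "cloud": 0.9995,
        "dns": 0.9999,
        "payments": 0.999,
    }

    composite = 1.0
    for availability in dependencies.values():
        composite *= availability

    print(f"best-case composite availability: {composite:.4%}")
    # ~99.83%, i.e. roughly 15 hours of expected downtime a year
    # before counting any bugs of your own.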
> you can point the finger at them with no issues.
yeah sure, if your business is one of the 500 startups on HN creating inane shit like a notes app or a calendar, but outages can affect genuine companies that people rely on
A recoverable master and a short DNS TTL.
Wardley Mapping is a framework for better understanding Build vs. Buy (vs. Rent) at a more strategic level. tl;dr: it's much more nuanced than "if you depend on it, own it."
Meh. This opinion highlights the fact that availability is the least understood pillar in security. The Right Way to Think About It is having good security analysis and doing proper risk management. That means it's your security team's job to do business impact analysis and 3rd-party assessments, and to run tabletop exercises on all your critical systems, to tell you what is rock solid and what is a house of cards.
How you approach this is very different depending on the size of organization. We're a small shop (3), but we deliver big services to lots of people.
We do this by owning everything we can, and using simple vendors for what we can't.
Understanding exactly who does what and how they can be reached to work an emergency is all part of the availability pillar. Size matters not. Your security team needs to vet your team, your critical systems, your code, and your 3rd and 4th party dependencies constantly.
Yeah, but my DevOps only know AWS or Cloudflare UIs and refuse to consider any other platforms. The leadership sees multiple bills as bad. Back to square one? No one will learn anything, because people enjoy the pseudo-holiday for problems they set themselves up to do nothing about.