Haha, this is weird. I'm old enough to know how to do servers the old way: setting up nginx, memcached, LAMP stacks, and whatever else used to be the way to do servers. I'm very much not an IaC/Docker/k8s/AWS native. To me, AWS always felt like an equally complex way of doing the same things I'd always done.
Now the next generation hasn't been indoctrinated into the 'cloud is the way to go, also microservices' mentality, and is arriving at a time when the cloud providers aren't flooding everyone with free credits, courses, and conferences to push their solutions. To them it looks like another legacy stack.
Or maybe not legacy, but it definitely has a rugged, everyday feel rather than an air of 'future tech' about it.
And all this to build a website that can run on a single server, and not even a beefy one :). SQLite + Go is arguably even easier to start with, and it can get you surprisingly far.
You can buy a Raspberry Pi with 16GB RAM and 128GB of SD storage for $150, and point it at a VPS with a static IP for $5/mo. For personal projects (on top of the massive cost savings), it's much more satisfying to own your whole server than to just figure out how to configure something like AWS and risk the costs of misconfiguration.
What's the VPS for? Just for an IP that's not your own personal one?
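Presumably, yes: the VPS just lends out its static public IP. A minimal sketch of how that plumbing might look, assuming an SSH reverse tunnel (the host, user, and ports here are placeholders, and the VPS side still needs to forward public traffic to the tunneled port):

    # Run on the Pi: keep a reverse tunnel open so the VPS can reach
    # the web server listening on the Pi's localhost:8080.
    ssh -N -R 8080:localhost:8080 deploy@vps.example.com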
complexity = money for AWS. At this point, I'm actually scared I'll end up misconfiguring something and staring at a six-figure bill one morning. I'd much rather use other services, or spin up my own, than have very delicate discussions with pushy AWS sales guys first thing in the morning.
> complexity = money for AWS
Why do you think so? That's a very incurious read, given that the space AWS operates in is highly competitive.
I'll give them the benefit of the doubt when they implement a tunable spending limit, even one that defaults to infinity. Until then, they're a business beholden to shareholder value.
What semantics do you expect when your spending limit is hit?
Turning off all compute resources (EC2, Lambda, Fargate, etc.) seems obvious, but what about systems managing state, like S3, EBS, and DynamoDB? Should buckets, volumes, and tables be deleted?
Allow users to configure storage so that the cost of keeping it until the end of the billing cycle is deducted from their spending limit as a reservation. Then even when you hit your limit, you still have enough allowed spend left to pay for your storage. The same principle could apply to compute or any other per-time service, as an option when creating a resource: if you ask for a reservation and making it would exceed your spend limit, the resource simply fails to create.
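To make the accounting concrete, here's a minimal sketch of that reservation idea in Go. It's hypothetical (no real AWS API works this way today): creating a resource reserves its keep-alive cost through the end of the billing cycle, and creation fails up front if the reservation won't fit under the limit.

    // Hypothetical sketch of the reservation idea above; not a real AWS API.
    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    type Budget struct {
        Limit     float64   // hard spending cap for the billing cycle
        Committed float64   // spend already incurred or reserved
        CycleEnd  time.Time // end of the current billing cycle
    }

    // Reserve sets aside a resource's keep-alive cost through the end of
    // the cycle. Creation fails up front instead of the resource being
    // cut off (or deleted) mid-cycle.
    func (b *Budget) Reserve(hourlyRate float64, now time.Time) error {
        cost := hourlyRate * b.CycleEnd.Sub(now).Hours()
        if b.Committed+cost > b.Limit {
            return errors.New("reservation would exceed spend limit; not creating resource")
        }
        b.Committed += cost
        return nil
    }

    func main() {
        b := &Budget{Limit: 100, CycleEnd: time.Now().Add(30 * 24 * time.Hour)}
        // A $0.10/hour volume reserved for a 30-day cycle commits ~$72,
        // leaving ~$28 of discretionary spend before the limit bites.
        if err := b.Reserve(0.10, time.Now()); err != nil {
            fmt.Println(err)
        }
        fmt.Printf("committed $%.2f of $%.2f\n", b.Committed, b.Limit)
    }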
This is a basic problem that every adult needs to know how to solve, like "how can I make sure I don't run out of money to pay for food and shelter when I'm buying toys?" You set aside money to pay for the important things first, and what remains sets your limit for discretionary spending.
How about starting with the obvious? Doing nothing because you can't figure out how to do everything is letting the perfect be the enemy of the good. Or maybe they just don't have the incentive, as per the GP.
Every time this debate comes up, I point out that there are cost-limited subscriptions for Azure, but only for Visual Studio subscriptions and the like.
The cloud vendors are capable of solving this problem, they just refuse to.
Unlimited spending by customers is precisely what they want.
This is either a disingenuous argument or a lack of basic creativity.
A simple way: give each created resource (or rule that creates resources) a toggle to stop/delete the resource when the spending limit is reached. I'd leave it off for backups and turn it on for resources that aren't production-critical.
I’ve seen this argument repeated a lot and I think it’s disingenuous. If AWS cared about simplifying billing they could figure out semantics that make sense. Just to throw out an example, they could allow account owners to either opt in or opt out of a hard cutoff. It’s clear they don’t have an incentive to fix this problem.
Unexpected, runaway AWS cost is a real problem.
"Tunable spending limit" has consequences that can create other, equally real, problems.
Best effort: Turn off all compute resources, drop dynamically adjustable persistent resources to their minimums (e.g. DynamoDB write and read capacity of 1 on every table), and leave EBS volumes and S3 alone. In some cases, a user might find their business effectively offline while still racking up a massive AWS bill.
Hard cutoff: Very close to deleting an AWS account. Besides stopping compute and dropping dynamically adjustable resources to their minimums, this means deleting S3 buckets, DynamoDB tables, EBS volumes and snapshots, and everything else that racks up cost by the hour.
The best effort approach sounds reasonable to me. The hard cutoff solution sounds worse than the problem it purports to solve.
Agreed that AWS is poorly incentivized to fix the problem.
If it were easy, the other providers would offer it to eat AWS's lunch.
Very well articulated.
Software sales people are there for their goals first, and your goals second.
If there's a more optimal way for them to reach their goals in the long run, unbeknownst to you, it will happen.
Quite sure the complexity of AWS is forcing the SME types out: even the most basic operations require a trip to ChatGPT to find the details buried in the UI. It's gone from 'good enough' (the UI 15 years ago was basic, but adequate) to labyrinthine. We don't have complex needs, and AWS meets them, but the combo of premium pricing and a diabolical UI will force our migration eventually.
AWS is degrading on all fronts. If you're a large-scale business, you almost certainly hit stock-outs during peak periods; the elasticity is a lie. If you're mid-scale, it's high prices and complexity. And if you're just a beginner, good friggin' luck getting anything done now that they've gutted free support and put near-useless quotas on anything usable for modern AI development.
When you have enough scale that AWS can't promise elasticity, they make up for it with discounts high enough that spinning up your own infra isn't going to be cheaper.
> AWS is degrading on all fronts.
Try Azure, where shared responsibility means shared data when another tenant's keys leak into yours. Then go to GCP, where your incident ticket gets answered by a documentation bot... You will crawl back to AWS praying for a good old-fashioned throttling error.
GCP had second mover advantage in that they got to build a lot of constructs to segment identity and network access in ways that make sense. It's incredibly easy to give a service account access to conduct an action in every folder and project in a GCP org, compared with having to deploy a role to each account in an AWS org. VPCs being global and subnets being regional also make a lot more sense than having them be regional and zonal, respectively, in AWS.
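For example, granting at the org node takes one command, and every folder and project underneath inherits it (a sketch; the org ID, service account, and role are placeholders):

    # One org-level binding; no per-project or per-account rollout needed.
    gcloud organizations add-iam-policy-binding 123456789012 \
        --member="serviceAccount:auditor@my-proj.iam.gserviceaccount.com" \
        --role="roles/viewer"

The AWS equivalent typically means stamping the same IAM role into every member account, e.g. with CloudFormation StackSets.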
But the support I get from AWS is world class compared to GCP. My AWS account team is always proactively reaching out, not just for potential security risks but also with cost optimization advice.
Yeah, most people say the best thing about AWS compared to the other two is that AWS works. Also, AWS itself runs on AWS, whereas Google's and Microsoft's own infra is apparently its own bespoke thing.
Not surprising, in my opinion. It is not hard to set up a VPS with whatever you want to deploy nowadays. 99% of the websites out there don't need infinite scale, and it's way cheaper to host your stuff on a VPS, which doesn't have the risk of exploding in cost the way S3 does.
They're not going back to VPS setups, sadly; the article is about deploy-button dollar siphons like Vercel, Render, and Netlify.
I think AWS is still better in terms of the customer support and help they provide. You really need to keep an eye on the billing console when you start playing around with AWS; the free-tier boundary line is pretty thin. For new accounts they have started giving six months free, plus some credits. They are trying their best to make it simpler.
What are the no-anxiety, "so easy a caveman can do it" setups that y'all are using now? I've been very happy with Railway (no affiliation).
I got tired of Vercel's cold starts and the upsell button on every feature.
If you are on AWS and you are not using the (unfortunately named) Copilot CLI[0], you're opting in to the full complexity of AWS. Copilot fortunately makes most common scenarios waaaay easier to build, set up, and deploy, including building staged environments following AWS best practices (separate accounts; I know, in and of itself kind of a pain in the ass).
It encodes the most common types of application backend deployments into simple patterns and makes it pretty easy to build and deploy full application stacks (Route 53 -> CloudFront -> S3 (static FE) -> ALB + SGs + TGs -> ECS cluster (backend APIs) -> DBs).
With the Copilot CLI, I find the experience on AWS significantly better and, in some ways, more well-rounded than on GCP. GCP's Firebase tooling and CLI come close, but alas, Firebase doesn't have the same level of extensibility: Copilot lets you include both CDK and CloudFormation as extension points, which in turn lets you manage a good chunk of your AWS infra with a single, easy-to-use CLI.
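To give a flavor of the workflow (a rough sketch; the app, service, and environment names are placeholders, so check the docs for current flags):

    # Scaffold an app and a load-balanced service from a Dockerfile,
    # then stand up a staging environment and deploy into it.
    copilot init --app myapp --name api \
        --type "Load Balanced Web Service" --dockerfile ./Dockerfile
    copilot env init --name staging
    copilot deploy --name api --env staging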
For simple apps, I prefer Firebase on GCP. For more complex apps, I think Copilot on AWS is really, really good. One caveat: ECS is much slower to roll nodes over to new versions compared to Cloud Run. Best I could achieve on it was ~180s for a Blue/Green rollover whereas Cloud Run does this in seconds.
TL;DR: if you are not an enterprise and you are on AWS, your life will assuredly be better with Copilot instead of CDK, CloudFormation, or almost any other solution for deploying to AWS.
[0] https://aws.github.io/copilot-cli/
TIL, that looks pretty nice. My go-to for when I want to 'just fucking run a container' is Fly.io, all you need to do is put together a TOML manifest. If your bill is under $5 they give a 100% discount too.
(not affiliated with or paid by Fly.io)
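For a sense of how little is involved, a whole "run this container" manifest can be about this small (a sketch; the app name, region, image, and port are placeholders, and `fly launch` will generate most of it for you anyway):

    app = "my-app"
    primary_region = "ams"

    [build]
      image = "ghcr.io/me/my-app:latest"

    [http_service]
      internal_port = 8080
      force_https = true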
Yeah that name makes me think it's going to give me a dubiously working LLM solution that I don't understand, which is worse than a non-working solution because I at least recognize the latter as a problem.
Yup; but they were first. Copilot CLI was around long before GH Copilot existed.
Certified amAz0n pRoNoUns : AlB TG K8 SG ECP GcP S3 S5 S7 K8 RollOver EcS COpIloT FiReBase CLI DBGs CloUdRuN!!!
AWS is by far simpler than its main competitors, GCP and Azure. If you want simple, you have to go up a layer, to Heroku or Vercel.
Unpopular opinion: it's actually good for there to be different solutions that meet different needs.