I was getting crazy thinking that there was something wrong with my SSH keys all of a sudden. Thanks $DEITY it's just GitHub.
Same. I reflexively replaced mine, thinking it needed to be. Glad it's working now though.
Speaking of GitHub issues: if you go to Insights -> Traffic in your repo you'll most likely see this banner:
“Referring sites and popular content are temporarily unavailable or may not display accurately. We're actively working to resolve the issue.”
It’s been like that for months now with no sign of anyone working on it. They just don’t care about user experience anymore.
https://github.com/orgs/community/discussions/173494
Must be a day ending in Y.
Anyone using GitLab have any insight on how well their operations are running these days?
We originally left GitLab for GitHub after being bitten by a major outage that resulted in data loss. Our code was saved, but we lost everything else.
But that was almost 10 years ago at this point.
We use GitLab on the daily: roughly 200 repos, with pushes to ~20 on any given day. There have been a few small, unpublished outages that we determined were server-side (with a geo-distributed team, a problem everyone hits at once isn't a local network issue), but as a platform it seems far more stable than 5-6 years ago.
My only real current complaint is that the webhooks that are supposed to fire on repo activity have been a little flaky for us over the past 6-8 months. We have a pretty robust chatops system in play, so these things are highly noticeable to our team. It's generally consistent, but we've had hooks fail to post to our systems on a few different occasions, which forced us to chase threads until we determined our operator ingestion service never even received the hooks.
That aside, we’re relatively happy customers.
FWIW, GitHub is also unreliable with webhooks. Many recent GH outages have affected webhooks.
They are pretty good, in my experience, at *eventually* delivering all updates. The outages take the form of a "pause" in delivery, every so often... maybe once every 5 weeks?
Usually the outages are pretty brief but sometimes it can be up to a few hours. Basically I'm unaware of any provider whose webhooks are as reliable as their primary API. If you're obsessive about maintaining SLAs around timely state, you can't really get around maintaining some sort of fall-back poll.
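The fallback doesn't need to be fancy, either. Here's a minimal sketch of the kind of reconciliation poll I mean, assuming a hypothetical `/events` list endpoint and in-memory dedup (a real setup would use the provider's actual events API and durable storage):

```python
import time
import requests

seen_ids: set[str] = set()  # stand-in for durable storage in a real system

def handle_event(event: dict) -> None:
    # Same code path the webhook endpoint would invoke.
    if event["id"] in seen_ids:
        return  # already delivered via webhook; nothing to do
    seen_ids.add(event["id"])
    print("replaying event missed by webhooks:", event["id"])

def reconcile(api_base: str, token: str) -> None:
    # Hypothetical "list recent events" endpoint; substitute whatever
    # your provider actually exposes for recent repo activity.
    resp = requests.get(
        f"{api_base}/events",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    for event in resp.json():
        handle_event(event)

if __name__ == "__main__":
    while True:
        reconcile("https://git.example.com/api", "API_TOKEN")
        time.sleep(300)  # poll interval chosen to fit whatever SLA you promise
```

The webhook stays the fast path; the poll just bounds how stale you can get when delivery pauses.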
> you can't really get around maintaining some sort of fall-back poll.
This has been my experience with GitHub Actions as well, which I imagine rely on the same underlying event system as webhooks.
Every so often, an Action will not be triggered or otherwise go into the void. So for Actions that trigger on push, I usually just add a cron schedule to them as well.
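Concretely, the trigger block ends up looking something like this (job contents are placeholders):

```yaml
name: build
on:
  push:
    branches: [main]
  schedule:
    # Safety net: re-run hourly in case a push event is dropped.
    - cron: '17 * * * *'
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: make build  # placeholder for the real job
```

One caveat: GitHub only runs `schedule` triggers against the default branch, so the cron safety net doesn't cover pushes elsewhere.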
Completely agree on all points. We've had dual remotes running on a few high traffic repos pushing to both GitLab and GitHub simultaneously as a debug mechanism and our experiences mirror yours.
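For anyone who wants to replicate the setup, it's just git's multiple-push-URL mechanism; something like this (org/repo URLs are illustrative):

```sh
# One fetch URL, two push URLs: a single `git push` then sends to
# both hosts, one after the other.
git remote set-url --add --push origin git@gitlab.com:our-org/repo.git
git remote set-url --add --push origin git@github.com:our-org/repo.git
git remote -v  # verify: one fetch URL, two push URLs
```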
We’re using GitLab; loads of issues and outages. We want to move to GitHub.
Not sure what specific operational services are of interest - but here's a link to their historical service status [0]
[0] https://status.gitlab.com/pages/history/5b36dc6502d06804c083...
My org hosts it on prem, and while I don't like the way pages are organized for projects, I only really interact with the PR page and that is laid out well. Most of my interaction with git is happening from my terminal anyway so ¯\_ (ツ)_/¯
No issues on GitLab.
Haven't seen any outage from GitLab in like, ever.
That has definitely not been my experience. I like Gitlab, but they've had regular incidents all along. If a git push failed I wouldn't question it; it's almost never my network. I'd just open Gitlab's Gitlab and find the current active issue.
To Gitlab's credit their observability seems to be good, and they do a good job communicating and resolving incidents quickly.
Some companies that shall not be named have status pages that always show green and might as well be a static picture. Some use words like "some customers may have experienced partial service degradation" to mean "complete downtime". Gitlab also has incidents, but they're a lot more trustworthy: you can just open the issue tracker and there's the full incident, complete with diagnosis.
https://status.gitlab.com/pages/history/5b36dc6502d06804c083...
Never had any problems really.
GitHub on the other hand has outages more frequently.
I’m old enough to remember when GitHub made the front page because of a cool feature they added; now they just end up here when it stops working.
Your weekly reminder to take a break
GitHub is owned by Microsoft, so this is a pretty small-time indie operation; you need to give them a break.
I bet Microsoft is sad not because people can’t push, but because the training data for Copilot has slowed down.
PS: None of our 40+ engineers felt anything; our self-hosted Forgejo is as snappy as ever.
Until your hardware fails! Or your VPS provider goes down!
Or whatever else; software services going down is going to happen in some capacity eventually. The real question is what's acceptable.
Not replacing the CEO suggests they aren't focusing on it as much as they were.
Just your casual $3.8T company.
There were so many severe GitHub Actions outages (10+?) in the past year. Cause: the migration to the disaster zone also known as Azure, I assume. Most of them happened during (morning) CET working hours, so as not to inconvenience the Americans and/or make headlines.
Money doesn't buy competency. It's a long-term culture thing. You can never let up on maintaining competency in your organization; it rots if you do. I guess Microsoft did let go.
I thought GitHub Actions (in particular; not the rest of GitHub) was always Azure, because it was initially a fork of Azure Pipelines?
GitHub as a whole, including the previously non-Azure bits, does seem flakier than a few years ago though, for sure.
You seem to be correct. Not that much visible from the outside, but yes it seems like they always ran on Azure, from the 2018 launch. (Apologies for the disinfo, although I qualified it with the "I assume".)
“guess Microsoft did let go” - are we thinking of the same Microsoft here?
I am thinking of the atrophying one. Not MikeRoweSoft.
This sure does seem to happen a lot
Coincidentally, Azure DevOps was also missing SSH keys earlier today, both in the web UI and for SSH login.
Ah that was why. Oh well, I just needed to get the code to the server, so I didn't really need Github anyway.
Related to the recent announcement they are moving to Azure?
Oh no. I look forward to watching my browser redirect 40 times on every attempted page load.
https://news.ycombinator.com/item?id=45517173
Doubt it. I'm an Ops person on Azure; while they just had a terrible outage recently, they tend to be as stable as any other cloud provider, and I haven't had many issues with Azure itself compared to whatever slop the devs are chucking into production.
>they tend to be as stable as any other cloud provider
Absolutely not.
Wow. It wasn't already running on Azure? What was it (or is it) running on?
IIRC (it's been a while) they were on Rackspace when Microsoft bought them out - there was an article a few months ago saying they were moving to Azure and freezing new features while they do the move [1].
[1] https://thenewstack.io/github-will-prioritize-migrating-to-a...
Honestly, I don't know half the features they've added; the surface is huge at this point, and everyone seems to be using a (different) subset of them anyway.
So a feature freeze isn't likely to have much impact on me.
EDIT: went and checked - https://github.blog/news-insights/github-is-moving-to-racksp... not sure if they moved again before the MS acquisition though.
A team of us moved it off Rackspace in 2013; it's been mostly in a set of GitHub-operated colos since then. There used to be some workloads on AWS and a bit of DirectConnect. Now it's some workloads on Azure.
To the best of my knowledge there's been no Rackspace in the picture since about 2013, though the details behind that are fuzzy; it's been 10+ years since I worked on infrastructure at GitHub.
yeah, we did not have anything in Rackspace for many years before the Microsoft acquisition. I remember having to migrate some tiny internal things off of Heroku, though!
In the Pragmatic Engineer podcast episode with the former CEO of GitHub, he mentioned that they had their own infra for everything. If I remember correctly, this was because GitHub is quite old, and at the time GitHub Actions became a thing, cloud providers weren't really offering the kind of infra necessary to support the feature.
GitHub is old, but GitHub Actions are not. Indeed, GitHub Actions launched two months after the Microsoft acquisition was announced [0], and it is a half-assed clone of Azure Pipelines.
[0] https://en.wikipedia.org/wiki/Timeline_of_GitHub
I can't read the entirety of this article[1] because it's paywalled, but it looks like they ran their own servers:
> GitHub is currently hosted on the company’s own hardware, centrally located in Virginia
I imagine this predates their acquisition by Microsoft. Honestly, given how often GitHub seems to be down compared to the level of dependency people have on it, this might be one of the few cases where I might have understood if Microsoft had embraced and extended a bit harder.
[1]: https://www.theverge.com/tech/796119/microsoft-github-azure-...
Well… https://www.reuters.com/technology/microsoft-azure-down-thou...
Fair enough, my Azure experience is minimal enough that maybe I shouldn't make assumptions about whether this would improve things. That being said, I do think there's merit in the idea that if Microsoft is going to be able to solve this problem, they probably should try to solve it just once, and in a general way, rather than just for Github?
>Microsoft
>solve it just once, and in a general way
Not Sharepoint? What a bummer.
Why does the main page show all green when there is an ongoing incident? All green here -> https://www.githubstatus.com/
This is normal for Microsoft. It's as though status is owned and controlled by either marketing or accounting, not engineering.
It's marked as resolved for some reason
because then some mid-level manager gets a telling off
and/or has to pay the SLA out of their budget
ahh, you are right. I am blind.
I thought my SSH keys were revoked, whew.
Just started to replace mine when I saw someone post a message about GitHub.
Yep. Was using GitHub for OAuth on a pet project of mine. Got the unicorn, and was considering taking the break, or just setting up something else. Seems to be running again for me now though.
Another outage brought to you by Azure.
thought i was going crazy
Looking forward to the postmortem.
Are they using AI agents this time to resolve the outage? Probably not.
But this time there is no CEO of GitHub to contact, and good luck getting Satya to solve the outage.
The postmortem will be simple, since GitHub goes down so consistently every week that you could almost use it as an alternative timekeeping system.
The pulsar of web services
It's possible that Microsoft buying GitHub was a large-scale psyop intended to reduce the productivity of the competition.
Any time their startup competitors are making too much progress they can just push the "GitHub incident" button and slow everyone down.
We used to obsessively care about 500s. Like I would make a change that caused a 0.1% spike in 500s and I would silently say I'm sorry to the folks who got the unicorn page.
I'm not sure the new school cares nearly as much. But then again, this is how companies change as they mature. I saw this with StubHub as well. The people who care the most are the initial employees; employee #7291 usually dgaf.
I fall into the new school gen z category, and I think you're right. We don't care. We don't care about the problems started before us, and we owe nothing to no one (but our employers, must increase value for shareholders of course).
I simply want to survive. I'll kiss ass where I have to, but not to people I don't work on behalf of.
Can't say that's entirely true for me ('02). If my [ employer, supervisor, ... ] provides me with logical, traceable tasks with their context properly laid out, I can totally put a ton of effort into providing meticulous, well-thought-out solutions that are as good as it gets under the provided constraints. It's the nonsensical tasks (be they actually nonsensical or just not understood well enough because of unprovided context) that make me not care.
I'll throw in my $0.02, as a fellow zoomer. I care about the things that are mine (as in, my code, my decisions, etc. etc.). But if management fucks up and tells me to fix it, there is no amount of money that will make me care. Especially if I advised management _not_ to do that in the first place.
Hell yeah