all 13 comments

[–]These_Voices 5 points (0 children)

yup, GitHub had a 4-hour incident that messed with all the code we deployed. I can't believe more of the internet hasn't crashed

[–]AnotherBangerDuDuDu 3 points (1 child)

Good news though, it's 100% up today /s https://www.githubstatus.com/

[–]dashingThroughSnow12 1 point (0 children)

I didn’t realize today was April Fools’ Day, because that’s a bad joke.

[–]wartortle 2 points (0 children)

Yep, it looks like they were merging each PR's diff against a stale snapshot of its base branch rather than the current tip of trunk. So any commits that landed on trunk after that snapshot got silently reverted by the merge. Insane.
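
If you want to see why that loses work, here's a minimal local repro of that failure mode. To be clear, this is my reconstruction from the symptoms, not GitHub's actual merge code, and the repo layout, branch names, and commit messages are all made up:

```python
import subprocess, tempfile
from pathlib import Path

def git(*args, cwd):
    """Run a git command in the given repo and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
git("init", "-b", "main", cwd=repo)                # needs git >= 2.28
git("config", "user.email", "demo@example.com", cwd=repo)
git("config", "user.name", "demo", cwd=repo)

Path(repo, "app.txt").write_text("v1\n")           # commit A: the PR's base
git("add", ".", cwd=repo)
git("commit", "-m", "A: initial", cwd=repo)

git("checkout", "-b", "feature", cwd=repo)         # the PR branch on top of A
Path(repo, "feature.txt").write_text("new stuff\n")
git("add", ".", cwd=repo)
git("commit", "-m", "F: feature work", cwd=repo)

git("checkout", "main", cwd=repo)                  # commit B lands on trunk
Path(repo, "hotfix.txt").write_text("critical\n")  # after the PR was opened
git("add", ".", cwd=repo)
git("commit", "-m", "B: hotfix on trunk", cwd=repo)

# The bug: merge the PR against the stale base A (main~1) instead of
# the current tip of main, then publish that result as the new trunk.
git("checkout", "-b", "bad-merge", "main~1", cwd=repo)
git("merge", "--no-ff", "-m", "Merge PR (stale base)", "feature", cwd=repo)
git("checkout", "main", cwd=repo)
git("reset", "--hard", "bad-merge", cwd=repo)      # stand-in for the bad push

print(git("ls-files", cwd=repo))  # hotfix.txt is gone: B was reverted
```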

[–]rwong48 1 point (0 children)

this incident https://www.githubstatus.com/incidents/zsg1lk7w13cf

we noticed 3 hours ago and scrambled to "fix" (revert) these bad commits
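
For anyone still triaging, a rough sketch of one way to flag the suspect merges. The window timestamps below are placeholders; substitute the real ones from the status page timeline:

```python
import subprocess

def git(*args):
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

# Placeholder incident window; replace with the real timestamps
# from the status page before running.
SINCE, UNTIL = "2025-01-01T00:00", "2025-01-01T04:00"

# Merge commits that landed on main inside the window are the suspects.
print(git("log", "--merges", "--format=%H %s",
          f"--since={SINCE}", f"--until={UNTIL}", "main"))

# Reverting a confirmed-bad merge while keeping mainline (parent 1):
#   git("revert", "-m", "1", "--no-edit", bad_sha)
```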

[–]Fabulous-Shape-5786 1 point (1 child)

The level of data loss is shocking. Bad commits like these could easily go unnoticed and get deployed, and every customer has to fix things in their own way. Scary that there were no unit tests that caught this, and maybe worse that they kept their merge queues running.

The number of GitHub incidents has really increased in the last few months. It tracks with the increased use of AI in the field; I have no idea whether that's actually contributing, but it seems like a good guess. If so, it doesn't bode well for software in general.

[–]AntDracula 2 points (0 children)

I mean, combine increased AI usage with increased layoffs. The result is inevitable.

Also, isn’t Microslop now offering early retirement buyouts to their most senior employees? Prepare for the slopocalypse.

[–]bradfordmaster 1 point (0 children)

It's hard to overstate how insane this is. It's one thing to have downtime; it's another to silently corrupt people's git repos. Avoiding this kind of mistake is literally the one job of git and of git hosting companies. We might as well all just share code in Dropbox again

[–]doingthethingguys 0 points (0 children)

Just got off the incident call for my company after 10 hours. We have a massive monorepo and a lot of automation that kicks off when we merge to our trunk branch. Lots of stuff to unfuck. We didn't want to force-push `main` and break things even more, so we did it carefully and correctly: replayed the commits ourselves and resolved the merge conflicts by hand.
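
For anyone facing the same cleanup, the replay half looked roughly like this. It's a sketch of the general shape, not our actual script; the last-known-good SHA and the scratch branch name are placeholders for whatever your repo needs:

```python
import subprocess

def git(*args):
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout

GOOD = "abc1234"   # placeholder: last trunk commit known to be good

# Real commits on main since GOOD, oldest first, skipping the bad merges.
shas = git("rev-list", "--reverse", "--no-merges", f"{GOOD}..main").split()

git("checkout", "-b", "main-rebuilt", GOOD)   # scratch branch, no force-push
for sha in shas:
    # check=True stops the loop on a conflict so a human can resolve
    # it and continue with `git cherry-pick --continue`.
    subprocess.run(["git", "cherry-pick", sha], check=True)
```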

GitHub declared the incident resolved but still hasn't shared a unified remediation strategy. Per my support ticket with them, they're "still working on it" with no ETA. By the time they have it ready, most of us will have fixed it our own way.

[–]YouDependent3284 0 points (1 child)

We’re seeing a similar issue today - our open PRs are suddenly showing many more commits than they did yesterday. It turns out the branch histories have diverged from main, with different commit hashes, which is causing conflicts and inflating the commit count...
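
If anyone wants to confirm they're hitting the same thing, a quick check along these lines might help (the branch name is a placeholder for whatever your PR's head branch is called):

```python
import subprocess

def git(*args):
    return subprocess.run(["git", *args], check=True,
                          capture_output=True, text=True).stdout.strip()

branch = "my-pr-branch"   # placeholder: your PR's head branch
git("fetch", "origin")

# Commits only on main (left) vs. only on the PR branch (right).
behind, ahead = git("rev-list", "--left-right", "--count",
                    f"origin/main...origin/{branch}").split("\t")
print(f"{branch}: ahead {ahead}, behind {behind} relative to origin/main")

# Lines prefixed "-" are commits whose patch already exists on main
# under a different SHA, i.e. the duplicated history.
print(git("cherry", "origin/main", f"origin/{branch}"))
```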

[–]AntDracula 0 points (0 children)

This is so messed up.

[–]NoBox6165 0 points (0 children)

Is this related to the exponential growth in the number of commits that GitHub has been receiving?

[–]williamisraelmt[S] 0 points (0 children)

I feel it's more related to the amount of code GitHub's development team is producing with AI, combined with a less rigorous review process because there are fewer people to look at the code (due to layoffs).