GitHub Is Falling Apart: Outages, Security Flaws, and the AI Crisis Breaking the Platform

basanta sapkota
You push the merge button. The CI runs green. You close the tab and call it a day. But what if the code that landed on main... wasn't your code? What if months of your team's work just vanished? Poof. No error, no red flag, just... gone.

That’s not some nightmare scenario. That actually happened on GitHub. April 23rd, 2026. And honestly? It was just the tip of a very unstable iceberg.

The Quick Version

Look, here’s the gist. Over the past year, GitHub has been a dumpster fire. We’re talking 257 total incidents. That includes 48 major outages with over 112 hours of total downtime. Remember that? The time you couldn’t even access your repos?

Then came the real doozy. A merge queue bug on April 23rd that silently nuked code across 2,092 pull requests in 658 repositories. The UI didn’t say a word. Just happily showed you a green checkmark while it rewrote history.

Oh, and there was CVE-2026-3854, a CVSS 8.7 nightmare. Any logged-in user could run any command they wanted on GitHub’s servers with a simple git push. Yeah.

And why is all this happening? AI. The flood of automated PRs and agent-driven commits is drowning GitHub. They planned for 10x growth. They needed 30x. Oof.

Big projects are already jumping ship. Ghostty? Gone. The Zig language? Moved to Codeberg. Even GitHub’s CTO had to publicly apologize, admitting they’d “failed to meet their own reliability standards.”

The Week It All Fell Apart

Let me paint the picture. Five days. From April 23rd to 28th, 2026. Three massive failures. Any one of them would be bad news. Together? They told a story of a platform buckling under pressure.

That Time GitHub Gaslit Your Merge Queue

So, April 23rd. 4:05 PM UTC. A bug sneaks into the merge queue feature. For the next three and a half hours, developers everywhere are reviewing PRs, clicking merge, watching it all look perfect. Green checks. Clean diffs.

What was actually happening was a horror show.

A PR with a reasonable +29 / -34 diff gets queued. What lands on main? A commit with +245 / -1,137. Thousands of lines of other people’s work, already shipped and reviewed, just erased. And every merge after that built on broken, haunted history.

The bug? The merge queue started building temporary branches from the wrong spot. Instead of branching from the latest main, it branched from where the feature branch started. Maybe hundreds of commits back. Then it shoved the whole stale mess onto main.
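
If you want to see what "the wrong spot" means in plain git terms, here's a rough sketch. This isn't GitHub's actual merge queue code (that isn't public), just the plumbing the bug boils down to. It assumes a local clone with an origin/main branch, and the feature branch name is a placeholder:

```python
import subprocess

def git(*args: str) -> str:
    """Run a git command in the current repo and return its stdout, stripped."""
    return subprocess.run(
        ["git", *args], capture_output=True, text=True, check=True
    ).stdout.strip()

def queue_base(feature_branch: str, buggy: bool = False) -> str:
    """Pick the commit a merge-queue build branch starts from.

    Correct: start from the current tip of origin/main, so the build sits on
    top of everything merged since the PR was opened.
    Buggy (what April 23rd amounted to): start from the feature branch's fork
    point, possibly hundreds of commits behind, so landing the result on main
    quietly drops everyone else's work.
    """
    if buggy:
        return git("merge-base", "origin/main", feature_branch)  # stale fork point
    return git("rev-parse", "origin/main")  # current tip of main

if __name__ == "__main__":
    good = queue_base("my-feature")
    bad = queue_base("my-feature", buggy=True)
    # Every commit between the two bases is work the buggy build leaves behind.
    at_risk = git("rev-list", "--count", f"{bad}..{good}")
    print(f"correct base {good[:8]}, stale base {bad[:8]}, {at_risk} commits at risk")
```

Point that at a busy repo and a weeks-old branch and the "commits at risk" number gets ugly fast, which is exactly why the busiest repos got hurt the worst.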

What made this especially evil:

  • The UI straight-up lied. You approved one thing. Something else merged.
  • Total silence. No conflict. No failed check. You found out because code went missing.
  • It hit busy repos hardest. The more active your repo, the more drift, the more damage.

GitHub says 2,092 PRs across 658 repos were hit. But talk to devs? Some teams claim they lost over 200 PRs in one go.

And the kicker? GitHub’s status page was all smiles. “All systems operational.” Merges were “working.” They just weren’t merging what you thought they were merging. Fun, right?

The Vulnerability That Let Anyone Take Over GitHub

Four days later, April 28th. Security researchers drop [CVE-2026-3854]. A critical RCE vulnerability. CVSS 8.7.

The scary part? Any authenticated user could run any command on GitHub’s backend with a standard git push. The attack was stupidly simple. Git push options let users pass arbitrary strings. GitHub’s proxy service copied those values right into a security header called X-Stat without cleaning them. Since the period (.) was the field delimiter, you could inject a semicolon and override security policies. Code execution. Done.
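
The underlying pattern is the oldest one in the book: untrusted input pasted into a structured string without sanitization. GitHub's proxy internals aren't public beyond the write-up, so take this as a deliberately generic sketch of the bug class, not the real code. Only the header name X-Stat comes from the reports; the opt= and sandbox= fields and the sanitizer are invented for illustration:

```python
def build_x_stat(push_option: str, policy: str = "sandbox=on") -> str:
    """Vulnerable version: copy the user-supplied push option straight into a
    period-delimited header. A value like 'x;sandbox=off' carries a semicolon
    through untouched, and whatever parses this downstream sees an extra
    directive it was never supposed to get.
    """
    return f"X-Stat: opt={push_option}.{policy}"

def build_x_stat_safe(push_option: str, policy: str = "sandbox=on") -> str:
    """Hardened version: refuse anything that isn't a plain token, so user
    input can never smuggle delimiters or directives into the header.
    """
    if not push_option.isalnum():
        raise ValueError(f"rejecting push option: {push_option!r}")
    return f"X-Stat: opt={push_option}.{policy}"

if __name__ == "__main__":
    print(build_x_stat("x;sandbox=off"))  # injected directive sails through
    print(build_x_stat_safe("deploy"))    # fine
    try:
        build_x_stat_safe("x;sandbox=off")
    except ValueError as err:
        print(err)                        # rejected at the door
```

The fix is just as boring as the bug: validate at the boundary, before the string ever gets assembled.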

On GitHub.com, this meant RCE on shared storage nodes. Researchers confirmed millions of public and private repositories were accessible on those nodes. On GitHub Enterprise Server? Full server compromise.

They patched github.com in 75 minutes. But at the time of disclosure? 88% of GitHub Enterprise Server instances were still wide open.

A cool, creepy twist: this was one of the first major bugs found using AI-powered reverse engineering. The researchers used tools like IDA MCP to rip through GitHub’s compiled code fast. What used to take weeks of manual staring took a fraction of the time. The future is here, and it’s double-edged.

When Search Just… Stopped

Sandwiched in between, on April 27th, GitHub’s Elasticsearch cluster choked. Probably a bot attack. Suddenly, search-backed stuff vanished. PR lists? Blank. Issues? Gone. Project boards? Empty.

The data was still there. Git and APIs worked fine. But the interface we all rely on? Useless.

GitHub’s CTO admitted this was “one of the systems we had not yet fully isolated.” Translation: they knew it was a weak point and hadn’t fixed it yet.

The Elephant in the Room: AI Is Eating GitHub

These weren’t bad luck. They were symptoms of a platform being swamped.

GitHub’s CTO, Vladimir Fedorov, spilled the beans in a [blog post]. They started a plan in October 2025 to scale capacity by 10x. By February 2026? Not enough. Now they’re designing for 30x.

The culprit? AI dev workflows. Since late 2025, tools like Copilot, Cursor, Codex, and their agent buddies have been hammering GitHub with automated PRs, API calls, and repo ops. They’re now hitting peaks of 90 million merged PRs and 1.4 billion commits.

The past year’s damage report looks grim:

  • 257 incidents total
  • 48 major outages
  • 112+ hours of major outage downtime
  • GitHub Actions outages: 57
  • GitHub Copilot outages: 44
  • Average time to fix a major issue: 6 hours, 7 minutes

Actions was the weakest link. On October 1st, 2025, macOS runners hit a 46% error rate for over 10 hours straight. Copilot’s low point? A policy issue on February 9th, 2026, that took over six days to fully squash.

The Exodus Has Begun

When a tool becomes unreliable, people leave.

Mitchell Hashimoto [announced in April 2026] he’s pulling Ghostty off GitHub. After 18 years. His reason? GitHub “is no longer a place for serious work.”

The Zig programming language maintainers beat him out the door, migrating to Codeberg in November 2025. They cited awful bugs with Actions and a general skittishness about GitHub’s engineering culture.

Even OpenAI is reportedly sniffing around for alternatives after outages kept messing with their work.

The vibe on Reddit and Hacker News? One commenter on r/programming nailed it:

“At some point the pain of using GitHub is going to far exceed the value it provides to us.”

So What’s GitHub Doing?

They’re not just sitting around. Their public fixes include:

  • Isolating critical stuff like git and Actions from other services to contain failures.
  • Rewriting their old Ruby monolith into Go for performance.
  • Going multi-cloud for better resilience.
  • Fixing their caching and adding better backpressure systems (there’s a tiny sketch of the backpressure idea after this list).
  • Slowing down on new features to focus on stability. Big shift for them.
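
That “backpressure” bullet is doing a lot of work, so here’s roughly what the idea boils down to. This is a toy sketch, nothing from GitHub’s codebase: a bounded queue that rejects new work up front instead of buffering it forever, so a slow dependency sheds load at the edge rather than dragging every upstream caller down with it.

```python
import queue

class BackpressureQueue:
    """A bounded work queue: when it's full, new work is rejected immediately
    (think HTTP 429) instead of piling up until the whole service tips over.
    """

    def __init__(self, max_pending: int = 100):
        self._q: queue.Queue = queue.Queue(maxsize=max_pending)

    def submit(self, job) -> bool:
        """Try to enqueue a job; return False when the queue is full."""
        try:
            self._q.put_nowait(job)
            return True
        except queue.Full:
            return False

    def process_one(self) -> None:
        """Run the oldest pending job (a real service would do this in workers)."""
        self._q.get_nowait()()

if __name__ == "__main__":
    bq = BackpressureQueue(max_pending=2)
    accepted = sum(bq.submit(lambda: print("did some work")) for _ in range(10))
    print(f"accepted {accepted} of 10 jobs; the other {10 - accepted} were shed, not queued")
    bq.process_one()
    bq.process_one()
```

The InfoQ complaint about “inadequate backpressure mechanisms” below is, roughly, the absence of boundaries like this between services.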

They’re also being more transparent now. The status page shows availability metrics, and they’re reporting more incidents.

But, as InfoQ points out, the platform is still a tangled mess of “tight coupling between services” and “inadequate backpressure mechanisms.” One part fails, and it’s dominoes.

Should You Bail?

Only you and your team can decide. GitHub still has the biggest network, the best integrations, and Copilot. Those are real perks.

But reliability isn’t optional. If your deploys run on GitHub Actions, if your code review lives in merge queues, if your team merges PRs like it’s going out of style… you have to ask: what happens when the platform lies to you for three hours?

A few practical thoughts:

  • Trust, but verify your merges. Don’t just believe the UI. Spot-check that what landed matches what you reviewed (there’s a rough sketch of one way to automate this after the list).
  • Keep local backups. If you’re not cloning critical repos regularly, start yesterday.
  • Look at the other players. GitLab, Codeberg, self-hosted Gitea. Worth a look for critical stuff.
  • Patch your GHES instances. If you run GitHub Enterprise Server, for the love of god, patch CVE-2026-3854 now.
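
On the “trust, but verify” point, here’s one way to automate the spot-check using GitHub’s REST API. Treat it as a sketch: the org, repo, PR number, and 50-line tolerance are placeholders, it expects a token in a GITHUB_TOKEN environment variable, and it assumes squash or merge-commit style merges (rebase merges don’t leave a single merge commit to compare against).

```python
import os

import requests

API = "https://api.github.com"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def merged_what_you_reviewed(owner: str, repo: str, pr_number: int,
                             tolerance: int = 50) -> bool:
    """Compare the diff stats you reviewed on a PR against what actually landed.

    Fetches the PR's additions/deletions plus the stats of its merge commit,
    and flags the merge when the two disagree by more than `tolerance` lines.
    A +29/-34 review that lands as +245/-1,137 fails loudly here.
    """
    pr = requests.get(
        f"{API}/repos/{owner}/{repo}/pulls/{pr_number}",
        headers=HEADERS, timeout=30,
    ).json()
    commit = requests.get(
        f"{API}/repos/{owner}/{repo}/commits/{pr['merge_commit_sha']}",
        headers=HEADERS, timeout=30,
    ).json()
    drift = (abs(pr["additions"] - commit["stats"]["additions"])
             + abs(pr["deletions"] - commit["stats"]["deletions"]))
    if drift > tolerance:
        print(f"PR #{pr_number}: reviewed +{pr['additions']}/-{pr['deletions']}, "
              f"landed +{commit['stats']['additions']}/-{commit['stats']['deletions']}")
        return False
    return True

if __name__ == "__main__":
    # Hypothetical repo and PR number, purely for illustration.
    ok = merged_what_you_reviewed("your-org", "your-repo", 1234)
    print("matches what you reviewed" if ok else "go read that merge commit, now")
```

Wire something like this into a nightly job for your busiest repos and the April 23rd failure mode at least stops being silent.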

The Unvarnished Truth

GitHub had a brutal year. 257 incidents. A vuln that exposed millions of repos. A merge bug that silently erased code. And an AI tidal wave they’re still trying to surf.

They’re working on it. Scaling, isolating, rebuilding. But trust is fragile. It takes years to earn and seconds to shatter. When your merge queue silently reverts two thousand PRs and your status page says “A-OK,” trust takes a major hit.

We all build on foundations we don’t own. GitHub’s wobbles are a stark reminder: stay sharp, keep backups, and never, ever assume that green checkmark is telling you the whole story.

So, what’s your war story? Has GitHub’s instability messed with your team? Thinking of switching? Sound off below – I’m genuinely curious how everyone else is coping.

If this hit home, you might also want to check out my pieces on [recent CVEs in critical infrastructure] or how open-source alternatives are shaping up.
