The Linux Foundation Just Published a Tracker Showing Exactly How Much AI-Written Code Is Leaking Into Critical Open Source Projects, and the Numbers Are Wilder Than I Expected

On April 1st, 2026, someone posted a link to the Linux Foundation's new "AI Code Tracker" on Hacker News. I assumed it was an April Fools prank. A dashboard that shows, in near-real-time, which open source projects are accepting AI-generated commits? Sounded like satire.

It was not satire. It is a live tool at insights.linuxfoundation.org/report/ai-code-tracker, and it tracks AI-assisted commits across what the Foundation calls "the world's most critical open source projects." The Kubernetes repo alone showed 14.3% AI-assisted commits in March 2026. Fourteen percent. Of Kubernetes.

My coffee got cold while I was processing that number.

How Does the AI Code Tracker Actually Detect AI-Written Commits?

The tracker uses a combination of commit message pattern matching, code style analysis, and metadata signals to identify commits that were likely assisted by AI coding tools like GitHub Copilot, Cursor, Claude Code, and Codeium. It cross-references co-author tags (like "Co-authored-by: copilot" which some tools inject automatically), analyzes code patterns that match known AI generation signatures, and checks for telltale formatting quirks that human developers almost never produce.
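The Foundation has not published its exact heuristics, but the co-author-trailer signal alone is easy to picture. Here is a minimal sketch of that one signal; the tag patterns and the example message are my own illustration, not the tracker's actual rule set:

```python
import re

# Trailer patterns some AI tools inject into commit messages.
# These specific strings are illustrative, not the tracker's real list.
AI_COAUTHOR_PATTERNS = [
    re.compile(r"^Co-authored-by:.*copilot", re.IGNORECASE | re.MULTILINE),
    re.compile(r"^Co-authored-by:.*claude", re.IGNORECASE | re.MULTILINE),
    re.compile(r"Generated with .*(Cursor|Codeium)", re.IGNORECASE),
]

def looks_ai_assisted(commit_message: str) -> bool:
    """Flag a commit whose message carries a known AI co-author trailer."""
    return any(p.search(commit_message) for p in AI_COAUTHOR_PATTERNS)

msg = (
    "Fix nil check in scheduler\n\n"
    "Co-authored-by: Copilot <copilot@github.com>\n"
)
print(looks_ai_assisted(msg))          # True
print(looks_ai_assisted("Fix typo"))   # False
```

The real system layers style analysis and other metadata on top of this, which is presumably where most of that 8-12% false positive rate gets absorbed.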

Amanda Casari, who leads the CHAOSS project that built the underlying detection heuristics, told reporters that the false positive rate sits around 8-12% based on their validation against known AI-assisted and human-only commits. Not perfect. But far more signal than noise.

The top tools showing up in critical infrastructure

According to the tracker's "AI-Assisted Commits by Tool" breakdown for March 2026:

  • GitHub Copilot: 47% of detected AI commits (still dominant, despite the controversy around ad insertion)
  • Cursor: 23% (growing fast, especially in Rust and TypeScript projects)
  • Claude Code: 18% (jumped from 6% in January; that source code leak apparently did not hurt adoption)
  • Codeium / Others: 12% combined

The Claude Code growth is wild. In January it was a footnote. By March it held nearly a fifth of the AI-coding market in critical OSS. Nate Friedman (not the Nat with one T; this one is a maintainer on the systemd project) said on Mastodon that three of his team's four new contributors were using Claude Code exclusively. "They refuse to touch Copilot," he wrote. "They say it feels like autocomplete with a god complex."

Which Critical Projects Have the Most AI-Generated Code?

This is where things get uncomfortable. The tracker's "past 12 months" view shows a clear trend: AI-assisted commits are climbing in virtually every major project, but some are climbing faster than others.

Projects with the highest AI-assisted commit ratios in March 2026:

  • Kubernetes: 14.3% (up from 7.1% in September 2025)
  • Node.js: 11.8%
  • React: 9.2%
  • Linux kernel: 3.1% (low but rising, and this is the kernel)
  • OpenSSL: 2.7%
  • PostgreSQL: 1.9%

PostgreSQL maintainers are legendarily conservative about code quality. That they are seeing ANY AI commits sneak through is telling. Tom Lane (one of the longest-serving Postgres contributors, active since 1998) has publicly questioned whether the project's review process can catch subtle AI-generated bugs that pass superficial code review.

And look, I get it. 14.3% in Kubernetes sounds scary when you remember that Kubernetes runs approximately... everything. But there is nuance here that the headline-grabbers miss.

Not all AI commits are created equal

A Copilot-assisted commit that adds boilerplate YAML to a Kubernetes controller test file is not the same as an AI-generated patch to the scheduler's core preemption logic. The tracker does not (yet) distinguish between these. A comment on the HN thread from user "jstrieb" pointed out that the vast majority of AI-assisted commits in Kubernetes fall into documentation, test scaffolding, and dependency bumps โ€” not core scheduler or networking code.

This matters enormously. If 14.3% of commits are AI-assisted but 90% of those are docs and tests, the actual critical-path AI code is more like 1.4%. Still not zero. But much less terrifying.
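That back-of-envelope estimate is easy to make concrete. Here is a sketch that buckets commits by the paths they touch and computes the critical-path share; the path prefixes and toy numbers are my assumptions, not the tracker's taxonomy:

```python
# Classify AI-assisted commits by the files they touch, then compute
# what share of ALL commits is AI code on the critical path.
# Path prefixes and sample data below are illustrative assumptions.
NON_CRITICAL_PREFIXES = ("docs/", "test/", "tests/", "vendor/")

def is_critical(paths: list[str]) -> bool:
    """A commit counts as critical-path if any touched file falls
    outside docs, tests, and vendored dependency churn."""
    return any(not p.startswith(NON_CRITICAL_PREFIXES) for p in paths)

# Toy month: 1000 commits, 143 AI-assisted (14.3%), with 90% of those
# touching only docs/tests, mirroring the jstrieb estimate.
total_commits = 1000
ai_commits = [["docs/setup.md"]] * 129 + [["pkg/scheduler/preempt.go"]] * 14

critical_ai = sum(1 for paths in ai_commits if is_critical(paths))
print(f"AI-assisted: {len(ai_commits) / total_commits:.1%}")   # 14.3%
print(f"Critical-path AI: {critical_ai / total_commits:.1%}")  # 1.4%
```

The interesting future version of the tracker is exactly this: a per-project breakdown of where in the tree the AI code actually lands.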

Why Does This Data Even Matter for AI Tool Users?

Because it proves something that was previously just vibes: AI coding tools are being used in production-critical contexts, at scale, by serious developers. This is not just junior devs auto-completing for loops. Kernel maintainers are using them. OpenSSL contributors are using them. The people who write code that your bank runs on are using them.

Two implications hit me immediately:

First: If you are evaluating AI coding tools for your team, the "is this just a toy?" question is answered. It is not a toy. It is in the Linux kernel. You can stop having that debate in your next engineering standup.

Second: The security and code review implications are massive. Sarah Jamie Lewis (the privacy researcher, not the other one) tweeted on March 29th that "the question is not whether AI-generated code has bugs. The question is whether our review processes were designed to catch the kinds of bugs AI generates." She is right. AI-written code fails in patterns that differ from human-written code. Your review checklist might need updating.

The corporate angle nobody talks about

Here is my cynical take, and I freely admit it might be wrong: the Linux Foundation releasing this tracker serves a corporate purpose. Every major LF member (Google, Microsoft, Meta, Amazon) sells AI coding tools. Showing that AI code is widely used in critical projects normalizes AI-assisted development and, by extension, the tools these companies sell.

I asked my colleague Jun, who tracks open source governance for a think tank in Seoul, what he thought. "It is both useful transparency and marketing material," he said at 3:47 PM KST on April 2nd. "The Foundation gets to look responsible while their funders get validation."

That does not make the data wrong. But it is worth knowing who benefits from you seeing it.

What Should Developers and Engineering Managers Do With This Information?

Three concrete things:

1. Audit your own AI code ratio. If Kubernetes is at 14.3%, what is your internal codebase at? You probably do not know. Tools like Reprompt (a new Show HN from March 31st) analyze what you type into AI tools, not what they output, so the output side is still yours to measure. Start measuring before you try to manage.

2. Update your code review guidelines. AI-generated code has specific failure modes: over-abstraction, plausible-but-wrong API usage, subtle type coercion bugs in JavaScript/TypeScript, and a tendency to "solve" problems by adding complexity rather than simplifying. Train your reviewers to spot these patterns.

3. Decide your policy before it decides itself. In Stack Overflow's 2026 developer survey, 63% of companies said they have "no formal policy" on AI coding tool usage. Which means developers are using whatever they want, however they want, with no guardrails. If your org is in that 63%, fix it this quarter.
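The "plausible-but-wrong API usage" failure mode from point 2 deserves a concrete illustration. A classic Python instance (my own toy case, not drawn from any specific flagged commit): `list.sort()` sorts in place and returns `None`, and generated code sometimes treats it as a value.

```python
def top_scores_buggy(scores: list[int]) -> list[int]:
    # Plausible-looking but wrong: list.sort() mutates in place
    # and returns None, so this function returns None, not a list.
    return scores.sort(reverse=True)

def top_scores_fixed(scores: list[int]) -> list[int]:
    # sorted() returns a new list and leaves the input untouched.
    return sorted(scores, reverse=True)

print(top_scores_buggy([3, 1, 2]))  # None
print(top_scores_fixed([3, 1, 2]))  # [3, 2, 1]
```

The buggy version sails past a skim review because every token looks reasonable; it only fails when a caller iterates the result. That is exactly the shape of bug a checklist should name.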

The Linux Foundation tracker is imperfect. The detection heuristics miss some AI code and flag some human code. But it is the first serious attempt to measure something that, until now, existed only as anecdote and anxiety. And the trend line is unmistakable: AI is writing more and more of the code the world depends on. Whether that excites you or terrifies you probably depends on how much faith you have in code review.

Me? I am somewhere in between. Which feels about right for 2026.

The AI Code Tracker updates monthly. I will be tracking the trend for the next few quarters. If Kubernetes hits 20%, I owe Jun a beer.
