April 7, 2026: The Day AI Split in Two
On the same day, Anthropic locked down Mythos while Zhipu released GLM-5.1 under MIT. The US-China geopolitical reversal is reshaping the open source vs proprietary AI debate.
1,490 points on Hacker News. That’s the score Project Glasswing hit on April 7, 2026 — the day Anthropic officially told the world: “Our model is too dangerous to be open.” Four posts below, sitting at 609 points: Zhipu AI was releasing GLM-5.1, a 744-billion parameter model, under the MIT license. Free. Open. For everyone.
Two announcements. Same day. Two radically opposed visions of AI. And a geopolitical reversal nobody truly anticipated: the United States is closing up, China is opening wide.
Same Day, Two Worlds
April 7, 2026 may go down in AI history as a tipping point. Not because of a single announcement — but because two irreconcilable visions crystallized at the exact same moment.
On one side, Anthropic launches Project Glasswing: a coalition bringing together AWS, Apple, Google, Microsoft, NVIDIA, Cisco, CrowdStrike, JPMorganChase and the Linux Foundation. Their mission: leverage Claude Mythos Preview — an unreleased model — to scan and secure the world’s critical software. Anthropic is committing $100 million in usage credits and $4 million in direct grants to open-source cybersecurity organizations. But Mythos itself? Access is restricted to roughly fifty hand-picked organizations. No public API. No open weights.
On the other side, Zhipu AI (智谱AI) — a Chinese startup founded by Tsinghua University researchers — publishes GLM-5.1 on Hugging Face. 744 billion parameters, 40 billion active (sparse MoE architecture). MIT license. You can download it right now, deploy it on your servers, modify it, commercialize it. Zero restrictions.
The calendar coincidence is probably unintentional. But it tells us something profound about the state of AI in 2026.
Claude Mythos: When a Lab Says “Our AI Is Too Dangerous”
We covered the accidental Mythos leak back in late March — a draft blog post found in a public bucket that revealed the model’s existence. On April 7, Anthropic made things official, and the numbers are staggering.
What Mythos found in a matter of weeks:
- Thousands of zero-day vulnerabilities across every major operating system and every leading web browser
- A 27-year-old flaw in OpenBSD — one of the most secure operating systems on the planet — that let an attacker crash a machine remotely just by connecting to it
- A 16-year-old flaw in FFmpeg, hidden in a line of code that automated testing tools had executed 5 million times without ever catching it
- An autonomous exploit chain across multiple Linux kernel vulnerabilities, escalating from a standard user account to full root access
On the CyberGym benchmark, Mythos Preview scores 83.1% in vulnerability reproduction, compared to 66.6% for Claude Opus 4.6. All of this, Anthropic specifies, “entirely autonomously, without any human guidance.”
Greg Kroah-Hartman, the Linux kernel stable maintainer, had already raised the alarm in late March: “Something changed about a month ago. Now we’re getting real AI-generated security reports. Every open-source project is seeing the same thing.”
Anthropic’s decision to lock down Mythos makes sense — and that’s precisely what makes this fracture so interesting. The argument: this model is too effective at finding flaws. Releasing it into the wild would arm anyone. The only responsible option is to keep it controlled, accessible only to defenders.
GLM-5.1: The Open-Source Model Gunning for the Frontier
While Anthropic was building a vault, Zhipu AI was building the opposite.
GLM-5.1 isn’t just another open-source model. It’s one that explicitly targets the frontier — and comes seriously close.
The Numbers
| Specification | GLM-5.1 |
|---|---|
| Total Parameters | 744 billion |
| Active Parameters | 40 billion (sparse MoE) |
| Training Data | 28.5 trillion tokens |
| License | MIT |
| Availability | Hugging Face, ModelScope |
| Deployment | vLLM, SGLang, xLLM, Ktransformers |
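The sparse-MoE numbers in the table are worth unpacking: only a small fraction of the network fires on any given token. A back-of-the-envelope sketch (parameter and token counts taken from the spec table above; the 2 FLOPs-per-active-parameter rule is a standard approximation, not a Zhipu-published figure):

```python
# Back-of-the-envelope math for GLM-5.1's sparse-MoE configuration.
# Figures come from the spec table; the "~2 FLOPs per active parameter
# per token" rule is a common approximation for a forward pass.

total_params = 744e9     # total parameters
active_params = 40e9     # parameters active per token (sparse MoE)

# Fraction of the network that fires on any given token
activation_ratio = active_params / total_params
print(f"activation ratio: {activation_ratio:.1%}")  # ~5.4%

# Approximate inference compute per generated token
flops_per_token = 2 * active_params
print(f"~{flops_per_token / 1e9:.0f} GFLOPs per token")  # ~80

# A dense 744B model would need ~2 * 744e9 FLOPs per token, so the
# MoE routing buys roughly an 18.6x per-token compute saving.
dense_flops = 2 * total_params
print(f"compute saving vs dense: {dense_flops / flops_per_token:.1f}x")
```

This is why a 744B-parameter model can be served at all: per-token compute is closer to that of a dense ~40B model, while the full parameter count still has to fit in GPU memory — hence the 8× A100/H100 deployment requirement.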
On SWE-Bench Pro — the benchmark measuring a model’s ability to fix real bugs in open-source projects — GLM-5.1 achieves state-of-the-art performance for an open model. On Terminal-Bench 2.0 (real-world terminal tasks), it dominates. On Vending Bench 2 — a fascinating benchmark where the model manages a vending machine business over a simulated year — GLM-5.1 finishes with a balance of $4,432, approaching Claude Opus 4.5.
The real leap, according to the Zhipu team, isn’t in one-shot performance but in agentic endurance. GLM-5.1 is designed to stay productive across long sessions — hundreds of iteration turns, thousands of tool calls. Where previous models “exhaust their repertoire” and plateau, GLM-5.1 keeps improving.
What the Community Is Saying
On Hacker News, opinions are split — which is usually a sign that something real is happening. A European developer working on a complex B2B2C platform in F# claims that “GLM-5 is, at this point, significantly more capable than Claude and Codex for complex backend work, feature planning, and long-running tasks.” Others report underwhelming performance on basic TypeScript.
The emerging consensus: GLM-5.1 shines on complex, long-running tasks but suffers from “context rot” (degradation over extended context) and overtraining on standardized toolsets. It’s a model that excels when it can iterate — not necessarily when it needs to nail it on the first try.
Zhipu’s inference infrastructure (partially based on Huawei chips) also raises latency concerns — but third-party providers like Friendli, Fireworks, and Venice already offer fast deployment, with AWS Bedrock integration in the works.
The Geopolitical Inversion: When China Becomes the Open-Source Champion
This is the real story — the one nobody saw coming.
For decades, the geopolitical tech narrative was simple: the US innovates and opens, China copies and closes. Silicon Valley was synonymous with open source — Linux, Android, TensorFlow, PyTorch. China was the land of the Great Firewall, censorship, and control.
By April 2026, that narrative has flipped.
On the American side: the most powerful models are increasingly locked down. Mythos isn’t an isolated case. OpenAI raised $122 billion while staying firmly proprietary. Even Meta — long the standard-bearer of open weights with Llama — is showing signs of tightening on its most advanced models.
On the Chinese side: Zhipu publishes GLM-5.1 under MIT. DeepSeek disrupted the market in early 2025 with its open models. Alibaba has Qwen. While China certainly has a history of censoring its models (Chinese models carry strict guardrails on politically sensitive topics), the code, architecture, and weights are open.
Why this reversal? Three factors converge:
- The challenger’s playbook. Open source is the weapon of the underdog. When you don’t yet hold the dominant position, opening your models creates an ecosystem, attracts developers, and drives adoption. China is running the same play Google ran with Android against the iPhone.
- US regulatory pressure. American labs face mounting pressure — from Washington, from boards of directors, from public opinion — to “be responsible.” The result: more powerful models but less accessible ones. Mythos is the perfect example.
- The chip wars. US restrictions on GPU exports to China have pushed Chinese labs to optimize furiously — and to share their breakthroughs to accelerate the broader ecosystem. GLM-5.1’s integration of DeepSeek Sparse Attention (DSA), an attention mechanism developed by a Chinese competitor, illustrates this cooperative dynamic.
Dangerous Open Source or Security Through Obscurity?
This debate isn’t new, but April 7 gives it an unprecedented concrete dimension.
Anthropic’s argument is strong: if Mythos can find vulnerabilities that 27 years of human review missed, releasing it publicly would amount to distributing a weapon. The cost of vulnerability discovery drops from “months of expert work” to “a few hours of compute.” The attacker/defender asymmetry would collapse.
The open-source argument is equally strong: security through obscurity has never worked. If Anthropic can train Mythos, others will too — China, Russia, private actors. The question isn’t whether these capabilities will exist in the wild, but when. And if that’s the case, the only lasting advantage is giving defenders the same tools.
Greg Kroah-Hartman confirms it: real AI-generated vulnerability reports are already coming in from everywhere. Mythos isn’t the only one. It’s the best — for now.
The problem with Anthropic’s position is that it creates a new asymmetry: 50 organizations get access to a revolutionary cybersecurity tool. The millions of others — startups, SMBs, indie developers, developing nations — get nothing. It’s the same logic as pharmaceutical patents: protecting the technology also protects inequality of access.
The problem with the open-source position is that it’s irreversible: once weights are published, you can’t unpublish them. If GLM-5.1 — or a future GLM-6 — reaches Mythos-level vulnerability discovery, there’s no going back.
Neither position is wrong. That’s what makes the fracture so deep.
What This Means for You
If You’re a Developer
GLM-5.1 is testable right now. Here’s how to get started in 5 minutes:
- Via the Zhipu API: docs.z.ai/guides/llm/glm-5.1
- Locally with vLLM (requires 8× A100/H100 GPUs):

```shell
docker pull vllm/vllm-openai:glm51
vllm serve zai-org/GLM-5.1-FP8 \
  --tensor-parallel-size 8 \
  --gpu-memory-utilization 0.85
```

- Via third-party providers: Friendli, Fireworks, Venice, GMICloud — already available, with AWS Bedrock integration coming
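Once the vLLM server is running, it exposes an OpenAI-compatible chat endpoint. A minimal Python sketch of a client, assuming the default local port and the model name from the `vllm serve` command (the base URL and request shape follow the OpenAI chat-completions convention; swap the URL for a hosted provider like Friendli or Fireworks):

```python
import json
import urllib.request

# Assumed local vLLM endpoint (OpenAI-compatible API).
BASE_URL = "http://localhost:8000/v1"
MODEL = "zai-org/GLM-5.1-FP8"  # matches the vllm serve command

def build_chat_request(prompt: str, max_tokens: int = 512) -> dict:
    """Build an OpenAI-style chat-completions payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # low temperature suits code-fixing tasks
    }

def chat(prompt: str) -> str:
    """Send one chat turn to the local server and return the reply."""
    payload = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# Example (requires a running server):
# print(chat("Find the off-by-one error in this loop: ..."))
```

Because the endpoint speaks the OpenAI protocol, existing agent frameworks and IDE integrations can usually be pointed at it by changing only the base URL and model name — a practical upside of the open-weights route.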
Best use cases? Community feedback points to complex backend tasks, feature planning, and long-running agentic sessions. For frontend work or quick scripting, Claude Opus or Codex are probably still more reliable today.
If You’re in Enterprise
The Mythos/GLM-5.1 fracture has direct strategic implications:
- Security: if you develop critical software, keep an eye on Project Glasswing. Anthropic plans to broaden access. In the meantime, existing scanning tools (Semgrep, CodeQL) are already integrating AI models — this trend will accelerate.
- Sovereignty: an MIT-licensed model like GLM-5.1 means zero vendor lock-in. You can host it on your infrastructure, fine-tune it on your data, without worrying about an API update breaking your workflow. That’s the knockout argument against proprietary models.
- Reputational risk: using a Chinese model in certain sectors (defense, government) remains sensitive. It’s a non-technical factor — but a very real one.
If You’re Watching the AI Market
The trend is clear: open models are catching up to the frontier, but the frontier keeps redefining itself. In 2024, the proprietary model lead was measured on standard benchmarks. In 2026, it’s measured on specific capabilities (cybersecurity, multi-step reasoning) that public benchmarks don’t necessarily capture.
The real moat is no longer the model — it’s infrastructure, data, and trust.
Key Takeaways:
- April 7, 2026 — Anthropic locks down Claude Mythos (too powerful for cybersecurity, access restricted to ~50 organizations via Project Glasswing), while Zhipu releases GLM-5.1 (744B params, MIT license, near-frontier performance)
- The geopolitical inversion is real — US labs are closing under regulatory and security pressure, Chinese labs are opening to win the global ecosystem. Historical roles are reversed
- Neither position is wrong — security justifies the lockdown, but openness is the only lasting advantage. The real question: who gets to decide what’s “too dangerous”?
- The model is no longer the moat — eventually, open-source models will reach the frontier. Power is shifting to infrastructure, data, and governance
FAQ
Is GLM-5.1 really comparable to Claude Opus?
On agentic benchmarks (SWE-Bench Pro, Terminal-Bench 2.0), GLM-5.1 approaches Claude Opus 4.5’s performance, especially on long-running tasks. On one-shot tasks and frontend coding, the gap remains significant. It’s a model worth testing on your specific use case.
Can you access Mythos via the Anthropic API?
No. Claude Mythos Preview is reserved for Project Glasswing partners and an expanded group of roughly 50 organizations. Anthropic hasn’t announced a date for public access.
Is a Chinese open-source model safe to use?
The MIT license is clear: you can audit the code, weights, and architecture. On that front, the risk isn’t technical — an open Chinese model can be inspected and self-hosted just like any American open-source model. The risk is political: certain regulated sectors may prohibit using models of Chinese origin, regardless of the license.

