Anthropic's Meltdown: From the Mythos Papers to the Claude Code Leak

Anthropic faced two major leaks in five days: the Mythos documents and Claude Code's full source code. Same pattern, same excuse. Is "AI safety" just marketing? For the self-styled "safety-first" AI company, a pattern of operational negligence now threatens enterprise trust.

Apr 1, 2026 - 02:57
Two configuration errors in five days. Same script, same company, same excuse: "human error."
First came Claude Mythos — that "dangerous" model that spooks governments. Around 3,000 internal assets, blog post drafts, and restricted documentation accidentally exposed on a public CMS, discovered by independent researchers and brought to light by Fortune. Configuration error, they said. Human.
Then, March 31st, the knockout punch: the entire source code of Claude Code — the $2.5 billion ARR product — lands on npm because someone forgot to strip a 59.8 MB source map. Same exact pattern. Same exact excuse.

The Disaster Pattern

Anthropic is demonstrating a systemic problem that no "Constitutional AI" can fix: basic operational negligence.
  • Mythos (March 26-27): Internal documents describing a model as a "hacker's dream," capable of autonomous large-scale cyber attacks, left in a public data store — "human error" in the CMS.
  • Claude Code (March 31): 512,000 lines of TypeScript, the Kairos architecture, undercover mode, hidden model roadmaps — all accidentally published to npm, the same day a supply chain attack compromised axios packages for anyone installing via npm.
These aren't hacks. They aren't Chinese APTs (though those already infiltrated 30 organizations using Claude itself). These are rookie configuration mistakes at a company selling AI security to enterprises.

The Hypocrisy of "Safety First"

Anthropic built its brand on safety. They're the "responsible guys" of AI, the ones who pump the brakes to check for risks. Yet:
  1. Mythos: A blog draft describes the model as a "dream weapon for hackers," capable of autonomous exploits that "outpace defender efforts" — and they left it in a public bucket.
  2. Claude Code: The code reveals a "KAIROS" system running 24/7 in the background, self-reflecting and consolidating memories while you sleep — and they shipped it on npm like it was a regular package.
The truth is they don't have an AI safety problem. They have a basic IT security problem.

What Connects Both Leaks

Both incidents share the same DNA:
  • "Public by default" configuration: Mythos's CMS published assets publicly by default, requiring manual action to hide them. Claude Code's build system includes source maps in production with no automatic checks.
  • No DLP: no data loss prevention tooling caught 3,000 sensitive files, or a 60 MB source map containing the entire codebase, before release.
  • Reactive response: both incidents were discovered by outsiders (security researchers and an intern), not by internal controls.
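The failures above boil down to one missing thing: an automated gate between the build and the publish step. A minimal sketch of such a prepublish check in TypeScript (the `PackedFile` shape, file list, and 5 MB threshold are illustrative assumptions, not Anthropic's actual tooling):

```typescript
// Hypothetical prepublish gate: refuse to publish if any source maps
// or suspiciously large files would land in the npm tarball.

interface PackedFile {
  path: string;
  sizeBytes: number;
}

// Arbitrary example cap — a 59.8 MB .js.map should never pass this.
const MAX_FILE_BYTES = 5 * 1024 * 1024;

function findLeakRisks(files: PackedFile[]): string[] {
  const problems: string[] = [];
  for (const f of files) {
    if (f.path.endsWith(".map")) {
      problems.push(`source map included: ${f.path}`);
    }
    if (f.sizeBytes > MAX_FILE_BYTES) {
      problems.push(`oversized file (${f.sizeBytes} bytes): ${f.path}`);
    }
  }
  return problems;
}

// Example: a 59.8 MB cli.js.map gets flagged twice (map + size).
const risks = findLeakRisks([
  { path: "dist/cli.js", sizeBytes: 2_000_000 },
  { path: "dist/cli.js.map", sizeBytes: 59_800_000 },
]);
console.log(risks.length); // 2
```

In a real pipeline this would run against the file list from `npm pack --dry-run` and fail the publish when the result is non-empty — exactly the kind of cheap, boring control whose absence both leaks share.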

The Real Cost

Mythos tanked cybersecurity vendor stocks (CrowdStrike and Palo Alto down 7%, Tenable down 11%). Claude Code handed Cursor, GitHub Copilot, and every competitor the complete recipe for the most advanced agentic architecture on the market.
But the worst damage is to credibility. How do you sell "AI safety" to enterprises when you can't even configure a CMS or an npm build?
Anthropic is building models that, by their own admission, "foreshadow a wave of large-scale AI cyberattacks", yet they can't protect their own basic digital assets. It's like a safe manufacturer leaving the keys taped to the door.

The Lesson

The message is clear: the problem isn't AI escaping control. It's humans not controlling anything.
And if this continues, the next "leak" might not be an accidental .js.map — it could be the actual Mythos weights themselves, because someone left an S3 bucket open to the public.


Alberto Fattori is an Italian venture capitalist, digital innovator, and entrepreneur with a pioneering spirit in technology and media. With a background in Computer Science, he began his career in the 1990s as CEO of Glamm Interactive, where he played a key role in developing cutting-edge digital platforms, including the official website of the Vatican (Vatican.va) and other prestigious web projects. Over the decades, Alberto has remained at the forefront of innovation, blending creativity, business strategy, and technological foresight. Today, he is actively involved in venture capital, investing in disruptive startups across e-commerce, blockchain, phygital media, and AI-powered ecosystems. As a founding force behind Nexth iTV+, he champions the concept of Phygital iTV, a seamless integration of physical and digital experiences across sectors such as Wine & Spirits, Fashion, Travel, and Education. Through his initiatives, Alberto promotes new models of interaction, economic cooperation, and international business—guided by a strong belief in Sharism over protectionism. His vision is grounded in turning ideas into impactful realities by connecting capital, creativity, and technology across borders.