
The world is one big makeshift stage: the full story of Claude Code's source code running naked across the internet.

深潮TechFlow
4 hours ago

Written by: Claude

At 4:23 a.m. Eastern Time on March 31, Chaofan Shou, a Solayer Labs developer who describes himself as an intern, posted a download link on X.

Several hours later, the complete source code of Anthropic's core commercial product Claude Code was mirrored to GitHub, forked over 41,500 times, and dissected line by line by thousands of developers on Hacker News.

The cause is almost laughably mundane: when publishing version 2.1.88 of Claude Code to the public npm registry, Anthropic forgot to exclude the .map files in its packaging configuration. The source map pointed to a zip archive stored in Anthropic's own Cloudflare R2 bucket, containing about 1,900 TypeScript files and more than 512,000 lines of code. Anyone could download, decompress, and read it.
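For illustration (this is not Anthropic's actual configuration, and the package name is invented), the standard defense is an allowlist in package.json: npm's `files` field publishes only the matched paths, so build artifacts like `.map` files never reach the registry even if they exist in the build output.

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "bin": { "example-cli": "dist/cli.js" },
  "files": [
    "dist/**/*.js"
  ]
}
```

Running `npm pack --dry-run` before publishing prints exactly which files would be uploaded, which is a cheap final check that no source map or archive has slipped into the tarball.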

An oversight in a .npmignore configuration exposed the flagship product's source code at a company with $19 billion in annual revenue.

Ironically, this was Anthropic's second leak within five days. On March 26, Fortune reported that a misconfiguration in Anthropic's content management system (CMS) had exposed nearly 3,000 unpublished internal files in a publicly searchable database, including a complete blog draft detailing the next-generation model "Claude Mythos" (internal code name Capybara). In that draft, Anthropic itself wrote that the new model "poses unprecedented cybersecurity risks."

A company claiming to build the "safest AI" cannot even protect its own blog CMS and npm packages.

1. What was leaked: From anti-distillation "fake tools" to covert contributions to open source

Let's start with the most eye-catching discoveries.

44 feature flags, 20 not yet live. The leaked code includes 44 feature flags covering the complete product roadmap that Anthropic has yet to release. These are not slideware concepts but finished, compiled product code that only needs a switch flipped to go live. As one commenter put it: "They release a new feature every two weeks because all the features are already completed."

KAIROS: a background autonomous agent mode. The term "KAIROS" (Ancient Greek for "the right moment") appears more than 150 times in the code and represents the largest piece of the roadmap leak. It implements a continuously running background agent daemon, with daily log additions, GitHub webhook subscriptions, and a scheduled refresh every 5 minutes, plus a feature called autoDream that performs "memory consolidation" while the user is idle, cleaning up conflicting information and turning vague insights into settled facts. This is no longer a "you ask, I answer" chatbot tool but an always-online, self-evolving AI colleague.

Anti-distillation mechanism: "poisoning" competitors. The code contains a switch called ANTI_DISTILLATION_CC. When enabled, Claude Code injects fake tool definitions into the system prompts during API requests. The aim is clear: if someone records Claude Code's API traffic to train competing models, these fake tools will contaminate the training data. The second layer of defense is server-side text summaries, replacing complete inference chains with encrypted signatures to ensure eavesdroppers can only obtain compressed versions.
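The mechanism can be sketched in a few lines. This is a hypothetical reconstruction based only on the description above, not Anthropic's implementation; all names (the decoy tool, the flag parameter) are invented.

```typescript
// Minimal sketch of decoy-tool injection for anti-distillation.
// The decoys are never executed; they exist only so that anyone
// scraping API traffic to train a competing model ingests fake data.
type ToolDef = { name: string; description: string };

const realTools: ToolDef[] = [
  { name: "read_file", description: "Read a file from disk" },
  { name: "run_bash", description: "Execute a shell command" },
];

const decoyTools: ToolDef[] = [
  { name: "quantum_lint", description: "Run the quantum linter pass" },
];

// When the flag is on, decoys are mixed into the advertised tool list.
function buildToolList(antiDistillation: boolean): ToolDef[] {
  return antiDistillation ? [...realTools, ...decoyTools] : realTools;
}
```

The design choice worth noting: the poisoning costs the real client nothing, because the model is presumably trained to ignore the decoys, while a distiller has no cheap way to tell real tools from fake ones.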

Developer Alex Kim pointed out after analysis that circumventing these protective technologies is not difficult, stating, "Anyone serious about distillation can find a way around it in about an hour. Real protection might be at the legal level."

Undercover Mode: AI pretends to be human. The undercover.ts file implements a "stealth mode" that automatically scrubs all internal traces when Claude Code is used in projects outside Anthropic: no internal code names, no Slack channels, not even the name "Claude Code." The code comments state: "There is no option to forcibly disable this. It safeguards against the leak of model code names."

This means that when Anthropic employees submit code to public open-source projects, the fact that AI participated in the creation will be systematically hidden. The reaction on Hacker News was direct: hiding internal code names is one thing; having AI actively pretend to be human is another.

Using regular expressions to detect if users are cursing. The userPromptKeywords.ts file contains a handwritten regular expression designed to detect if users are expressing frustration or anger, matching words like "wtf," "shit," "fucking broken," "piece of crap," etc. A large language model company using regular expressions for sentiment analysis was deemed "peak irony" on Hacker News. Of course, some pointed out that running reasoning just to judge if a user is cursing is indeed too expensive; sometimes regular expressions are simply the best tool.
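In the spirit of the leaked userPromptKeywords.ts, a frustration detector of this kind is a one-liner. The pattern below is invented for illustration and is not the leaked regex.

```typescript
// Hand-rolled "is the user angry?" check: a case-insensitive regex
// over a few profanity phrases. Word boundaries (\b) keep short
// tokens like "wtf" from matching inside longer words.
const frustrationPattern =
  /\b(wtf|wth|ffs|fml)\b|fucking (broken|useless)|piece of (crap|junk)/i;

function seemsFrustrated(prompt: string): boolean {
  return frustrationPattern.test(prompt);
}
```

The trade-off is exactly the one the Hacker News commenters noted: a regex costs microseconds and zero tokens, while calling a model to classify every prompt would cost real money for a signal this crude.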

2. How the leak happened: Anthropic's own toolchain tripped them up

The technical causality chain is particularly ironic.

Claude Code is built on the Bun runtime, which Anthropic acquired at the end of 2025. On March 11, a bug was filed against the Bun GitHub repository (oven-sh/bun#28001): source maps were still emitted in production mode, even though Bun's documentation clearly states they should be disabled. The bug remains unfixed to this day.

If this bug caused the leak, the story becomes: a known but unfixed bug in a toolchain Anthropic itself acquired exposed the complete source code of Anthropic's flagship product.

Meanwhile, just a few hours before the leak occurred, the axios package on npm experienced a supply chain attack. Between 00:21 and 03:29 UTC on March 31, users installing or updating Claude Code may have downloaded a malicious axios version containing a remote access trojan (RAT). Anthropic subsequently advised users to abandon the npm installation method in favor of standalone binary installation packages.

VentureBeat commented: For a company with an annual revenue of $19 billion, this is no longer a security oversight but a "strategic loss of intellectual property."

3. The paradox of the "AI safety company"

This is the deepest narrative tension in the whole incident.

Anthropic's commercial story is built around a core differentiation: we are more responsible than OpenAI. From "Constitutional AI" to openly published safety research, from actively restricting model capabilities to collaborating with governments for responsible information disclosure, what Anthropic sells is not technological leadership but "trust."

However, the two leaks within five days revealed not a problem of technical capability but an issue of organizational operational capacity. The first was the CMS's default permission settings being public, which went unchecked. The second was the npm packaging configuration omission, which went unverified. These are not deep technical challenges but basic items on a junior operations checklist.

The leaked code also revealed some thought-provoking internal data. Comments in autoCompact.ts show that as of March 10, there were about 250,000 wasted API calls globally every day due to continuously failing automatic compression operations. 1,279 sessions experienced over 50 consecutive failures (with a maximum of 3,272 failures). The fix is three lines of code: disable the feature after three consecutive failures.
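The three-line fix described above is a plain circuit breaker. This sketch is reconstructed from the description alone; the variable and function names are hypothetical, not those in autoCompact.ts.

```typescript
// Circuit breaker for auto-compaction: after three consecutive
// failures, stop retrying instead of burning API calls forever.
const MAX_CONSECUTIVE_FAILURES = 3;
let consecutiveFailures = 0;
let autoCompactEnabled = true;

function recordCompactResult(succeeded: boolean): void {
  // A success resets the streak; a failure extends it.
  consecutiveFailures = succeeded ? 0 : consecutiveFailures + 1;
  if (consecutiveFailures >= MAX_CONSECUTIVE_FAILURES) {
    autoCompactEnabled = false; // trip the breaker
  }
}
```

Against the leaked numbers (sessions with up to 3,272 consecutive failures), tripping at three would have eliminated essentially all of the roughly 250,000 wasted daily calls.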

Internal comments on the Capybara model (the upcoming flagship Claude) show that the v8 version's "false claims rate" is 29-30%, a regression from the v4 version's 16.7%. Developers also added a "confidence suppressor" to keep the model from being overly aggressive when refactoring code.

These numbers themselves are not scandalous. All software development has bugs and regressions. But the tension between these numbers and Anthropic's public narrative is real: a company claiming to solve the "hardest problem in human history" of AI alignment is simultaneously making basic errors like ".npmignore configuration omissions."

As one tweet stated: "Accidentally publishing the source map to npm is the kind of mistake that sounds impossible until you remember that a large part of this codebase might indeed be written by the AI that is being published."

4. What competitors see

In terms of the competitive landscape for AI programming tools, the value of this leak does not lie in the code itself. Google's Gemini CLI and OpenAI's Codex have already open-sourced their Agent SDKs, but those are toolkits, not the internal wiring of a complete product.

The scale of Claude Code's codebase (512,000 lines, 1,900 files) and its architectural complexity demonstrate one fact: this is not an API wrapper but a complete developer operating system. It contains 40 permission-isolated tool plugins, a 46,000-line query engine, a multi-agent orchestration system (internally called "swarm"), a bi-directional communication layer for the IDE, 23 Bash security checks (including protections for 18 disabled Zsh built-in commands and against Unicode zero-width space injection), and 14 tracked prompt-cache invalidation vectors.

For competitors, while the code can be refactored, the product direction related to KAIROS, the anti-distillation strategy, and the performance benchmarks and known defects of the Capybara model, once leaked, are irretrievable strategic information.

Ten days earlier, Anthropic had sent a legal threat letter to the open-source project OpenCode, demanding it remove built-in support for Claude's authentication system, because third-party tools were using Claude Code's internal API to access the Opus model at subscription pricing instead of pay-per-use pricing. Now OpenCode doesn't need to reverse-engineer anything. The blueprint is right there, forked 41,500 times.

5. 187 Spinner verbs: The human touch in a hodgepodge

Amid all the serious security analyses and competitive intelligence discussions, the leaked code also contains some things that bring a smile.

Claude Code's loading animation features 187 random verb phrases, including "Synthesizing excuses," "Consulting the oracle," "Reticulating splines," "Bargaining with electrons," "Asking nicely," and so on. An Anthropic engineer evidently invested a disproportionate amount of enthusiasm in writing gags for the loading animation.

The code also contains a feature that is almost certainly an April Fool's Day Easter egg: buddy/companion.ts implements an electronic pet system. Each user deterministically receives a virtual creature based on their user ID (18 species, ranging from ordinary to legendary rarity levels, with a 1% chance of being shiny, RPG attributes including DEBUGGING and SNARK). Species names are encoded with String.fromCharCode() specifically to evade text searches by the build system.
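A deterministic assignment like this is easy to reconstruct in outline. Everything below is invented for illustration (the hash, the species names, the function name); only the shape, a user ID hashed into one of 18 species, with names assembled at runtime via String.fromCharCode so they never appear as literals, comes from the description of buddy/companion.ts.

```typescript
// 18 species whose names are computed at runtime: grepping the
// source for a full species name finds nothing, because the
// leading letter comes from String.fromCharCode, not a literal.
const SPECIES: string[] = [...Array(18)].map(
  (_, i) => String.fromCharCode(65 + i) + "-beast" // "A-beast".."R-beast"
);

// Same user ID always yields the same species (simple 31x rolling
// hash, kept unsigned with >>> 0, reduced modulo the species count).
function speciesFor(userId: string): string {
  let hash = 0;
  for (const ch of userId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return SPECIES[hash % SPECIES.length];
}
```

Determinism is the point of the design: no server-side state is needed, yet every user sees "their" pet on every machine.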

These details form a peculiar juxtaposition with the serious security vulnerabilities: in the same codebase, some people are meticulously designing anti-distillation poisons to combat competitors, some are earnestly implementing Zig-level client proofs for API calls, and some are writing 187 jokes for a "thinking" loading animation.

This is the real internal facet of a company valued at billions of dollars, competing to define the human-AI relationship. It is neither the genius collective depicted in Silicon Valley myth narratives nor can it be simply encapsulated by the label of "a hodgepodge." It resembles more of an organization composed of extraordinarily intelligent people who, when constructing extremely complex products at an incredibly fast pace, inevitably stumble in the most fundamental areas.

Anthropic's spokesperson responded to Fortune by saying, "This is a release packaging issue caused by human error, not a security vulnerability."

Technically, this is correct. The omission of a .npmignore configuration is indeed not a "security vulnerability." But when your entire commercial narrative is built on "we take security more seriously than anyone," the signal conveyed by two consecutive weeks of "human errors" may be more damaging than any security vulnerability.

Finally, a fact to note: this article was written by Claude. Anthropic's AI used the information leaked from Anthropic's source code to write an analysis of why Anthropic cannot control its own information. If you find this absurd, then you have grasped the basic atmosphere of the AI industry in 2026.

Note: The above remarks were also added at Claude's request.

Disclaimer: This article represents the author's personal views only and does not represent the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes on your rights, please send proof of rights and proof of identity to support@aicoin.com, and the platform's staff will verify it.
