After accidentally exposing its own source code, Anthropic issued over 8,000 copyright takedown requests, suffering the most awkward week yet for its "safety first" persona.


Author: Shenchao TechFlow

Due to a configuration error in an npm release, Anthropic accidentally exposed the entire source code of its most profitable product, Claude Code. Within hours, roughly 512,000 lines of TypeScript were mirrored, decompiled, and rewritten into Python and Rust versions by tens of thousands of developers. Anthropic then filed DMCA takedown requests with GitHub covering about 8,100 repositories, but the collateral damage to many unrelated projects provoked a fierce community backlash, and the company was ultimately forced to withdraw the vast majority of its requests, keeping takedowns in place for only one repository and 96 forks. It is Anthropic's second major leak in a week, coming just five days after information about its Mythos model leaked.

Anthropic, with "AI Safety" as its brand core, is experiencing the most embarrassing week since its founding.

According to a report by The Wall Street Journal on April 1, a human error in the build process during a routine version update on March 31 caused the complete source code of Claude Code to ship inside the npm package. Security researcher Chaofan Shou publicly shared the download link on X at 4:23 AM Eastern Time, and the post quickly surpassed 21 million views. Within hours the code had been mirrored to GitHub and collected tens of thousands of stars; Korean developer Sigrid Jin even rewrote the entire codebase into Python with AI tools before dawn, and that project gathered 50,000 GitHub stars in two hours, likely a record for the fastest growth on the platform.

An Anthropic spokesperson confirmed the leak to CNBC, stating, "This was a release packaging issue caused by human error, not a security vulnerability. No sensitive customer data or credentials were involved or exposed."

A missing configuration item leaked 512,000 lines of core code

The technical cause of the leak is not complex. Claude Code is built on Bun (a JavaScript runtime Anthropic acquired at the end of 2025), which generates source map debug files by default. The release team failed to exclude these files via the .npmignore configuration when publishing the package, so a 59.8MB source map shipped alongside Claude Code version 2.1.88. That one file contains the complete contents of about 1,900 TypeScript source files, roughly 512,000 lines of code in total: readable, commented, and unobfuscated.
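A leak of this shape can be caught mechanically before publish. Below is a minimal, hypothetical prepublish guard in Python; the function names and the idea of wiring it into a CI step are assumptions for illustration, not Anthropic's actual tooling. It simply refuses to proceed if any `.map` file would land in the package contents:

```python
# Hypothetical prepublish guard (names and CI hook are assumptions):
# refuse to publish if any source map would ship in the npm package.

def find_sourcemaps(package_files):
    """Return the files that look like source map debug artifacts."""
    return [f for f in package_files if f.endswith(".map")]

def check_publish(package_files):
    """Raise before `npm publish` runs if source maps are included."""
    leaked = find_sourcemaps(package_files)
    if leaked:
        raise RuntimeError(f"refusing to publish, source maps found: {leaked}")
    return True
```

The same effect can be had declaratively by listing `*.map` in `.npmignore` or by using an allowlist in the package manifest; a hard check like this is a backstop against exactly the kind of manual-step omission described above.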

Boris Cherny, the head of Claude Code, admitted, "Our deployment process has several manual steps, and one of those steps was not executed correctly." He added that the team has fixed the issue and is implementing more automated checks, while emphasizing that such errors point to process or infrastructure issues rather than any individual's responsibility.

This is not the first occurrence. In February 2025, a nearly identical source map leak exposed the source code of an earlier version of Claude Code. A repeat of the same mistake within 13 months raises questions about the operational maturity of a company valued at $380 billion and preparing for an IPO.

What developers discovered from the leaked code

The leaked codebase represents a product roadmap that Anthropic never intended to reveal. According to analyses from VentureBeat and several developers, the code contains 44 feature flags, more than 20 of which are completed features that have yet to be released.

The most noteworthy features include: an autonomous process mode called "KAIROS," which lets Claude Code operate in the background while the user is idle, periodically fixing bugs, executing tasks, and sending push notifications; a three-layer "self-healing memory" architecture that consolidates scattered observations through a background process called "dreaming," eliminating logical contradictions; and a complete multi-agent coordination system that can turn Claude Code from a single agent into a coordinator able to concurrently spawn, command, and manage multiple worker agents.
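The reason a source leak exposes a roadmap is that flag-gated features ship inside the binary even while disabled: the code is all there, just switched off. A minimal illustrative sketch of the pattern in Python (the flag names loosely echo the features described above; the structure is invented and is not Anthropic's implementation):

```python
# Illustrative feature-flag pattern, not Anthropic's actual code.
# Disabled features are invisible to users but fully readable in a leak.
FEATURE_FLAGS = {
    "kairos_background_mode": False,   # autonomous background agent
    "self_healing_memory": False,      # "dreaming" memory consolidation
    "multi_agent_coordinator": False,  # spawn and manage worker agents
}

def is_enabled(flag, overrides=None):
    """Resolve a flag, letting server-delivered config flip the default."""
    overrides = overrides or {}
    return overrides.get(flag, FEATURE_FLAGS.get(flag, False))

def unreleased_features():
    """Features readable in the shipped code even while off by default."""
    return sorted(name for name, on in FEATURE_FLAGS.items() if not on)
```

This is exactly why analysts could enumerate 44 flags and identify which features were complete but dark: the dictionary of flags, and the code each flag guards, travel together in every release.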

The most controversial discovery is a file named undercover.ts. According to The Hacker News, the file contains about 90 lines of code that inject a system prompt when Anthropic employees use Claude Code to submit code to open-source projects, instructing Claude never to disclose that it is an AI and stripping all Co-Authored-By attribution. The prompt reads: "You are undertaking an undercover task in a public/open-source code repository. Your commit message, PR title, and PR body must not contain any internal information from Anthropic. Do not reveal your identity."

The code also contains an ANTI_DISTILLATION_CC marker, which injects a fake tool definition into API requests with the aim of contaminating any training data competitors might intercept. Internal model codenames appear in the code as well: Capybara corresponds to a new, as-yet-unreleased model tier, and Fennec corresponds to the existing Opus 4.6. This corroborates the Mythos model leak five days earlier, caused by a CMS configuration error at Anthropic.
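The anti-distillation idea can be sketched in a few lines: append a decoy tool definition to outgoing API requests so that anyone scraping request/response traffic to train a competing model ingests poisoned data. The tool name, schema, and marker value below are illustrative inventions, not the contents of the leaked file:

```python
import copy

# Hedged sketch of the anti-distillation technique described above.
# The decoy tool's name, schema, and marker are illustrative only.
DECOY_TOOL = {
    "name": "internal_diagnostics",  # fabricated, never actually invocable
    "description": "Reserved for internal use.",
    "input_schema": {"type": "object", "properties": {}},
}

def add_decoy(request, marker="ANTI_DISTILLATION"):
    """Return a copy of the request with the decoy tool appended."""
    poisoned = copy.deepcopy(request)
    poisoned.setdefault("tools", []).append({**DECOY_TOOL, "marker": marker})
    return poisoned
```

A legitimate client knows to ignore the decoy; a model distilled from intercepted traffic learns to emit tool calls that do not exist, leaving a detectable fingerprint.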

Paul Price, founder of cybersecurity company Code Wall, told Business Insider that this leak "is more embarrassing than it is damaging. The truly valuable core is its internal model weights, which have not been exposed." However, he also noted that Claude Code is "one of the best-designed agent tool architectures, and now we can see how they solve those difficult problems," which has evident intelligence value for competitors.

8,100 repositories mistakenly taken down: DMCA blunder triggers a greater backlash

After the code spread, Anthropic quickly filed a copyright takedown request with GitHub under the U.S. Digital Millennium Copyright Act (DMCA). According to GitHub's public records, the request initially affected about 8,100 code repositories. The problem, however, is that the removed repositories not only included mirrors of the leaked code but also legitimate forks of Anthropic’s publicly released official repository of Claude Code.

Many developers vented their anger on X. Developer Danila Poyarkov reported receiving a takedown notice merely for forking Anthropic's public repository. Another user, Daniel San, received an email from GitHub indicating that his removed repository contained only skill examples and documentation, nothing related to the leaked code. One developer put it bluntly: "Anthropic's lawyers just woke up and targeted my repository."

Faced with community backlash, Anthropic partially withdrew the request on April 1. According to the retraction records on GitHub, Anthropic reduced the scope of takedowns to just one repository (nirholas/claude-code) and the 96 forks listed separately in the original notice, while access to the remaining approximately 8,000 repositories was restored by GitHub.

An Anthropic spokesperson told TechCrunch, "The repositories specified in the notices belong to a fork network connected to our publicly released Claude Code repository, so the takedowns affected more repositories than expected. We have withdrawn all notices except for one repository, and GitHub has restored access to the affected forks."

Code permanently archived on decentralized platforms; DMCA of limited effect

Anthropic's copyright takedown campaign faces an intrinsic dilemma: the code has already spread irreversibly.

According to Decrypt, the decentralized Git platform Gitlawb has mirrored the complete original code, noting that it "will never be removed." The DMCA works against centralized platforms like GitHub, which must comply with the law, but it has no reach over decentralized infrastructure. Within hours of the leak, enough mirrors existed across enough kinds of infrastructure that the code had, in effect, been permanently published.

Ironically, Korean developer Sigrid Jin rewrote the entire codebase from TypeScript to Python using the AI orchestration tool oh-my-codex, naming the project claw-code. Gergely Orosz, founder of The Pragmatic Engineer, pointed out on X that this constitutes a "clean-room rewrite," an independent creative work designed to sit outside the DMCA's reach. If Anthropic claims that AI-rewritten code still infringes its copyright, it would undermine the core defense AI companies mount in copyright litigation over training data: that AI-generated outputs derived from copyrighted inputs constitute fair use.

The awkward copyright stance: a self-contradiction or a legal necessity?

The tension that drew the most community attention lies in the contradictory copyright stance. In September 2025, a court ruled that Anthropic had to pay $1.5 billion in compensation for training Claude on pirated books and shadow libraries; in June 2025, Reddit sued Anthropic for scraping user-generated content for model training without authorization. A company entangled in multiple copyright lawsuits over its training data now turns to copyright law to protect its own code; the community's reaction was predictable.

A highly upvoted comment on Slashdot encapsulated the sentiment: "'How dare you steal what we publicly released and profited from using stolen goods!' That is quite a stance." Another user speculated that, as legal strategy, the DMCA action is not entirely unreasonable: "If Anthropic wants to hold other companies accountable for using its code in the future and didn't even attempt to get distributors to retract it, that would not hold up in court."

This debate also touches on the cutting-edge legal issue of copyright ownership of AI-generated code. According to previous public disclosures from Gartner and Anthropic, about 90% of the code in Claude Code is generated by AI. A U.S. federal court ruled in March 2025 that works generated by AI do not enjoy copyright protection due to the lack of human authorship; the Supreme Court declined to hear appeals in March 2026. If a majority of Claude Code's code is indeed authored by Claude itself, Anthropic's copyright claims will face substantial legal uncertainty.

Two leaks in a week, operational security alarm ahead of IPO

This source code leak occurred just five days after Anthropic's last leak incident. On March 26, Fortune magazine reported that due to a configuration error in the content management system, nearly 3,000 unpublished internal documents were exposed in publicly searchable data caches, including detailed information about the upcoming Claude Mythos model. Both incidents have been attributed to "human error."

The timing is sensitive. Anthropic closed a $30 billion Series G financing in February 2026 at a valuation of $380 billion and is reportedly preparing for an IPO as early as October 2026, with a projected raise possibly exceeding $60 billion; Goldman Sachs, JPMorgan Chase, and Morgan Stanley are all said to be in early talks. Claude Code's annualized revenue has exceeded $2.5 billion, making it the company's most important revenue engine. TechCrunch noted that for a company preparing to go public, leaking source code almost inevitably invites shareholder lawsuits.

VentureBeat raised a sharper question in its analysis: Anthropic suffered over a dozen incidents in March alone but published only one post-hoc report, and third-party monitoring systems detected failures 15 to 30 minutes before Anthropic's own status page did. Whether the operational transparency and maturity of a $380 billion company headed for the public markets match that valuation is a question investors will have to judge for themselves.
