
Claude is starting to require identification documents: how far are we from losing digital privacy?

Techub News
6 hours ago

Written by: Web4 Research Center

On April 14, 2026, the artificial intelligence company Anthropic announced that it would introduce an identity verification mechanism for certain use cases of its large language model Claude. When users access certain premium features, the system displays a mandatory verification page requiring them to upload a government-issued identification document (passport, driver's license, or national ID) and take a real-time selfie for face comparison. The whole process takes about five minutes, and the verification service is provided by a third party, Persona.

Anthropic promises that user identity data will not be used for model training and will not be shared with third parties for marketing or advertising purposes. That is the full extent of the promise.

This announcement sparked immediate discussion in the tech community on Hacker News. An AI chat tool, not a bank account opening, not a customs inspection, now requires checking your ID. That alone is worth pausing over.

1. The real controversy is not "whether to verify."

What truly unsettles users is not the five-minute verification process, nor the requirement to take a selfie in front of the camera, nor even the possibility that the ID photo might be misused. The greatest concern is a more fundamental question.

Who do I give my ID to?

In the traditional centralized platform model, the answer is simple: you hand your identity credentials directly to the company operating the AI service, Anthropic, and to its chosen third-party provider, Persona. You submit a photo of the document, take a real-time selfie, complete the facial comparison, and then wait for the system to decide whether you may keep using the service. In logic this is no different from completing real-name verification on WeChat or binding a bank card on Alipay: the platform obtains your data, the platform verifies your identity, and the platform decides whether you are compliant. Once the data is handed over, you are no longer the active party in the relationship.

Is the platform safe? This question must be asked, and the answer cannot rest on the platform's own promises. The internet industry's past has supplied warnings enough. Identity information carries extremely high financial value, and once leaked it is almost impossible to remedy. A stolen credit card can be reported and replaced, a leaked password can be changed, but an ID number and an identification photo are permanent: they do not expire and cannot be reset, and once they leak, you bear the risk of impersonation and fraud forever.

This is not alarmism. In 2025, a public court ruling exposed a shocking industry chain for reselling student information: more than 700,000 records covering students and parents, including contact details, were resold multiple times, with vice principals and heads of education consulting agencies among those involved in the illegal sales. And this is just the tip of the iceberg. In the digital age, the leakage and resale of personal information has formed a complete black-market supply chain, and every breach is a potential threat to the finances and personal safety of ordinary people.

The ultimate question of trust has never been what the other party promises, but whether it has the ability to safeguard what it holds.

2. The awkward record of a security perfectionist

For a company founded on "safety" and "responsible AI," Anthropic's own recent security record is a case worth examining.

On March 26, 2026, according to the Global Times, a human configuration error in an external content management system made nearly 3,000 of Anthropic's internal sensitive documents publicly accessible, including unpublished blog drafts for the next-generation Claude Mythos model, details of closed-door events for CEO-level clients, and even core assessment reports on the model's cybersecurity capabilities.

Less than a week later, on March 31, Anthropic ran into even bigger trouble. The Securities Times reported that security researcher Chaofan Shou disclosed on social media that the npm package for Claude Code version 2.1.88 accidentally shipped a 59.8 MB cli.js.map debug file that embedded 512,000 lines of unobfuscated TypeScript source code, covering 1,906 core source files and more than 40 tool modules. The AI community could inspect, in full, the system's permission enforcement logic, orchestration paths, and security trust boundaries.
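Why a stray .map file is so damaging is worth spelling out: the source map v3 format has an optional sourcesContent array that can embed every original source file verbatim, so anyone who downloads the published map can simply write the originals back to disk. A minimal sketch of that recovery in TypeScript, assuming a hypothetical cli.js.map in the working directory:

```typescript
// Sketch: dump original sources embedded in a published source map.
// "sourcesContent" is an optional source map v3 field; when a build
// ships it by mistake, the full original source comes along.
import { readFileSync, writeFileSync, mkdirSync } from "fs";
import { dirname, join } from "path";

// Minimal shape of the source map fields we need.
interface SourceMapV3 {
  sources: string[];
  sourcesContent?: (string | null)[];
}

const map: SourceMapV3 = JSON.parse(readFileSync("cli.js.map", "utf8"));
let recovered = 0;

(map.sourcesContent ?? []).forEach((content, i) => {
  if (content == null) return;                            // this file was not embedded
  const safeName = map.sources[i].replace(/\.\./g, "__"); // keep output under ./recovered
  const outPath = join("recovered", safeName);
  mkdirSync(dirname(outPath), { recursive: true });
  writeFileSync(outPath, content);
  recovered++;
});

console.log(`recovered ${recovered} source files`);
```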

What is even more absurd is that Anthropic later invoked the U.S. Digital Millennium Copyright Act (DMCA) to demand that GitHub take down repositories containing the leaked code, and the takedown mistakenly removed more than 8,000 unrelated repositories.

Two core information leaks in a row, both due to "configuration errors," raise serious questions about Anthropic's internal security processes and release protocols. The company emphasizes that model weights and user data were not affected, but the complete exposure of the core codebase is still a significant loss of intellectual property. A company that cannot safeguard even its own code and R&D plans gives users little basis to believe it can safeguard their ID data.

3. When the verification provider isn't clean either

Is Persona reliable?

According to a report from the security firm Malwarebytes, researchers investigating Discord's age verification system discovered that a Persona test-environment front end had been accidentally exposed. Persona clarified afterwards that this environment was isolated from production and that no user data was leaked, but the exposed documents revealed capabilities far beyond a simple age-verification tool: 269 different verification checks, facial-recognition searches against watchlists and politically exposed persons, screening for 14 types of "negative media," and risk and similarity scoring. Persona can retain data for up to three years.

During the same period, TechJuice reported on privacy controversies surrounding Persona in its role as LinkedIn's verification partner. Privacy researchers found that Persona not only collects standard personal information such as names, addresses, and birth dates, but also extracts facial geometry data, derives geolocation, and analyzes behavioral biometrics, all of which can be shared with global partners, vendors, and subprocessors.

These discoveries point to a deeper issue: when an AI platform introduces identity verification, users face not one company's security promises but a data-processing network composed of multiple entities. The longer the chain, the more vulnerable points it has. And users have almost no notice of, or choice about, any of it.

4. Another path: proving who you are without submitting your data

While centralized AI platforms are busy collecting ID data, in another technological world a group of people is asking a completely different question.

Is it possible to prove who you are without submitting your ID data?

This is the direction of decentralized identity that the Web3 space has been deeply exploring for years, commonly referred to as DID in the industry. Its core logic sounds almost paradoxical—allowing verifiers to know "this identity is real," while preventing them from obtaining any specific information about your original ID. The key technology to achieve this is called zero-knowledge proof (ZKP). You can think of it like this: you want to enter a bar to drink, but you don’t want the security guard to see your ID. Zero-knowledge proof is a mathematical protocol that allows the guard to be convinced that "you are indeed over 18," while he never knows your date of birth, home address, or ID number. He knows a conclusion, but has not obtained any original data leading to that conclusion.
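The bar example technically calls for a range proof, which does not fit in a few lines, but the canonical building block behind this whole idea does: a Schnorr identification round, in which a prover convinces a verifier that it knows the secret x behind a public value y = g^x mod p without revealing anything about x. A toy sketch with deliberately tiny parameters (real deployments use ~256-bit groups and secure randomness):

```typescript
// Toy Schnorr identification round. The prover knows x; the verifier
// ends up convinced without learning anything about x beyond y = g^x.
const p = 467n;            // prime modulus, p = 2q + 1
const q = 233n;            // prime order of the subgroup
const g = 4n;              // generator of the order-q subgroup

function modPow(base: bigint, exp: bigint, mod: bigint): bigint {
  let acc = 1n;
  base %= mod;
  for (; exp > 0n; exp >>= 1n) {
    if (exp & 1n) acc = (acc * base) % mod;
    base = (base * base) % mod;
  }
  return acc;
}

// Demo-only randomness; never use Math.random in real cryptography.
const rand = (n: bigint) => BigInt(Math.floor(Math.random() * Number(n)));

const x = rand(q);               // prover's secret (the "ID" that is never shown)
const y = modPow(g, x, p);       // public identity, safe to publish

const r = rand(q);               // 1. prover picks a nonce...
const t = modPow(g, r, p);       //    ...and sends the commitment t
const c = rand(q);               // 2. verifier sends a random challenge
const s = (r + c * x) % q;       // 3. prover answers using the secret

// 4. verifier checks g^s == t * y^c (mod p); x itself never crossed the wire
console.log(modPow(g, s, p) === (t * modPow(y, c, p)) % p); // true
```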

The ultimate elegance of technology lies in making people believe, without having to reveal the truth.

In a decentralized identity architecture, your identity credentials are not stored on Anthropic's or Persona's servers; you hold and control them yourself in encrypted form. When verification is needed, you do not upload ID photos or send selfies to a third-party server; you generate a zero-knowledge proof locally and send only that proof to the verifier. The information in the proof is limited precisely to the minimum: it may state only that "this person is over 18," or "this person is in a supported country," or "this person is not a machine."
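To make the contrast with the centralized flow concrete, here is a hypothetical sketch of what actually crosses the wire in each model; the type names are illustrative assumptions, not any real DID standard's API:

```typescript
// Centralized KYC: the raw identity documents leave your device.
interface CentralizedSubmission {
  idPhoto: Uint8Array;       // full scan of a government-issued ID
  selfie: Uint8Array;        // live capture for face comparison
}

// Decentralized model: only a narrow predicate and an opaque proof leave it.
type Predicate = "over18" | "inSupportedCountry" | "isHuman";

interface ZkPresentation {
  predicate: Predicate;      // the single fact being asserted
  proof: Uint8Array;         // zero-knowledge proof generated locally
  issuer: string;            // public key of whoever attested the credential
}

// The verifier's entire view of you in the second model is one boolean.
function accept(p: ZkPresentation, verifyProof: (p: ZkPresentation) => boolean): boolean {
  return verifyProof(p);     // true means the predicate holds; nothing else leaks
}
```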

One of the most radical practitioners in this direction is the World project. Its core concept is a global "proof of personhood" system that lets everyone prove they are a real, unique human without exposing their identity. Verification is done not by uploading identification documents but by scanning the user's iris with a device called the Orb. The iris pattern is converted into a mathematical hash, and the original image is deleted on the spot. From then on, the user can prove "I am a human verified by iris recognition" to any service provider via zero-knowledge proof, without revealing their specific identity. According to media reports from Binance and php Chinese Network, World Network had accumulated more than 12 million verified users and more than 26 million registered users by May 2025. In March 2026, World launched the Agent Kit tool, which lets users bind their iris-verified identity to an AI agent, cryptographically proving that a real human stands behind the agent's actions.

This path sounds ideal, but the reality is not so clean. The World project has met regulatory resistance in multiple jurisdictions: Chile, Germany, Indonesia, Thailand, and others have ordered it to stop collecting iris data or to delete biometric data already collected. And the greatest enemy of decentralized identity is not immature technology but the lack of places to use it. As long as the vast majority of mainstream internet services rely on traditional centralized KYC, a user may hold the most powerful zero-knowledge identity credentials and still be unable to log into Claude with them, unable to pass compliance checks in most countries, and unable to satisfy the evidence requirements of law enforcement agencies.

5. Whose choice, whose dilemma

The real dilemma of this choice may be hidden here.

From Anthropic's perspective, this is not a question of technical preference but a tangle of regulatory rules from around the world. Identity verification requirements differ significantly between countries: the Chinese market has its own real-name regulations, the European Union strictly limits data processing under the GDPR, and data privacy laws in various U.S. states keep tightening. Anthropic needs to prove that it can prevent platform abuse, enforce age restrictions, and comply with law enforcement requirements in each jurisdiction. Under the current institutional framework, the most direct and safest approach is to run KYC like a bank. Centralized verification may not be the best solution, but it is the only one that leaves every regulator without grounds for complaint.

From the user's perspective, there is not much choice. If you want stronger AI capabilities, you must cross this identity threshold. Open-source local models are a fallback, but the performance gap and deployment barriers make that path unrealistic for most people. In the competitive landscape of commercial large models, users are usually on the receiving end.

From the Web3 developer's perspective, you hold a solution that is more privacy-protective in theory, but the ground is not ready for it. Large-scale adoption of decentralized identity requires not just better technology but regulatory acceptance, cooperation from commercial platforms, and a shift in user understanding, and none of those three can be accomplished in a year or two. The greatest predicament of decentralized identity is not that the technology is imperfect, but that the world is not ready to accept its perfection.

Ultimately this is not one question with one answer but a series of deeper questions. In an era where AI capabilities iterate at a monthly pace, where should the boundaries of identity verification be drawn? As centralized platforms accumulate ever more user identity data, what kind of power structure does that imply? Is Web3's decentralized identity a technological direction worth insisting on, or a mirage trapped behind an idealistic filter?

No one can answer these questions for everyone. But the questions themselves may be more important than the answers.

What we need is not an answer, but the courage to ask good questions.

Both paths lead to the unknown. The difference is that on one path you hand over the key, while on the other path you hold your own lock. Before making a choice, take a clear look at what you hold in your hand.

Documents can be verified, but trust cannot be outsourced.

Disclaimer: This article represents the personal views of the author only and does not represent the position or views of this platform. It is provided for information sharing only and does not constitute investment advice to anyone. For any dispute between users and the author, this platform bears no responsibility. If an article or image on this page involves infringement, please send the relevant proof of rights and proof of identity by email to support@aicoin.com, and platform staff will investigate.
