Anthropic's AI Used in Iran Strikes After Trump Moved to Cut Ties: WSJ

Decrypt
10 hours ago

Hours after President Donald Trump ordered federal agencies to halt use of Anthropic’s AI tools, the U.S. military carried out a major airstrike on Iran that reportedly relied on the company’s Claude platform.


U.S. Central Command used Claude for intelligence assessments, target identification, and simulating battle scenarios during the Iran strikes, people familiar with the matter confirmed to the Wall Street Journal on Saturday. 


The strike came despite Trump’s directive on Friday that agencies begin a six-month phase-out of Anthropic products, issued after negotiations between the company and the Pentagon broke down over how the military can use commercially developed AI systems.


Decrypt has reached out to the Department of Defense and Anthropic for comment.





“When AI tools are already embedded in live intelligence and simulation systems, decisions at the top don’t instantly translate to changes on the ground,” Midhun Krishna M, co-founder and CEO of LLM cost tracker TknOps.io, told Decrypt. “There’s a lag—technical, procedural, and human.”


“By the time a model is embedded across classified intelligence and simulation systems, you’re looking at sunk integration costs, retraining, security re-certifications, and parallel testing, so a six-month phase-out may sound decisive, but the real financial and operational burden runs far deeper,” Krishna added.


“Defense agencies will now prioritize model portability and redundancy,” he said. “No serious military operator wants to discover during a crisis that its AI layer is politically fragile.”


Anthropic CEO Dario Amodei said Thursday the company would not strip safeguards preventing Claude from being deployed for mass domestic surveillance or fully autonomous weapons. 


"We cannot in good conscience accede to their request," Amodei wrote, after the Defense Department demanded contractors allow their systems for "any lawful use."


"The Leftwing nut jobs at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump later wrote on Truth Social, ordering agencies to "immediately cease" all use of Anthropic products. 


Defense Secretary Pete Hegseth followed by designating Anthropic a "supply-chain risk to national security," a label previously reserved for foreign adversaries, barring every Pentagon contractor and partner from doing business with the company.


Anthropic called the designation "unprecedented" and vowed to challenge it in court, saying the label had "never before [been] publicly applied to an American company."


The company added that, to its knowledge, the two disputed restrictions had not affected a single government mission to date.


“The debate isn’t about whether AI will be used in defense, that’s already happening,” Krishna added. “It is whether frontier labs can maintain differentiated guardrails once their systems become operational assets under ‘any lawful use’ contracts.”


OpenAI moved quickly to fill the gap, with CEO Sam Altman announcing a Pentagon deal on Friday night covering classified military networks and claiming it included the same guardrails Anthropic had sought.



Asked whether the Pentagon’s effective blacklisting of Anthropic set a troubling precedent for future disputes with AI firms, Altman responded on X: “Yes; I think it is an extremely scary precedent, and I wish they handled it a different way.


“I don't think Anthropic handled it well either, but as the more powerful party, I hold the government more responsible. I am still hopeful for a much better resolution,” he added.


Meanwhile, nearly 500 employees from OpenAI and Google signed an open letter warning that the Pentagon was attempting to pit AI companies against each other. 


