Written by: Deng Tong, Golden Finance
On February 27 local time, U.S. President Trump posted on a social media platform: "I instructed every federal agency of the U.S. government to immediately stop using the technology of Anthropic." Trump said that agencies that use Anthropic products at various levels, such as the Department of Defense, will have a six-month transition period to phase out usage. "Anthropic better recognize the situation and cooperate during the transition period; otherwise, I will use all presidential powers to force them to comply and pursue significant civil and criminal liabilities against them."
Why did Anthropic clash with the Pentagon? Did OpenAI become the Pentagon's new darling? Will AI become a tool for killing? How do relevant parties interpret this incident?
1. Daring to challenge the Pentagon, where did Anthropic's troubles come from?
The conflict stems from the Pentagon's insistence that AI be available for "all legitimate military purposes," including broader automation tasks. Anthropic, currently the only frontier AI company approved to operate on the U.S. military's classified networks, refused the Pentagon's demand that it abandon its usage restrictions, notably the prohibitions on fully autonomous weapons and large-scale domestic surveillance.
Anthropic's technology has been used by the U.S. government and military since 2024, and the company was the first frontier AI firm to deploy its tools to government agencies doing classified work. Claude reportedly also played a role in the January operation to capture Venezuelan President Maduro.
Anthropic published a statement from CEO Dario Amodei on its official website (full text attached at the end), stating:
The two uses were never included in our contract with the U.S. Department of Defense, and we believe they should not be included now either: large-scale domestic surveillance and fully autonomous weapons. We cannot in good conscience agree to their demands. The Department of Defense has the right to choose contractors that best align with its vision.
Defense Secretary Pete Hegseth posted on X, angrily accusing Anthropic of "arrogance," "betrayal," and "hypocrisy," and designating it a "national security supply chain risk."
This week, Anthropic demonstrated a classic case of arrogance and betrayal, and set a negative example for how to do business with the U.S. government or the Pentagon. Our position has never wavered and never will: the U.S. Department of Defense must be able to use Anthropic's models without restriction for all legitimate purposes to defend the Republic.

However, @AnthropicAI and its CEO @DarioAmodei have chosen hypocrisy. Cloaked in the guise of "effective altruism," they are attempting to coerce the U.S. military. This display of weak corporate ethics places Silicon Valley ideology above the lives of the American people. Anthropic's flawed altruistic "terms of service" can never override the safety, readiness, and lives of our troops on the battlefield. Their true intention is clear: to seize veto power over the U.S. military's operational decisions. That is unacceptable.

As President Trump said on Truth Social, only the Commander-in-Chief and the American people decide the fate of our military, not unelected tech executives. Anthropic's position is fundamentally incompatible with American principles, and its relationship with the U.S. armed forces and the federal government has therefore changed permanently.

In light of the president's directive for the federal government to stop using all Anthropic technologies, I am instructing the Department of Defense to designate Anthropic a national security supply chain risk. Effective today, no contractor, supplier, or partner doing business with the U.S. military may conduct any commercial activity with Anthropic. Anthropic will continue to serve the U.S. Department of Defense for no more than six months to allow a smooth transition to higher-quality, more patriotic services. American warfighters will never be governed by Big Tech ideology. This decision is final.
What consequences could being designated a "national security supply chain risk" have?
The impact goes well beyond "the government will stop using it." This designation is typically reserved for hostile foreign entities. First, federal agencies must cease usage: all executive departments would have to stop procuring and deploying Anthropic's models, existing contracts could be terminated or not renewed, and cloud platforms that integrate its models could be required to remove them. Second, no federal contractor could use the company's products in its systems; businesses participating in government projects would have to prove they contain "no Anthropic components," meaning trillion-dollar tech giants such as Nvidia, Amazon, and Google would need to sever ties with Anthropic. Other likely effects include investor flight and difficulty raising capital.
The company said on Friday evening, "We will legally challenge any designation of supply chain risk," adding that such a designation "does not comply with legal provisions and sets a dangerous precedent for any American company negotiating with the government."
A former U.S. Department of Defense official, who wished to remain anonymous, pointed out: Anthropic seems to be in a strong position in this competition. "This is a great public relations opportunity for them, and they are not short on money at all."
Amodei himself acknowledged that since the company took its stance with Trump officials over deploying AI on the battlefield, its valuation and revenue have only grown. If the Department of Defense chooses to part ways, Anthropic will "ensure a smooth transition to other suppliers."
In fact, Anthropic's anti-government stance initially won it OpenAI's backing: OpenAI CEO Sam Altman voiced support for the competitor. In a memo to employees, he said he shares the same "bottom line" as Amodei on how the two companies' products may be applied: "We have always believed that AI should not be used for large-scale surveillance or autonomous lethal weapons, and that humans should always be involved in high-risk automated decision-making. These are our main bottom lines."
But then, Altman turned to embrace the U.S. government...
2. "Big-eyed Altman" also "betrayed"
Altman later confirmed on X that OpenAI has reached an agreement with the U.S. Department of Defense to run its AI models on classified cloud networks. It is not yet clear how the security safeguards in OpenAI's agreement differ from those Anthropic sought in its own negotiations.
Tonight we reached an agreement with the U.S. Department of Defense to deploy our models on its classified network. Throughout our discussions, the Department of Defense showed full respect for safety and was eager to work with us toward the best outcome.

The safety and broad benefit of AI are central to our mission. Our two most important safety principles are: no large-scale domestic surveillance, and human accountability in the use of force, including in the use of autonomous weapon systems. The Department of Defense agrees with these principles, has embedded them in law and policy, and we have written them into the agreement. We will also build technical safeguards to ensure our models behave correctly, which is what the Department of Defense wants. We will deploy forward-deployed engineers (FDEs) to assist with model operations, and for safety we will deploy only on cloud networks.

We are asking the Department of Defense to offer the same terms to all AI companies, and we believe every company should be willing to accept them. We strongly hope tensions can ease, that legal and governmental escalation can be avoided, and that a reasonable agreement can be reached. We will keep doing our utmost to serve all of humanity. The world is a complex, chaotic, and sometimes dangerous place.
Under Altman's post, many X users expressed their views:
From non-profits to for-profits, and then to the Department of Defense. This change is really fast.
I can't wait to investigate this deal after being elected to Congress in November.
So OpenAI just went from "we have bottom lines too" to "here's our model on the classified DoW network" in 12 hours? Anthropic was threatened and blacklisted for sticking to the same principles, but when Sam negotiates, he wins applause for deeply respecting safety.
Altman first pretended to align with his competitor, then immediately embraced the government. A truly brilliant play.
Altman's "betrayal" can perhaps be traced to the early ties between the two companies.
Amodei is a tech-industry veteran: he worked at OpenAI early in his career and made his name there. He and several other OpenAI employees later left over disagreements with Altman.
Anthropic was founded in 2021. Its models and tools are widely used across the federal government, largely thanks to its partnership with leading cloud provider Amazon Web Services (AWS), through which Anthropic first gained a foothold in the Department of Defense and the intelligence agencies. In July of last year, Anthropic, along with xAI, Google, and OpenAI, secured a $200 million defense contract to support the Pentagon's push into AI applications.
The two startups are now directly competing for users and corporate clients through evolving AI chatbots, agents, and other tools.
3. Technology has become a bloody sword
In the conflict between tech giants and the U.S. government, Trump expressed his inner anger in capital letters: "The United States of America will never allow a radical left woke company to dictate how our great military fights and wins wars!"
Since 2018, conflicts between tech companies and the Pentagon have never ceased. At that time, employees of Alphabet's Google protested the Pentagon’s use of its AI to analyze videos taken by drones. Subsequently, relations eased somewhat, with companies like Amazon and Microsoft vying for defense business, and several CEOs of large tech companies also pledged to collaborate with the Trump administration last year.
However, as automated systems become more prevalent in the wars in Ukraine and Gaza, once-theoretical "killer robots" have alarmed human rights and tech activists. Jack Shanahan, who led Project Maven, the Pentagon's algorithmic warfare effort, said the bolder military actions the U.S. has taken over the past year have sharpened these concerns: people may grow more uneasy about unrestricted action, and White House legal sign-offs may become a "cover for any actions that could lead to improper programs, civilian casualties, and collateral damage."
Anthropic's AI has been applied in intelligence and military work, and the company was the first of its kind to handle classified information through supply agreements with cloud providers such as Amazon. According to Reuters, The Wall Street Journal, and other outlets, the U.S. military used Anthropic's Claude in the January operation that captured Maduro. Through a partnership with data-analytics company Palantir Technologies, Claude was integrated into the military's intelligence and data-analysis systems to sift massive volumes of intelligence, interpret satellite imagery, support decision-making, and possibly assist in planning operational deployments, though specific details remain classified.
4. Opinions from relevant parties
On Friday evening, an open letter to the Pentagon and Congress, co-signed by several prominent tech and AI figures, began circulating. Signatories include 11 OpenAI employees, among them Boaz Barak and William Feng, as well as Waymark CEO Alexander Persky-Stern. "We firmly believe the federal government should not retaliate against a private company for refusing to accept contract changes. This sets a dangerous precedent. Punishing an American company for refusing contract modifications sends a clear signal to every U.S. tech company: accept whatever conditions the government proposes or face retaliation. The U.S. can win the AI competition because it is committed to free enterprise and the rule of law; undermining that commitment to punish one company is shortsighted and contrary to national security interests."
Elon Musk wrote on his social media platform X, "Anthropic hates Western civilization." Altman took a different tack: backing Anthropic's safety safeguards and opposing the government's "threatening" actions while still working to close OpenAI's own deal with the Pentagon.
Saif Khan, who previously served on President Biden’s National Security Council, stated: The Department of Defense's actions "may constitute the harshest domestic AI regulations ever enacted by any government." "It can be said that Anthropic is seen as a greater national security threat than any Chinese AI company, even though they have not classified any Chinese AI company as a supply chain risk." "Targeting Anthropic may catch attention, but in the end, everyone loses."
Dean Baer, formerly a senior AI policy advisor to Trump, wrote on X: "This is essentially an attempt to kill a business. I would not advise any investor to put money into American AI, and I would not recommend founding an AI company in the U.S."
Deputy Secretary of Defense Emil Michael tweeted a series of messages: Amodei "is a fraud and has a God complex. He is solely focused on personally controlling the U.S. military and is willing to put our nation's security at risk."
Massachusetts Democratic Senator Elizabeth Warren urged for hearings on the central issue of AI power in this dispute. "Did the Trump administration punish Anthropic for the company’s refusal to help with large-scale surveillance of American communities or create killer robots? The American people have the right to know what Trump administration officials have planned at the Pentagon."
Massachusetts Democratic Senator Ed Markey and Maryland Democratic Senator Chris Van Hollen stated: The Pentagon "threatens to punish an American AI company because that company refuses to relinquish basic security provisions for the use of its AI models, which represents a chilling abuse of governmental power."
California Democratic Congressman George Whitesides expressed that he is worried Hegseth "threatens to expedite changes in security policies, potentially pushing the Department of Defense to conduct broader deployments without adequate safeguards."
Senate Intelligence Committee senior member and Virginia Democratic Senator Mark Warner stated in an email news release: "The president's directive for the federal government to cease using a leading American AI company, coupled with incendiary remarks about the company, raises serious concerns about whether national security decisions are being made with careful analysis or political considerations."
5. Attached: Statement from Anthropic CEO Dario Amodei
I firmly believe that using AI to defend the United States and other democratic nations and to defeat our authoritarian adversaries is of existential importance.
Therefore, Anthropic proactively deployed our models to the U.S. Department of Defense and the intelligence agencies. We were the first frontier AI company to deploy models on the U.S. government's classified networks, the first to deploy models in the national laboratories, and the first to provide customized models for national security customers. Claude models are now widely used in mission-critical applications for the Department of Defense and other national security agencies, such as intelligence analysis, modeling and simulation, operational planning, and cyber operations.
Anthropic has also acted to defend America's lead in AI, even when doing so ran against the company's short-term interests. We walked away from hundreds of millions in revenue rather than let certain companies use Claude; we thwarted cyberattacks that attempted to misuse Claude; and we advocated for stringent chip export controls to preserve democracies' advantage.
Anthropic understands that military decisions are made by the U.S. Department of Defense, not by private companies. We have never objected to any specific military action, nor have we ever tried to impose ad hoc restrictions on the use of our technology.
However, in rare cases, we believe AI could undermine rather than defend democratic values. Certain uses are also entirely beyond current technology's ability to execute safely and reliably. Two such uses were never included in our contract with the U.S. Department of Defense, and we believe they should not be included now either:
Large-scale domestic surveillance. We support the use of AI for legitimate foreign intelligence and counterintelligence work. But turning these systems to large-scale domestic surveillance runs contrary to democratic values. AI-driven mass surveillance poses a severe and unprecedented risk to our fundamental freedoms. Today such surveillance is legal only because the law has not kept pace with AI's rapidly advancing capabilities. For example, under current law the government needs no warrant to purchase detailed location tracking, web-browsing records, and social-relationship data on American citizens from commercial sources. The intelligence community has acknowledged that this practice raises privacy concerns, and it has drawn bipartisan opposition in Congress. Powerful AI can stitch these scattered, individually innocuous data points into a complete picture of any person's life, automatically and at massive scale.
Fully autonomous weapons. Some autonomous weapons, such as those now used in Ukraine, are vital to defending democracy. Even fully autonomous weapons (those that select and attack targets with no human intervention) may prove crucial to our national defense. But today's frontier AI systems are not yet reliable enough to power fully autonomous weapons, and we will never knowingly provide products that could endanger American service members or civilians. We proactively offered to work directly with the Department of Defense to improve the reliability of these systems; they declined. Moreover, without proper oversight, fully autonomous weapons cannot be expected to exercise the critical judgment our trained professional forces apply every day. They must be equipped with comprehensive safeguards before deployment, and those safeguards do not yet exist.
To our knowledge, these two exceptions have not hindered our armed forces' speed in adopting and using our models.
The U.S. Department of Defense has said it will contract only with AI companies that agree to "any lawful use" and remove the safety provisions covering the cases above. They threatened to remove us from their systems if we keep these provisions; they threatened to designate us a "supply chain risk," a label typically reserved for adversaries of the U.S. and never before applied to an American company; and they threatened to invoke the Defense Production Act to force the removal of these provisions. The last two threats contradict each other: one treats us as a security risk, the other treats Claude as indispensable to national security.
However, these threats will not change our position: we cannot in good conscience agree to their demands.
The Department of Defense has the right to choose the contractors that best align with its vision. But given the immense value Anthropic's technology brings to our military, we hope it reconsiders. We strongly wish to continue serving the Department of Defense and its warfighters, provided the two safeguards we have requested remain in place. If the Department of Defense chooses to end its cooperation with Anthropic, we will work to ensure a smooth transition to other suppliers, avoiding any disruption to ongoing military plans, operations, or other critical missions. We will continue to provide our products for as long as needed, under the favorable terms we proposed.
We are always ready to continue working to support U.S. national security.
Disclaimer: This article represents the author's personal views only and does not reflect the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice of any kind. Any dispute between users and the author is unrelated to this platform. If any article or image on this page involves infringement, please send proof of rights and identity to support@aicoin.com, and platform staff will investigate.