Author: Anita, Senior Executive of Sentient Asia Pacific
The war in Iran has thrown large models from the laboratory directly onto the battlefield.
The "Epic Fury" operation at the end of February 2026 was not just a joint airstrike; it resembled an AI stress test conducted on a real battlefield. Whoever can compress the "sensor—decision—shooter" chain to minute or even second levels holds the pricing power for the next round of geopolitical conflicts.

1. Epic Fury: The First "AI Full-Stack War"
In this operation, US and Israeli officials claimed that the concentrated strikes on Iran's critical military and nuclear facilities "achieved strategic success," and repeatedly hinted that Iran's Supreme Leader Khamenei was very likely killed in an attack on an underground command facility in northern Tehran. Iran, for its part, long refused to confirm whether he was alive or dead, making the "decapitation" less a verified fact than a contest over narrative authority.
From an operational perspective, the defining characteristic of Epic Fury was not its duration but its intensity: high-intensity airstrikes, swarming drone penetrations, special operations, and cyber warfare interleaved over more than ten days, all supported by a heavily software-driven operational stack: Palantir's battlefield ontology and digital-twin platform, intelligence-fusion systems from US defense agencies, automated target-generation tools from Israel, and the new roles played by frontier model companies such as OpenAI.

This war marks a symbolic turning point: from this point forward, "AI's involvement in military decision-making" is no longer a buzzword in Pentagon PowerPoints, but a concrete source of cash flow and political risk that cannot be ignored by markets, regulators, and ethical debates.
2. OpenAI: From "Ethical Declaration" to the Department of Defense's Most Expensive SaaS Subscription
In a mere two or three years, OpenAI's public stance has undergone a stunning reversal.
From keeping its distance from "military uses," it now acknowledges that it can support national security and defense projects provided its safety principles are met, and it has secured what may be the most sensitive major client contract of our era.
Around February 27, 2026, Sam Altman announced that the company had reached an agreement with the US Department of Defense to deploy GPT-series models on a secure network for intelligence analysis, translation, war simulations, and other "defense-related scenarios." In some public documents and media reports, the department has been deliberately referred to as the "Department of War," a symbolic return to a more offensively charged vocabulary, even though the agency's legal name remains the Department of Defense.
The "red lines" identified in public reports can be summarized into three main points:
No involvement in large-scale surveillance within the United States;
No direct control of fully autonomous lethal weapon systems; any use of force must keep a "human in the loop";
Human oversight and accountability must be retained in high-risk decisions.
These principles are both OpenAI's ethical posture toward the outside world and its bargaining chips in contract negotiations: the message to Washington is that the company is willing to cooperate, but only "within controllable limits."
What role did these models play in real combat such as Epic Fury? Public information stops at safe descriptions: assisting with intelligence processing, analyzing complex data, and helping decision-makers form situational awareness more quickly.
But from a technical perspective, feeding vast amounts of satellite imagery, signals intelligence, and social media streams into large models, then having them rank potential "high-value targets," predict movement paths, and assess risks, already comes very close to building a "battlefield brain."
For Wall Street, the significance of this agreement is very direct.
After Anthropic was labeled a "supply chain risk" by the Pentagon for holding to stricter red lines, OpenAI adopted a posture of limited ethical compromise for significant commercial gain, securing a multi-hundred-million-dollar defense contract that competitors will find extremely hard to dislodge.
3. Anthropic: The "Principled" Company Shut Out of the Defense Budget
In sharp contrast to OpenAI's pragmatism is Anthropic's predicament: once one of the Pentagon's most valued frontier model suppliers, it was forcibly excluded from the entire system for refusing to budge on its red lines.
Multiple media reports indicate that in negotiations with the Department of Defense, Anthropic took a hard stance on two points:
Claude does not engage with fully autonomous weapon systems;
Claude does not participate in mass surveillance and profiling of American citizens.
The Pentagon's position, meanwhile, was closer to: the model vendor should not preemptively rule out any lawful use.
After negotiations broke down, Secretary of Defense Pete Hegseth announced that Anthropic would be designated a national security "supply chain risk" once the deadline passed, requiring all contractors working with the military to migrate off Claude within six months. This label had previously been applied mainly to companies from rival nations, such as Huawei; this is the first time it has been used against an American AI startup, sparking talk in Silicon Valley of a "chilling effect."
Internal Pentagon assessments showed that fully replacing a large model stack already integrated into secure systems could take months, meaning the ban's effective window overlapped heavily with the timeline of Epic Fury.
As a matter of technical reality, Claude likely participated in US national security work in some form before being "swept out" by executive order, though no one was willing to clarify that chain of events in hearings. This is a typical gray area of the modern military-industrial-technology complex.
Capital markets drew a simple yet dangerous lesson: when safety red lines clash with maximizing defense contracts, the company more willing to negotiate often becomes the safer investment, while the one that holds to its principles can be stamped "supply chain risk" overnight and watch investors hit the re-evaluation button.
4. The True Central Nervous System: Microsoft, Google, and the "Cloud Military-Industrial Complex"
If OpenAI and Anthropic are the "brains" of the war, then Microsoft and Google represent the true central nervous system of this system:
Without their clouds, all the large models and domestic AI tools would remain PowerPoint slides.
Microsoft Azure: From Office Cloud to Kill Chain Operating System
Investigations by the AP and several institutions indicate that since October 2023, the Israeli military's use of machine-learning tools on Azure has surged to dozens of times its previous level, peaking at 64 times, while overall calls to AI functionality approached 200 times the prior level.
Over the same period, the volume of data stored grew to a scale comparable to that of the Library of Congress.
This computing power is used to transcribe and translate vast volumes of communications, process signals from surveillance infrastructure, and interface with Israel's homegrown AI systems (such as Lavender and Gospel) to automatically generate target lists and risk assessments, dramatically increasing the throughput of the "target production line."
Although Microsoft later curtailed services to certain Israeli military units (especially surveillance-related ones) under public and employee pressure, the core cloud and AI contracts remain in force, leaving the company with substantial orders on the commercial side and significant costs on the reputational one.
Google Project Nimbus: The Wartime Cloud with the Highest Political Risk Premium
Since 2021, Google and Amazon, through Project Nimbus, have provided approximately $1.2 billion worth of unified cloud infrastructure to the Israeli government and military, encompassing computing, storage, and machine learning tools. Employees, scholars, and human rights organizations have persistently warned:
Nimbus's general-purpose cloud and AI capabilities can easily be repurposed for surveillance and military target selection, even though Google has repeatedly emphasized that the contract "does not include offensive military uses."
By the time of Epic Fury, cloud platforms like Nimbus were widely believed to be the critical compute base supporting the Israeli military's complex target planning, battlefield simulation, and real-time intelligence fusion, though the specific call paths and operational details remain classified.
From a risk perspective, this means Google is accepting a somewhat higher political risk premium in exchange for stable revenue from Middle Eastern security clients, while the waves of protests and resignations the project has triggered inside the company remind investors that this is not a business that can be treated as an ordinary enterprise cloud contract.
5. Israel's AI Killing Factory: The Portability of Lavender Logic
To understand how AI changes the battlefield, one might begin with the most controversial of the Israeli systems: Lavender, Gospel, and Where's Daddy.

Investigations by +972 Magazine and Local Call reveal:
"Lavender" analyzes behavior and relationship graphs for nearly all adult males in Gaza, assigning each a "suspected militant score" from 1–100, marking up to 37,000 targets suspected of being members of armed organizations within a short time;
"Gospel" focuses on buildings and infrastructure, automatically marking buildings deemed used for military purposes, forming bombing lists that can be mass-consumed by the air force;
"Where’s Daddy" is responsible for optimizing the time dimension: tracking when listed targets return home and triggering strikes while they are with family in their residences—greatly increasing the probability of a "successful kill," while also placing family members and neighbors at extremely high risk of fatality.
Frontline Israeli intelligence officers have acknowledged in interviews that human review of targets recommended by Lavender often amounted to a perfunctory "tick" lasting a few seconds, while human rights organizations and UN experts have described the system as a "highly automated mass assassination factory," pointing to its structural tendencies to amplify algorithmic bias, compress the space for human judgment, and raise the risk of civilian casualties.
It should be emphasized that public reporting ties these systems most clearly to the Gaza war; officials have long remained silent on their specific application on the Iranian battlefield.
From the standpoint of technical portability, however, transplanting the Lavender logic onto Tehran's power elite is not hard to imagine, provided enough communications data, location trajectories, and social graphs exist inside Iran. This is why many analysts see Epic Fury as an "algorithmic killing factory" experiment spilling over into the capital of a sovereign state.
6. Market and Regulation: The Pricing Power of the AI-Cloud-Defense Complex
Putting these fragments together yields a picture that is distinctly "un-Silicon Valley":
On one end are large model companies like OpenAI, willing to make limited compromises on red lines and quick to establish a foothold in defense budgets;
On the other end is Anthropic, which insisted on stricter safety principles and was booted out by the Secretary of Defense under the "supply chain risk" label, teaching the whole industry a lesson: do not confront the sole buyer head-on;
At the base, cloud giants like Microsoft and Google have built modern warfare's "operating system" out of GPU clusters and secure cloud networks, capturing most of the cash flow from wartime AI while shouldering ever-higher reputational and regulatory risks.
From an asset pricing perspective, this is no longer just a dichotomy of "tech stocks vs defense stocks," but a new AI-Cloud-Defense complex:
Tactically, low-cost drone swarms, automated target production, and AI decision systems are eroding traditional great-power deterrence, making expensive fifth-generation fighters and carrier battle groups look like last-generation capital-intensive assets;
Industrially, military contracts have given large model and cloud vendors the kind of counter-cyclical cash flows usually reserved for a very few players, moving them into a profit black box that regulation struggles to make fully transparent under the cover of "security and confidentiality";
Politically, when "who cooperates more with the national security agenda" becomes the decisive variable for securing key contracts, corporate adherence to ethical red lines is systematically discounted, and this kind of incentive structure will be quietly remembered by all future entrepreneurs and investors.
The battlefield in Iran may be only the prologue. Whether the next outbreak comes in the Taiwan Strait, Eastern Europe, or another corner of the Middle East, what truly sets the rhythm of war will no longer be just the number of tanks or the caliber of artillery, but how many petabytes of secure data the models were trained on and how many GPU racks the clouds are wired into.
The question is whether, before we outsource ever more of the kill chain to a handful of large model and cloud companies, global regulators and democratic politics still have time to seriously answer one question: when an algorithm's suggestions turn into a string of explosion coordinates in real combat, who ultimately bears responsibility for those decisions?