While European markets prepare to close in thirty minutes (Paris and Frankfurt wrap up at 5:30 PM local time) and Wall Street continues its morning session, a new illustration of American strategic incoherence has just emerged from the Pentagon. Last week's designation of Anthropic as a "supply chain risk," even as Palantir is allowed to keep using its Claude AI for Iranian operations, reveals a contradiction that goes beyond simple bureaucratic bungling.
The Doublethink of the Military-Industrial Complex
According to CNBC, the Pentagon officially placed Anthropic on its supply chain risk list in early March, citing concerns about Claude's potential impact on the defense supply chain. Yet at the same time, Palantir, the data-analysis giant led by Alex Karp, continues to use that very same AI for its operations related to the Iranian conflict.
Michael, the Pentagon's chief technology officer, sought to play down the decision: "This isn't meant to be punitive," he declared. The convoluted formulation betrays the embarrassment of an institution caught in flagrant incoherence. If Anthropic truly represents a national security risk, why authorize its use in sensitive military operations?
The Real Stakes Behind the Facade
This apparent contradiction actually conceals a deeper battle for control of the military AI ecosystem. Anthropic, founded by former OpenAI executives, develops "constitutional" AI models that are supposedly safer and more aligned with human values, an approach that may displease certain Pentagon circles accustomed to less ethically constrained tools.
Palantir, for its part, has established itself as the indispensable intermediary between Silicon Valley and the military-industrial complex. With revenues of $2.2 billion in 2025 — 55% of which comes from government contracts — Karp's company has every interest in maintaining its privileged relationships with its technology suppliers, even if it means navigating the murky waters of official designations.
Iran as an Experimentation Laboratory
The continued use of Claude for Iranian operations is not insignificant. Since the escalation of tensions in the Middle East, Iran has become a preferred testing ground for new surveillance and predictive analysis technologies. Claude's natural language processing capabilities allow real-time analysis of intercepted communications, social media feeds, and information flows in Persian.
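Neither CNBC's reporting nor the Pentagon describes how Claude would actually be wired into such a pipeline, so the sketch below is purely illustrative of the kind of Persian-language triage the paragraph evokes. It uses Anthropic's public Messages API; the model name, prompt, and sample text are assumptions for demonstration, not anything confirmed to be in operational use.

```python
# Illustrative sketch only: no operational details have been published.
# This shows the generic pattern of asking Claude to summarize and label
# a Persian-language snippet via Anthropic's public Messages API.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def triage_persian_text(snippet: str) -> str:
    """Ask Claude for a one-sentence English summary of a Persian snippet."""
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",  # hypothetical model choice
        max_tokens=300,
        messages=[{
            "role": "user",
            "content": (
                "Summarize the following Persian text in one English "
                "sentence and note its overall topic:\n\n" + snippet
            ),
        }],
    )
    # The API returns a list of content blocks; take the first text block.
    return response.content[0].text

# Example with an innocuous sample sentence ("The weather in Tehran is sunny today."):
print(triage_persian_text("هوا امروز در تهران آفتابی است."))
```

At scale, the same request pattern would simply be applied across a stream of documents, which is why a general-purpose language model is attractive for this kind of multilingual triage.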
This situation reveals the fundamental hypocrisy of American technological policy: on one hand, we wave the banner of national security to justify restrictions; on the other, we turn a blind eye when operational needs demand it. It is an approach strangely reminiscent of the management of economic sanctions, where exemptions multiply as soon as commercial interests come into play.
Financial Markets, Silent Witnesses
While Asian exchanges have been closed for several hours — Tokyo wrapped up at 3:00 PM local time, Shanghai at the same hour — and Abu Dhabi won't reopen until 10:00 AM tomorrow morning, Western investors are still digesting the implications of this news. Palantir shares have gained 2.3% since Wall Street's opening, a sign that the market interprets this contradiction as a disguised green light to continue technological collaborations.
This stock market reaction perfectly illustrates the disconnect between official discourse and economic reality. Investors know how to decode the signals: when the Pentagon says "this isn't punitive," they hear "business as usual."
A Variable-Geometry Strategy
Anthropic's designation as a supply chain risk is part of a broader strategy to control the AI ecosystem. By multiplying classifications and restrictions, the Pentagon gives itself the means to modulate its relationships with technology companies according to its momentary needs.
This variable-geometry approach raises fundamental questions about the coherence of national security policy. How can we justify to European allies, whose markets close in a few minutes, a policy that seems to change with circumstances? How can we maintain the credibility of American institutions when their own decisions contradict one another?
The Future of Military AI in Question
Above all, this affair reveals the immaturity of the American regulatory framework in the face of military AI challenges. Unlike traditional weapons sectors, where rules have been established for decades, artificial intelligence evolves in a legal vacuum that bureaucracies struggle to fill.
The result is a schizophrenic policy in which the same technologies are simultaneously considered risks and indispensable tools, a contradiction that cannot persist without weakening the credibility of American institutions and their ability to define coherent international standards.
As the final trades are executed on European exchanges before the close, one certainty emerges: the battle for control of military AI is just beginning, and the first casualties will be the coherence and transparency of public policy.
Frequently Asked Questions
Q: Why did the Pentagon classify Anthropic as a supply chain risk?
The Pentagon classified Anthropic as a supply chain risk due to concerns about its Claude AI's potential impact on the defense supply chain. This decision was made in early March, highlighting the complexities and contradictions within military AI usage.
Q: How is Palantir using Anthropic's Claude AI despite the Pentagon's classification?
Despite Anthropic being classified as a risk, Palantir continues to use Claude AI for its operations related to the Iranian conflict. This contradiction raises questions about the Pentagon's decision-making and the coherence of its policies regarding AI in military applications.
Q: What is the significance of the Pentagon's AI contradictions?
The contradictions surrounding the Pentagon's AI policies reveal a deeper struggle for control over the military AI ecosystem. The situation underscores tensions between traditional military practices and newer, ethically aligned AI approaches developed by companies like Anthropic.
