The Controversy Over Red Lines for AI Weaponization: The US Department of Defense Has Reportedly Issued a Final Ultimatum Demanding That Anthropic Allow the Military to Use Its AI Technology Without Restrictions

07:25 25/02/2026 | GMT Eight
According to informed sources, if Anthropic fails to comply with government terms by Friday, the Pentagon threatens to invoke a Cold War-era law to force the artificial intelligence (AI) startup to allow the U.S. military to use its technology.

The sources revealed that during a meeting on Tuesday between Anthropic CEO Dario Amodei and U.S. Defense Secretary Pete Hegseth, American officials outlined a series of consequences, including threatening to designate Anthropic as a supply chain risk and invoking the Defense Production Act to use its AI software even if the company objects.

The ultimatum marks an escalation in the dispute between the U.S. Department of Defense and the AI startup. The controversy centers on Anthropic's insistence on guardrails for its Claude AI tools, restrictions the military considers unnecessary. If the Pentagon follows through, it could jeopardize Anthropic's contracts with the military worth up to $200 million.

According to one source, during the meeting Amodei outlined Anthropic's conditions: the U.S. military must not use its products for autonomous strikes against enemy combatants, and must not conduct mass surveillance of U.S. citizens. The source said Amodei emphasized that neither scenario has yet arisen in actual operations.

In a statement following the meeting, Anthropic said, "We have continued to engage in good-faith discussions on usage policies to ensure that Anthropic can continue to support the government's national security mission in a reliable and responsible manner."

Based on its latest financing round, Anthropic is currently valued at approximately $380 billion. The company is the first AI firm authorized to handle classified materials within the U.S. government, and its Claude Gov tool has quickly become a favored choice among Pentagon officials.
In the national security field, however, Anthropic faces increasingly fierce competition from Elon Musk's xAI, which has just obtained a license for classified work, as well as from rivals such as OpenAI and Google's Gemini.

The dispute erupted shortly after the Pentagon released a new AI strategy, which calls for the military to become an "AI-first" force by increasing experimentation with cutting-edge models and reducing bureaucratic obstacles to their use. The strategy specifically urges the Department of Defense to choose models that carry no usage-policy restrictions and do not hinder legitimate military applications.

An American official said the Pentagon became concerned about whether Anthropic supports U.S. objectives after the company questioned the use of its AI in a special forces operation to capture Venezuelan President Maduro in early January. Anthropic, for its part, interpreted the Pentagon's account of the capture operation differently. In a statement issued Monday through a spokesperson, the company said, "Anthropic has not discussed with the Department of Defense the use of Claude in specific operations. We have not discussed this with any industry partners, nor have we raised concerns outside of routine exchanges at a strictly technical level."

Anthropic positions itself as a company focused on the responsible use of AI technology and on avoiding catastrophic outcomes. It created Claude Gov specifically for U.S. national security purposes and aims to serve government clients within its ethical boundaries. Despite concerns about the potential use of its technology for mass surveillance and autonomous strikes, Pentagon officials insist that the Department of Defense follows the law and always keeps humans involved in decision-making.

If the Pentagon designates Anthropic as a supply chain risk, its products will be barred from use by other military suppliers.
These suppliers would then have to verify that they are not using Anthropic's products. In addition, under the Defense Production Act of 1950, the government can compel U.S. companies to provide needed products or services on national security grounds. Past presidents have used the law to secure energy supplies, including forcing the refitting of oil tankers in the 1960s and redirecting contracted oil supplies to the military in the 1970s.