Dario Amodei, CEO of Anthropic, has accused rival OpenAI of misleading the public about its new agreement with the United States Department of Defense.

In a memo to employees, Amodei reportedly described OpenAI’s public messaging about the military partnership as “straight up lies,” arguing that the company is presenting the deal as safer and more restricted than it actually is. He also accused OpenAI’s chief executive, Sam Altman, of portraying himself as a diplomatic problem-solver in the dispute with the Pentagon while obscuring the real terms of the agreement.
The controversy stems from negotiations between Anthropic and the Pentagon over the military’s access to advanced AI systems. Anthropic had previously held a $200 million contract with the U.S. military, but talks over a new agreement collapsed after the company demanded strict safeguards. Specifically, Anthropic wanted assurances that its AI models would not be used for domestic mass surveillance or autonomous weapons.
According to reports, the Pentagon insisted the technology be available for “any lawful use,” a clause Anthropic refused to accept. Soon after the talks broke down, OpenAI struck its own deal with the Department of Defense, promising that safeguards would be in place to prevent misuse of its AI systems.
Amodei rejected that characterization. In his message to staff, he argued that OpenAI accepted the Pentagon’s terms because it was more concerned with appeasing stakeholders than preventing potential abuses of AI technology. He dismissed the company’s safety claims as “safety theater,” suggesting the protections described publicly may not be meaningful in practice.
OpenAI, however, has defended the agreement. The company says its contract explicitly prohibits domestic surveillance of U.S. citizens and maintains that its AI tools will only be used in ways consistent with existing law. Critics counter that legal definitions can change, meaning what is considered lawful today could expand in the future.
The dispute highlights a growing divide among leading AI firms over military collaboration. As governments increasingly seek advanced AI capabilities for defense and intelligence purposes, companies are being forced to balance lucrative contracts with ethical concerns about how the technology may ultimately be used.
The clash between Anthropic and OpenAI also underscores a broader struggle over who sets the rules for AI deployment in warfare: tech companies, governments, or the public? For now, the debate shows no sign of cooling as the race to supply next-generation AI systems to national security agencies intensifies.