OpenAI has secured an agreement with the Department of War to deploy its artificial intelligence models on the Pentagon's classified network. CEO Sam Altman announced the deal Friday, just as President Trump ordered every federal agency to phase out rival Anthropic's technology over the next six months.
The timing tells the story. One AI company said yes. The other said no. The federal government acted accordingly.
According to Fox Business, Altman said he had been in talks with the Pentagon and that the department's leadership had taken the partnership seriously. He described DoW officials as having "displayed a deep respect for safety and a desire to partner to achieve the best possible outcome."
The agreement includes safeguards that OpenAI negotiated into its terms. Altman laid out the specifics:
"Two of our most important safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems. The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement."
OpenAI will operate only on cloud networks and deploy additional safeguards as part of the arrangement. Altman framed the deal as consistent with the company's founding commitments, noting that "AI safety and wide distribution of benefits are the core of our mission."
The important detail: OpenAI got the same prohibitions on mass surveillance and autonomous weapons that Anthropic claimed it was fighting for. The difference is that OpenAI negotiated terms and signed. Anthropic postured and stalled.
Anthropic CEO Dario Amodei had refused earlier demands from the Department of War to allow the company's AI to be used for "all lawful purposes," citing concerns about "mass domestic surveillance" and "fully autonomous weapons." The company told Fox News Digital that its position was the result of months of failed negotiations:
"This follows months of negotiations that reached an impasse over two exceptions we requested to the lawful use of our AI model, Claude: the mass domestic surveillance of Americans and fully autonomous weapons."
On its face, that sounds reasonable. Nobody wants mass surveillance of American citizens. Nobody wants AI pulling a trigger with no human in the loop. But those concerns weren't unique to Anthropic. OpenAI shared them. The difference is that OpenAI put its objections into a signed agreement instead of treating the Pentagon like a Silicon Valley terms-of-service negotiation where the vendor dictates the outcome.
Anthropic also claimed it "tried in good faith to reach an agreement with the Department of War" and that its objections "have not affected a single government mission to date." The company called the federal response an "unprecedented action, one historically reserved for US adversaries, never before publicly applied to an American company."
That framing is revealing. Anthropic wants to be treated as a partner while behaving like a gatekeeper, deciding which lawful uses of its product the United States government may pursue. A defense contractor that told the Pentagon which missions it would and wouldn't support would find itself replaced. AI companies are not exempt from that logic.
President Trump didn't mince words. In a Truth Social post, he directed every federal agency to stop using Anthropic technology, giving them a six-month phase-out period. He added a warning:
"Anthropic better get their act together, and be helpful during this phase out period, or I will use the Full Power of the Presidency to make them comply, with major civil and criminal consequences to follow."
Secretary of War Pete Hegseth followed with action. He announced the Department of War would designate Anthropic a "supply-chain risk to National Security" and issued a sweeping directive:
"Effective immediately, no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic."
That's not just losing the Pentagon contract. That's losing every company that has a Pentagon contract. The blast radius is enormous, and it's designed to be. Hegseth added that Anthropic would continue providing services for no more than six months "to allow for a seamless transition to a better and more patriotic service."
There is a particular kind of arrogance that thrives in the AI industry: the belief that building a powerful tool entitles you to set the moral conditions under which the government may use it. Anthropic positioned itself as the conscience of the national security state, as if the Department of War hadn't already codified prohibitions on mass surveillance and autonomous weapons into law and policy before Dario Amodei ever raised the issue.
OpenAI recognized what Anthropic didn't: those protections already exist. The task was to formalize them in a contract, not to stage a public standoff over principles the Pentagon already shared. Altman made this point directly, saying he was asking the DoW to offer these same terms to all AI companies, "which, in our opinion, we think everyone should be willing to accept."
He also signaled a desire to cool things down:
"We have expressed our strong desire to see things de-escalate away from legal and governmental actions and towards reasonable agreements."
That's the posture of a company that wants to build technology for the country in which it operates. Anthropic's posture, by contrast, was that of a company that wanted to build technology for the country only on its own terms, even when those terms addressed problems the government had already solved.
The six-month clock is ticking. Anthropic says it has "not yet received direct communication from the Department of War or the White House on the status of our negotiations." That silence is the communication. When the President of the United States announces a phase-out on Truth Social, and the Secretary of War designates you a supply-chain risk, the negotiation is over.
The broader signal reaches every tech company with federal ambitions. The government needs AI. It will pay for AI. But it will not be held hostage by companies that confuse a contract negotiation with a moral crusade. OpenAI understood the assignment. Anthropic decided it knew better than its customer.
In government contracting, that decision has a name. It's called losing.