The Pentagon’s latest AI agreements are a supplier-selection story as much as an AI story. According to AP and Reuters-syndicated reports, the Defense Department reached agreements with seven technology companies to deploy advanced AI capabilities on classified networks, while Anthropic was not included [2][7]. The AP-listed companies are Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and SpaceX [7].
What the Pentagon announced
AP reported that the Pentagon said it had reached deals with seven tech companies to use their artificial intelligence in classified computer networks, allowing the military to tap AI-powered capabilities for warfighting and operational support [7]. The Defense Department said the companies would provide resources to help “augment warfighter decision-making in complex operational environments,” according to the AP account [7].
Reuters described the agreements as part of the Pentagon’s effort to broaden the range of AI providers working across the military [2]. That matters because the announcement does not point to a single favored AI vendor; it names a group of companies that includes major cloud, model, chip and technology providers [2][7].
The seven companies included
The clearest AP account names these seven companies in the Pentagon’s classified-network AI agreements [7]:
- Google
- Microsoft
- Amazon Web Services
- Nvidia
- OpenAI
- Reflection
- SpaceX
Some secondary summaries describe the group as a major expansion of classified military AI partnerships, but the most important confirmed point is narrower: these seven companies were named in the announced group, and Anthropic was not [2][7].
Why Anthropic was left out
Reuters-syndicated reporting says Anthropic has been in dispute with the Pentagon over guardrails for the military’s use of its AI tools [2][10]. Reuters also reported that the Pentagon had labeled Anthropic a supply-chain risk earlier this year [2][10]. AP similarly noted Anthropic’s absence after a public dispute and legal fight with the Trump administration over the terms of military AI use [7].
Several reports frame the disagreement around usage language. The Pentagon’s preferred framing has been described as allowing “lawful operational use,” while Anthropic reportedly sought stronger safety limits or guardrails for certain military applications [4][13]. The available public reports do not include the full contract text, so the exact legal wording should be treated as unresolved.
What the AI could be used for
The Pentagon’s quoted description is broad: AI resources that help “augment warfighter decision-making in complex operational environments” [7]. AFP-syndicated reports go further, saying the classified systems involved are used for mission planning, weapons targeting and other purposes [5][12].
That does not mean every company in the group will support the same function. The public reports reviewed here do not confirm which vendor will provide which capability, which models will be deployed, or what human oversight rules will apply in specific operational contexts.
What remains unknown
Several important details are still not established in the available reporting:
- The full contract values and durations.
- The exact technical scope of each company’s agreement.
- Which AI models or infrastructure will run on which classified networks.
- The precise military-use restrictions, if any, accepted by each company.
- Whether Anthropic’s exclusion is temporary or long-term.
On that last point, a CNN-syndicated report said the White House had reopened discussions with Anthropic in recent weeks, even as AP and Reuters accounts confirm Anthropic was not part of the announced seven-company group [2][7][13].
The bigger signal
The announcement shows the Pentagon moving commercial AI deeper into classified defense environments while trying to avoid dependence on a single provider [2][7]. It also turns AI safety policy into a concrete procurement issue: companies that want defense work may face pressure to align their usage rules with the Pentagon’s definition of lawful operational use [4][13].
For now, the solid takeaway is simple: seven companies were selected for classified Defense Department AI work, Anthropic was not, and the dispute over military-use guardrails remains the key unresolved story behind the omission [2][7][10].