- The Pentagon announced agreements with seven AI companies to deploy capabilities into Impact Level 6 and 7 classified environments.
- Included firms are SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, and Amazon Web Services.
- Anthropic was excluded from the agreements following public disputes over autonomous weapons and surveillance safeguards.
The Department of War has finalized agreements with seven AI companies to embed frontier models into classified military networks. The move marks a shift from experimental prototypes to the integration of commercial AI into Secret-level operational environments. This infrastructure will support warfighting, intelligence synthesis, and enterprise operations across the force.
Integrating the Classified Stack
The department is moving commercial AI from unclassified workforce tools into Impact Level 6 and 7 environments. Impact Level 6 serves as the baseline for Secret-level workflows. While the exact scope of Impact Level 7 remains undefined in public documentation, it indicates a move into more sensitive, compartmented data.
This procurement model relies on a multi-vendor architecture to prevent vendor lock-in. It builds on the Open DAGIR modernization effort, which prioritizes modular, interoperable digital capabilities. OpenAI and Google have already secured independent classified-work agreements that allow for lawful military use, while Amazon Web Services provides the underlying cloud hosting for Secret-level regions.
SpaceX appears on the list as the contracting successor to xAI, an earlier prototype awardee. By absorbing xAI into its corporate structure, SpaceX has secured a position as a primary defense-stack provider.
The Anthropic Power Play
Anthropic’s absence from this list is not an oversight; it is the direct result of the company’s clash with the Pentagon. The department identified Anthropic as a supply-chain risk after it resisted Pentagon demands regarding domestic surveillance and the use of its models in weapons targeting.
The government is using its procurement power to enforce a shift in model governance. While companies like OpenAI and Google have retained some internal safety guardrails, they have agreed to terms that support broad lawful military use. Anthropic sought to maintain stricter technical limits on its tools. By sidelining Anthropic, the Pentagon has signaled that labs resisting these terms risk losing access to the most critical, high-value military data environments.
Wiring the Future Force
The Pentagon is building a resilient, multi-vendor supply chain to ensure it is not dependent on any single lab. Integrating these tools into GenAI.mil and other classified fabrics transforms commercial AI into a core military utility. The department’s stated goal is an “AI-first fighting force” that can operate at the speed of modern data requirements.
The Pentagon now has seven vendors competing for operational integration. It has also made clear that labs must trade away safety objections in exchange for deployment rights. Good luck to any lab that wants to argue about guardrails now.