The U.S. Department of Defense announced on Friday that it has finalized contracts with seven technology companies to integrate their artificial intelligence systems into classified military networks. The agreements will enable the military to leverage AI-powered tools to enhance decision-making in complex operational environments, according to a statement from the Defense Department.
The participating companies include Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection, and SpaceX. These firms will provide AI resources to support military operations, though the specifics of each contract remain undisclosed.
Notably absent from the list is Anthropic, which has been engaged in a public dispute with the Trump administration over the ethical and safety implications of AI in warfare. The company previously sought assurances that its technology would not be used for fully autonomous weapons or domestic surveillance.
The Pentagon’s push to adopt AI aligns with its broader strategy to modernize military capabilities. According to a Brennan Center for Justice report from March, AI can significantly reduce the time required to identify and engage targets, streamline weapons maintenance, and optimize supply chains. However, critics argue that AI deployment raises serious concerns, including potential privacy violations and the risk of machines making life-and-death decisions on the battlefield.
One of the contracted companies emphasized that its agreement with the Pentagon includes provisions for human oversight in critical decision-making processes.
The ethical dilemmas surrounding military AI have gained prominence following its use in conflicts such as Israel’s war against militants in Gaza and Lebanon. Reports indicate that U.S. tech giants have quietly supported Israel’s efforts to track targets, though the surge in civilian casualties has intensified debates over the unintended consequences of AI-driven warfare.
Ongoing Concerns and Unresolved Questions
Helen Toner, interim executive director of Georgetown University’s Center for Security and Emerging Technology, highlighted the complexities of integrating AI into military operations. Toner, a former OpenAI board member, noted that modern warfare often relies on personnel in command centers making rapid, high-stakes decisions based on vast amounts of data.
"AI systems can be helpful in terms of summarizing information or looking at surveillance feeds and trying to identify potential targets," she said.
However, she stressed that key questions remain unanswered, including the appropriate levels of human involvement, risk management, and operator training. Toner posed critical questions about balancing rapid AI deployment with the need for thorough training to prevent over-reliance on the technology:
"How do you roll out these tools rapidly for them to be effective and provide strategic advantage? While also recognizing that you need to train the operators and make sure they know how to use them and don’t over trust them?"
Anthropic had previously raised similar concerns, insisting on contractual guarantees that its AI would not be used for autonomous weapons or domestic surveillance. Defense Secretary Pete Hegseth countered that the company must comply with any lawful uses the Pentagon deems necessary.
Anthropic’s exclusion follows a legal battle with the Trump administration, which attempted to block federal agencies from using the company’s chatbot, Claude. The administration also sought to designate Anthropic as a supply chain risk, a move intended to protect national security systems from foreign interference. In March, OpenAI publicly announced a deal with the Pentagon to replace Anthropic’s services with ChatGPT in classified environments.