Feb 28 • 12:33 UTC 🇫🇷 France Le Figaro

The Pentagon Chooses OpenAI After Ending Its Collaboration with Anthropic

The Pentagon has selected OpenAI's AI models after Anthropic refused to allow its models to be used for mass surveillance purposes.

The Pentagon has announced its decision to partner with OpenAI, moving away from its previous collaboration with Anthropic, primarily due to ethical disagreements over the use of AI in military operations. Anthropic denied the Pentagon access to its AI models for mass surveillance, citing ethical concerns over such applications, while OpenAI CEO Sam Altman confirmed a new agreement to deploy OpenAI's models within the U.S. defense network.

Sam Altman announced the agreement on social media, emphasizing that the collaboration includes strict guidelines against the use of AI for mass surveillance and stressing that humans remain responsible for any application of force, including autonomous weapons systems. The move marks a significant shift in defense strategy, with OpenAI's technology intended to enhance military operations under ethical constraints, an area of growing concern in the AI landscape.

This development comes at a time when the dialogue surrounding AI ethics in defense is becoming increasingly critical, with various stakeholders questioning the ramifications of integrating AI technologies into military systems. The Pentagon’s choice of OpenAI may set a precedent for future partnerships in defense, potentially influencing the larger narrative on responsible AI use in sensitive areas such as national security.
