Mar 2 • 08:09 UTC 🇶🇦 Qatar Al Jazeera

Artificial Intelligence 'Anthropic' Plays a Pivotal Role in the War Against Iran

The U.S. military has been using Anthropic's AI tool 'Claude' to coordinate attacks on Iran, contradicting an earlier directive from President Trump.

A recent report by the Wall Street Journal reveals that the U.S. military has been using Anthropic's AI tool 'Claude' to coordinate military strikes on Iran amid the ongoing conflict. The practice raises significant legal and ethical concerns, particularly because it appears to contravene a directive from President Donald Trump, who had explicitly ordered an end to reliance on Anthropic's services. The situation illustrates the complexities of integrating emerging technologies into military operations, especially when prior directives are seemingly ignored in practice.

This marks the second known occasion on which the U.S. military has deployed Anthropic's tools in operations; the first was connected to the apprehension of Venezuelan President Nicolás Maduro earlier in the year. The report highlights the reliance of American command centers, including those in the Middle East, on Anthropic's capabilities for mission planning and execution. This deepening collaboration between military forces and AI developers raises questions about accountability and oversight in the digital age of warfare.

Furthermore, 'Claude''s role in conducting intelligence assessments, identifying targets, and simulating combat scenarios underscores the increasingly central position of AI in modern military strategy. The tension between technological advancement and governance in military applications remains a pressing issue, demanding debate on the ethical implications of such technologies in warfare and their potential misalignment with legal frameworks.