Feb 28 • 01:39 UTC 🇫🇮 Finland Iltalehti

Artificial intelligence must not decide life and death alone, says company – Trump snapped: 'Leftist lunatics'

A U.S. company insists that decisions over life and death must always involve a human, even as the Trump administration labels the company a threat to national security.

In a significant controversy, a U.S. company is demanding that a human be involved in any operational decision that could cost lives. The stance has put it at odds with the Trump administration, which has designated the artificial intelligence company, Anthropic, a national security threat. The classification is unusual: it is typically applied to companies from China and Russia considered to pose a direct security risk to the United States.

The U.S. Secretary of Defense, Pete Hegseth, declared Anthropic's AI, known as Claude, a potential national security concern, sharply escalating tensions. The company, which supplies AI technology for Pentagon operations, is negotiating its use within sensitive defense systems. The divide centers on two limitations Anthropic insists on: Claude must not be used in fully autonomous weapon systems or for mass surveillance of American citizens.

The dispute reflects broader concerns about the role of artificial intelligence in military applications and raises ethical questions about autonomy in life-and-death decisions. Anthropic's insistence on human oversight has become a central argument in the ongoing debate over keeping technology aligned with humanity's ethical standards, especially in high-stakes situations.