How AI firm Anthropic wound up in the Pentagon’s crosshairs
Anthropic, an AI firm, faces scrutiny from the Pentagon over its refusal to allow its chatbot Claude to be used for domestic surveillance and lethal autonomous weapon systems.
Anthropic, a comparatively low-profile player in the artificial intelligence sector, has found itself at the center of a contentious dispute with the Pentagon. Despite a valuation of around $350bn, the company has operated largely under the radar compared with better-known rivals such as OpenAI and xAI. Its chatbot, Claude, has not gained the traction of ChatGPT, which has kept Anthropic out of the limelight. That changed dramatically when the firm took a firm stand against the use of its technology for certain military applications, particularly mass surveillance and autonomous weapons, placing it at the heart of the moral and ethical debate over AI in defense.