Pentagon gives AI firm ultimatum: lift military limits by Friday or lose $200M deal
The Pentagon has warned AI firm Anthropic that it must remove military usage restrictions by Friday, or risk losing a $200 million contract.
The Pentagon has issued a clear ultimatum to the AI company Anthropic, demanding that it lift any restrictions on the use of its Claude AI system for military applications by the end of the week. The warning follows reports that Anthropic questioned whether its technology had been used in the military operation to capture Venezuelan leader Nicolás Maduro, suggesting the company might withhold consent for such military use. The Pentagon has emphasized that AI companies must permit all lawful military applications of their products, without interference or approval from the companies themselves.
In a recent meeting between War Secretary Pete Hegseth and Anthropic CEO Dario Amodei, the urgency of the matter was made clear, with the potential loss of a $200 million defense contract looming over the discussion. Anthropic has raised concerns over the ethical implications of its technology, particularly regarding fully autonomous weapons systems and surveillance of American citizens. While acknowledging the company's contributions, Hegseth framed the issue as vital to national security, indicating that the military's access to cutting-edge AI technology is non-negotiable.
The situation reflects a larger tension between the military's evolving needs and the ethical questions AI companies are grappling with. As AI becomes increasingly integral to modern warfare, the ultimatum highlights the delicate balance between fostering innovation and maintaining oversight of how such technologies are used. The outcome could set significant precedents for future dealings between the military and private tech firms over the use of AI in defense operations.