Trump orders his government to stop using Anthropic's AI "immediately" after a conflict with the Pentagon
Trump has directed federal agencies to immediately cease using Anthropic's AI following a dispute over military use restrictions.
In a significant directive, President Donald Trump ordered all federal agencies to immediately halt their use of Anthropic's artificial intelligence technologies. The order follows a dispute between the Pentagon and Anthropic, in which the company refused the military's demand for unrestricted use of its Claude models. Anthropic had previously maintained that its technology should not be used for activities such as mass surveillance of American citizens or autonomous weapons systems.
The disagreement highlighted the tension between the government's appetite for advanced AI capabilities and the ethical limits set by tech companies. Anthropic clearly advocated for responsible deployment of its technologies, reflecting growing concern within the tech community about the implications of military applications of artificial intelligence. The Pentagon, however, emphasized adherence to legal frameworks over corporate conditions, signaling a willingness to push back against what it views as unnecessary restrictions on military operations.
The conflict raises broader questions about the future of AI regulation, particularly in military contexts, and suggests that technology firms may resist government uses of their products that exceed agreed legal and ethical limits. As discussions evolve, it remains to be seen how this incident will shape the regulatory landscape and the relationship between tech companies and U.S. military contracts.