Will AI, not humans, give the commands in war? Anthropic pitches a controversial plan to the US!
Anthropic has joined a major drone project with the US Defense Department, proposing a voice-controlled drone swarm technology for military applications.
In the competitive landscape of artificial intelligence, Anthropic has emerged as a controversial player, especially after joining a significant drone initiative funded by the US Defense Department. The project, worth approximately $100 million, aims to develop a system capable of controlling swarms of drones simultaneously, marking a notable shift from traditional military tactics toward technologically driven warfare. Anthropic is promoting a technology that would let a single operator command multiple drones by voice, a significant evolution in how combat operations are directed.
The concept of drone swarming is transforming warfare by employing small, cost-effective, intelligent drones that operate collectively as a network. These drones share data among themselves, adjust their flight paths in real time, recognize targets, and carry out missions as required. Such capabilities not only enhance military effectiveness but also raise critical ethical concerns about using AI in warfare, particularly over who makes the decisions in combat situations where human intervention may be minimal.
Against the backdrop of recent global conflicts, including the war in Ukraine, nations such as the US and China are rapidly advancing their drone swarm technologies. The implications extend beyond traditional military engagements: they usher in a new era of warfare that relies heavily on AI and automation, prompting urgent discussions about governance, safety, and the moral responsibilities of deploying such technologies in the real world.