Strikes faster than the 'speed of thought': the 'cognitive disconnection' of decision-makers
Experts warn that the use of AI in military strikes may produce a 'cognitive disconnection' among decision-makers.
Recent reports highlight concerns about the use of artificial-intelligence tools in military operations, particularly in strikes against Iran. Experts say such technology allows military actions to unfold faster than humans can cognitively process them, raising fears that decision-makers will be sidelined. Notably, AI models such as Anthropic's Claude were allegedly integrated into U.S. military planning, significantly shortening the time from target identification to strike approval.
During a recent offensive, the United States and Israel reportedly used AI to identify targets in Gaza and launched nearly 900 missile strikes against Iranian sites within just 12 hours. Reports also linked these operations to the death of Iran's Supreme Leader, Ayatollah Ali Khamenei. Academics describe the effect of AI as 'decision compression': it dramatically reduces the planning time needed for complex strikes, leaving little room for thorough human oversight.
These rapid-response military capabilities raise significant ethical questions about the future of warfare. As AI continues to evolve, there is an urgent need for discussion on regulating these technologies to prevent misuse and to ensure that human judgment remains central to critical military decisions. The balance struck between technological advancement and ethical responsibility will be pivotal in shaping future conflicts and military engagements.