Feb 23 • 22:56 UTC 🇬🇧 UK Guardian

Canada seeks answers from OpenAI for failing to alert police after suspending school shooter’s account

Canada's AI minister has summoned OpenAI for not notifying police after suspending a user's account linked to a school shooter.

Canada's artificial intelligence minister is questioning OpenAI's handling of a user account linked to a school shooting that killed eight people, including five children. The shooter, Jesse Van Rootselaar, had his account suspended in June 2025 for discussing violent content, but law enforcement was never alerted to the potential risk. The case raises serious questions about technology companies' responsibility to monitor and report concerning user behaviour, particularly threats of violence.

Evan Solomon expressed deep concern following the shooting, calling for greater accountability from platforms such as OpenAI, which operates the ChatGPT chatbot. The company had flagged the user's violent discussions during an automated review but did not share that information with Canadian authorities, a step many believe could have been critical in preventing the attack. Fourteen months on, the consequences of that failure are painfully evident as Canada mourns the loss of innocent lives.

The incident has also reignited broader debate about AI's role in society and the expectations placed on companies to intervene when red flags appear. As violence in schools draws international attention, governments and technology firms face growing pressure to establish protocols that protect individuals and prevent such tools from being exploited for harmful ends. The outcome of Canada's inquiry into OpenAI may set important precedents for how AI systems are regulated in the future.
