Feb 22 • 00:24 UTC 🇦🇷 Argentina La Nacion (ES)

OpenAI considered alerting Canadian police about the suspect in a school shooting months before the attack

OpenAI assessed whether to alert Canadian police about a user linked to a school shooting, but decided against it, citing a lack of imminent risk.

OpenAI, the creator of ChatGPT, revealed that in June 2025 it evaluated whether to notify the Royal Canadian Mounted Police about Jesse Van Rootselaar, whose account had been flagged for promoting violent activities. This assessment took place months before he carried out one of Canada's deadliest school shootings. OpenAI had identified the account through its abuse detection mechanisms and ultimately chose to block it for violating its usage policy.

The decision not to alert the police was based on OpenAI's conclusion that the user's activities did not pose an imminent threat warranting immediate action. This highlights the challenges tech companies face in balancing user privacy and safety, particularly where potential violence is concerned. The case raises questions about the responsibility of technology firms in monitoring and addressing violent content on their platforms.

The implications of this incident are significant, as it underscores the need for clearer guidelines and protocols for tech companies when they identify potential threats. It also prompts critical questions about how such companies could better collaborate with law enforcement to preemptively address situations that could escalate into violence, given the growing use of technology to monitor online behavior.