A real-life prototype of the "AI warning mechanism" from the American TV series "Person of Interest"? OpenAI flagged the Canadian shooting suspect eight months ago.

13:53 21/02/2026
GMT Eight
At the time, OpenAI considered referring the account to law enforcement, but it found no evidence of a credible or imminent plan of violence and determined that the account did not meet the threshold for reporting.
OpenAI, developer of the globally popular ChatGPT and a leader in AI applications, had flagged and banned an account belonging to the suspect in one of the most severe mass shootings in Canadian history as early as last June, after the user violated ChatGPT usage policies. Yet OpenAI did not refer the suspect to police at the time, a decision that has made AI safety responsibility and privacy a focus of global financial markets in recent days. The episode, reminiscent of the popular American TV series "Person of Interest," in which an omniscient "Machine" predicts future crimes, has also sparked heated debate among AI users worldwide over AI risk-monitoring mechanisms, raising concerns about national security, privacy safeguards, and the legal boundaries of artificial intelligence.

The AI startup said the suspected perpetrator of the mass shooting, Jesse Van Rootselaar, held a frequently used ChatGPT account that was detected and banned about eight months ago by OpenAI's security systems, which scan for abuse with a focus on activity that could further terrorism and violence. Canadian authorities said the 18-year-old used firearms to kill eight people and injure about 25 others in the remote western Canadian town of Tumbler Ridge earlier this month, before taking his own life.

OpenAI said that about eight months ago it used automated tools that detect abuse of its large models to identify an account linked to Rootselaar, which was then banned. Media reports citing an internal OpenAI source said the suspected shooter "described alarming scenes related to gun violence for several consecutive days," prompting intense debate among roughly a dozen OpenAI employees: some urged leadership to alert the authorities, while others argued the evidence was too insubstantial to warrant police attention. In an emailed statement, OpenAI said it had considered handing the account over to Canadian law enforcement but did not identify a credible or imminent plan of terrorism or violence, and so determined the case did not meet the threshold for law enforcement referral. After the shooting, the company contacted Canadian law enforcement.

"Our hearts go out to all those affected by the tragedy in Tumbler Ridge," an OpenAI spokesperson said via email. "We chose to proactively reach out to the Royal Canadian Mounted Police, providing them with all information we have on the suspect in the shooting case and the ChatGPT account he used, and we will continue to support their investigation." The company said that one of the primary goals in training ChatGPT is to deter or prevent imminent actions that could lead to real-world harm.

OpenAI's early detection of violent tendencies in the account has prompted investors to discuss a "Person of Interest"-style AI warning mechanism. At the same time, the fact that OpenAI banned the account without reporting it has drawn greater attention to the value and limits of AI-powered security systems and to how they intersect with the law.
OpenAI's use of its internal abuse-detection system to discover and block, months in advance, an account belonging to a suspect who later carried out a mass shooting does demonstrate how far modern AI-based systems have come in monitoring content and identifying potentially risky behavior. It nonetheless differs fundamentally from "the Machine" in "Person of Interest," which predicts future crimes with near omniscience. In the series, the fictional system draws on global data streams, real-time surveillance, and complex reasoning models to predict specific crimes that individuals may commit and intervenes preemptively. Real AI systems, including OpenAI's large models, possess nothing like that all-knowing, all-powerful ability to infer future behavior.

In practice, AI monitoring relies on input content, keyword patterns, and semantic analysis to identify rule violations or potential abuse, for example recognizing conversations with terrorist or violent tendencies and automatically triggering blocking or reporting mechanisms. This falls within the scope of content filtering and behavior detection; it does not amount to predicting future behavior or reasoning about its causes the way "the Machine" does. Moreover, OpenAI's decision not to report to law enforcement in this case was not based on a prediction of a concrete plan of violence, but on internal risk thresholds that assess whether there is a "credible and imminent risk of serious bodily harm." Current AI safety mechanisms, in other words, evaluate present risk signals rather than forecast future behavior. OpenAI judged that the legal and safety standards requiring a report to the authorities were not met at the time, a situation very different from the premise of "Person of Interest," in which future crimes are predicted and preempted.

Even so, OpenAI's demonstrated ability to flag potential violence suggests that as large models improve at screening and responding to existing risk signals, and edge toward estimating the probability of future crimes, mechanisms that more accurately predict criminal trajectories and intervene before intent turns into actual harm may be formally introduced and developed rapidly.
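To make the distinction between content filtering and crime prediction concrete, the following is a minimal, purely illustrative Python sketch of a two-tier, threshold-based screening decision of the kind described above. All names, scores, and thresholds are hypothetical assumptions chosen for illustration; nothing here reflects OpenAI's actual detection system.

```python
# Illustrative sketch of a two-tier, threshold-based abuse-screening decision.
# All names, scores, and thresholds are hypothetical; this is not OpenAI's system.
from dataclasses import dataclass


@dataclass
class RiskAssessment:
    violence_score: float   # classifier estimate that content promotes violence (0-1)
    imminence_score: float  # estimate that the threat is concrete and near-term (0-1)


# Hypothetical policy thresholds: banning an account requires less evidence
# than escalating to law enforcement, which needs a credible, imminent threat.
BAN_THRESHOLD = 0.6
REPORT_THRESHOLD = 0.9


def decide_action(assessment: RiskAssessment) -> str:
    """Map risk scores to one of three actions: allow, ban, or ban and report."""
    if (assessment.violence_score >= REPORT_THRESHOLD
            and assessment.imminence_score >= REPORT_THRESHOLD):
        return "ban_and_report"  # credible and imminent: escalate to authorities
    if assessment.violence_score >= BAN_THRESHOLD:
        return "ban_account"     # policy violation, but below the reporting bar
    return "allow"


if __name__ == "__main__":
    # A conversation flagged as violent but not judged credible and imminent
    # would be banned without a report, mirroring the two-tier logic above.
    print(decide_action(RiskAssessment(violence_score=0.75, imminence_score=0.4)))
```

The point of the two thresholds is that banning an account for a policy violation is a much lower bar than judging a threat credible and imminent enough to involve law enforcement, which is the gap at the center of this case.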