OpenAI announced that it would tighten its safety policies and reporting procedures after admitting it had failed to report to Canadian law enforcement a ChatGPT account belonging to the suspect in February’s mass shooting in Tumbler Ridge, in the Canadian province of British Columbia.
The suspect’s account was flagged internally by OpenAI months before the attack, but the company did not report it to law enforcement.
Despite the ban on the original account, which under OpenAI’s internal safety policy is meant to prevent repeat offenses, the suspect was able to create a second account.
The announcement comes amid increased pressure from Canadian authorities following the 10 February attack, in which eight people were killed in the secluded northeastern British Columbia town. Police identified the shooter as 18-year-old Jesse Van Rootselaar, who died at the scene of a self-inflicted gunshot wound to the head.
The victims were reportedly his mother, his 11-year-old stepbrother, five schoolchildren, and an educator.
The attack began at a house and continued at the town’s secondary school, making it one of the deadliest in modern Canadian history.
According to OpenAI, the shooter’s ChatGPT account was permanently suspended in June 2025 for violating the service’s usage policies.
That was about seven months before the attack. At the time, the company said the activity reviewed by its safety teams did not meet its threshold for reporting to law enforcement because it was not considered “credible and imminent planning of a serious attack.”
Company officials said that assessment standard has since been revised. In a letter written by OpenAI’s vice-president for global policy, the company said it had made a series of changes to its process for evaluating potentially dangerous user behavior, including working with mental health and behavioral specialists and adopting more flexible criteria for when to contact law enforcement.
OpenAI said that under its current system the banned account would likely have been referred to law enforcement. It added that a second account the suspect created after the initial ban was not caught by its systems at the time, but was later identified and passed on to law enforcement after the shooting.
“We commit to strengthening our detection systems to better prevent attempts to evade our safeguards and prioritize identifying the highest risk offenders,” the company said in the letter.
The company also said it intends to establish a dedicated communication channel with Canadian law enforcement, aimed at faster information sharing in cases of potential real-world harm.
Canadian authorities requested such a direct line during meetings held this week in Ottawa between government officials and the company’s senior leadership.
The meetings were called after the company revealed that it had taken enforcement action against the suspect’s account but had failed to notify authorities before the attack.
Speaking after Tuesday’s meeting, Canadian Artificial Intelligence Minister Evan Solomon characterized the situation as a “failure” and expressed dissatisfaction with the company’s initial explanations.
“I was left disappointed,” Solomon said, noting that he did not hear “any substantial new safety protocols” during the discussions.
He said the federal government was considering regulatory options if the technology companies did not take swift action to bolster security. “All options for us are on the table, because at the end of the day, Canadians want to feel safe,” he said.
Provincial leaders have also expressed concern over the way the company handled the situation.
British Columbia Premier David Eby said he believes police might have been able to prevent the attack had they been informed sooner, although police have yet to say whether earlier notice would have made a difference.
“They tragically missed the mark in [not] bringing this information forward. The consequences of that will be borne by the families of Tumbler Ridge for the rest of their lives,” Eby told reporters on Thursday.
Eby explained that the company’s chief executive, Mr. Altman, had agreed to hold a meeting to discuss the company’s safety practices and decision-making processes. “I think it’s important that Mr. Altman hear about how his team’s decision not to bring this information forward has resulted in devastation,” he said.
The case has sparked debate on the role of artificial intelligence firms when they suspect that their users’ behavior could lead to violence.
Technology firms typically rely on internal risk assessments to decide when to notify law enforcement about user behavior, weighing privacy against safety.
OpenAI said the new measures were designed to close the gaps that banned users had exploited to create new accounts, and to ensure that potentially dangerous cases are reviewed by a wider group of people.