Within two days of news that the Tumbler Ridge perpetrator’s ChatGPT account had been flagged prior to the shooting, OpenAI CEO Sam Altman met with Federal AI Minister Evan Solomon and British Columbia Premier David Eby.
The meetings produced a set of commitments from the company: reporting threats directly to the RCMP, retroactive review of previously flagged accounts, distress-redirect protocols, access to the company’s safety office for Canadian experts, and an agreement to work with B.C. on regulatory recommendations to Ottawa.
Altman also agreed to apologize to the community of Tumbler Ridge, where 18-year-old Jesse Van Rootselaar killed eight people and wounded many others before dying of a self-inflicted wound. Months before the shooting, Van Rootselaar’s ChatGPT account had been flagged for scenarios involving gun violence. The account was banned, but the case was never reported to law enforcement.
OpenAI’s new commitments are significant gestures. But they resolve a narrower question than the one Tumbler Ridge actually raised. As I argued earlier, the core problem was not a reporting failure. It was a governance vacuum.
What’s changed since? OpenAI has agreed to make the same type of unilateral determination it made before, but to act on it more aggressively, routing the result directly to the RCMP. That is not a fix. It is the same unaccountable architecture with a faster trigger.
The human-in-the-loop fallacy
Consider what we now know about the internal process. The shooter’s account was flagged. Human moderators reviewed the interactions. Some advocated escalating to law enforcement. Other humans, guided by the company’s own opaque thresholds, decided against it. The breakdown was not mechanical. It was institutional.
“Human in the loop” is one of the most repeated reassurances in AI safety discourse. The Tumbler Ridge case exposes its limits. Humans in the loop are only as accountable as the institutional structure around them. When that structure is a private corporation with no legally binding reporting obligations, no transparency requirements and no external oversight, the human in the loop is simply a more sympathetic face on an unaccountable system.
OpenAI has since announced that its thresholds have been updated. But updated by whom, according to what criteria, subject to what review? These remain internal decisions, invisible to the public and unreachable by Parliament.
The surveillance substitution
There is a deeper problem that receives almost no attention. The proposed settlement does not regulate AI. It regulates users.
The entire apparatus being constructed (internal threat identification, flagging, direct RCMP referral) is oriented toward monitoring what people say to AI, not toward how AI systems are designed, trained or constrained in their responses.
True AI regulation asks whether a model might facilitate or amplify harmful ideation through its interaction patterns. It asks how the system is built, what it’s tested for and what obligations attach to its deployment.
The current arrangement asks none of these questions. Instead, it builds a pipeline from private AI interactions to law enforcement, administered by a corporation, governed by proprietary policy.
I call this the surveillance substitution: a governance vacuum gets filled not with democratic regulation, but with corporate surveillance of users. It is not regulation of AI. It is regulation of the people who use AI, conducted by the AI company itself, with the police as the endpoint.
The civil liberties implications are substantial. Research on compassion-sensitive AI, including my own work on how AI systems should respond to users in vulnerable states, consistently shows that people disclose distress to chatbots precisely because the interaction feels private and non-judgmental.
If that space becomes a monitored channel where concerning disclosures trigger law enforcement referrals based on opaque corporate criteria, the most vulnerable users may stop disclosing. The chilling effect on help-seeking behaviour has not been studied, and it has not been discussed in any of the public negotiations following Tumbler Ridge.
Rational strategy, absent framework
It’s important to be precise about what OpenAI is doing. The company is not acting in bad faith. It is behaving as a rational private entity in the absence of a regulatory framework, offering the minimum viable response to political pressure while preserving as much operational autonomy as possible.
Look south and the logic becomes clearer. In the United States, the relationship between AI companies and government power is being forcibly renegotiated. The Pentagon has sought AI models with safety guardrails removed for military applications. When Anthropic resisted, OpenAI moved to fill the gap. In that context, the U.S. government commands and AI companies comply.
In Canada, the dynamic is inverted: OpenAI is not being commanded. It is volunteering concessions designed to pre-empt the kind of binding legislation that would actually constrain its operations. The strategy is familiar: support broad norms with no immediate legal force; resist specific domestic obligations that carry real consequences. This is how regulatory capture begins: not with corruption, but with convenience.
Canada has genuine leverage here: an unusual cross-party consensus that something must change, public attention that has given AI governance a human face, and a provincial government that understands the stakes.
But leverage evaporates. If the federal government accepts OpenAI’s pledges as a sufficient response, it normalizes corporate self-regulation as the baseline. Future companies will cite this arrangement as precedent. The window for legislation narrows.
What durable governance requires
The response that Tumbler Ridge demands is not more efficient surveillance of users. It is a regulatory architecture that addresses the systems themselves.
That means binding legislation with legally defined thresholds for when AI companies must refer flagged interactions to authorities: thresholds defined by Parliament, developed with mental health professionals, privacy experts and law enforcement, not inherited from a company’s terms of service.
It means an independent triage body so that flagged interactions are assessed by professionals equipped to distinguish ideation from intent, accountable to public law rather than corporate liability. And it means model-level accountability: regulatory attention that moves upstream from users to systems. How are these models designed to respond to escalating disclosures of violent ideation? What testing obligations apply? What auditing requirements exist?
These questions are absent from the current political negotiations, and their absence defines the limits of what the current pledges can achieve.
OpenAI’s commitments following Tumbler Ridge are the beginning of a conversation, not the end of one. Canada holds good cards. The question is whether it plays them, or lets the other side set the rules while the table is still being built.
Jean-Christophe Bélisle-Pipon does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.


