Canada Needs Nationalized, Public AI

Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?

Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon.

All the while, OpenAI was less than open. The company had flagged the Tumbler Ridge, B.C., shooter’s ChatGPT interactions, which included gun-violence chats. Employees wanted to alert law enforcement but were rebuffed. Maybe there is a discussion to be had about users’ privacy. But even after the shooting, the OpenAI representative who met with the B.C. government said nothing.

Only after that meeting did OpenAI alert law enforcement. Had it not been for the Wall Street Journal’s reporting, the public would not have known about this at all. When tech billionaires and corporations steer AI development, the resulting AI reflects their interests rather than those of the general public.

Moreover, OpenAI for Countries is explicitly described by the company as an initiative “in co-ordination with the U.S. government.” And it’s not just OpenAI: all the AI giants are for-profit American companies, operating in their private interests, and subject to United States law and increasingly bowing to U.S. President Donald Trump. Moving data centres into Canada under a proposal like OpenAI’s doesn’t change that. The current geopolitical reality means Canada should not be dependent on U.S. tech firms for essential services such as cloud computing and AI.

While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad benefits of AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians.

Imagine AI embedded into health care, triaging radiology scans, flagging early cancer risks and assisting doctors with paperwork. Imagine an AI tutor trained on provincial curriculums, giving personalized coaching. Imagine systems that analyze job vacancies and sectoral and wage trends, then automatically match job seekers to government programs. Imagine using AI to optimize transit schedules, energy grids and zoning analysis. Imagine court processes, corporate decisions and customer service all sped up by AI.

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise.

Switzerland has shown this to be possible. With funding from the federal government, a consortium of academic institutions (ETH Zurich, EPFL, and the Swiss National Supercomputing Centre) released Apertus, the world’s most powerful fully public AI model, last September. Apertus leveraged renewable hydropower and existing Swiss scientific computing infrastructure, and its training relied on neither pirated copyrighted material nor poorly paid labour extracted from the Global South. The model’s performance is roughly a year or two behind the major corporate offerings, but that is more than adequate for the vast majority of applications. And it’s free for anyone to use and build on.

The significance of Apertus is more than technical. It demonstrates an alternative ownership structure for AI technology, one that allocates both decision-making authority and value to national public institutions rather than foreign corporations. This vision represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity.

Apertus also demonstrates a far more sustainable economic framework for AI. Switzerland spent a tiny fraction of the billions of dollars that corporate AI labs invest annually, demonstrating that the frequent training runs with astronomical price tags pursued by tech companies are not actually necessary for practical AI development. The Swiss team focused on making something broadly useful rather than bleeding edge (it did not chase Silicon Valley’s dubious dream of “superintelligence”), so it could build a smaller model at much lower cost. Apertus, at 70 billion parameters, is perhaps two orders of magnitude smaller than the largest Big Tech offerings.

An ecosystem is now being developed on top of Apertus, using the model as a public good to power chatbots for free consumer use and to provide a development platform for companies that prioritize responsible AI use and rigorous compliance with laws like the EU AI Act. Instead of routing users’ queries to Big Tech infrastructure, Apertus is deployed to data centres across the national AI and computing initiatives of Switzerland, Australia, Germany, Singapore, and other partners.

The case for public AI rests on both democratic principles and practical benefits. Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine. Or how to handle a situation such as that of the Tumbler Ridge shooter. These decisions will profoundly shape society as AI becomes more pervasive, yet corporate AI makes them in secret.

By contrast, public AI developed by transparent, accountable agencies would allow democratic processes and political oversight to govern how these powerful systems function.

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada’s $2-billion Sovereign AI Compute Strategy provides substantial funding.

What’s needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

This essay was written with Nathan E. Sanders, and originally appeared in The Globe and Mail.

EDITED TO ADD (3/16): Slashdot thread.

Posted on March 11, 2026 at 7:04 AM • 17 Comments

Comments

K.S March 11, 2026 7:19 AM

Nationalized means run by the government. In turn, this means expensive, inefficient, and full of political patronage jobs (just look at CBC). At this stage of rapid development, nationalized AI won’t work.

mrex March 11, 2026 8:54 AM

Nationalized AI infrastructure seems like the only way to avoid a situation in which the status quo becomes impossibly entrenched against change agents. They will use “all lawful purpose” AI to detect change agent emergence, use distributed algorithmic suppression, outpace the change agent’s narrative, and analyze their weaknesses in a way that anyone with a hobbled, guard-railed AI will not be able to match. Democratic systems exist to fragment concentrations of power into manageable units that check and balance each other, and the time to establish those checks and balances with AI is now.

Clive Robinson March 11, 2026 9:03 AM

@ Bruce, Nathan,

With regards,

“Imagine AI embedded into health care, triaging radiology scans, flagging early cancer risks and assisting doctors with paperwork. Imagine an AI tutor trained on provincial curriculums, giving personalized coaching. Imagine systems that analyze job vacancies and sectoral and wage trends, then automatically match job seekers to government programs. Imagine using AI to optimize transit schedules, energy grids and zoning analysis. Imagine court processes, corporate decisions and customer service all sped up by AI.”

I already have, years ago, back when “Expert Systems” were “the AI of the day”.

And two things happened,

1, It sounded like hell and mostly was.
2, It was ripe for failure and mostly was.

In the intervening time little has really changed in human society.

So although it was not Einstein who came up with this definition of madness, many assume it was, and for good reason… So,

Einstein’s definition of madness is given, albeit incorrectly, as,

“Doing the same thing over and over and expecting different results”

AI was mostly a failure back in the early 1980’s, with “Expert Systems”, and little or nothing has actually changed since then. So this time around I fully expect the same results…

Only,

1, The price will be eye wateringly more expensive, and
2, Guaranteed hell for most people.
3, A very limited number of both bespoke and niche AI projects will survive and make money.

Most people will think AI is overpriced and of too low a quality, and they would not be wrong, based on current showing.

K.S March 11, 2026 9:10 AM

>Nationalized AI infrastructure seems like the only way to avoid a situation in which the status quo becomes impossibly entrenched against change agents.

I disagree, as government is a lot more likely to cause entrenched status quo. Removing profit motive does not guarantee the outcome you are looking for.

Jos March 11, 2026 9:59 AM

K.S. Just noting that nothing guarantees what we are looking for, not private sector OR public. I don’t disagree that the public version is likely to be worse, slower and a boondoggle, but at least our money stays within Canada, and not in the pockets of non-Canadian industry and/or workers… ideally.

fib March 11, 2026 10:44 AM

AI infrastructure is a perfect example of Giddens’ principle in action.

Giddens, a key architect of Third Way politics, argued that when the private sector fails to make strategic investments, the government has both the right and the responsibility to step in. This applies to long-term, high-impact projects the private sector avoids because payoffs are uncertain, societal benefits outweigh private profit, or risks are high.

Building national-scale, publicly governed AI fits this bill: it’s costly and uncertain, corporate AI often prioritizes profit over public good, and critical capabilities (safety, security, equitable access) may never be developed privately.

Samuel Johnson March 11, 2026 11:15 AM

>Nationalized means run by the govt

Ayn Randian tripe believed by the kind of people who subscribe, like Grover Norquist, to the idea that government should be small enough to drag to the bathtub and drown.

A people who allow corporations to rule over them eventually learn the hard way that it is they who get drowned. In the case of the UK, the privatisation of the water industry has resulted in rivers full of shit, unsafe beaches with deaths from e. coli, and a clamour to return to public ownership — as if this was the only alternative.

The job of the govt is regulation on behalf of the electorate. That means holding corporations to account, which is hard to do when their “free speech”, as in Citizens United, results in the best government money can buy.

In Ireland the job of regulating utilities is done by the private sector and is self-financing; businesses contracted to do inspections, issue fines etc are themselves accountable to a public body. Nationalization, as in state ownership or control, isn’t required. But if the state isn’t ultimately accountable and able to legislate for the benefit of citizens there is little point in pretending to have democracy. We’ve known since the days of the East India Company who gets called on to bail out corporations when they blow up.

The peculiar American delusion that neoliberal oligarchy is the only alternative to Communism is just that, a delusion. The current experiment in getting rid of as much government as possible is going to prove very expensive, and very unpopular.

Rontea March 11, 2026 12:48 PM

We’re terrified to say out loud that capitalism sucks, because the moment we do, the software industry will start rattling its tin cup for a bailout. The truth is that our AI ecosystem has been captured by market incentives that prioritize short-term profit over long-term societal benefit. When companies fail under this system, they demand public rescue, while the public sees very little of the upside. This is the same dynamic we’ve seen in finance, in manufacturing, and now in AI. If we continue to rely on private monopolies to build our critical infrastructure, we will keep socializing the losses and privatizing the gains. Honest conversations about capitalism’s failures are the first step toward building systems that serve the public interest rather than corporate balance sheets.

Matt March 11, 2026 12:59 PM

Bruce, I’m still trying to figure out why you think LLMs (let’s not call them AI) are actually useful tools. Even in the very limited number of cases where they can ostensibly accurately do what they claim to do (and even in those cases if you scratch a little deeper it turns out they can’t), they have such immense negative externalities that it’s unethical to use them at all for anything.

And yet you keep saying “oh, we need public AI.” No, we don’t. We don’t need LLMs at all.

Clive Robinson March 11, 2026 4:16 PM

@ Matt, ALL,

The Pachyderm in the room is too big to be seen.

You note,

“And yet you keep saying “oh, we need public AI.” No, we don’t. We don’t need LLMs at all.”

Thus we drop into the oh so modern version of the allegory / parable of “Three blind men describe an Elephant”,

https://en.wikipedia.org/wiki/Blind_men_and_an_elephant

The important parts to note are,

“They then describe the animal based on their limited experience and their descriptions of the elephant are different from each other. In some versions, they come to suspect that the other person is dishonest and they come to blows.”

Which is where we are with current AI and LLM systems, with one exception: their descriptions of the pachyderm in the room are not of anything real, or even of what they wildly gesticulate it is; it’s not even a new idea.

Which is where it’s important to understand “going the other way”.

The “Tech Boi’s” are shilling an idea for the VC’s they hope will make them rich enough that they will be able to buy anything they ever want for the rest of their life and never run out of money…

Thus the nonsense of the singularity, or the chameleon idea of AGI, that is currently still being pushed with waving arms and now faux sincerity. Because it can be and has been shown that current AI LLM and ML systems cannot do it no matter how you “scale up”.

It unfortunately causes others to make statements that are insufficiently qualified, such as,

“Switzerland spent a tiny fraction of the billions of dollars that corporate AI labs invest annually, demonstrating that the frequent training runs with astronomical price tags pursued by tech companies are not actually necessary for practical AI development.”

No they are not necessary, but then they never were. The problem is the “expectation gap” between what is being shilled and the cold hard reality of what the LLM & ML architecture can actually give us…

There is a reason I always mention LLM and ML as they are two distinct parts logically that are around a “Digital Neural Network”(DNN). The DNN is simply an overly large “Digital Signal Processing”(DSP) array that works as a “Multi-Spectral Adaptive Filter”(MSAF) that has seen service in Sonar, Radar and the now defunct “Plain Old Telephone System”(POTS) since the Cold War started (if not before).

The LLM is,

1, A query system and interface built around a DNN.

The ML is,

2, A build system of the weights in the DNN that give it a particular base spectral profile.

There is no “magic” and “scaling up” won’t give you any; it’s logically impossible, and the graphs that you can find, along with the recent failures like GPT 5, show this. There is a wall there that is a hard limit of the way the DNN works, one that, as we get closer, tilts the graph from going up nearly linearly to “flat lining” just below that limit. Just as basic engineering principles say it would do…

So what happens when you go the other way and “scale back / down”? I could go through what happens in depth, but all you need to remember is that the DNN is “finite”, thus there is a hard limit on what can be “stored in the weights” even as “data shadows”. Yes you can use “coding tricks” to make the storage of weights “more efficient”, but they quickly go from faux-lossless to obviously lossy.

What does this mean in practical terms?

Well firstly the amount of “information” in the DNN gets fairly rapidly reduced, along with any “rules” it effectively encodes in the token spectrums (vectors).

But at some point it becomes almost directly equivalent of “Expert Systems” perturbed by “Fuzzy Logic” which was where the AI hype bubble was back in the 1980’s.

The difference is there is “no human expert” selecting the information and generating the “rules”. That is done by “extracting the information and rules” from the statistics of the training data set information and is performed by the ML component.

Thus implicitly the LLM query system is limited to a fraction of what could be statistically derived from the training information.

The “magician’s trick”, which is not magic in any way, is to “add noise to perturb the spectrum”, hence the “stochastic” in “stochastic parrot”, with the parrot being the “expert system”.

The important thing to note is that “Expert Systems” even with the help of “fuzzy logic” did not scale and were never going to be “general” in nature. Which is why when that was realised “the music died” and the merry-go-round or ferris wheel kind of stopped.

That said, stripped-down Expert Systems and fuzzy logic are still both in use in highly specific “control system” functions that have complex and often nonlinear control-plane manifolds, for which there are a variety of solutions[1], including what is in effect a small DNN. These are used for running machines like automatic trains, the braking systems in your car, and flight stability etc in aircraft and drones.

That is, in the near future the hype of the current AI bubble will die out, and what remains of LLM and ML systems will be for a very, very small but useful subset of complex problems for which traditional techniques are too complex or complicated. We’ve already seen this play out with AlphaFold, and we will see it with other “constrained and correctly collated training data sets”, but in “general” they will be a failure that throwing more data at cannot solve.

Something that “Wannabe Emperors” that are just faux-magicians hiding behind curtains of hype don’t want you knowing… So they have formed their own “magic circle” that now has a very “Not Suitable For Work”(NSFW) visual meme based on OpenAI’s logo.

Don’t go “searching for it” because you can see it at,

https://garymarcus.substack.com/p/is-the-great-ai-meltdown-imminent

But REMEMBER as I’ve said “NSFW”.

[1] A recent paper that describes the issue of generating and using control plane manifolds is,

https://arxiv.org/pdf/2510.04278

Warning: it’s maths heavy, and getting your head around it if you do not work in the problem domain is likely to “bend your brain”. An ML can do similar by the process of reiterated “gradient walking”, doing multi-dimensional “curve fitting” approximation. However it has many caveats about the manifold, and generally requires the first approximation to have a certain topological structure that over a fine structure appears open and linear (think a very local map of part of the globe, where basic plane geometry still holds with sufficient accuracy to navigate by straight line).
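That “gradient walking” / curve-fitting process can be sketched in miniature. Below is a hypothetical one-variable example in plain Python (the function name and data are invented for illustration): the same reiterated step-downhill process, carried out over millions of dimensions instead of two, is what ML training does.

```python
# Illustrative only: "gradient walking" (gradient descent) fitting a line
# y = a*x + b to a handful of points by repeatedly stepping downhill on
# the mean-squared-error surface.

def fit_line(points, lr=0.01, steps=5000):
    a, b = 0.0, 0.0  # start anywhere on the error surface
    n = len(points)
    for _ in range(steps):
        # gradients of mean squared error with respect to a and b
        grad_a = sum(2 * (a * x + b - y) * x for x, y in points) / n
        grad_b = sum(2 * (a * x + b - y) for x, y in points) / n
        a -= lr * grad_a  # take a small step downhill
        b -= lr * grad_b
    return a, b

# Points lying on y = 3x + 1; the walk should land near a=3, b=1.
pts = [(0, 1), (1, 4), (2, 7), (3, 10)]
a, b = fit_line(pts)
print(round(a, 2), round(b, 2))  # → 3.0 1.0
```

The caveats mentioned above show up even here: the walk only converges because this error surface is smooth and bowl-shaped; on a rougher manifold the same procedure can stall or wander.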

nowhere man March 11, 2026 7:45 PM

@Clive Robinson — March 11, 2026 4:16 PM

Here is, Sir, another little-known version of your pachyderm story that you might not know and that you might appreciate, should you read the full novel which is excellent satire :

gutenberg.org/files/1930/1930-h/1930-h.htm#link2H_4_0016

kiwano March 11, 2026 8:08 PM

Not only would I like to see a publicly run LLM here in Canada, I’d also like a good chunk of its development to fund research into incremental training for LLMs, in order to facilitate the sort of collaborative and specialized development processes for model training that open source enabled for software. Openly published research into incremental training is absolutely not a thing that I believe private industry has any interest at all in doing, as they’ve been operating with a general business model that the cost of monolithic training runs provides them with a capital moat, protecting them from competition.

I think it would be pretty sweet if there were a public LLM training cluster where e.g. a research engineer could take an already-good model and train it to output CAD diagrams. I see it being even better if research into incremental training advanced to the point where an LLM training cluster capable of adding features like that could become something within the means of a hackerspace.

In a slightly different direction, another thing that seems easier to do with a publicly run LLM is to abstain from installing guardrails that prevent someone from vibe coding TPM circumvention software — though that’s something that I’d have an easier time expecting from a Norwegian public LLM than from a Canadian one.

Winter March 12, 2026 1:44 AM

There are more such initiatives in Europe.

GPT-NL is the one for the Netherlands:
https://www.tno.nl/en/digital/artificial-intelligence/gpt-nl/

Its funding is minuscule compared to OpenAI et al. And I understand they are looking for a legal and ethical business model.

As for people who think this is all a waste of money, the 14M Euro funding is something the Dutch economy can survive.

Personally, I think such national initiatives should collaborate over national and language borders. Multilingual LLMs have shown good functionality beyond translation.

Bryan Sexton March 12, 2026 1:31 PM

When privacy guardrails fail at the company that controls the software, shouldn’t it be cause for concern? Before implementing, countries should ask how this affects citizens down the road. Open-source public AI is a solid plan for Canada.

Nick Felker March 21, 2026 7:06 PM

With regards to the Tumbler Ridge incident: if they used a nationalized AI instead, does that mean the AI is secretly surveilling every resident? I understand it’s a good scenario to ask questions around, but I’m not sure a public chatbot is an easy fix.
