Entries Tagged "Canada"


Canada Needs Nationalized, Public AI

Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?

Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon.

All the while, OpenAI was less than open. The company had flagged the Tumbler Ridge, B.C., shooter’s ChatGPT interactions, which included gun-violence chats. Employees wanted to alert law enforcement but were rebuffed. Maybe there is a discussion to be had about users’ privacy. But even after the shooting, the OpenAI representative who met with the B.C. government said nothing.

Only after the meeting with the B.C. government did OpenAI alert law enforcement. Had it not been for the Wall Street Journal’s reporting, the public would not have known about this at all. When tech billionaires and corporations steer AI development, the resulting AI reflects their interests rather than those of the general public.

Moreover, OpenAI for Countries is explicitly described by the company as an initiative “in co-ordination with the U.S. government.” And it’s not just OpenAI: all the AI giants are for-profit American companies, operating in their private interests, and subject to United States law and increasingly bowing to U.S. President Donald Trump. Moving data centres into Canada under a proposal like OpenAI’s doesn’t change that. The current geopolitical reality means Canada should not be dependent on U.S. tech firms for essential services such as cloud computing and AI.

While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: a model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad benefits of AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians.

Imagine AI embedded into health care, triaging radiology scans, flagging early cancer risks and assisting doctors with paperwork. Imagine an AI tutor trained on provincial curriculums, giving personalized coaching. Imagine systems that analyze job vacancies and sectoral and wage trends, then automatically match job seekers to government programs. Imagine using AI to optimize transit schedules, energy grids and zoning analysis. Imagine court processes, corporate decisions and customer service all sped up by AI.

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise.

Switzerland has shown this to be possible. With funding from the federal government, a consortium of academic institutions—ETH Zurich, EPFL, and the Swiss National Supercomputing Centre—released the world’s most powerful fully realized public AI model, Apertus, last September. Apertus leveraged renewable hydropower and existing Swiss scientific computing infrastructure. Its training used no pirated copyrighted material and no poorly paid labour extracted from the Global South. The model’s performance is roughly a year or two behind the major corporate offerings, but that is more than adequate for the vast majority of applications. And it’s free for anyone to use and build on.

The significance of Apertus is more than technical. It demonstrates an alternative ownership structure for AI technology, one that allocates both decision-making authority and value to national public institutions rather than foreign corporations. This vision represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity.

Apertus also demonstrates a far more sustainable economic framework for AI. Switzerland spent a tiny fraction of the billions of dollars that corporate AI labs invest annually, showing that the frequent training runs with astronomical price tags pursued by tech companies are not actually necessary for practical AI development. The Swiss team focused on making something broadly useful rather than bleeding-edge (there was no dubious Silicon Valley-style pursuit of “superintelligence”), so it could build a smaller model at much lower cost. Apertus was trained at a scale (70 billion parameters) perhaps two orders of magnitude below the largest Big Tech offerings.

An ecosystem is now being developed on top of Apertus, using the model as a public good to power chatbots for free consumer use and to provide a development platform for companies prioritizing responsible AI use and rigorous compliance with laws like the EU AI Act. Instead of routing users’ queries to Big Tech infrastructure, Apertus is deployed to data centres across the national AI and computing initiatives of Switzerland, Australia, Germany, Singapore, and other partners.

The case for public AI rests on both democratic principles and practical benefits. Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine. Or how to handle a situation such as that of the Tumbler Ridge shooter. These decisions will profoundly shape society as AI becomes more pervasive, yet corporate AI makes them in secret.

By contrast, public AI developed by transparent, accountable agencies would allow democratic processes and political oversight to govern how these powerful systems function.

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada’s $2-billion Sovereign AI Compute Strategy provides substantial funding.

What’s needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

This essay was written with Nathan E. Sanders, and originally appeared in The Globe and Mail.

EDITED TO ADD (3/16): Slashdot thread.

Posted on March 11, 2026 at 7:04 AM

CSE Releases Malware Analysis Tool

The Communications Security Establishment of Canada—basically, Canada’s version of the NSA—has released a suite of malware analysis tools:

Assemblyline is described by CSE as akin to a conveyor belt: files go in, and a handful of small helper applications automatically comb through each one in search of malicious clues. On the way out, every file is given a score, which lets analysts sort old, familiar threats from the new and novel attacks that typically require a closer, more manual approach to analysis.
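The conveyor-belt idea (files in, many small checks, one aggregate score out) can be sketched in a few lines. This is not Assemblyline’s actual API; its real services are separate applications. The helper checks, score values, and thresholds below are invented purely for illustration:

```python
import hashlib

# Two stand-in "services", each scanning a file for one kind of clue.
def known_bad_hash(data: bytes) -> int:
    # Hypothetical hash blocklist; this entry is the well-known EICAR test file MD5.
    known = {"44d88612fea8a8f36de82e1278abb02f"}
    return 1000 if hashlib.md5(data).hexdigest() in known else 0

def suspicious_strings(data: bytes) -> int:
    # Invented markers; score 100 per marker found in the file.
    markers = [b"powershell -enc", b"CreateRemoteThread"]
    return sum(100 for m in markers if m in data)

def classify(data: bytes) -> tuple[int, str]:
    """Run every helper over the file and sum the scores (invented thresholds)."""
    score = known_bad_hash(data) + suspicious_strings(data)
    if score >= 1000:
        return score, "known threat"
    if score > 0:
        return score, "needs manual analysis"
    return score, "benign"
```

The scoring step is what lets analysts triage: high scores match old, familiar threats automatically, while mid-range scores get routed to the slower manual analysis the excerpt describes.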

Posted on October 25, 2017 at 6:07 AM

Canada Spies on Internet Downloads

Another story from the Snowden documents:

According to the documents, the LEVITATION program can monitor downloads in several countries across Europe, the Middle East, North Africa, and North America. It is led by the Communications Security Establishment, or CSE, Canada’s equivalent of the NSA. (The Canadian agency was formerly known as “CSEC” until a recent name change.)

[…]

CSE finds some 350 “interesting” downloads each month, the presentation notes, a number that amounts to less than 0.0001 per cent of the total collected data.

The agency stores details about downloads and uploads to and from 102 different popular file-sharing websites, according to the 2012 document, which describes the collected records as “free file upload,” or FFU, “events.”
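Taking the quoted figures at face value, the implied scale of the collection is easy to back out. This is simple arithmetic on the two published numbers, not anything from the documents themselves:

```python
# 350 "interesting" downloads are said to be less than 0.0001 per cent
# of the total collected data.
interesting_per_month = 350
fraction = 0.0001 / 100  # 0.0001 per cent, expressed as a fraction

# If 350 events are at most that fraction of the total, the total is huge.
implied_minimum_total = interesting_per_month / fraction
# "Less than" means the true monthly total is even larger than this bound.
print(f"at least {implied_minimum_total:,.0f} events per month")
```

In other words, the 350 needles come from a haystack of at least 350 million file-transfer events per month.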

EDITED TO ADD (1/30): News article.

EDITED TO ADD (2/1): More news articles.

Posted on January 29, 2015 at 6:26 AM

CSEC Surveillance Analysis of IP and User Data

The most recent story from the Snowden documents is from Canada: it claims the CSEC (Communications Security Establishment Canada) used airport Wi-Fi information to track travelers. That’s not really true. What the top-secret presentation shows is a proof-of-concept project to identify different IP networks, using a database of user IDs found on those networks over time, and then potentially using that data to identify individual users. This is actually far more interesting than simply eavesdropping on airport Wi-Fi sessions. Between Boingo and the cell phone carriers, that’s pretty easy.

The researcher, with the cool-sounding job title of “tradecraft developer,” started with two weeks’ worth of ID data from a redacted “Canadian Special Source.” (The presentation doesn’t say whether they compelled some Internet company to give them the data, or whether they eavesdropped on some Internet service and got it surreptitiously.) This was a list of user IDs seen on those networks at particular times, presumably things like Facebook logins. (Facebook, Google, Yahoo, and many others are finally using SSL by default, so this data is now harder to come by.) They also had a database of geographic locations for IP addresses from Quova (now Neustar). The basic question was whether they could determine, from this data, what sort of wireless hotspot each IP address represented.

You’d expect airports to look different from hotels, and those to look different from offices. And, in fact, that’s what the data showed. At an airport network, individual IDs are seen once, and briefly. At hotels, individual IDs are seen over a few days. At an office, IDs are generally seen from 9:00 AM to 5:00 PM, Monday through Friday. And so on.
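The heuristic described above (how long individual IDs linger on a network) is simple to reproduce. A minimal sketch with invented thresholds, since the presentation doesn’t publish its actual ones:

```python
from datetime import datetime

def classify_network(sightings: list[tuple[str, datetime]]) -> str:
    """Guess the kind of hotspot from how long individual IDs linger.

    `sightings` is a non-empty list of (user_id, timestamp) observations
    for a single IP network. Thresholds are illustrative guesses."""
    by_id: dict[str, list[datetime]] = {}
    for uid, ts in sightings:
        by_id.setdefault(uid, []).append(ts)
    # Average span, in days, between each ID's first and last sighting.
    spans = [(max(times) - min(times)).days for times in by_id.values()]
    avg = sum(spans) / len(spans)
    if avg < 1:
        return "airport-like (IDs seen once, briefly)"
    if avg <= 7:
        return "hotel-like (IDs linger a few days)"
    return "office-like (IDs recur over weeks)"
```

A real analysis would also weight time-of-day and day-of-week patterns (the 9-to-5 office signature), but the dwell-time signal alone already separates the three cases in the presentation’s examples.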

Pretty basic so far. Where it gets interesting is how this kind of dataset can be used. The presentation suggests two applications. The first is the obvious one. If you know the ID of some surveillance target, you can set an alarm when that target visits an airport or a hotel. The presentation points out that “targets/enemies still target air travel and hotels”; but more realistically, this can be used to know when a target is traveling.

The second application suggested is to identify a particular person whom you know visited a particular geographical area on a series of dates/times. The example in the presentation is a kidnapper. He is based in a rural area, so he can’t risk making his ransom calls from that area. Instead, he drives to an urban area to make those calls. He either uses a burner phone or a pay phone, so he can’t be identified that way. But if you assume that he has some sort of smart phone in his pocket that identifies itself over the Internet, you might be able to find him in that dataset. That is, he might be the only ID that appears in that geographical location around the same time as the ransom calls and at no other times.
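That second query reduces to set logic: keep the IDs seen near every ransom call, and discard any ID seen in the area at other times. A minimal sketch with hypothetical data structures, not the agency’s actual implementation:

```python
from datetime import datetime, timedelta

def candidate_ids(sightings, call_times, window=timedelta(hours=2)):
    """Return IDs seen near *every* ransom call and at no other time.

    `sightings`: (user_id, timestamp) pairs observed in the call area.
    `call_times`: when the ransom calls were placed.
    `window`: how close a sighting must be to count (invented value)."""
    def near_call(ts):
        return any(abs(ts - c) <= window for c in call_times)

    present: dict[str, set] = {}  # which calls each ID was seen near
    elsewhere = set()             # IDs seen in the area at other times
    for uid, ts in sightings:
        if near_call(ts):
            closest = min(call_times, key=lambda c: abs(ts - c))
            present.setdefault(uid, set()).add(closest)
        else:
            elsewhere.add(uid)
    return {uid for uid, calls in present.items()
            if len(calls) == len(call_times) and uid not in elsewhere}
```

Locals who happen to be near every call are filtered out by the `elsewhere` check: they also appear in the area on other days, whereas the hypothesized kidnapper appears only around the calls.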

The results from testing that second application were successful, but slow. The presentation sounds encouraging, stating that something called Collaborative Analysis Research Environment (CARE) is being trialed “with NSA launch assist”: presumably technology, money, or both. CARE reduces the run-time “from 2+ hours to several seconds.” This was in May 2012, so it’s probably all up and running by now. We don’t know if this particular research project was ever turned into an operational program, but the CSEC, the NSA, and the rest of the Five Eyes intelligence agencies have a lot of interesting uses for this kind of data.

Since reporting on the Snowden documents began last June, the primary focus of the stories has been the collection of data. There has been very little reporting about how this data is analyzed and used. The exception is the story on the cell phone location database, which has some pretty fascinating analytical programs attached to it. I think the types of analysis done on this data are at least as important as its collection, and likely more disturbing to the average person. These sorts of analyses are being done with all of the data collected. Different databases are being correlated for all sorts of purposes. When I get back to the source documents, these are exactly the sorts of things I will be looking for. And when we think of the harms to society of ubiquitous surveillance, this is what we should be thinking about.

EDITED TO ADD (2/3): Microsoft has done the same research.

EDITED TO ADD (2/4): And Microsoft patented it.

Posted on February 3, 2014 at 5:09 AM

Security Risks of Too Much Security

All of the anti-counterfeiting features of the new Canadian $100 bill are resulting in people not bothering to verify them.

The fanfare about the security features on the bills may be part of the problem, said RCMP Sgt. Duncan Pound.

“Because the polymer series’ notes are so secure … there’s almost an overconfidence among retailers and the public in terms of when you sort of see the strip, the polymer looking materials, everybody says ‘oh, this one’s going to be good because you know it’s impossible to counterfeit,'” he said.

“So people don’t actually check it.”

Posted on May 20, 2013 at 6:34 AM

Young Man in "Old Man" Mask Boards Plane in Hong Kong

It’s kind of an amazing story. A young Asian man used a rubber mask to disguise himself as an old Caucasian man and, with a passport photo that matched his disguise, got through all customs and airport security checks and onto a plane to Canada.

The fact that this sort of thing happens occasionally doesn’t surprise me. It’s human nature that we miss this sort of thing. I wrote about it in Beyond Fear (pages 153–4):

No matter how much training they get, airport screeners routinely miss guns and knives packed in carry-on luggage. In part, that’s the result of human beings having developed the evolutionary survival skill of pattern matching: the ability to pick out patterns from masses of random visual data. Is that a ripe fruit on that tree? Is that a lion stalking quietly through the grass? We are so good at this that we see patterns in anything, even if they’re not really there: faces in inkblots, images in clouds, and trends in graphs of random data. Generating false positives helped us stay alive; maybe that wasn’t a lion that your ancestor saw, but it was better to be safe than sorry. Unfortunately, that survival skill also has a failure mode. As talented as we are at detecting patterns in random data, we are equally terrible at detecting exceptions in uniform data. The quality-control inspector at Spacely Sprockets, staring at a production line filled with identical sprockets looking for the one that is different, can’t do it. The brain quickly concludes that all the sprockets are the same, so there’s no point paying attention. Each new sprocket confirms the pattern. By the time an anomalous sprocket rolls off the assembly line, the brain simply doesn’t notice it. This psychological problem has been identified in inspectors of all kinds; people can’t remain alert to rare events, so they slip by.

A customs officer spends hours looking at people and comparing their faces with their passport photos. They do it on autopilot. Will they catch someone in a rubber mask that looks like their passport photo? Probably, but certainly not all the time.

Yes, this is a security risk, but it’s not a big one. Because while—occasionally—a gun can slip through a metal detector or a masked man can slip through customs, it doesn’t happen reliably. So the bad guys can’t build a plot around it.

One last point: the young man in the old-man mask was captured by Canadian police. His fellow passengers noticed him. So in the end, his plot failed. Security didn’t fail, although a bunch of pieces of it did.

EDITED TO ADD (11/10): Comment (from below) about what actually happened.

Posted on November 8, 2010 at 2:55 PM

Patrolling the U.S./Canada Border

Doesn’t the DHS have anything else to do?

As someone who believes that our nation has a right to enforce its borders, I should have been gratified when the Immigrations official at the border saw the canoe on our car and informed us that anyone who crossed the nearby international waterway illegally would be arrested and fined as much as $5,000.

Trouble is, the river wasn’t the Rio Grande, but the St. Croix, which defines the border between Maine and New Brunswick, Canada. And the threat of arrest wasn’t aimed at illegal immigrants or terrorists but at canoeists like myself.

The St. Croix is a wild river that flows through unpopulated country. Primitive campsites are maintained on both shores, some accessible by logging roads, but most reached only by water or by bushwhacking for miles through thick forest and marsh. There are easier ways to sneak into the U.S. from Canada. According to Homeland Security regulations, however, canoeists who begin their trip in Canada cannot step foot on American soil, thus putting half the campsites off limits. It is not an idle threat; the U.S. Border Patrol makes regular helicopter flights down the river.

Posted on June 17, 2010 at 6:57 AM

Electronic Health Record Security Analysis

In British Columbia:

When Auditor-General John Doyle and his staff investigated the security of electronic record-keeping at the Vancouver Coastal Health Authority, they found trouble everywhere they looked.

“In every key area we examined, we found serious weaknesses,” wrote Doyle. “Security controls throughout the network and over the database were so inadequate that there was a high risk of external and internal attackers being able to access or extract information without the authority even being aware of it.”

[…]

“No intrusion prevention and detection systems exist to prevent or detect certain types of [online] attacks. Open network connections in common business areas. Dial-in remote access servers that bypass security. Open accounts existing, allowing health care data to be copied even outside the Vancouver Coastal Health Care authority at any time.”

More than 4,000 users were found to have access to the records in the database, many of them at a far higher level than necessary.

[…]

“Former client records and irrelevant records for current clients are still accessible to system users. Hundreds of former users, both employees and contractors, still have access to resources through active accounts, network accounts, and virtual private network accounts.”

While this report is from Canada, the same issues apply to any electronic patient record system in the U.S. What I find really interesting is that the Canadian government actually conducted a security analysis of the system, rather than just maintaining that everything would be fine. I wish the U.S. would do something similar.
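The specific weakness the auditors flag, active accounts belonging to departed employees and contractors, is also one of the easiest to check mechanically. A minimal sketch, assuming hypothetical CSV exports (`accounts.csv` from the directory, `hr.csv` from human resources) rather than any real PARIS data:

```python
import csv
from datetime import date

def stale_accounts(accounts_path: str, hr_path: str) -> list[str]:
    """Flag still-active accounts whose owners have left the organization.

    accounts.csv columns (assumed): user, last_login, active
    hr.csv columns (assumed):       user, separation_date (ISO, blank if current)"""
    separated = {}
    with open(hr_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["separation_date"]:
                separated[row["user"]] = date.fromisoformat(row["separation_date"])

    flagged = []
    with open(accounts_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["active"] == "true" and row["user"] in separated:
                flagged.append(row["user"])
    return flagged
```

A scheduled job running a check like this, and disabling what it flags, would have closed the hundreds of stale accounts the audit found.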

The report, “The PARIS System for Community Care Services: Access and Security,” is here.

Posted on March 23, 2010 at 12:23 PM

