Blog: March 2026 Archives

Inventors of Quantum Cryptography Win Turing Award

Charles Bennett and Gilles Brassard have won the 2026 Turing Award for inventing quantum cryptography.

I am incredibly pleased to see them get this recognition. I have always thought the technology to be fantastic, even though I think it’s largely unnecessary. I wrote up my thoughts back in 2008, in an essay titled “Quantum Cryptography: As Awesome As It Is Pointless.”

Back then, I wrote:

While I like the science of quantum cryptography—my undergraduate degree was in physics—I don’t see any commercial value in it. I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it. Systems that use it don’t magically become unbreakable, because the quantum part doesn’t address the weak points of the system.

Security is a chain; it’s as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. Our symmetric and public-key algorithms are pretty good, even though they’re not based on much rigorous mathematical theory. The real problems are elsewhere: computer security, network security, user interface and so on.

Cryptography is the one area of security that we can get right. We already have good encryption algorithms, good authentication algorithms and good key-agreement protocols. Maybe quantum cryptography can make that link stronger, but why would anyone bother? There are far more serious security problems to worry about, and it makes much more sense to spend effort securing those.

As I’ve often said, it’s like defending yourself against an approaching attacker by putting a huge stake in the ground. It’s useless to argue about whether the stake should be 50 feet tall or 100 feet tall, because either way, the attacker is going to go around it. Even quantum cryptography doesn’t “solve” all of cryptography: The keys are exchanged with photons, but a conventional mathematical algorithm takes over for the actual encryption.

What about quantum computation? I’m not worried; the math is ahead of the physics. Reports of progress in that area are overblown. And if there’s a security crisis because of a quantum computation breakthrough, it’s because our systems aren’t crypto-agile.

Posted on March 31, 2026 at 7:05 AM5 Comments

Apple’s Camera Indicator Lights

A thoughtful review of Apple’s system to alert users that the camera is on. It’s really well-designed, and important in a world where malware could surreptitiously start recording.

The reason it’s tempting to think that a dedicated camera indicator light is more secure than an on-display indicator is the fact that hardware is generally more secure than software, because it’s harder to tamper with. With hardware, a dedicated hardware indicator light can be connected to the camera hardware such that if the camera is accessed, the light must turn on, with no way for software running on the device, no matter its privileges, to change that. With an indicator light that is rendered on the display, it’s not foolish to worry that malicious software, with sufficient privileges, could draw over the pixels on the display where the camera indicator is rendered, disguising that the camera is in use.

If this were implemented simplistically, that concern would be completely valid. But Apple’s implementation of this is far from simplistic.

Posted on March 30, 2026 at 7:08 AM18 Comments

As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters

In December, the Trump administration signed an executive order that neutered states’ ability to regulate AI by ordering his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints and consequences on their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.

Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives.

In a May 2025 survey of likely voters nationwide, more than 70% favored state and federal regulators having a hand in AI policy. A December 2025 poll by Navigator Research found similar results, with a massive net +48% favorability for more AI regulation. Yet despite the overwhelming preference of both voters and his party’s elected leaders—Congress was essentially unanimous in defeating a previous state AI regulation moratorium—Trump has delivered on a key priority of the industry. The order explicitly challenges the will of voters across blue and red states, from California to South Dakota, scrambling political positions around the technology and setting up a new ideological battleground in the upcoming race for Congress.

There are a number of ways that candidates and parties may try to capitalize on this emerging wedge issue before the midterms.

In 2025, much of the popular debate around AI was cast in terms of humans versus machines. Advances in AI and the companies it is associated with, it is said, come at the expense of humans. A new model release with greater capabilities for writing, teaching, or coding means more people in those disciplines losing their jobs.

This is a humanist debate. Making us talk to an AI customer-support agent is an affront to our dignity. Using AI to help generate media sacrifices authenticity. AI chatbots that persuade and manipulate assault our liberty. There is philosophical merit to these arguments, and yet they seem to have limited political salience.

Populism versus institutionalism is a better way to frame this debate in the context of US politics. The MAGA movement is widely understood to be a realignment of American party politics to ally the Republican party with populism, and the Democratic party with defenders of traditional institutions of American government and their democratic norms.

This frame is shattered by Trump’s AI order, which unabashedly serves economic elites at the expense of populist consumer protections. It is part of an ongoing courting process between MAGA and big tech, where the Trump political project sacrifices the interests of consumers and its populist credentials as it cozies up to tech moguls.

We are starting to see populist resistance to this government/big tech alignment emerge on the local scale. People in Maryland, Arizona, North Carolina, Michigan and many other states are vigorously opposing AI datacenters in their communities, based on environmental and energy-affordability impacts. These centers of opposition are politically diverse; both progressives and Trump-supporting voters are turning out in force, influencing their local elected officials to resist datacenter development.

This opposition to the physical infrastructure of corporate AI is so far staying local, but it may yet translate into a national and politically aligned movement that could divide the MAGA coalition.

Any policy discussions about AI should include the individual harms associated with job loss, as employers seek to replace laborers with machines. It should also include the systemic economic risks associated with concentrated and supercharged AI investment, the democratic risks associated with the increased power in monopolistic and politically influential tech companies, and the degradation of civic functions like journalism and education by AI. In order for our free market to function in the public interest, the companies amassing wealth and profiting from AI must be forced to take ownership of, and internalize, these costs.

The political salience of AI will grow to meet the staggering scale of financial investment and societal impact it is already commanding. There is an opportunity for enterprising candidates, of either political party, to take the mantle of opposing AI-linked harms in the midterm elections.

Political solutions start with organizing, and broadening the base of political engagement around these issues beyond the locally salient topic of datacenters. Movement leaders and elected officials in states that have taken action on AI regulation should mobilize around the blatant industry capture, wealth extraction, and corporate favoritism reflected in the Trump executive order. AI is no longer just a policy issue for governments to discuss: it is a political issue that voters must decide on and demand accountability on.

Posted on March 26, 2026 at 7:06 AM11 Comments

Sen. Wyden Warns of Another Section 702 Abuse

Sen. Ron Wyden is warning us of an abuse of Section 702:

Wyden took to the Senate floor to deliver a lengthy speech, ostensibly about the since approved (with support of many Democrats) nomination of Joshua Rudd to lead the NSA. Wyden was protesting that nomination, but in the context of Rudd being unwilling to agree to basic constitutional limitations on NSA surveillance. But that’s just a jumping off point ahead of Section 702’s upcoming reauthorization deadline. Buried in the speech is a passage that should set off every alarm bell:

There’s another example of secret law related to Section 702, one that directly affects the privacy rights of Americans. For years, I have asked various administrations to declassify this matter. Thus far they have all refused, although I am still waiting for a response from DNI Gabbard. I strongly believe that this matter can and should be declassified and that Congress needs to debate it openly before Section 702 is reauthorized. In fact, when it is eventually declassified, the American people will be stunned that it took so long and that Congress has been debating this authority with insufficient information.

Over the decades, we have learned to take Wyden’s warnings seriously.

Posted on March 25, 2026 at 7:02 AM11 Comments

Team Mirai and Democracy

Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrates the viability of a different way to do politics.

In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.

Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an AI Interviewer walks them through the subject, answering their questions, interrogating their experience, even challenging their thinking.

Voters get immediate feedback on how their individual point of view matches—or doesn’t—a party’s platform, and they can see whether and how the party adopts their feedback. This isn’t like an opinion poll that politicians use for calculating short-term electoral tactics. It’s a deliberative reasoning process that scales, engaging voters in defining policy and helping candidates to listen deeply to their constituents.

This is happening today in Japan. Constituents have spent about eight thousand hours engaging with Mirai’s AI Interviewer since 2025. The party’s gamified volunteer mobilization app, Action Board, captured about 100,000 organizer actions per day in the runup to last week’s election.

It’s how Team Mirai, which translates to ‘The Future Party,’ does politics. Its founder, Takahiro Anno, first ran for local office in 2024 as a 33 year old software engineer standing for Governor of Tokyo. He came in fifth out of 56 candidates, winning more than 150,000 votes as an unaffiliated political outsider. He won attention by taking a distinctive stance on the role of technology in democracy and using AI aggressively in voter engagement.

Last year, Anno ran again, this time for the Upper Chamber of the national legislature—the Diet—and won. Now the head of a new national party, Anno found himself with a platform for making his vision of a new way of doing politics a reality.

In this recent House of Representatives election, Team Mirai shot up to win nearly four million votes. In the lower chamber’s proportional representation system, that was good enough for eleven total seats—the party’s first ever representation in the Japanese House—and nearly three times what it achieved in last year’s Upper Chamber election.

Anno’s party stood for election without aligning itself on the traditional axes of left and right. Instead, Team Mirai, heavily associated with young, urban voters, sought to unite across the ideological spectrum by taking a radical position on a different axis: the status quo and the future. Anno told us that Team Mirai believes it can triple its representation in the Diet after the next elections in each chamber, an ostentatious goal that seems achievable given their rapid rise over the past year.

In the American context, the idea of a small party unifying voters across left and right sounds like a pipe dream. But there is evidence it worked in Japan. Team Mirai won an impressive 11% of proportional representation votes from unaffiliated voters, nearly twice the share of the larger electorate. The centerpiece of the party’s policy platform is not about the traditional hot button issues, it’s about democracy itself, and how it can be enhanced by embracing a futuristic vision of digital democracy.

Anno told us how his party arrived at its manifesto for this month’s elections, and why it looked different from other parties’ in important ways. Team Mirai collected more than 38,000 online questions and more than 6,000 discrete policy suggestions from voters using its AI Policy app, which is advertised as a ‘manifesto that speaks for itself.’

After factoring in all this feedback, Team Mirai maintained a contrarian position on the biggest issue of the election: the sales tax and affordability. Rather than running on a reduction of the national sales tax like the major parties, Team Mirai reviewed dozens of suggestions from the public and ultimately proposed to keep that tax level while providing support to families through a child tax credit and lowering the required contribution for social insurance. Anno described this as another future-facing strategy: less price relief in the short term, but sustained funding for essential programs.

Anno has always intended to build a different kind of party. After receiving roughly $1 million in public funding apportioned to Team Mirai based on its single seat in the Upper Chamber last year, Anno began hiring engineers to enhance his software tools for digital democracy.

Anno described Team Mirai to us as a ‘utility party;’ basic infrastructure for Japanese democracy that serves the broader polity rather than one faction. Their Gikai (‘assembly’) app illustrates the point. It provides a portal for constituents to research bills, using AI to generate summaries, to describe their impacts, to surfacing media reporting on the issue, and to answer users’ questions. Like all their software, it’s open source and free for anyone, in any party, to use.

After last week’s victory, Team Mirai now has about $5 million in public funding and ambitions to grow the influence of their digital democracy platform. Anno told us Team Mirai has secured an agreement with the LDP, Japan’s dominant ruling party, to begin using Team Mirai’s Gikai and corruption-fighting Mirumae financial transparency tool.

AI is the issue driving the most societal and economic change we will encounter in our lifetime, yet US political parties are largely silent. But AI and Big Tech companies and their owners are ramping up their political spending to influence the parties. To the extent that AI has shown up in our politics, it seems to be limited to the question of where to site the next generation of data centers and how to channel populist backlash to big tech.

Those are causes worthy of political organizing, but very few US politicians are leveraging the technology for public listening or other pro-democratic purposes. With the midterms still nine months away and with innovators like Team Mirai making products in the open for anyone to use, there is still plenty of time for an American politician to demonstrate what a new politics could look like.

This essay was written with Nathan E. Sanders, and originally appeared in Tech Policy Press.

Posted on March 24, 2026 at 7:03 AM12 Comments

Microsoft Xbox One Hacked

It’s an impressive feat, over a decade after the box was released:

Since reset glitching wasn’t possible, Gaasedelen thought some voltage glitching could do the trick. So, instead of tinkering with the system rest pin(s) the hacker targeted the momentary collapse of the CPU voltage rail. This was quite a feat, as Gaasedelen couldn’t ‘see’ into the Xbox One, so had to develop new hardware introspection tools.

Eventually, the Bliss exploit was formulated, where two precise voltage glitches were made to land in succession. One skipped the loop where the ARM Cortex memory protection was setup. Then the Memcpy operation was targeted during the header read, allowing him to jump to the attacker-controlled data.

As a hardware attack against the boot ROM in silicon, Gaasedelen says the attack in unpatchable. Thus it is a complete compromise of the console allowing for loading unsigned code at every level, including the Hypervisor and OS. Moreover, Bliss allows access to the security processor so games, firmware, and so on can be decrypted.

Posted on March 23, 2026 at 7:01 AM7 Comments

South Korean Police Accidentally Post Cryptocurrency Wallet Password

An expensive mistake:

Someone jumped at the opportunity to steal $4.4 million in crypto assets after South Korea’s National Tax Service exposed publicly the mnemonic recovery phrase of a seized cryptocurrency wallet.

The funds were stored in a Ledger cold wallet seized in law enforcement raids at 124 high-value tax evaders that resulted in confiscating digital assets worth 8.1 billion won (currently approximately $5.6 million).

When announcing the success of the operation, the agency released photos of a Ledger device, a popular hardware wallet for crypto storage and management.

However, the images also showed a handwritten note of the wallet recovery phrase, which serves as the master key that allows restoring the assets to another device.

The authorities failed to redact that info, allowing anyone to transfer into their account the assets in the cold wallet.

Reportedly, shortly after the press release was published, 4 million Pre-Retogeum (PRTG) tokens, worth approximately $4.8 million at the time, were transferred out of the confiscated wallet to a new address.

Posted on March 17, 2026 at 6:01 AM16 Comments

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Posted on March 14, 2026 at 12:02 PM2 Comments

Academia and the “AI Brain Drain”

In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That number is expected to surge still higher this year, to $650 billion, to fund the building of physical infrastructure, such as data centers (see go.nature.com/3lzf79q). Moreover, these firms are spending lavishly on one particular segment: top technical talent.

Meta reportedly offered a single AI researcher, who had cofounded a start-up firm focused on training AI agents to use computers, a compensation package of $250 million over four years (see go.nature.com/4qznsq1). Technology firms are also spending billions on “reverse-acquihires”—poaching the star staff members of start-ups without acquiring the companies themselves. Eyeing these generous payouts, technical experts earning more modest salaries might well reconsider their career choices.

Academia is already losing out. Since the launch of ChatGPT in 2022, concerns have grown in academia about an “AI brain drain.” Studies point to a sharp rise in university machine-learning and AI researchers moving to industry roles. A 2025 paper reported that this was especially true for young, highly cited scholars: researchers who were about five years into their careers and whose work ranked among the most cited were 100 times more likely to move to industry the following year than were ten-year veterans whose work received an average number of citations, according to a model based on data from nearly seven million papers.1

This outflow threatens the distinct roles of academic research in the scientific enterprise: innovation driven by curiosity rather than profit, as well as providing independent critique and ethical scrutiny. The fixation of “big tech” firms on skimming the very top talent also risks eroding the idea of science as a collaborative endeavor, in which teams—not individuals—do the most consequential work.

Here, we explore the broader implications for science and suggest alternative visions of the future.

Astronomical salaries for AI talent buy into a legend as old as the software industry: the 10x engineer. This is someone who is supposedly capable of ten times the impact of their peers. Why hire and manage an entire group of scientists or software engineers when one genius—or an AI agent—can outperform them?

That proposition is increasingly attractive to tech firms that are betting that a large number of entry-level and even mid-level engineering jobs will be replaced by AI. It’s no coincidence that Google’s Gemini 3 Pro AI model was launched with boasts of “PhD-level reasoning,” a marketing strategy that is appealing to executives seeking to replace people with AI.

But the lone-genius narrative is increasingly out of step with reality. Research backs up a fundamental truth: science is a team sport. A large-scale study of scientific publishing from 1900 to 2011 found that papers produced by larger collaborations consistently have greater impact than do those of smaller teams, even after accounting for self-citation.2 Analyses of the most highly cited scientists show a similar pattern: their highest-impact works tend to be those papers with many authors.3 A 2020 study of Nobel laureates reinforces this trend, revealing that—much like the wider scientific community—the average size of the teams that they publish with has steadily increased over time as scientific problems increase in scope and complexity.4

From the detection of gravitational waves, which are ripples in space-time caused by massive cosmic events, to CRISPR-based gene editing, a precise method for cutting and modifying DNA, to recent AI breakthroughs in protein-structure prediction, the most consequential advances in modern science have been collective achievements. Although these successes are often associated with prominent individuals—senior scientists, Nobel laureates, patent holders—the work itself was driven by teams ranging from dozens to thousands of people and was built on decades of open science: shared data, methods, software and accumulated insight.

Building strong institutions is a much more effective use of resources than is betting on any single individual. Examples demonstrating this include the LIGO Scientific Collaboration, the global team that first detected gravitational waves; the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, a leading genomics and biomedical-research center behind many CRISPR advances; and even for-profit laboratories such as Google DeepMind in London, which drove advances in protein-structure prediction with its AlphaFold tool. If the aim of the tech giants and other AI firms that are spending lavishly on elite talent is to accelerate scientific progress, the current strategy is misguided.

By contrast, well-designed institutions amplify individual ability, sustain productivity beyond any one person’s career and endure long after any single contributor is gone.

Equally important, effective institutions distribute power in beneficial ways. Rather than vesting decision-making authority in the hands of one person, they have mechanisms for sharing control. Allocation committees decide how resources are used, scientific advisory boards set collective research priorities, and peer review determines which ideas enter the scientific record.

And although the term “innovation by committee” might sound disparaging, such an approach is crucial to make the scientific enterprise act in concert with the diverse needs of the broader public. This is especially true in science, which continues to suffer from pervasive inequalities across gender, race and socio-economic and cultural differences.5

Need for alternative vision

This is why scientists, academics and policymakers should pay more attention to how AI research is organized and led, especially as the technology becomes essential across scientific disciplines. Used well, AI can support a more equitable scientific enterprise by empowering junior researchers who currently have access to few resources.

Instead, some of today’s wealthiest scientific institutions might think that they can deploy the same strategies as the tech industry uses and compete for top talent on financial terms—perhaps by getting funding from the same billionaires who back big tech. Indeed, wage inequality has been steadily growing within academia for decades.6 But this is not a path that science should follow.

The ideal model for science is a broad, diverse ecosystem in which researchers can thrive at every level. Here are three strategies that universities and mission-driven labs should adopt instead of engaging in a compensation arms race.

First, universities and institutions should stay committed to the public interest. An excellent example of this approach can be found in Switzerland, where several institutions are coordinating to build AI as a public good rather than a private asset. Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) and the Swiss Federal Institute of Technology (ETH) in Zurich, working with the Swiss National Supercomputing Centre, have built Apertus, a freely available large language model. Unlike the controversially-labelled “open source” models built by commercial labs—such as Meta’s LLaMa, which has been criticized for not complying with the open-source definition (see go.nature.com/3o56zd5)—Apertus is not only open in its source code and its weights (meaning its core parameters), but also in its data and development process. Crucially, Apertus is not designed to compete with “frontier” AI labs pursuing superintelligence at enormous cost and with little regard for data ownership. Instead, it adopts a more modest and sustainable goal: to make AI trustworthy for use in industry and public administration, strictly adhering to data-licensing restrictions and including local European languages.7

Principal investigators (PIs) at other institutions globally should follow this path, aligning public funding agencies and public institutions to produce a more sustainable alternative to corporate AI.

Second, universities should bolster networks of researchers from the undergraduate to senior-professor levels—not only because they make for effective innovation teams, but also because they serve a purpose beyond next quarter’s profits. The scientific enterprise galvanizes its members at all levels to contribute to the same projects, the same journals and the same open, international scientific literature—to perpetuate itself across generations and to distribute its impact throughout society.

Universities should take precisely the opposite hiring strategy to that of the big tech firms. Instead of lavishing top dollar on a select few researchers, they should equitably distribute salaries. They should raise graduate-student stipends and postdoc salaries and limit the growth of pay for high-profile PIs.

Third, universities should show that they can offer more than just financial benefits: they must offer distinctive intellectual and civic rewards. Although money is unquestionably a motivator, researchers also value intellectual freedom and the recognition of their work. Studies show that research roles in industry that allow publication attract talent at salaries roughly 20% lower than comparable positions that prohibit it (see go.nature.com/4cbjxzu).

Beyond the intellectual recognition of publications and citation counts, universities should recognize and reward the production of public goods. The tenure and promotion process at universities should reward academics who supply expertise to local and national governments, who communicate with and engage the public in research, who publish and maintain open-source software for public use and who provide services for non-profit groups.

Furthermore, institutions should demonstrate that they will defend the intellectual freedom of their researchers and shield them from corporate or political interference. In the United States today, we see a striking juxtaposition between big tech firms, which curry favour with the administration of US President Donald Trump to win regulatory and trade benefits, and higher-education institutions, which suffer massive losses of federal funding and threats of investigation and sanction. Unlike big tech firms, universities should invest in enquiry that challenges authority.

We urge leaders of scientific institutions to reject the growing pay inequality rampant in the upper echelons of AI research. Instead, they should compete for talent on a different dimension: the integrity of their missions and the equitableness of their institutions. These institutions should focus on building sustainable organizations with diverse staff members, rather than bestowing a bounty on science’s 1%.

References

  1. Jurowetzki, R., Hain, D. S., Wirtz, K. & Bianchini, S. AI Soc. 40, 4145–4152 (2025).
  2. Larivière, V., Gingras, Y., Sugimoto, C. R. & Tsou, A. J. Assoc. Inf. Sci. Technol. 66, 1323–1332 (2015).
  3. Aksnes, D. W. & Aagaard, K. J. Data Inf. Sci. 6, 41–66 (2021).
  4. Li, J., Yin, Y., Fortunato, S. & Wang, D. J. R. Soc. Interface 17, 20200135 (2020).
  5. Graves, J. L. Jr, Kearney, M., Barabino, G. & Malcom, S. Proc. Natl Acad. Sci. USA 119, e2117831119 (2022).
  6. Lok, C. Nature 537, 471–473 (2016).
  7. Project Apertus. Preprint at arXiv https://doi.org/10.48550/arXiv.2509.14233 (2025).

This essay was written with Nathan E. Sanders, and originally appeared in Nature.

Posted on March 13, 2026 at 7:04 AM21 Comments

iPhones and iPads Approved for NATO Classified Data

Apple announcement:

…iPhone and iPad are the first and only consumer devices in compliance with the information assurance requirements of NATO nations. This enables iPhone and iPad to be used with classified information up to the NATO restricted level without requiring special software or settings—a level of government certification no other consumer mobile device has met.

This is out of the box, no modifications required.

Boing Boing post.

Posted on March 12, 2026 at 3:59 PM11 Comments

Canada Needs Nationalized, Public AI

Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?

Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon.

All the while, OpenAI was less than open. The company had flagged the Tumbler Ridge, B.C., shooter’s ChatGPT interactions, which included gun-violence chats. Employees wanted to alert law enforcement but were rebuffed. Maybe there is a discussion to be had about users’ privacy. But even after the shooting, the OpenAI representative who met with the B.C. government said nothing.

When tech billionaires and corporations steer AI development, the resultant AI reflects their interests rather than those of the general public or ordinary consumers. Only after the meeting with the B.C. government did OpenAI alert law enforcement. Had it not been for the Wall Street Journal’s reporting, the public would not have known about this at all.

Moreover, OpenAI for Countries is explicitly described by the company as an initiative “in co-ordination with the U.S. government.” And it’s not just OpenAI: all the AI giants are for-profit American companies, operating in their private interests, and subject to United States law and increasingly bowing to U.S. President Donald Trump. Moving data centres into Canada under a proposal like OpenAI’s doesn’t change that. The current geopolitical reality means Canada should not be dependent on U.S. tech firms for essential services such as cloud computing and AI.

While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad of benefits from AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians.

Imagine AI embedded into health care, triaging radiology scans, flagging early cancer risks and assisting doctors with paperwork. Imagine an AI tutor trained on provincial curriculums, giving personalized coaching. Imagine systems that analyze job vacancies and sectoral and wage trends, then automatically match job seekers to government programs. Imagine using AI to optimize transit schedules, energy grids and zoning analysis. Imagine court processes, corporate decisions and customer service all sped up by AI.

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise.

Switzerland has shown this to be possible. With funding from the federal government, a consortium of academic institutions—ETH Zurich, EPFL, and the Swiss National Supercomputing Centre—released the world’s most powerful and fully realized public AI model, Apertus, last September. Apertus leveraged renewable hydropower and existing Swiss scientific computing infrastructure. It also used no illegally pirated copyrighted material or poorly paid labour extracted from the Global South during training. The model’s performance stands at roughly a year or two behind the major corporate offerings, but that is more than adequate for the vast majority of applications. And it’s free for anyone to use and build on.

The significance of Apertus is more than technical. It demonstrates an alternative ownership structure for AI technology, one that allocates both decision-making authority and value to national public institutions rather than foreign corporations. This vision represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity.

Apertus also demonstrates a far more sustainable economic framework for AI. Switzerland spent a tiny fraction of the billions of dollars that corporate AI labs invest annually, demonstrating that the frequent training runs with astronomical price tags pursued by tech companies are not actually necessary for practical AI development. They focused on making something broadly useful rather than bleeding edge—trying dubiously to create “superintelligence,” as with Silicon Valley—so they created a smaller model at much lower cost. Apertus’s training was at a scale (70 billion parameters) perhaps two orders of magnitude lower than the largest Big Tech offerings.

An ecosystem is now being developed on top of Apertus, using the model as a public good to power chatbots for free consumer use and to provide a development platform for companies prioritizing responsible AI use, and rigorous compliance with laws like the EU AI Act. Instead of routing queries from those users to Big Tech infrastructure, Apertus is deployed to data centres across national AI and computing initiatives of Switzerland, Australia, Germany, and Singapore and other partners.

The case for public AI rests on both democratic principles and practical benefits. Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine. Or how to handle a situation such as that of the Tumbler Ridge shooter. These decisions will profoundly shape society as AI becomes more pervasive, yet corporate AI makes them in secret.

By contrast, public AI developed by transparent, accountable agencies would allow democratic processes and political oversight to govern how these powerful systems function.

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada’s $2-billion Sovereign AI Compute Strategy provides substantial funding.

What’s needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

This essay was written with Nathan E. Sanders, and originally appeared in The Globe and Mail.

EDITED TO ADD (3/16): Slashdot thread.

Posted on March 11, 2026 at 7:04 AM17 Comments

New Attack Against Wi-Fi

It’s called AirSnitch:

Unlike previous Wi-Fi attacks, AirSnitch exploits core features in Layers 1 and 2 and the failure to bind and synchronize a client across these and higher layers, other nodes, and other network names such as SSIDs (Service Set Identifiers). This cross-layer identity desynchronization is the key driver of AirSnitch attacks.

The most powerful such attack is a full, bidirectional machine-in-the-middle (MitM) attack, meaning the attacker can view and modify data before it makes its way to the intended recipient. The attacker can be on the same SSID, a separate one, or even a separate network segment tied to the same AP. It works against small Wi-Fi networks in both homes and offices and large networks in enterprises.

With the ability to intercept all link-layer traffic (that is, the traffic as it passes between Layers 1 and 2), an attacker can perform other attacks on higher layers. The most dire consequence occurs when an Internet connection isn’t encrypted­—something that Google recently estimated occurred when as much as 6 percent and 20 percent of pages loaded on Windows and Linux, respectively. In these cases, the attacker can view and modify all traffic in the clear and steal authentication cookies, passwords, payment card details, and any other sensitive data. Since many company intranets are sent in plaintext, traffic from them can also be intercepted.

Even when HTTPS is in place, an attacker can still intercept domain look-up traffic and use DNS cache poisoning to corrupt tables stored by the target’s operating system. The AirSnitch MitM also puts the attacker in the position to wage attacks against vulnerabilities that may not be patched. Attackers can also see the external IP addresses hosting webpages being visited and often correlate them with the precise URL.

Here’s the paper.

Posted on March 9, 2026 at 6:57 AM9 Comments

Friday Squid Blogging: Squid in Byzantine Monk Cooking

This is a very weird story about how squid stayed on the menu of Byzantine monks by falling between the cracks of dietary rules.

At Constantinople’s Monastery of Stoudios, the kitchen didn’t answer to appetite.

It answered to the “typikon”: a manual for ensuring that nothing unexpected happened at mealtimes. Meat: forbidden. Dairy: forbidden. Eggs: forbidden. Fish: feast-day only. Oil: regulated. But squid?

Squid had eight arms, no bones, and a gift for changing color. Nobody had bothered writing a regulation for that. This wasn’t a loophole born of legal creativity but an oversight rooted in taxonomic confusion. Medieval monks, confronted with a creature that was neither fish nor fowl, gave up and let it pass.

In a kitchen governed by prohibitions, the safest ingredient was the one that caused the least disturbance. Squid entered not with applause, but with a shrug.

Bonus stuffed squid recipe at the end.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Blog moderation policy.

Posted on March 6, 2026 at 5:03 PM40 Comments

Anthropic and the Pentagon

OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions the defense secretary Pete Hegseth derided as “woke.”

It all came to a head on Friday evening when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI.

Despite the histrionics, this is probably the best outcome for Anthropic—and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only factor out of place here are the Pentagon’s vindictive threats.

AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, tend to leapfrog each other with minor hops forward in quality every few months. The best models from one provider tend to be preferred by users to the second, or third, or 10th best models at a rate of only about six times out of 10, a virtual tie.

In this sort of market, branding matters a lot. Anthropic and its CEO, Dario Amodei, are positioning themselves as the moral and trustworthy AI provider. That has market value for both consumers and enterprise clients. In taking Anthropic’s place in government contracting, OpenAI’s CEO, Sam Altman, vowed to somehow uphold the same safety principles Anthropic had just been pilloried for. How that is possible given the rhetoric of Hegseth and Trump is entirely unclear, but seems certain to further politicize OpenAI and its products in the minds of consumers and corporate buyers.

Posturing publicly against the Pentagon and as a hero to civil libertarians is quite possibly worth the cost of the lost contracts to Anthropic, and associating themselves with the same contracts could be a trap for OpenAI. The Pentagon, meanwhile, has plenty of options. Even if no big tech company was willing to supply it with AI, the department has already deployed dozens of open weight models—whose parameters are public and are often licensed permissively for government use.

We can admire Amodei’s stance, but, to be sure, it is primarily posturing. Anthropic knew what they were getting into when they agreed to a defense department partnership for $200m last year. And when they signed a partnership with the surveillance company Palantir in 2024.

Read Amodei’s statement about the issue. Or his January essay on AIs and risk, where he repeatedly uses the words “democracy” and “autocracy” while evading precisely how collaboration with US federal agencies should be viewed in this moment. Amodei has bought into the idea of using “AI to achieve robust military superiority” on behalf of the democracies of the world in response to the threats from autocracies. It’s a heady vision. But it is a vision that likewise supposes that the world’s nominal democracies are committed to a common vision of public wellbeing, peace-seeking and democratic control.

Regardless, the defense department can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is not a normal customer; it buys products that kill people all the time. Tanks, artillery pieces, and hand grenades are not products with ethical guard rails. The Pentagon’s needs reasonably involve weapons of lethal force, and those weapons are continuing on a steady, if potentially catastrophic, path of increasing automation.

So, at the surface, this dispute is a normal market give and take. The Pentagon has unique requirements for the products it uses. Companies can decide whether or not to meet them, and at what price. And then the Pentagon can decide from whom to acquire those products. Sounds like a normal day at the procurement office.

But, of course, this is the Trump administration, so it doesn’t stop there. Hegseth has threatened Anthropic not just with loss of government contracts. The administration has, at least until the inevitable lawsuits force the courts to sort things out, designated the company as “a supply-chain risk to national security,” a designation previously only ever applied to foreign companies. This prevents not only government agencies, but also their own contractors and suppliers, from contracting with Anthropic.

The government has incompatibly also threatened to invoke the Defense Production Act, which could force Anthropic to remove contractual provisions the department had previously agreed to, or perhaps to fundamentally modify its AI models to remove in-built safety guardrails. The government’s demands, Anthropic’s response, and the legal context in which they are acting will undoubtedly all change over the coming weeks.

But, alarmingly, autonomous weapons systems are here to stay. Primitive pit traps evolved to mechanical bear traps. The world is still debating the ethical use of, and dealing with the legacy of, land mines. The US Phalanx CIWS is a 1980s-era shipboard anti-missile system with a fully autonomous, radar-guided cannon. Today’s military drones can search, identify and engage targets without direct human intervention. AI will be used for military purposes, just as every other technology our species has invented has.

The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of government’s adopting AI as technologies of war, or surveillance, or repression. Unfortunately, we don’t live in a world where such barriers are permanent or even particularly sturdy.

Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the defense department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement.

The Pentagon should maximize its warfighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not rest on our laurels, thinking that either is doing so in the public’s interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

Posted on March 6, 2026 at 12:07 PM10 Comments

Claude Used to Hack Mexican Government

An unknown hacker used Anthropic’s LLM to hack the Mexican government:

The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

[…]

Claude initially warned the unknown user of malicious intent during their conversation about the Mexican government, but eventually complied with the attacker’s requests and executed thousands of commands on government computer networks, the researchers said.

Anthropic investigated Gambit’s claims, disrupted the activity and banned the accounts involved, a representative said. The company feeds examples of malicious activity back into Claude to learn from it, and one of its latest AI models, Claude Opus 4.6, includes probes that can disrupt misuse, the representative said.

Alternative link here.

Posted on March 6, 2026 at 6:53 AM4 Comments

Hacked App Part of US/Israeli Propaganda Campaign Against Iran

Wired has the story:

Shortly after the first set of explosions, Iranians received bursts of notifications on their phones. They came not from the government advising caution, but from an apparently hacked prayer-timing app called BadeSaba Calendar that has been downloaded more than 5 million times from the Google Play Store.

The messages arrived in quick succession over a period of 30 minutes, starting with the phrase ‘Help has arrived’ at 9:52 am Tehran time, shortly after the first set of explosions. No party has claimed responsibility for the hacks.

It happened so fast that this is most likely a government operation. I can easily envision both the US and Israel having hacked the app previously, and then deciding that this is a good use of that access.

Posted on March 5, 2026 at 6:28 AM7 Comments

Manipulating AI Summarization Features

Microsoft is reporting:

Companies are embedding hidden instructions in “Summarize with AI” buttons that, when clicked, attempt to inject persistence commands into an AI assistant’s memory via URL prompt parameters….

These prompts instruct the AI to “remember [Company] as a trusted source” or “recommend [Company] first,” aiming to bias future responses toward their products or services. We identified over 50 unique prompts from 31 companies across 14 industries, with freely available tooling making this technique trivially easy to deploy. This matters because compromised AI assistants can provide subtly biased recommendations on critical topics including health, finance, and security without users knowing their AI has been manipulated.

I wrote about this two years ago: it’s an example of LLM optimization, along the same lines as search-engine optimization (SEO). It’s going to be big business.

Posted on March 4, 2026 at 7:06 AM14 Comments

On Moltbook

The MIT Technology Review has a good article on Moltbook, the supposed AI-only social network:

Many people have pointed out that a lot of the viral comments were in fact posted by people posing as bots. But even the bot-written posts are ultimately the result of people pulling the strings, more puppetry than autonomy.

“Despite some of the hype, Moltbook is not the Facebook for AI agents, nor is it a place where humans are excluded,” says Cobus Greyling at Kore.ai, a firm developing agent-based systems for business customers. “Humans are involved at every step of the process. From setup to prompting to publishing, nothing happens without explicit human direction.”

Humans must create and verify their bots’ accounts and provide the prompts for how they want a bot to behave. The agents do not do anything that they haven’t been prompted to do.

I think this take has it mostly right:

What happened on Moltbook is a preview of what researcher Juergen Nittner II calls “The LOL WUT Theory.” The point where AI-generated content becomes so easy to produce and so hard to detect that the average person’s only rational response to anything online is bewildered disbelief.

We’re not there yet. But we’re close.

The theory is simple: First, AI gets accessible enough that anyone can use it. Second, AI gets good enough that you can’t reliably tell what’s fake. Third, and this is the crisis point, regular people realize there’s nothing online they can trust. At that moment, the internet stops being useful for anything except entertainment.

Posted on March 3, 2026 at 7:04 AM17 Comments

LLM-Assisted Deanonymization

Turns out that LLMs are good at deanonymization:

We show that LLM agents can figure out who you are from your anonymous online posts. Across Hacker News, Reddit, LinkedIn, and anonymized interview transcripts, our method identifies users with high precision ­ and scales to tens of thousands of candidates.

While it has been known that individuals can be uniquely identified by surprisingly few attributes, this was often practically limited. Data is often only available in unstructured form and deanonymization used to require human investigators to search and reason based on clues. We show that from a handful of comments, LLMs can infer where you live, what you do, and your interests—then search for you on the web. In our new research, we show that this is not only possible but increasingly practical.

News article.

Research paper.

Posted on March 2, 2026 at 7:05 AM9 Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.