Entries Tagged "AI"


Cybersecurity in the Age of Instant Software

AI is rapidly changing how software is written, deployed, and used. Trends point to a future where AIs can write custom software quickly and easily: “instant software.” Taken to an extreme, it might become easier for a user to have an AI write an application on demand—a spreadsheet, for example—and delete it when they’re done using it than to buy one commercially. Future systems could include a mix: both traditional long-term software and ephemeral instant software that is constantly being written, deployed, modified, and deleted.

AI is changing cybersecurity as well. In particular, AI systems are getting better at finding and patching vulnerabilities in code. This has implications for both attackers and defenders, depending on the ways this and related technologies improve.

In this essay, I want to take an optimistic view of AI’s progress, and to speculate what AI-dominated cybersecurity in an age of instant software might look like. There are a number of unknowns that will factor into how the arms race between attacker and defender might play out.

How flaw discovery might work

On the attacker side, the ability of AIs to automatically find and exploit vulnerabilities has increased dramatically over the past few months. We are already seeing both government and criminal hackers using AI to attack systems. The exploitation part is critical here, because it gives an unsophisticated attacker capabilities far beyond their understanding. As AIs get better, expect more attackers to automate their attacks using AI. And as individuals and organizations become able to run powerful AI models locally, AI companies’ ability to monitor and disrupt malicious AI use will become increasingly irrelevant.

Expect open-source software, including open-source libraries incorporated in proprietary software, to be the most targeted, because vulnerabilities are easier to find in source code. Unknown No. 1 is how well AI vulnerability discovery tools will work against closed-source commercial software packages. I believe they will soon be good enough to find vulnerabilities just by analyzing a copy of a shipped product, without access to the source code. If that’s true, commercial software will be vulnerable as well.

Particularly vulnerable will be software in IoT devices: things like internet-connected cars, refrigerators, and security cameras. Also industrial IoT software in our internet-connected power grid, oil refineries and pipelines, chemical plants, and so on. IoT software tends to be of much lower quality, and industrial IoT software tends to be legacy.

Instant software is differently vulnerable. It’s not mass market. It’s created for a particular person, organization, or network. The attacker generally won’t have access to any code to analyze, which makes it less likely to be exploited by external attackers. If it’s ephemeral, any vulnerabilities will have a short lifetime. But lots of instant software will live on networks for a long time. And if it gets uploaded to shared tool libraries, attackers will be able to download and analyze that code.

All of this points to a future where AIs will become powerful tools of cyberattack, able to automatically find and exploit vulnerabilities in systems worldwide.

Automating patch creation

But that’s just half of the arms race. Defenders get to use AI, too. These same AI vulnerability-finding technologies are even more valuable for defense. When the defensive side finds an exploitable vulnerability, it can patch the code and deny it to attackers forever.

How this works in practice depends on another related capability: the ability of AIs to patch vulnerable software, which is closely related to their ability to write secure code in the first place.

AIs are not very good at this today; the instant software that AIs create is generally filled with vulnerabilities, both because AIs write insecure code and because the people vibe coding don’t understand security. OpenClaw is a good example of this.

Unknown No. 2 is how much better AIs will get at writing secure code. The fact that they’re trained on massive corpuses of poorly written and insecure code is a handicap, but they are getting better. If they can reliably write vulnerability-free code, it would be an enormous advantage for the defender. And AI-based vulnerability-finding makes it easier for an AI to train on writing secure code.

We can envision a future where AI tools that find and patch vulnerabilities are part of the typical software development process. We can’t say that the code would be vulnerability-free—that’s an impossible goal—but it could be without any easily findable vulnerabilities. If the technology got really good, the code could become essentially vulnerability-free.

Patching lags and legacy software

For new software—both commercial and instant—this future favors the defender. For existing commercial and conventional open-source software, it’s not that simple. Right now, the world is filled with legacy software. Much of it—like IoT device software—has no dedicated security team to update it. Some of it cannot be patched at all. Just as it’s harder for AIs to find vulnerabilities when they don’t have access to the source code, it’s harder for AIs to patch software when they are not embedded in the development process.

I’m not as confident that AI systems will be able to patch vulnerabilities as easily as they can find them, because patching often requires more holistic testing and understanding. That’s Unknown No. 3: how quickly AIs will be able to create reliable software updates for the vulnerabilities they find, and how quickly customers can update their systems.

Today, there is a time lag between when a vendor issues a patch and when customers install it. That time lag is even longer for large organizational software; the risk of an update breaking the underlying software system is just too great for organizations to roll out updates without testing them first. But if AI can help speed up that process, by writing patches faster and more reliably, and by testing them in some AI-generated twin environment, the advantage goes to the defender. If not, the attacker will still have a window to attack systems until a vulnerability is patched.

Toward self-healing

In a truly optimistic future, we can imagine a self-healing network. AI agents continuously scan the ever-evolving corpus of commercial and custom AI-generated software for vulnerabilities, and automatically patch them on discovery.
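
To make that loop concrete, here is a minimal sketch of what one cycle of such an agent might look like. Everything in it is hypothetical: the scan, patch-writing, and twin-environment test functions are stand-ins for capabilities the essay speculates about, not real tools or APIs.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    artifact: str     # which piece of software is affected
    description: str  # what the scanning model found

def find_vulnerabilities(artifact: str) -> list[Vulnerability]:
    # Placeholder for an AI vulnerability-discovery model.
    return []

def generate_patch(vuln: Vulnerability) -> str:
    # Placeholder for an AI patch-writing model.
    return f"patch for {vuln.description}"

def passes_twin_tests(artifact: str, patch: str) -> bool:
    # Placeholder: regression-test the candidate patch in a cloned
    # ("twin") environment before touching production.
    return True

def deploy(artifact: str, patch: str) -> None:
    print(f"deploying to {artifact}: {patch}")

def heal(inventory: list[str]) -> None:
    """One pass of the self-healing loop over a software inventory."""
    for artifact in inventory:
        for vuln in find_vulnerabilities(artifact):
            patch = generate_patch(vuln)
            if passes_twin_tests(artifact, patch):
                deploy(artifact, patch)  # patch on discovery
            else:
                print(f"needs human review: {vuln.description}")

if __name__ == "__main__":
    heal(["billing-app", "hvac-controller"])
```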

For that to work, software license agreements will need to change. Right now, software vendors control the cadence of security patches. Giving software purchasers this ability has implications for compatibility, the right to repair, and liability. Any solutions here are the realm of policy, not tech.

If the defense can find, but can’t reliably patch, flaws in legacy software, that’s where attackers will focus their efforts. If that’s the case, we can imagine continuously evolving AI-powered intrusion detection, scanning inputs and blocking malicious attacks before they get to vulnerable software. Not as transformative as automatically patching vulnerabilities in running code, but nevertheless valuable.

The power of these defensive AI systems increases if they are able to coordinate with each other, and share vulnerabilities and updates. A discovery by one AI can quickly spread to everyone using the affected software. Again: Advantage defender.
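
As a sketch of what that sharing might look like, here is a hypothetical advisory record that defenders could exchange. The field names and the naive version check are illustrative only; a real system would also sign and authenticate these records to resist the poisoning discussed later in this essay.

```python
from dataclasses import dataclass

@dataclass
class Advisory:
    package: str        # affected software
    fixed_version: str  # first version that contains the fix
    patch_url: str      # where subscribers fetch the vetted patch

def needs_patch(advisory: Advisory, inventory: dict[str, str]) -> bool:
    """True if we run a version older than the fix.

    Naive string comparison for brevity; real code would parse
    version numbers properly.
    """
    installed = inventory.get(advisory.package)
    return installed is not None and installed < advisory.fixed_version

# One defender's discovery propagates to every subscriber:
inventory = {"libexample": "1.4.2"}
incoming = Advisory("libexample", "1.5.0", "https://example.org/fix")
if needs_patch(incoming, inventory):
    print(f"apply {incoming.patch_url}")
```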

There are other variables to consider. The relative success of attackers and defenders also depends on how plentiful vulnerabilities are, how easy they are to find, whether AIs will be able to find the more subtle and obscure vulnerabilities, and how much coordination there is among different attackers. All this comprises Unknown No. 4.

Vulnerability economics

Presumably, AIs will clean up the obvious stuff first, which means that any remaining vulnerabilities will be subtle. Finding them will take AI computing resources. In the optimistic scenario, defenders pool resources through information sharing, effectively amortizing the cost of defense. If information sharing doesn’t work for some reason, defense becomes much more expensive, as individual defenders will need to do their own research. But instant software means much more diversity in code: an advantage to the defender.

This needs to be balanced with the relative cost of attackers finding vulnerabilities. Attackers already have an inherent way to amortize the costs of finding a new vulnerability and creating a new exploit. They can vulnerability hunt cross-platform, cross-vendor, and cross-system, and can use what they find to attack multiple targets simultaneously. Fixing a common vulnerability often requires cooperation among all the relevant platforms, vendors, and systems. Again, instant software is an advantage to the defender.
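
A toy calculation, with entirely made-up numbers, shows the shape of this economics: sharing divides the defender’s cost across the pool, mass-market software divides the attacker’s cost across targets, and one-off instant software forces the attacker to pay full price per target.

```python
# Toy arithmetic for the amortization argument; all numbers invented.
cost_to_find = 100_000  # hypothetical cost of one AI vulnerability hunt

# Defenders pool the cost across an information-sharing network:
pool_size = 1_000
print(cost_to_find / pool_size)            # 100.0 per defender

# Attackers amortize across every system running the same code:
mass_market_targets = 50_000
print(cost_to_find / mass_market_targets)  # 2.0 per target

# Instant software is one-off, so the attacker bears the full cost:
instant_targets = 1
print(cost_to_find / instant_targets)      # 100000.0 per target
```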

But those hard-to-find vulnerabilities become more valuable. Attackers will attempt to do what the major intelligence agencies do today: find “nobody but us” zero-day exploits. They will either use them slowly and sparingly to minimize detection or quickly and broadly to maximize profit before they’re patched. Meanwhile, defenders will be both vulnerability hunting and intrusion detecting, with the goal of patching vulnerabilities before the attackers find them.

We can even imagine a market for vulnerability sharing, where the defender who finds a vulnerability and creates a patch is compensated by everyone else in the information-sharing/repair network. This might be a stretch, but maybe.

Up the stack

Even in the most optimistic future, attackers aren’t going to just give up. They will attack the non-software parts of the system, such as the users. Or they’re going to look for loopholes in the system: things that the system technically allows but were unintended and unanticipated by the designers—whether human or AI—and can be used by attackers to their advantage.

What’s left in this world are attacks that don’t depend on finding and exploiting software vulnerabilities, like social engineering and credential stealing attacks. And we have already seen how AI-generated deepfakes make social engineering easier. But here, too, we can imagine defensive AI agents that monitor users’ behaviors, watching for signs of attack. This is another AI use case, and one that I’m not even sure how to think about in terms of the attacker/defender arms race. But at least we’re pushing attacks up the stack.

Also, attackers will attempt to infiltrate and influence defensive AIs and the networks they use to communicate, poisoning their output and degrading their capabilities. AI systems are vulnerable to all sorts of manipulations, such as prompt injection, and it’s unclear whether we will ever be able to solve that. This is Unknown No. 5, and it’s a biggie. There might always be a “trusting trust problem.”

No future is guaranteed. We truly don’t know whether these technologies will continue to improve and when they will plateau. But given the pace at which AI software development has improved in just the past few months, we need to start thinking about how cybersecurity works in this instant software world.

This essay originally appeared in CSO.

EDITED TO ADD: Two essays published after I wrote this are good illustrations of where we are regarding AI vulnerability discovery. Things are changing very fast.

Posted on April 7, 2026 at 1:07 PM

As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters

In December, President Trump signed an executive order that neutered states’ ability to regulate AI by ordering his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints on, and consequences for, their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.

Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives.

In a May 2025 survey of likely voters nationwide, more than 70% favored state and federal regulators having a hand in AI policy. A December 2025 poll by Navigator Research found similar results, with a massive net +48% favorability for more AI regulation. Yet despite the overwhelming preference of both voters and his party’s elected leaders—Congress was essentially unanimous in defeating a previous state AI regulation moratorium—Trump has delivered on a key priority of the industry. The order explicitly challenges the will of voters across blue and red states, from California to South Dakota, scrambling political positions around the technology and setting up a new ideological battleground in the upcoming race for Congress.

There are a number of ways that candidates and parties may try to capitalize on this emerging wedge issue before the midterms.

In 2025, much of the popular debate around AI was cast in terms of humans versus machines. Advances in AI, and the companies associated with them, are said to come at the expense of humans. A new model release with greater capabilities for writing, teaching, or coding means more people in those disciplines losing their jobs.

This is a humanist debate. Making us talk to an AI customer-support agent is an affront to our dignity. Using AI to help generate media sacrifices authenticity. AI chatbots that persuade and manipulate assault our liberty. There is philosophical merit to these arguments, and yet they seem to have limited political salience.

Populism versus institutionalism is a better way to frame this debate in the context of US politics. The MAGA movement is widely understood to be a realignment of American party politics to ally the Republican party with populism, and the Democratic party with defenders of traditional institutions of American government and their democratic norms.

This frame is shattered by Trump’s AI order, which unabashedly serves economic elites at the expense of populist consumer protections. It is part of an ongoing courting process between MAGA and big tech, where the Trump political project sacrifices the interests of consumers and its populist credentials as it cozies up to tech moguls.

We are starting to see populist resistance to this government/big tech alignment emerge on the local scale. People in Maryland, Arizona, North Carolina, Michigan and many other states are vigorously opposing AI datacenters in their communities, based on environmental and energy-affordability impacts. These centers of opposition are politically diverse; both progressives and Trump-supporting voters are turning out in force, influencing their local elected officials to resist datacenter development.

This opposition to the physical infrastructure of corporate AI is so far staying local, but it may yet translate into a national and politically aligned movement that could divide the MAGA coalition.

Any policy discussion about AI should include the individual harms associated with job loss, as employers seek to replace laborers with machines. It should also include the systemic economic risks associated with concentrated and supercharged AI investment, the democratic risks associated with the increased power of monopolistic and politically influential tech companies, and the degradation of civic functions like journalism and education by AI. In order for our free market to function in the public interest, the companies amassing wealth and profiting from AI must be forced to take ownership of, and internalize, these costs.

The political salience of AI will grow to meet the staggering scale of financial investment and societal impact it is already commanding. There is an opportunity for enterprising candidates, of either political party, to take the mantle of opposing AI-linked harms in the midterm elections.

Political solutions start with organizing, and broadening the base of political engagement around these issues beyond the locally salient topic of datacenters. Movement leaders and elected officials in states that have taken action on AI regulation should mobilize around the blatant industry capture, wealth extraction, and corporate favoritism reflected in the Trump executive order. AI is no longer just a policy issue for governments to discuss: it is a political issue on which voters must decide and demand accountability.

Posted on March 26, 2026 at 7:06 AM

Team Mirai and Democracy

Japan’s election last month and the rise of the country’s newest and most innovative political party, Team Mirai, illustrate the viability of a different way to do politics.

In this model, technology is used to make democratic processes stronger, instead of undermining them. It is harnessed to root out corruption, instead of serving as a cash cow for campaign donations.

Imagine an election where every voter has the opportunity to opine directly to politicians on precisely the issues they care about. They’re not expected to spend hours becoming policy experts. Instead, an AI Interviewer walks them through the subject, answering their questions, interrogating their experience, even challenging their thinking.

Voters get immediate feedback on how their individual point of view matches—or doesn’t—a party’s platform, and they can see whether and how the party adopts their feedback. This isn’t like an opinion poll that politicians use for calculating short-term electoral tactics. It’s a deliberative reasoning process that scales, engaging voters in defining policy and helping candidates to listen deeply to their constituents.

This is happening today in Japan. Constituents have spent about eight thousand hours engaging with Mirai’s AI Interviewer since 2025. The party’s gamified volunteer mobilization app, Action Board, captured about 100,000 organizer actions per day in the run-up to the election.

It’s how Team Mirai, which translates to ‘The Future Party,’ does politics. Its founder, Takahiro Anno, first ran for local office in 2024 as a 33-year-old software engineer standing for Governor of Tokyo. He came in fifth out of 56 candidates, winning more than 150,000 votes as an unaffiliated political outsider. He won attention by taking a distinctive stance on the role of technology in democracy and using AI aggressively in voter engagement.

Last year, Anno ran again, this time for the Upper Chamber of the national legislature—the Diet—and won. Now the head of a new national party, Anno found himself with a platform for making his vision of a new way of doing politics a reality.

In this recent House of Representatives election, Team Mirai shot up to win nearly four million votes. In the lower chamber’s proportional representation system, that was good enough for eleven total seats—the party’s first ever representation in the Japanese House—and nearly three times what it achieved in last year’s Upper Chamber election.

Anno’s party stood for election without aligning itself on the traditional axes of left and right. Instead, Team Mirai, heavily associated with young, urban voters, sought to unite across the ideological spectrum by taking a radical position on a different axis: the status quo versus the future. Anno told us that Team Mirai believes it can triple its representation in the Diet after the next elections in each chamber, an audacious goal that seems achievable given their rapid rise over the past year.

In the American context, the idea of a small party unifying voters across left and right sounds like a pipe dream. But there is evidence it worked in Japan. Team Mirai won an impressive 11% of proportional representation votes from unaffiliated voters, nearly twice its share of the larger electorate. The centerpiece of the party’s policy platform is not about the traditional hot-button issues; it’s about democracy itself, and how it can be enhanced by embracing a futuristic vision of digital democracy.

Anno told us how his party arrived at its manifesto for the election, and why it looked different from other parties’ in important ways. Team Mirai collected more than 38,000 online questions and more than 6,000 discrete policy suggestions from voters using its AI Policy app, which is advertised as a ‘manifesto that speaks for itself.’

After factoring in all this feedback, Team Mirai maintained a contrarian position on the biggest issue of the election: the sales tax and affordability. Rather than running on a reduction of the national sales tax like the major parties, Team Mirai reviewed dozens of suggestions from the public and ultimately proposed to keep that tax level while providing support to families through a child tax credit and lowering the required contribution for social insurance. Anno described this as another future-facing strategy: less price relief in the short term, but sustained funding for essential programs.

Anno has always intended to build a different kind of party. After receiving roughly $1 million in public funding apportioned to Team Mirai based on its single seat in the Upper Chamber last year, Anno began hiring engineers to enhance his software tools for digital democracy.

Anno described Team Mirai to us as a ‘utility party’: basic infrastructure for Japanese democracy that serves the broader polity rather than one faction. Their Gikai (‘assembly’) app illustrates the point. It provides a portal for constituents to research bills, using AI to generate summaries, describe their impacts, surface media reporting on the issue, and answer users’ questions. Like all their software, it’s open source and free for anyone, in any party, to use.

After this victory, Team Mirai now has about $5 million in public funding and ambitions to grow the influence of their digital democracy platform. Anno told us Team Mirai has secured an agreement with the LDP, Japan’s dominant ruling party, to begin using Team Mirai’s Gikai and corruption-fighting Mirumae financial transparency tool.

AI is the issue driving the most societal and economic change we will encounter in our lifetime, yet US political parties are largely silent. But AI and Big Tech companies and their owners are ramping up their political spending to influence the parties. To the extent that AI has shown up in our politics, it seems to be limited to the question of where to site the next generation of data centers and how to channel populist backlash to big tech.

Those are causes worthy of political organizing, but very few US politicians are leveraging the technology for public listening or other pro-democratic purposes. With the midterms still nine months away and with innovators like Team Mirai making products in the open for anyone to use, there is still plenty of time for an American politician to demonstrate what a new politics could look like.

This essay was written with Nathan E. Sanders, and originally appeared in Tech Policy Press.

Posted on March 24, 2026 at 7:03 AM

Academia and the “AI Brain Drain”

In 2025, Google, Amazon, Microsoft and Meta collectively spent US$380 billion on building artificial-intelligence tools. That number is expected to surge still higher this year, to $650 billion, to fund the building of physical infrastructure, such as data centers (see go.nature.com/3lzf79q). Moreover, these firms are spending lavishly on one particular segment: top technical talent.

Meta reportedly offered a single AI researcher, who had cofounded a start-up firm focused on training AI agents to use computers, a compensation package of $250 million over four years (see go.nature.com/4qznsq1). Technology firms are also spending billions on “reverse-acquihires”—poaching the star staff members of start-ups without acquiring the companies themselves. Eyeing these generous payouts, technical experts earning more modest salaries might well reconsider their career choices.

Academia is already losing out. Since the launch of ChatGPT in 2022, concerns have grown in academia about an “AI brain drain.” Studies point to a sharp rise in university machine-learning and AI researchers moving to industry roles. A 2025 paper reported that this was especially true for young, highly cited scholars: researchers who were about five years into their careers and whose work ranked among the most cited were 100 times more likely to move to industry the following year than were ten-year veterans whose work received an average number of citations, according to a model based on data from nearly seven million papers.1

This outflow threatens the distinct roles of academic research in the scientific enterprise: innovation driven by curiosity rather than profit, as well as providing independent critique and ethical scrutiny. The fixation of “big tech” firms on skimming the very top talent also risks eroding the idea of science as a collaborative endeavor, in which teams—not individuals—do the most consequential work.

Here, we explore the broader implications for science and suggest alternative visions of the future.

Astronomical salaries for AI talent buy into a legend as old as the software industry: the 10x engineer. This is someone who is supposedly capable of ten times the impact of their peers. Why hire and manage an entire group of scientists or software engineers when one genius—or an AI agent—can outperform them?

That proposition is increasingly attractive to tech firms that are betting that a large number of entry-level and even mid-level engineering jobs will be replaced by AI. It’s no coincidence that Google’s Gemini 3 Pro AI model was launched with boasts of “PhD-level reasoning,” a marketing strategy that is appealing to executives seeking to replace people with AI.

But the lone-genius narrative is increasingly out of step with reality. Research backs up a fundamental truth: science is a team sport. A large-scale study of scientific publishing from 1900 to 2011 found that papers produced by larger collaborations consistently have greater impact than do those of smaller teams, even after accounting for self-citation.2 Analyses of the most highly cited scientists show a similar pattern: their highest-impact works tend to be those papers with many authors.3 A 2020 study of Nobel laureates reinforces this trend, revealing that—much like the wider scientific community—the average size of the teams that they publish with has steadily increased over time as scientific problems increase in scope and complexity.4

From the detection of gravitational waves, which are ripples in space-time caused by massive cosmic events, to CRISPR-based gene editing, a precise method for cutting and modifying DNA, to recent AI breakthroughs in protein-structure prediction, the most consequential advances in modern science have been collective achievements. Although these successes are often associated with prominent individuals—senior scientists, Nobel laureates, patent holders—the work itself was driven by teams ranging from dozens to thousands of people and was built on decades of open science: shared data, methods, software and accumulated insight.

Building strong institutions is a much more effective use of resources than is betting on any single individual. Examples demonstrating this include the LIGO Scientific Collaboration, the global team that first detected gravitational waves; the Broad Institute of MIT and Harvard in Cambridge, Massachusetts, a leading genomics and biomedical-research center behind many CRISPR advances; and even for-profit laboratories such as Google DeepMind in London, which drove advances in protein-structure prediction with its AlphaFold tool. If the aim of the tech giants and other AI firms that are spending lavishly on elite talent is to accelerate scientific progress, the current strategy is misguided.

By contrast, well-designed institutions amplify individual ability, sustain productivity beyond any one person’s career and endure long after any single contributor is gone.

Equally important, effective institutions distribute power in beneficial ways. Rather than vesting decision-making authority in the hands of one person, they have mechanisms for sharing control. Allocation committees decide how resources are used, scientific advisory boards set collective research priorities, and peer review determines which ideas enter the scientific record.

And although the term “innovation by committee” might sound disparaging, such an approach is crucial to make the scientific enterprise act in concert with the diverse needs of the broader public. This is especially true in science, which continues to suffer from pervasive inequalities across gender, race and socio-economic and cultural differences.5

Need for alternative vision

This is why scientists, academics and policymakers should pay more attention to how AI research is organized and led, especially as the technology becomes essential across scientific disciplines. Used well, AI can support a more equitable scientific enterprise by empowering junior researchers who currently have access to few resources.

Instead, some of today’s wealthiest scientific institutions might think that they can deploy the same strategies as the tech industry uses and compete for top talent on financial terms—perhaps by getting funding from the same billionaires who back big tech. Indeed, wage inequality has been steadily growing within academia for decades.6 But this is not a path that science should follow.

The ideal model for science is a broad, diverse ecosystem in which researchers can thrive at every level. Here are three strategies that universities and mission-driven labs should adopt instead of engaging in a compensation arms race.

First, universities and institutions should stay committed to the public interest. An excellent example of this approach can be found in Switzerland, where several institutions are coordinating to build AI as a public good rather than a private asset. Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) and the Swiss Federal Institute of Technology (ETH) in Zurich, working with the Swiss National Supercomputing Centre, have built Apertus, a freely available large language model. Unlike the controversially labelled “open source” models built by commercial labs—such as Meta’s LLaMa, which has been criticized for not complying with the open-source definition (see go.nature.com/3o56zd5)—Apertus is not only open in its source code and its weights (meaning its core parameters), but also in its data and development process. Crucially, Apertus is not designed to compete with “frontier” AI labs pursuing superintelligence at enormous cost and with little regard for data ownership. Instead, it adopts a more modest and sustainable goal: to make AI trustworthy for use in industry and public administration, strictly adhering to data-licensing restrictions and including local European languages.7

Principal investigators (PIs) at other institutions globally should follow this path, aligning public funding agencies and public institutions to produce a more sustainable alternative to corporate AI.

Second, universities should bolster networks of researchers from the undergraduate to senior-professor levels—not only because they make for effective innovation teams, but also because they serve a purpose beyond next quarter’s profits. The scientific enterprise galvanizes its members at all levels to contribute to the same projects, the same journals and the same open, international scientific literature—to perpetuate itself across generations and to distribute its impact throughout society.

Universities should take precisely the opposite hiring strategy to that of the big tech firms. Instead of lavishing top dollar on a select few researchers, they should equitably distribute salaries. They should raise graduate-student stipends and postdoc salaries and limit the growth of pay for high-profile PIs.

Third, universities should show that they can offer more than just financial benefits: they must offer distinctive intellectual and civic rewards. Although money is unquestionably a motivator, researchers also value intellectual freedom and the recognition of their work. Studies show that research roles in industry that allow publication attract talent at salaries roughly 20% lower than comparable positions that prohibit it (see go.nature.com/4cbjxzu).

Beyond the intellectual recognition of publications and citation counts, universities should recognize and reward the production of public goods. The tenure and promotion process at universities should reward academics who supply expertise to local and national governments, who communicate with and engage the public in research, who publish and maintain open-source software for public use and who provide services for non-profit groups.

Furthermore, institutions should demonstrate that they will defend the intellectual freedom of their researchers and shield them from corporate or political interference. In the United States today, we see a striking juxtaposition between big tech firms, which curry favour with the administration of US President Donald Trump to win regulatory and trade benefits, and higher-education institutions, which suffer massive losses of federal funding and threats of investigation and sanction. Unlike big tech firms, universities should invest in enquiry that challenges authority.

We urge leaders of scientific institutions to reject the growing pay inequality rampant in the upper echelons of AI research. Instead, they should compete for talent on a different dimension: the integrity of their missions and the equitableness of their institutions. These institutions should focus on building sustainable organizations with diverse staff members, rather than bestowing a bounty on science’s 1%.

References

  1. Jurowetzki, R., Hain, D. S., Wirtz, K. & Bianchini, S. AI Soc. 40, 4145–4152 (2025).
  2. Larivière, V., Gingras, Y., Sugimoto, C. R. & Tsou, A. J. Assoc. Inf. Sci. Technol. 66, 1323–1332 (2015).
  3. Aksnes, D. W. & Aagaard, K. J. Data Inf. Sci. 6, 41–66 (2021).
  4. Li, J., Yin, Y., Fortunato, S. & Wang, D. J. R. Soc. Interface 17, 20200135 (2020).
  5. Graves, J. L. Jr, Kearney, M., Barabino, G. & Malcom, S. Proc. Natl Acad. Sci. USA 119, e2117831119 (2022).
  6. Lok, C. Nature 537, 471–473 (2016).
  7. Project Apertus. Preprint at arXiv https://doi.org/10.48550/arXiv.2509.14233 (2025).

This essay was written with Nathan E. Sanders, and originally appeared in Nature.

Posted on March 13, 2026 at 7:04 AM

Canada Needs Nationalized, Public AI

Canada has a choice to make about its artificial intelligence future. The Carney administration is investing $2-billion over five years in its Sovereign AI Compute Strategy. Will any value generated by “sovereign AI” be captured in Canada, making a difference in the lives of Canadians, or is this just a passthrough to investment in American Big Tech?

Forcing the question is OpenAI, the company behind ChatGPT, which has been pushing an “OpenAI for Countries” initiative. It is not the only one eyeing its share of the $2-billion, but it appears to be the most aggressive. OpenAI’s top lobbyist in the region has met with Ottawa officials, including Artificial Intelligence Minister Evan Solomon.

All the while, OpenAI was less than open. The company had flagged the Tumbler Ridge, B.C., shooter’s ChatGPT interactions, which included gun-violence chats. Employees wanted to alert law enforcement but were rebuffed. Maybe there is a discussion to be had about users’ privacy. But even after the shooting, the OpenAI representative who met with the B.C. government said nothing.

Only after the meeting with the B.C. government did OpenAI alert law enforcement. Had it not been for the Wall Street Journal’s reporting, the public would not have known about this at all. When tech billionaires and corporations steer AI development, the resultant AI reflects their interests rather than those of the general public or ordinary consumers.

Moreover, OpenAI for Countries is explicitly described by the company as an initiative “in co-ordination with the U.S. government.” And it’s not just OpenAI: all the AI giants are for-profit American companies, operating in their private interests, and subject to United States law and increasingly bowing to U.S. President Donald Trump. Moving data centres into Canada under a proposal like OpenAI’s doesn’t change that. The current geopolitical reality means Canada should not be dependent on U.S. tech firms for essential services such as cloud computing and AI.

While there are Canadian AI companies, they remain for-profit enterprises, their interests not necessarily aligned with our collective good. The only real alternative is to be bold and invest in a wholly Canadian public AI: an AI model built and funded by Canada for Canadians, as public infrastructure. This would give Canadians access to the myriad benefits of AI without having to depend on the U.S. or other countries. It would mean Canadian universities and public agencies building and operating AI models optimized not for global scale and corporate profit, but for practical use by Canadians.

Imagine AI embedded into health care, triaging radiology scans, flagging early cancer risks and assisting doctors with paperwork. Imagine an AI tutor trained on provincial curriculums, giving personalized coaching. Imagine systems that analyze job vacancies and sectoral and wage trends, then automatically match job seekers to government programs. Imagine using AI to optimize transit schedules, energy grids and zoning analysis. Imagine court processes, corporate decisions and customer service all sped up by AI.

We are already on our way to having AI become an inextricable part of society. To ensure stability and prosperity for this country, Canadian users and developers must be able to turn to AI models built, controlled, and operated publicly in Canada instead of building on corporate platforms, American or otherwise.

Switzerland has shown this to be possible. With funding from the federal government, a consortium of academic institutions—ETH Zurich, EPFL, and the Swiss National Supercomputing Centre—released the world’s most powerful and most fully realized public AI model, Apertus, last September. Apertus leveraged renewable hydropower and existing Swiss scientific computing infrastructure. It also used no illegally pirated copyrighted material or poorly paid labour extracted from the Global South during training. The model’s performance stands roughly a year or two behind the major corporate offerings, but that is more than adequate for the vast majority of applications. And it’s free for anyone to use and build on.

The significance of Apertus is more than technical. It demonstrates an alternative ownership structure for AI technology, one that allocates both decision-making authority and value to national public institutions rather than foreign corporations. This vision represents precisely the paradigm shift Canada should embrace: AI as public infrastructure, like systems for transportation, water, or electricity, rather than private commodity.

Apertus also demonstrates a far more sustainable economic framework for AI. Switzerland spent a tiny fraction of the billions of dollars that corporate AI labs invest annually, demonstrating that the frequent training runs with astronomical price tags pursued by tech companies are not actually necessary for practical AI development. They focused on making something broadly useful rather than bleeding edge, forgoing Silicon Valley’s dubious pursuit of “superintelligence,” and so created a smaller model at much lower cost. Apertus’s training was at a scale (70 billion parameters) perhaps two orders of magnitude lower than the largest Big Tech offerings.

An ecosystem is now being developed on top of Apertus, using the model as a public good to power chatbots for free consumer use and to provide a development platform for companies prioritizing responsible AI use and rigorous compliance with laws like the EU AI Act. Instead of routing queries from those users to Big Tech infrastructure, Apertus is deployed to data centres across the national AI and computing initiatives of Switzerland, Australia, Germany, Singapore, and other partners.

The case for public AI rests on both democratic principles and practical benefits. Public AI systems can incorporate mechanisms for genuine public input and democratic oversight on critical ethical questions: how to handle copyrighted works in training data, how to mitigate bias, how to distribute access when demand outstrips capacity, and how to license use for sensitive applications like policing or medicine. Or how to handle a situation such as that of the Tumbler Ridge shooter. These decisions will profoundly shape society as AI becomes more pervasive, yet corporate AI makes them in secret.

By contrast, public AI developed by transparent, accountable agencies would allow democratic processes and political oversight to govern how these powerful systems function.

Canada already has many of the building blocks for public AI. The country has world-class AI research institutions, including the Vector Institute, Mila, and CIFAR, which pioneered much of the deep learning revolution. Canada’s $2-billion Sovereign AI Compute Strategy provides substantial funding.

What’s needed now is a reorientation away from viewing this as an opportunity to attract private capital, and toward a fully open public AI model.

This essay was written with Nathan E. Sanders, and originally appeared in The Globe and Mail.

EDITED TO ADD (3/16): Slashdot thread.

Posted on March 11, 2026 at 7:04 AM

Anthropic and the Pentagon

OpenAI is in and Anthropic is out as a supplier of AI technology for the US defense department. This news caps a week of bluster by the highest officials in the US government towards some of the wealthiest titans of the big tech industry, and the overhanging specter of the existential risks posed by a new technology powerful enough that the Pentagon claims it is essential to national security. At issue is Anthropic’s insistence that the US Department of Defense (DoD) could not use its models to facilitate “mass surveillance” or “fully autonomous weapons,” provisions that defense secretary Pete Hegseth derided as “woke.”

It all came to a head on Friday evening when Donald Trump issued an order for federal government agencies to discontinue use of Anthropic models. Within hours, OpenAI had swooped in, potentially seizing hundreds of millions of dollars in government contracts by striking an agreement with the administration to provide classified government systems with AI.

Despite the histrionics, this is probably the best outcome for Anthropic—and for the Pentagon. In our free-market economy, both are, and should be, free to sell and buy what they want with whom they want, subject to longstanding federal rules on contracting, acquisitions, and blacklisting. The only thing out of place here is the Pentagon’s vindictive threats.

AI models are increasingly commodified. The top-tier offerings have about the same performance, and there is little to differentiate one from the other. The latest models from Anthropic, OpenAI and Google, in particular, tend to leapfrog each other with minor hops forward in quality every few months. The best models from one provider tend to be preferred by users to the second, or third, or 10th best models at a rate of only about six times out of 10, a virtual tie.

In this sort of market, branding matters a lot. Anthropic and its CEO, Dario Amodei, are positioning themselves as the moral and trustworthy AI provider. That has market value for both consumers and enterprise clients. In taking Anthropic’s place in government contracting, OpenAI’s CEO, Sam Altman, vowed to somehow uphold the same safety principles Anthropic had just been pilloried for. How that is possible given the rhetoric of Hegseth and Trump is entirely unclear, but seems certain to further politicize OpenAI and its products in the minds of consumers and corporate buyers.

Posturing publicly against the Pentagon and as a hero to civil libertarians is quite possibly worth the cost of the lost contracts to Anthropic, and associating themselves with the same contracts could be a trap for OpenAI. The Pentagon, meanwhile, has plenty of options. Even if no big tech company were willing to supply it with AI, the department has already deployed dozens of open-weight models—whose parameters are public and are often licensed permissively for government use.

We can admire Amodei’s stance, but, to be sure, it is primarily posturing. Anthropic knew what they were getting into when they agreed to a defense department partnership for $200m last year. And when they signed a partnership with the surveillance company Palantir in 2024.

Read Amodei’s statement about the issue. Or his January essay on AIs and risk, where he repeatedly uses the words “democracy” and “autocracy” while evading precisely how collaboration with US federal agencies should be viewed in this moment. Amodei has bought into the idea of using “AI to achieve robust military superiority” on behalf of the democracies of the world in response to the threats from autocracies. It’s a heady vision. But it is a vision that likewise supposes that the world’s nominal democracies are committed to a common vision of public wellbeing, peace-seeking and democratic control.

Regardless, the defense department can also reasonably demand that the AI products it purchases meet its needs. The Pentagon is not a normal customer; it buys products that kill people all the time. Tanks, artillery pieces, and hand grenades are not products with ethical guard rails. The Pentagon’s needs reasonably involve weapons of lethal force, and those weapons are continuing on a steady, if potentially catastrophic, path of increasing automation.

So, at the surface, this dispute is a normal market give and take. The Pentagon has unique requirements for the products it uses. Companies can decide whether or not to meet them, and at what price. And then the Pentagon can decide from whom to acquire those products. Sounds like a normal day at the procurement office.

But, of course, this is the Trump administration, so it doesn’t stop there. Hegseth has threatened Anthropic not just with loss of government contracts. The administration has, at least until the inevitable lawsuits force the courts to sort things out, designated the company as “a supply-chain risk to national security,” a designation previously only ever applied to foreign companies. This prevents not only government agencies, but also their own contractors and suppliers, from contracting with Anthropic.

The government has also, incongruously, threatened to invoke the Defense Production Act, which could force Anthropic to remove contractual provisions the department had previously agreed to, or perhaps to fundamentally modify its AI models to remove in-built safety guardrails. The government’s demands, Anthropic’s response, and the legal context in which they are acting will undoubtedly all change over the coming weeks.

But, alarmingly, autonomous weapons systems are here to stay. Primitive pit traps evolved into mechanical bear traps. The world is still debating the ethical use of, and dealing with the legacy of, land mines. The US Phalanx CIWS is a 1980s-era shipboard anti-missile system with a fully autonomous, radar-guided cannon. Today’s military drones can search, identify and engage targets without direct human intervention. AI will be used for military purposes, just as every other technology our species has invented has been.

The lesson here should not be that one company in our rapacious capitalist system is more moral than another, or that one corporate hero can stand in the way of government’s adopting AI as technologies of war, or surveillance, or repression. Unfortunately, we don’t live in a world where such barriers are permanent or even particularly sturdy.

Instead, the lesson is about the importance of democratic structures and the urgent need for their renovation in the US. If the defense department is demanding the use of AI for mass surveillance or autonomous warfare that we, the public, find unacceptable, that should tell us we need to pass new legal restrictions on those military activities. If we are uncomfortable with the force of government being applied to dictate how and when companies yield to unsafe applications of their products, we should strengthen the legal protections around government procurement.

The Pentagon should maximize its warfighting capabilities, subject to the law. And private companies like Anthropic should posture to gain consumer and buyer confidence. But we should not fool ourselves into thinking that either is doing so in the public’s interest.

This essay was written with Nathan E. Sanders, and originally appeared in The Guardian.

Posted on March 6, 2026 at 12:07 PM

Claude Used to Hack Mexican Government

An unknown hacker used Anthropic’s LLM to hack the Mexican government:

The unknown Claude user wrote Spanish-language prompts for the chatbot to act as an elite hacker, finding vulnerabilities in government networks, writing computer scripts to exploit them and determining ways to automate data theft, Israeli cybersecurity startup Gambit Security said in research published Wednesday.

[…]

Claude initially warned the unknown user of malicious intent during their conversation about the Mexican government, but eventually complied with the attacker’s requests and executed thousands of commands on government computer networks, the researchers said.

Anthropic investigated Gambit’s claims, disrupted the activity and banned the accounts involved, a representative said. The company feeds examples of malicious activity back into Claude to learn from it, and one of its latest AI models, Claude Opus 4.6, includes probes that can disrupt misuse, the representative said.

Alternative link here.

Posted on March 6, 2026 at 6:53 AM

