December 15, 2022
by Bruce Schneier
Fellow and Lecturer, Harvard Kennedy School
schneier@schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Another Event-Related Spyware App
- Russian Software Company Pretending to Be American
- Failures in Twitter’s Two-Factor Authentication System
- Successful Hack of Time-Triggered Ethernet
- First Review of A Hacker’s Mind
- Breaking the Zeppelin Ransomware Encryption Scheme
- Apple’s Device Analytics Can Identify iCloud Users
- The US Has a Shortage of Bomb-Sniffing Dogs
- Computer Repair Technicians Are Stealing Your Data
- Charles V of Spain Secret Code Cracked
- Facebook Fined $276M under GDPR
- Sirius XM Software Vulnerability
- LastPass Security Breach
- Existential Risk and the Fermi Paradox
- CAPTCHA
- CryWiper Data Wiper Targeting Russian Sites
- The Decoupling Principle
- Leaked Signing Keys Are Being Used to Sign Malware
- Security Vulnerabilities in Eufy Cameras
- Hacking Trespass Law
- Apple Is Finally Encrypting iCloud Backups
- Obligatory ChatGPT Post
- Hacking Boston’s CharlieCard
- Recreating Democracy
Another Event-Related Spyware App
[2022.11.15] Last month, we were warned not to install Qatar’s World Cup app because it was spyware. This month, it’s Egypt’s COP27 Summit app:
The app is being promoted as a tool to help attendees navigate the event. But it risks giving the Egyptian government permission to read users’ emails and messages. Even messages shared via encrypted services like WhatsApp are vulnerable, according to POLITICO’s technical review of the application, and two of the outside experts.
The app also provides Egypt’s Ministry of Communications and Information Technology, which created it, with other so-called backdoor privileges, or the ability to scan people’s devices.
On smartphones running Google’s Android software, it has permission to potentially listen into users’ conversations via the app, even when the device is in sleep mode, according to the three experts and POLITICO’s separate analysis. It can also track people’s locations via smartphone’s built-in GPS and Wi-Fi technologies, according to two of the analysts.
Russian Software Company Pretending to Be American
[2022.11.16] Computer code developed by a company called Pushwoosh is in about 8,000 Apple and Google smartphone apps. The company pretends to be American when it is actually Russian.
According to company documents publicly filed in Russia and reviewed by Reuters, Pushwoosh is headquartered in the Siberian town of Novosibirsk, where it is registered as a software company that also carries out data processing. It employs around 40 people and reported revenue of 143,270,000 rubles ($2.4 mln) last year. Pushwoosh is registered with the Russian government to pay taxes in Russia.
On social media and in US regulatory filings, however, it presents itself as a US company, based at various times in California, Maryland, and Washington, DC, Reuters found.
What does the code do? Spy on people:
Pushwoosh provides code and data processing support for software developers, enabling them to profile the online activity of smartphone app users and send tailor-made push notifications from Pushwoosh servers.
On its website, Pushwoosh says it does not collect sensitive information, and Reuters found no evidence Pushwoosh mishandled user data. Russian authorities, however, have compelled local companies to hand over user data to domestic security agencies.
I have called supply chain security “an insurmountably hard problem,” and this is just another example of that.
EDITED TO ADD (12/12): Here is a list of apps that use the Pushwoosh SDK.
Failures in Twitter’s Two-Factor Authentication System
[2022.11.17] Twitter is having intermittent problems with its two-factor authentication system:
Not all users are having problems receiving SMS authentication codes, and those who rely on an authenticator app or physical authentication token to secure their Twitter account may not have reason to test the mechanism. But users have been self-reporting issues on Twitter since the weekend, and WIRED confirmed that on at least some accounts, authentication texts are hours delayed or not coming at all. The meltdown comes less than two weeks after Twitter laid off about half of its workers, roughly 3,700 people. Since then, engineers, operations specialists, IT staff, and security teams have been stretched thin attempting to adapt Twitter’s offerings and build new features per new owner Elon Musk’s agenda.
On top of that, it seems that the system has a new vulnerability:
A researcher contacted Information Security Media Group on condition of anonymity to reveal that texting “STOP” to the Twitter verification service results in the service turning off SMS two-factor authentication.
“Your phone has been removed and SMS 2FA has been disabled from all accounts,” is the automated response.
The vulnerability, which ISMG verified, allows a hacker to spoof the registered phone number to disable two-factor authentication. That potentially exposes accounts to a password reset attack or account takeover through password stuffing.
This is not a good sign.
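To make the failure mode concrete, here is a hypothetical sketch of an inbound-SMS handler; it is not Twitter's code, and the account fields are invented for illustration. The point is that because sender numbers can be spoofed, a carrier opt-out keyword should at most pause message delivery, never silently remove a second factor without an authenticated confirmation.

```python
# Hypothetical sketch (not Twitter's code) of handling a carrier "STOP"
# keyword. Illustrates why an unauthenticated, spoofable SMS should never
# be allowed to disable two-factor authentication.

from dataclasses import dataclass

@dataclass
class Account:
    phone: str
    sms_2fa_enabled: bool = True
    sms_delivery_paused: bool = False
    needs_reverification: bool = False

def handle_inbound_sms(account: Account, body: str) -> None:
    """Process an inbound carrier keyword for the account's phone number."""
    if body.strip().upper() != "STOP":
        return

    # Reported behavior: trust the (spoofable) sender and disable 2FA.
    #   account.sms_2fa_enabled = False

    # Safer behavior: honor the opt-out by pausing delivery only, and make
    # any security-relevant change wait for an authenticated confirmation.
    account.sms_delivery_paused = True
    account.needs_reverification = True

if __name__ == "__main__":
    acct = Account(phone="+15555550100")
    handle_inbound_sms(acct, "STOP")
    print(acct)  # 2FA stays enabled; only SMS delivery is paused
```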
Successful Hack of Time-Triggered Ethernet
[2022.11.18] Time-triggered Ethernet (TTE) is used in spacecraft, basically to let the same hardware carry traffic with different timing and criticality requirements. Researchers have defeated it:
On Tuesday, researchers published findings that, for the first time, break TTE’s isolation guarantees. The result is PCspooF, an attack that allows a single non-critical device connected to a single plane to disrupt synchronization and communication between TTE devices on all planes. The attack works by exploiting a vulnerability in the TTE protocol. The work was completed by researchers at the University of Michigan, the University of Pennsylvania, and NASA’s Johnson Space Center.
“Our evaluation shows that successful attacks are possible in seconds and that each successful attack can cause TTE devices to lose synchronization for up to a second and drop tens of TT messages—both of which can result in the failure of critical systems like aircraft or automobiles,” the researchers wrote. “We also show that, in a simulated spaceflight mission, PCspooF causes uncontrolled maneuvers that threaten safety and mission success.”
Much more detail in the article—and the research paper.
First Review of A Hacker’s Mind
[2022.11.18] Kirkus reviews A Hacker’s Mind:
A cybersecurity expert examines how the powerful game whatever system is put before them, leaving it to others to cover the cost.
Schneier, a professor at Harvard Kennedy School and author of such books as Data and Goliath and Click Here To Kill Everybody, regularly challenges his students to write down the first 100 digits of pi, a nearly impossible task—but not if they cheat, concerning which he admonishes, “Don’t get caught.” Not getting caught is the aim of the hackers who exploit the vulnerabilities of systems of all kinds. Consider right-wing venture capitalist Peter Thiel, who located a hack in the tax code: “Because he was one of the founders of PayPal, he was able to use a $2,000 investment to buy 1.7 million shares of the company at $0.001 per share, turning it into $5 billion—all forever tax free.” It was perfectly legal—and even if it weren’t, the wealthy usually go unpunished. The author, a fluid writer and tech communicator, reveals how the tax code lends itself to hacking, as when tech companies like Apple and Google avoid paying billions of dollars by transferring profits out of the U.S. to corporate-friendly nations such as Ireland, then offshoring the “disappeared” dollars to Bermuda, the Caymans, and other havens. Every system contains trap doors that can be breached to advantage. For example, Schneier cites “the Pudding Guy,” who hacked an airline miles program by buying low-cost pudding cups in a promotion that, for $3,150, netted him 1.2 million miles and “lifetime Gold frequent flier status.” Since it was all within the letter if not the spirit of the offer, “the company paid up.” The companies often do, because they’re gaming systems themselves. “Any rule can be hacked,” notes the author, be it a religious dietary restriction or a legislative procedure. With technology, “we can hack more, faster, better,” requiring diligent monitoring and a demand that everyone play by rules that have been hardened against tampering.
An eye-opening, maddening book that offers hope for leveling a badly tilted playing field.
I got a starred review. Libraries make decisions on what to buy based on starred reviews. Publications make decisions about what to review based on starred reviews. This is a big deal.
Book’s webpage.
Breaking the Zeppelin Ransomware Encryption Scheme
[2022.11.21] Brian Krebs writes about how the Zeppelin ransomware encryption scheme was broken:
The researchers said their break came when they understood that while Zeppelin used three different types of encryption keys to encrypt files, they could undo the whole scheme by factoring or computing just one of them: An ephemeral RSA-512 public key that is randomly generated on each machine it infects.
“If we can recover the RSA-512 Public Key from the registry, we can crack it and get the 256-bit AES Key that encrypts the files!” they wrote. “The challenge was that they delete the [public key] once the files are fully encrypted. Memory analysis gave us about a 5-minute window after files were encrypted to retrieve this public key.”
Unit 221B ultimately built a “Live CD” version of Linux that victims could run on infected systems to extract that RSA-512 key. From there, they would load the keys into a cluster of 800 CPUs donated by hosting giant Digital Ocean that would then start cracking them. The company also used that same donated infrastructure to help victims decrypt their data using the recovered keys.
A company offered recovery services based on this break, but was reluctant to advertise because it didn’t want Zeppelin’s creators to fix their encryption flaw.
Technical details.
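To make the key-recovery idea concrete, here is a toy sketch with deliberately tiny numbers. It is not Zeppelin's file format or Unit 221B's tooling; it only shows why an RSA-512 wrapping key is a single point of failure: once the modulus is factored, the private exponent can be rebuilt and the wrapped file key falls out.

```python
# Toy illustration (not Zeppelin's format): an RSA-wrapped file key can be
# recovered by factoring the RSA modulus. Real RSA-512 factoring takes a
# large CPU cluster, not the trivial trial division used here.
# Requires Python 3.8+ for pow(e, -1, phi).

from math import isqrt

def factor(n: int) -> tuple[int, int]:
    """Trial division -- fine for toy numbers, stands in for real factoring."""
    for p in range(3, isqrt(n) + 1, 2):
        if n % p == 0:
            return p, n // p
    raise ValueError("no factor found")

# Toy "victim" parameters: a tiny RSA key wrapping a 16-bit stand-in for
# the 256-bit AES file key.
p, q = 61_487, 59_999          # the malware would use ~256-bit primes
n, e = p * q, 65_537
aes_key = 0xBEEF               # stand-in for the real AES-256 key
wrapped = pow(aes_key, e, n)   # what the ransomware would store

# Attacker's view: only (n, e, wrapped) are known.
p2, q2 = factor(n)
d = pow(e, -1, (p2 - 1) * (q2 - 1))   # rebuild the private exponent
recovered = pow(wrapped, d, n)

assert recovered == aes_key
print(f"recovered file key: {recovered:#x}")
```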
EDITED TO ADD (12/12): When BitDefender publicly advertised a decryption tool for a strain of DarkSide ransomware, DarkSide immediately updated its ransomware to render the tool obsolete. It’s hard to come up with a solution to this problem.
Apple’s Device Analytics Can Identify iCloud Users
[2022.11.22] Researchers claim that supposedly anonymous device analytics information can identify users:
On Twitter, security researchers Tommy Mysk and Talal Haj Bakry have found that Apple’s device analytics data includes an iCloud account and can be linked directly to a specific user, including their name, date of birth, email, and associated information stored on iCloud.
Apple has long claimed otherwise:
On Apple’s device analytics and privacy legal page, the company says no information collected from a device for analytics purposes is traceable back to a specific user. “iPhone Analytics may include details about hardware and operating system specifications, performance statistics, and data about how you use your devices and applications. None of the collected information identifies you personally,” the company claims.
Apple was just sued for tracking iOS users without their consent, even when they explicitly opt out of tracking.
The US Has a Shortage of Bomb-Sniffing Dogs
[2022.11.23] Nothing beats a dog’s nose for detecting explosives. Unfortunately, there aren’t enough dogs:
Last month, the US Government Accountability Office (GAO) released a nearly 100-page report about working dogs and the need for federal agencies to better safeguard their health and wellness. The GAO says that as of February the US federal government had approximately 5,100 working dogs, including detection dogs, across three federal agencies. Another 420 dogs “served the federal government in 24 contractor-managed programs within eight departments and two independent agencies,” the GAO report says.
The report also underscores the demands placed on detection dogs and the potential for overwork if there aren’t enough dogs available. “Working dogs might need the strength to suddenly run fast, or to leap over a tall barrier, as well as the physical stamina to stand or walk all day,” the report says. “They might need to search over rubble or in difficult environmental conditions, such as extreme heat or cold, often wearing heavy body armor. They also might spend the day detecting specific scents among thousands of others, requiring intense mental concentration. Each function requires dogs to undergo specialized training.”
A decade and a half ago I was optimistic about bomb-sniffing bees and wasps, but nothing seems to have come of that.
Computer Repair Technicians Are Stealing Your Data
[2022.11.28] Laptop technicians routinely violate the privacy of the people whose computers they repair:
Researchers at University of Guelph in Ontario, Canada, recovered logs from laptops after receiving overnight repairs from 12 commercial shops. The logs showed that technicians from six of the locations had accessed personal data and that two of those shops also copied data onto a personal device. Devices belonging to females were more likely to be snooped on, and that snooping tended to seek more sensitive data, including both sexually revealing and non-sexual pictures, documents, and financial information.
[…]
In three cases, Windows Quick Access or Recently Accessed Files had been deleted in what the researchers suspect was an attempt by the snooping technician to cover their tracks. As noted earlier, two of the visits resulted in the logs the researchers relied on being unrecoverable. In one, the researcher explained they had installed antivirus software and performed a disk cleanup to “remove multiple viruses on the device.” The researchers received no explanation in the other case.
[…]
The laptops were freshly imaged Windows 10 laptops. All were free of malware and other defects and in perfect working condition with one exception: the audio driver was disabled. The researchers chose that glitch because it required only a simple and inexpensive repair, was easy to create, and didn’t require access to users’ personal files.
Half of the laptops were configured to appear as if they belonged to a male and the other half to a female. All of the laptops were set up with email and gaming accounts and populated with browser history across several weeks. The researchers added documents, both sexually revealing and non-sexual pictures, and a cryptocurrency wallet with credentials.
A few notes. One: this is a very small study—only twelve laptop repairs. Two: some of the results were inconclusive, which indicated—but did not prove—log tampering by the technicians. Three: this study was done in Canada. There would probably be more snooping by American repair technicians.
The moral isn’t a good one: if you bring your laptop in to be repaired, you should expect the technician to snoop through your hard drive, taking what they want.
Research paper.
Charles V of Spain Secret Code Cracked
[2022.11.29] Diplomatic code cracked after 500 years:
In painstaking work backed by computers, Pierrot found “distinct families” of about 120 symbols used by Charles V. “Whole words are encrypted with a single symbol” and the emperor replaced vowels coming after consonants with marks, she said, an inspiration probably coming from Arabic.
In another obstacle, he used meaningless symbols to mislead any adversary trying to decipher the message.
The breakthrough came in June when Pierrot managed to make out a phrase in the letter, and the team then cracked the code with the help of Camille Desenclos, a historian. “It was painstaking and long work but there was really a breakthrough that happened in one day, where all of a sudden we had the right hypothesis,” she said.
Facebook Fined $276M under GDPR
[2022.11.30] Facebook—Meta—was just fined $276 million (USD) for a data leak that included full names, birth dates, phone numbers, and location.
Meta’s fines from the Data Protection Commission now total over $700 million. Total GDPR fines since 2018 exceed €2 billion.
Sirius XM Software Vulnerability
[2022.12.01] This is new:
Newly revealed research shows that a number of major car brands, including Honda, Nissan, Infiniti, and Acura, were affected by a previously undisclosed security bug that would have allowed a savvy hacker to hijack vehicles and steal user data. According to researchers, the bug was in the car’s Sirius XM telematics infrastructure and would have allowed a hacker to remotely locate a vehicle, unlock and start it, flash the lights, honk the horn, pop the trunk, and access sensitive customer info like the owner’s name, phone number, address, and vehicle details.
Cars are just computers with four wheels and an engine. It’s no surprise that the software is vulnerable, and that everything is connected.
LastPass Security Breach
[2022.12.02] The company was hacked, and customer information accessed. No passwords were compromised.
Existential Risk and the Fermi Paradox
[2022.12.02] We know that complexity is the worst enemy of security, because it makes attack easier and defense harder. This becomes catastrophic as the effects of that attack become greater.
In A Hacker’s Mind (coming in February 2023), I write:
Our societal systems, in general, may have grown fairer and more just over the centuries, but progress isn’t linear or equitable. The trajectory may appear to be upwards when viewed in hindsight, but from a more granular point of view there are a lot of ups and downs. It’s a “noisy” process.
Technology changes the amplitude of the noise. Those near-term ups and downs are getting more severe. And while that might not affect the long-term trajectories, they drastically affect all of us living in the short term. This is how the twentieth century could—statistically—both be the most peaceful in human history and also contain the most deadly wars.
Ignoring this noise was only possible when the damage wasn’t potentially fatal on a global scale; that is, if a world war didn’t have the potential to kill everybody or destroy society, or occur in places and to people that the West wasn’t especially worried about. We can’t be sure of that anymore. The risks we face today are existential in a way they never have been before. The magnifying effects of technology enable short-term damage to cause long-term planet-wide systemic damage. We’ve lived for half a century under the potential specter of nuclear war and the life-ending catastrophe that could have been. Fast global travel allowed local outbreaks to quickly become the COVID-19 pandemic, costing millions of lives and billions of dollars while increasing political and social instability. Our rapid, technologically enabled changes to the atmosphere, compounded through feedback loops and tipping points, may make Earth much less hospitable for the coming centuries. Today, individual hacking decisions can have planet-wide effects. Sociobiologist Edward O. Wilson once said that the fundamental problem with humanity is that “we have Paleolithic emotions, medieval institutions, and godlike technology.”
Technology could easily get to the point where the effects of a successful attack could be existential. Think biotech, nanotech, global climate change, maybe someday cyberattack—everything that people like Nick Bostrom study. In these areas, like everywhere else in past and present society, the technologies of attack develop faster than the technologies of defending against attack. But suddenly, our inability to be proactive becomes fatal. As the noise due to technological power increases, we reach a threshold where a small group of people can irrecoverably destroy the species. The six-sigma guy can ruin it for everyone. And if they can, sooner or later they will. It’s possible that I have just explained the Fermi paradox.
CAPTCHA
[2022.12.05] This is an actual CAPTCHA I was shown when trying to log into PayPal.
As an actual human and not a bot, I had no idea how to answer. Is this a joke? (Seems not.) Is it a Magritte-like existential question? (It’s not a bicycle. It’s a drawing of a bicycle. Actually, it’s a photograph of a drawing of a bicycle. No, it’s really a computer image of a photograph of a drawing of a bicycle.) Am I overthinking this? (Definitely.) I stared at the screen, paralyzed, for way too long.
It’s probably the best CAPTCHA I have ever encountered; a computer would have just answered.
(In the end, I treated the drawing as a real bicycle and selected the appropriate squares…and it seemed to like that.)
CryWiper Data Wiper Targeting Russian Sites
[2022.12.06] Kaspersky is reporting on a data wiper masquerading as ransomware that is targeting local Russian government networks.
The Trojan corrupts any data that’s not vital for the functioning of the operating system. It doesn’t affect files with extensions .exe, .dll, .lnk, .sys or .msi, and ignores several system folders in the C:\Windows directory. The malware focuses on databases, archives, and user documents.
So far, our experts have seen only pinpoint attacks on targets in the Russian Federation. However, as usual, no one can guarantee that the same code won’t be used against other targets.
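As a small illustration of the targeting rule described above (a sketch based only on Kaspersky's description, not on the malware itself, and simplified to skip everything under C:\Windows), the selection logic amounts to an extension and path filter:

```python
# Sketch of the file-selection rule described in Kaspersky's write-up:
# skip files the OS needs to run, wipe everything else (databases,
# archives, user documents). Illustration only -- not CryWiper's code.

from pathlib import PureWindowsPath

SKIPPED_EXTENSIONS = {".exe", ".dll", ".lnk", ".sys", ".msi"}
# Simplification: the write-up says "several" system folders are ignored;
# here everything under C:\Windows is skipped.
SKIPPED_PREFIX = PureWindowsPath(r"C:\Windows")

def would_be_targeted(path_str: str) -> bool:
    """Return True if a file matches the targeting behavior as described."""
    path = PureWindowsPath(path_str)
    if path.suffix.lower() in SKIPPED_EXTENSIONS:
        return False
    if SKIPPED_PREFIX in path.parents:
        return False
    return True

if __name__ == "__main__":
    for p in (r"C:\Users\ivan\report.docx",
              r"C:\Windows\System32\kernel32.dll",
              r"D:\backups\accounts.sqlite"):
        print(p, "->", "wiped" if would_be_targeted(p) else "skipped")
```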
Nothing leading to an attribution.
News article.
Slashdot thread.
The Decoupling Principle
[2022.12.07] This is a really interesting paper that discusses what the authors call the Decoupling Principle:
The idea is simple, yet previously not clearly articulated: to ensure privacy, information should be divided architecturally and institutionally such that each entity has only the information they need to perform their relevant function. Architectural decoupling entails splitting functionality for different fundamental actions in a system, such as decoupling authentication (proving who is allowed to use the network) from connectivity (establishing session state for communicating). Institutional decoupling entails splitting what information remains between non-colluding entities, such as distinct companies or network operators, or between a user and network peers. This decoupling makes service providers individually breach-proof, as they each have little or no sensitive data that can be lost to hackers. Put simply, the Decoupling Principle suggests always separating who you are from what you do.
Lots of interesting details in the paper.
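Here is a minimal sketch of what architectural decoupling could look like in code. This is my own illustration, not the paper's protocol: an authentication service that knows who you are issues an opaque, single-use token, and a separate access service accepts the token without ever learning the identity behind it. (Real designs go further, e.g. with blind signatures, so the issuer cannot even link issuance to redemption.)

```python
# Minimal sketch of the Decoupling Principle (my illustration, not the
# paper's design): the auth service learns *who* you are, the access
# service learns *what* you do, and neither learns both.

import secrets

class AuthService:
    """Knows identities; issues unlinkable, single-use access tokens."""

    def __init__(self) -> None:
        self._valid_tokens: set[str] = set()

    def issue_token(self, username: str, password: str) -> str:
        if not self._check_credentials(username, password):
            raise PermissionError("bad credentials")
        token = secrets.token_hex(16)  # random; carries no identity
        self._valid_tokens.add(token)
        return token

    def redeem(self, token: str) -> bool:
        """Report validity only; nothing about who the token was issued to."""
        if token in self._valid_tokens:
            self._valid_tokens.remove(token)
            return True
        return False

    def _check_credentials(self, username: str, password: str) -> bool:
        return True  # stand-in for a real credential check

class AccessService:
    """Carries traffic; sees tokens and destinations, never usernames."""

    def __init__(self, auth: AuthService) -> None:
        self._auth = auth

    def connect(self, token: str, destination: str) -> str:
        if not self._auth.redeem(token):
            return "rejected"
        # Session state is keyed by the token, not by an identity.
        return f"session established to {destination}"

if __name__ == "__main__":
    auth = AuthService()
    net = AccessService(auth)
    t = auth.issue_token("alice", "hunter2")
    print(net.connect(t, "example.com"))   # session established to example.com
    print(net.connect(t, "example.org"))   # rejected (token already used)
```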
Leaked Signing Keys Are Being Used to Sign Malware
[2022.12.08] A bunch of Android OEM signing keys have been leaked or stolen, and they are actively being used to sign malware.
Łukasz Siewierski, a member of Google’s Android Security Team, has a post on the Android Partner Vulnerability Initiative (AVPI) issue tracker detailing leaked platform certificate keys that are actively being used to sign malware. The post is just a list of the keys, but running each one through APKMirror or Google’s VirusTotal site will put names to some of the compromised keys: Samsung, LG, and Mediatek are the heavy hitters on the list of leaked keys, along with some smaller OEMs like Revoview and Szroco, which makes Walmart’s Onn tablets.
This is a huge problem. The whole system of authentication rests on the assumption that signing keys are kept secret by the legitimate signers. Once that assumption is broken, all bets are off:
Samsung’s compromised key is used for everything: Samsung Pay, Bixby, Samsung Account, the phone app, and a million other things you can find on the 101 pages of results for that key. It would be possible to craft a malicious update for any one of these apps, and Android would be happy to install it overtop of the real app. Some of the updates are from today, indicating Samsung has still not changed the key.
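One practical response, sketched here as my own illustration rather than Google's tooling, is to compare an app's signing-certificate fingerprint against the published list of compromised certificates before trusting an update. The blocklist entry below is a placeholder; the authoritative fingerprints are in the AVPI issue tracker post.

```python
# Sketch: flag an app whose signing certificate matches a known-leaked key.
# The fingerprint below is a placeholder, not a real compromised hash.

import hashlib

# Hypothetical blocklist of SHA-256 fingerprints of leaked platform certs.
LEAKED_CERT_FINGERPRINTS = {
    "0000000000000000000000000000000000000000000000000000000000000000",
}

def cert_fingerprint(der_bytes: bytes) -> str:
    """SHA-256 fingerprint of a DER-encoded signing certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def is_signed_by_leaked_key(der_bytes: bytes) -> bool:
    return cert_fingerprint(der_bytes) in LEAKED_CERT_FINGERPRINTS

if __name__ == "__main__":
    # In practice you would first extract the DER certificate from the APK
    # (for example with apksigner) and pass its bytes in here.
    print(is_signed_by_leaked_key(b"dummy certificate bytes"))  # False
```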
Security Vulnerabilities in Eufy Cameras
[2022.12.09] Eufy cameras claim to be local only, but upload data to the cloud. The company is basically lying to reporters, despite being shown evidence to the contrary. The company’s behavior is so egregious that ReviewGeek is no longer recommending them.
This will be interesting to watch. If Eufy can ignore security researchers and the press without there being any repercussions in the market, others will follow suit. And we will lose public shaming as an incentive to improve security.
After further testing, we’re not seeing the VLC streams begin based solely on the camera detecting motion. We’re not sure if that’s a change since yesterday or something I got wrong in our initial report. It does appear that Eufy is making changes—it appears to have removed access to the method we were using to get the address of our streams, although an address we already obtained is still working.
Hacking Trespass Law
[2022.12.09] This article talks about public land in the US that is completely surrounded by private land, which in some cases makes it inaccessible to the public. But there’s a hack:
Some hunters have long believed, however, that the publicly owned parcels on Elk Mountain can be legally reached using a practice called corner-crossing.
Corner-crossing can be visualized in terms of a checkerboard. Ever since the Westward Expansion, much of the Western United States has been divided into alternating squares of public and private land. Corner-crossers, like checker pieces, literally step from one public square to another in diagonal fashion, avoiding trespassing charges. The practice is neither legal nor illegal. Most states discourage it, but none ban it.
It’s an interesting ambiguity in the law: does a checker trespass on white squares when it moves diagonally over black squares? But, of course, the legal battle isn’t really about that. It’s about the rights of property owners vs. the rights of those who wish to walk on this otherwise-inaccessible public land.
This particular hack will be adjudicated in court. State court, I think, which means the answer might be different in different states. It’s not an example I discuss in my new book, but it’s similar to many I do discuss. It’s the act of adjudicating hacks that allows systems to evolve.
Apple Is Finally Encrypting iCloud Backups
[2022.12.12] After way too many years, Apple is finally encrypting iCloud backups:
Based on a screenshot from Apple, these categories are covered when you flip on Advanced Data Protection: device backups, messages backups, iCloud Drive, Notes, Photos, Reminders, Safari bookmarks, Siri Shortcuts, Voice Memos, and Wallet Passes. Apple says the only “major” categories not covered by Advanced Data Protection are iCloud Mail, Contacts, and Calendar because “of the need to interoperate with the global email, contacts, and calendar systems,” according to its press release.
You can see the full list of data categories and what is protected under standard data protection, which is the default for your account, and Advanced Data Protection on Apple’s website.
With standard data protection, Apple holds the encryption keys for things that aren’t end-to-end encrypted, which means the company can help you recover that data if needed. Data that’s end-to-end encrypted can only be decrypted on “your trusted devices where you’re signed in with your Apple ID,” according to Apple, meaning that the company—or law enforcement or hackers—cannot access your data from Apple’s databases.
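The practical difference comes down to who holds the key. A rough conceptual sketch, my own and not Apple's protocol (it uses the third-party cryptography package's Fernet primitive as a stand-in): under end-to-end protection, the backup is encrypted with a key that exists only on the user's devices, so the server, and anyone who compromises or subpoenas it, stores only ciphertext.

```python
# Conceptual sketch (not Apple's protocol): with end-to-end encryption the
# service stores only ciphertext, because the key never leaves the devices.
# Requires the third-party "cryptography" package.

from cryptography.fernet import Fernet

class CloudServer:
    """Stores whatever blobs it is given; in E2E mode it never sees a key."""

    def __init__(self) -> None:
        self.blobs: dict[str, bytes] = {}

    def store(self, name: str, blob: bytes) -> None:
        self.blobs[name] = blob

# The key is generated and kept on the user's trusted devices.
device_key = Fernet.generate_key()
device = Fernet(device_key)

server = CloudServer()
server.store("backup", device.encrypt(b"messages, photos, notes"))

# The server holds only ciphertext...
print(server.blobs["backup"][:20], b"...")

# ...and only a device holding the key can recover the plaintext.
print(device.decrypt(server.blobs["backup"]))
```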
Note that this system doesn’t have the backdoor that was in Apple’s previous proposal, the one put there under the guise of detecting CSAM.
Apple says that it will roll out worldwide by the end of next year. I wonder how China will react to this.
Obligatory ChatGPT Post
[2022.12.13] Seems like absolutely everyone everywhere is playing with ChatGPT.
So I did, too….
Write an essay in the style of Bruce Schneier on how ChatGPT will affect cybersecurity.
As with any new technology, the development and deployment of ChatGPT is likely to have a significant impact on the field of cybersecurity. In many ways, ChatGPT and other AI technologies hold great promise for improving the ability of organizations and individuals to defend against cyber threats. At the same time, however, the use of these technologies also raises important questions and concerns about the potential risks and challenges they may pose.
One of the key ways in which ChatGPT is likely to affect cybersecurity is by enabling organizations to more quickly and accurately identify potential security threats. With its ability to analyze vast amounts of data and identify patterns and trends that might be difficult for humans to spot, ChatGPT could be a valuable tool for detecting and responding to cyberattacks. This could be especially useful in the case of complex, coordinated attacks that might otherwise be difficult to identify and defend against.
Another potential benefit of ChatGPT in the realm of cybersecurity is its ability to improve communication and coordination among different organizations and agencies. By enabling these groups to more easily share information and work together to defend against cyber threats, ChatGPT could help to make the overall cybersecurity landscape more effective and efficient. This could be especially important in the case of large-scale attacks that require a coordinated response from multiple organizations.
Of course, as with any new technology, there are also potential risks and challenges associated with the use of ChatGPT in the field of cybersecurity. One key concern is the potential for ChatGPT to be used by malicious actors to conduct cyberattacks. By automating the process of planning and executing attacks, ChatGPT could make it easier for attackers to carry out sophisticated, coordinated attacks at a much faster pace than would be possible without AI. This could potentially make it more difficult for organizations and individuals to defend against such attacks.
Another potential risk is the possibility that ChatGPT could be used to create fake news or propaganda, which could be used to spread misinformation or confusion. In the context of cybersecurity, the spread of false information could potentially lead to confusion and panic, making it more difficult for organizations and individuals to respond effectively to cyber threats. Additionally, the use of ChatGPT to create fake news or propaganda could also undermine trust in information and communications systems, potentially undermining the overall security of these systems.
Overall, the development and deployment of ChatGPT is likely to have a significant impact on the field of cybersecurity. While ChatGPT and other AI technologies hold great promise for improving the ability of organizations and individuals to defend against cyber threats, it is important to carefully consider the potential risks and challenges they may pose, and to take appropriate measures to mitigate these risks. This may include implementing strict security protocols and safeguards to prevent unauthorized access to ChatGPT and other AI systems, as well as ongoing monitoring and evaluation to ensure that these technologies are being used in a safe and responsible manner.
Not bad. I don’t think I can retire, but I probably can go on vacation for a few days. And, of course, it’s going to get better…a lot better…and probably sooner than we expect. This is a big deal.
Hacking Boston’s CharlieCard
[2022.12.14] Interesting discussion of vulnerabilities and exploits against Boston’s CharlieCard.
Recreating Democracy
[2022.12.14] Last week, I hosted a two-day workshop on recreating democracy.
The idea was to bring together people from a variety of disciplines who are all thinking about different aspects of democracy, less from a “what we need to do today” perspective and more from a blue-sky future perspective. My remit to the participants was this:
The idea is to start from scratch, to pretend we’re forming a new country and don’t have any precedent to deal with. And that we don’t have any unique interests to perturb our thinking. The modern representative democracy was the best form of government that mid-eighteenth-century technology could invent. The twenty-first century is a very different place technically, scientifically, and philosophically. What could democracy look like if it were reinvented today? Would it even be democracy—what comes after democracy?
Some questions to think about:
- Representative democracies were built under the assumption that travel and communications were difficult. Does it still make sense to organize our representative units by geography? Or to send representatives far away to create laws in our name? Is there a better way for people to choose collective representatives?
- Indeed, the very idea of representative government is due to technological limitations. If an AI system could find the optimal solution for balancing every voter’s preferences, would it still make sense to have representatives—or should we vote for ideas and goals instead?
- With today’s technology, we can vote anywhere and any time. How should we organize the temporal pattern of voting—and of other forms of participation?
- Starting from scratch, what is today’s ideal government structure? Does it make sense to have a singular leader “in charge” of everything? How should we constrain power—is there something better than the legislative/judicial/executive set of checks and balances?
- The size of contemporary political units ranges from a few people in a room to vast nation-states and alliances. Within one country, what might the smaller units be—and how do they relate to one another?
- Who has a voice in the government? What does “citizen” mean? What about children? Animals? Future people (and animals)? Corporations? The land?
- And much more: What about the justice system? Is the twelfth-century jury form still relevant? How do we define fairness? Limit financial and military power? Keep our system robust to psychological manipulation?
My perspective, of course, is security. I want to create a system that is resilient against hacking: one that can evolve as both technologies and threats evolve.
The format was one that I have used before. Forty-eight people meet over two days. There are four ninety-minute panels per day, with six people on each. Everyone speaks for ten minutes, and the rest of the time is devoted to questions and comments. Ten minutes means that no one gets bogged down in jargon or details. Long breaks between sessions and evening dinners allow people to talk more informally. The result is a very dense, idea-rich environment that I find extremely valuable.
It was an amazing event. Everyone participated. Everyone was interesting. (Details of the event—emerging themes, notes from the speakers—are in the comments.) It’s a week later and I am still buzzing with ideas. I hope this is only the first of an ongoing series of similar workshops.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of over one dozen books—including his latest, We Have Root—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet & Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an Advisory Board Member of the Electronic Privacy Information Center and VerifiedVoting.org. He is the Chief of Security Architecture at Inrupt, Inc.
Copyright © 2022 by Bruce Schneier.