April 15, 2019
by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
In this issue:
- Critical Flaw in Swiss Internet Voting System
- Upcoming Speaking Engagements
- I Was Cited in a Court Decision
- CAs Reissue Over One Million Weak Certificates
- Triton
- An Argument that Cybersecurity Is Basically Okay
- Zipcar Disruption
- First Look Media Shutting Down Access to Snowden NSA Archives
- Enigma, Typex, and Bombe Simulators
- Mail Fishing
- Personal Data Left on Used Laptops
- Programmers Who Don’t Understand Security Are Poor at Security
- Malware Installed in Asus Computers through Hacked Update Process
- NSA-Inspired Vulnerability Found in Huawei Laptops
- Recovering Smartphone Typing from Microphone Sounds
- Hacking Instagram to Get Free Meals in Exchange for Positive Reviews
- How Political Campaigns Use Personal Data
- Adversarial Machine Learning against Tesla’s Autopilot
- Former Mozilla CTO Harassed at the US Border
- Unhackable Cryptography?
- Ghidra: NSA’s Reverse-Engineering Tool
- Hey Secret Service: Don’t Plug Suspect USB Sticks into Random Computers
- How the Anonymous Artist Banksy Authenticates His or Her Work
- TajMahal Spyware
- New Version of Flame Malware Discovered
- Maliciously Tampering with Medical Imagery
Critical Flaw in Swiss Internet Voting System
[2019.03.15] Researchers have found a critical flaw in the Swiss Internet voting system. I was going to write an essay about how this demonstrates that Internet voting is a stupid idea and should never be attempted—and that this system in particular should never be deployed, even if the found flaw is fixed—but Cory Doctorow beat me to it:
The belief that companies can be trusted with this power defies all logic, but it persists. Someone found Swiss Post’s embrace of the idea too odious to bear, and they leaked the source code that Swiss Post had shared under its nondisclosure terms, and then an international team of some of the world’s top security experts (including some of our favorites, like Matthew Green) set about analyzing that code, and (as every security expert who doesn’t work for an e-voting company has predicted since the beginning of time), they found an incredibly powerful bug that would allow a single untrusted party at Swiss Post to undetectably alter the election results.
And, as everyone who’s ever advocated for the right of security researchers to speak in public without permission from the companies whose products they were assessing has predicted since the beginning of time, Swiss Post and Scytl downplayed the importance of this objectively very, very, very important bug. Swiss Post’s position is that since the bug only allows elections to be stolen by Swiss Post employees, it’s not a big deal, because Swiss Post employees wouldn’t steal an election.
But when Swiss Post agreed to run the election, they promised an e-voting system based on “zero knowledge” proofs that would allow voters to trust the outcome of the election without having to trust Swiss Post. Swiss Post is now moving the goalposts, saying that it wouldn’t be such a big deal if you had to trust Swiss Post implicitly to trust the outcome of the election.
You might be thinking, “Well, what is the big deal? If you don’t trust the people administering an election, you can’t trust the election’s outcome, right?” Not really: we design election systems so that multiple, uncoordinated people all act as checks and balances on each other. To suborn a well-run election takes massive coordination at many polling- and counting-places, as well as independent scrutineers from different political parties, as well as outside observers, etc.
Read the whole thing. It’s excellent.
Upcoming Speaking Engagements
[2019.03.15] This is a current list of where and when I am scheduled to speak:
I’m teaching a live online class called “Spotlight on Cloud: The Future of Internet Security with Bruce Schneier” on O’Reilly’s learning platform, Thursday, April 4, at 10:00 AM PT/1:00 PM ET.
The list is maintained on this page.
I Was Cited in a Court Decision
[2019.03.15] An article I co-wrote—my first law journal article—was cited by the Massachusetts Supreme Judicial Court—the state supreme court—in a case on compelled decryption.
Here’s the first, in footnote 1:
We understand the word “password” to be synonymous with other terms that cell phone users may be familiar with, such as Personal Identification Number or “passcode.” Each term refers to the personalized combination of letters or digits that, when manually entered by the user, “unlocks” a cell phone. For simplicity, we use “password” throughout. See generally, Kerr & Schneier, Encryption Workarounds, 106 Geo. L.J. 989, 990, 994, 998 (2018).
And here’s the second, in footnote 5:
We recognize that ordinary cell phone users are likely unfamiliar with the complexities of encryption technology. For instance, although entering a password “unlocks” a cell phone, the password itself is not the “encryption key” that decrypts the cell phone’s contents. See Kerr & Schneier, supra at 995. Rather, “entering the [password] decrypts the [encryption] key, enabling the key to be processed and unlocking the phone. This two-stage process is invisible to the casual user.” Id. Because the technical details of encryption technology do not play a role in our analysis, they are not worth belaboring. Accordingly, we treat the entry of a password as effectively decrypting the contents of a cell phone. For a more detailed discussion of encryption technology, see generally Kerr & Schneier, supra.
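The two-stage design the court describes (the password unwraps a key, and that key decrypts the contents) is easy to sketch. The illustration below uses PBKDF2 from Python's standard library, with a simple XOR standing in for the hardware-backed key wrapping a real phone would use; all names and parameters are illustrative, not any vendor's actual scheme:

```python
import hashlib
import secrets

def derive_kek(password: str, salt: bytes) -> bytes:
    # Stretch the password into a 32-byte key-encryption key (KEK).
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Setup: the device generates a random data-encryption key and stores
# it only in wrapped (encrypted) form. Real phones use hardware-backed
# AES key wrapping; XOR with the KEK stands in for that here.
salt = secrets.token_bytes(16)
data_key = secrets.token_bytes(32)
wrapped_key = xor_bytes(data_key, derive_kek("correct horse", salt))

# Unlock: entering the password re-derives the KEK and unwraps the
# data key; the password itself never touches the encrypted contents.
recovered = xor_bytes(wrapped_key, derive_kek("correct horse", salt))
assert recovered == data_key
```

This is the "invisible two-stage process" in miniature: changing the password only requires re-wrapping the data key, not re-encrypting the phone.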
CAs Reissue Over One Million Weak Certificates
[2019.03.18] Turns out that the software a bunch of CAs used to generate public-key certificates was flawed: they created random serial numbers with only 63 bits instead of the required 64. That may not seem like a big deal to the layman, but that one-bit change means that the serial numbers have only half the required entropy. This really isn’t a security problem; the serial numbers are there to protect against attacks that involve weak hash functions, and we don’t allow those weak hash functions anymore. Still, it’s a good thing that the CAs are reissuing the certificates. The point of a standard is that it’s to be followed.
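The arithmetic of the flaw fits in a few lines. This is a sketch of the behavior as publicly reported, not the actual CA software:

```python
import secrets

def weak_serial() -> int:
    # The flawed behavior: draw 64 random bits, then clear the top bit
    # so the DER-encoded integer is positive. That leaves only 63 bits
    # of entropy, half the number of equally likely values.
    return secrets.randbits(64) & ~(1 << 63)

def compliant_serial() -> int:
    # One simple fix: keep all 64 random bits and set a fixed higher
    # bit, yielding a positive 65-bit serial with full 64-bit entropy.
    return secrets.randbits(64) | (1 << 64)
```

The weak version can never produce a serial at or above 2^63, which is exactly the "half the required entropy" problem.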
Triton
[2019.03.19] Good article on the Triton malware that targets industrial control systems.
An Argument that Cybersecurity Is Basically Okay
[2019.03.20] Andrew Odlyzko’s new essay is worth reading—”Cybersecurity is not very important“:
Abstract: There is a rising tide of security breaches. There is an even faster rising tide of hysteria over the ostensible reason for these breaches, namely the deficient state of our information infrastructure. Yet the world is doing remarkably well overall, and has not suffered any of the oft-threatened giant digital catastrophes. This continuing general progress of society suggests that cyber security is not very important. Adaptations to cyberspace of techniques that worked to protect the traditional physical world have been the main means of mitigating the problems that occurred. This “chewing gum and baling wire” approach is likely to continue to be the basic method of handling problems that arise, and to provide adequate levels of security.
I am reminded of these two essays. And, as I said in the blog post about those two essays:
This is true, and is something I worry will change in a world of physically capable computers. Automation, autonomy, and physical agency will make computer security a matter of life and death, and not just a matter of data.
Zipcar Disruption
[2019.03.20] This isn’t a security story, but it easily could have been. Last Saturday, Zipcar had a system outage: “an outage experienced by a third party telecommunications vendor disrupted connections between the company’s vehicles and its reservation software.”
That didn’t just mean people couldn’t get cars they reserved. Sometimes it meant they couldn’t get the cars they were already driving to work:
Andrew Jones of Roxbury was stuck on hold with customer service for at least a half-hour while he and his wife waited inside a Zipcar that would not turn back on after they stopped to fill it up with gas.
“We were just waiting and waiting for the call back,” he said.
Customers in other states, including New York, California, and Oregon, reported a similar problem. One user who tweeted about issues with a Zipcar vehicle listed his location as Toronto.
Some, like Jones, stayed with the inoperative cars. Others, including Tina Penman in Portland, Ore., and Heather Reid in Cambridge, abandoned their Zipcar. Penman took an Uber home, while Reid walked from the grocery store back to her apartment.
This is a reliability issue that turns into a safety issue. Systems that touch the direct physical world like this need better fail-safe defaults.
First Look Media Shutting Down Access to Snowden NSA Archives
[2019.03.21] The Daily Beast is reporting that First Look Media—home of The Intercept and Glenn Greenwald—is shutting down access to the Snowden archives.
The Intercept was the home for Greenwald’s subset of Snowden’s NSA documents since 2014, after he parted ways with the Guardian the year before. I don’t know the details of how the archive was stored, but it was offline and well secured—and it was available to journalists for research purposes. Many stories were published based on those archives over the years, albeit fewer in recent years.
The article doesn’t say what “shutting down access” means, but my guess is that it means that First Look Media will no longer make the archive available to outside journalists, and probably not to staff journalists, either. Reading between the lines, I think they will delete what they have.
This doesn’t mean that we’re done with the documents. Glenn Greenwald tweeted:
Both Laura & I have full copies of the archives, as do others. The Intercept has given full access to multiple media orgs, reporters & researchers. I’ve been looking for the right partner—an academic institution or research facility—that has the funds to robustly publish.
I’m sure there are still stories in those NSA documents, but with many of them a decade or more old, they are increasingly history and decreasingly current events. Every capability discussed in the documents needs to be read with a “and then they had ten years to improve this” mentality.
Eventually it’ll all become public, but not before it is 100% history and 0% current events.
Enigma, Typex, and Bombe Simulators
[2019.03.22] GCHQ has put simulators for the Enigma, Typex, and Bombe on the Internet.
News article.
Mail Fishing
[2019.03.25] Not email, paper mail:
Thieves, often at night, use string to lower glue-covered rodent traps or bottles coated with an adhesive down the chute of a sidewalk mailbox. This bait attaches to the envelopes inside, and the fish in this case—mail containing gift cards, money orders or checks, which can be altered with chemicals and cashed—are reeled out slowly.
In response, the US Post Office is introducing a more secure mailbox:
The mail slots are only large enough for letters, meaning sending even small packages will require a trip to the post office. The opening is also equipped with a mechanism that grabs at a letter once inserted, making it difficult to retract.
The crime has become more common in the past few years.
Personal Data Left on Used Laptops
[2019.03.26] A recent experiment found all sorts of personal data left on used laptops and smartphones.
This should come as no surprise. Simson Garfinkel performed the same experiment in 2003, with similar results.
Programmers Who Don’t Understand Security Are Poor at Security
[2019.03.27] A university study confirmed the obvious: if you pay a random bunch of freelance programmers a small amount of money to write security software, they’re not going to do a very good job at it.
In an experiment that involved 43 programmers hired via the Freelancer.com platform, University of Bonn academics have discovered that developers tend to take the easy way out and write code that stores user passwords in an unsafe manner.
For their study, the German academics asked a group of 260 Java programmers to write a user registration system for a fake social network.
Of the 260 developers, only 43 took up the job, which involved using technologies such as Java, JSF, Hibernate, and PostgreSQL to create the user registration component.
Of the 43, academics paid half of the group with €100, and the other half with €200, to determine if higher pay made a difference in the implementation of password security features.
Further, they divided the developer group a second time, prompting half of the developers to store passwords in a secure manner, and leaving the other half to store passwords in their preferred method—hence forming four quarters of developers paid €100 and prompted to use a secure password storage method (P100), developers paid €200 and prompted to use a secure password storage method (P200), devs paid €100 but not prompted for password security (N100), and those paid €200 but not prompted for password security (N200).
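The "secure manner" the prompt asked for boils down to a per-user salt plus a deliberately slow, memory-hard hash. A minimal sketch of what a correct submission might look like, using only Python's standard library (the function names and parameters are illustrative, not the study's actual code):

```python
import hashlib
import hmac
import secrets

def hash_password(password: str) -> tuple[bytes, bytes]:
    # Per-user random salt defeats precomputed rainbow tables; scrypt's
    # cost parameters make brute-force guessing expensive.
    salt = secrets.token_bytes(16)
    digest = hashlib.scrypt(password.encode(), salt=salt,
                            n=2**14, r=8, p=1, maxmem=2**26)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt,
                               n=2**14, r=8, p=1, maxmem=2**26)
    # Constant-time comparison avoids leaking match length via timing.
    return hmac.compare_digest(candidate, digest)
```

Storing anything else, such as plaintext, unsalted MD5, or Base64, is the "unsafe manner" the study found.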
I don’t know why anyone would expect this group of people to implement a good secure password system. Look at how they were hired. Look at the scope of the project. Look at what they were paid. I’m sure they grabbed the first thing they found on GitHub that did the job.
I’m not very impressed with the study or its conclusions.
Malware Installed in Asus Computers through Hacked Update Process
[2019.03.28] Kaspersky Labs is reporting on a new supply chain attack they call “Shadowhammer.”
In January 2019, we discovered a sophisticated supply chain attack involving the ASUS Live Update Utility. The attack took place between June and November 2018 and according to our telemetry, it affected a large number of users.
[…]
The goal of the attack was to surgically target an unknown pool of users, which were identified by their network adapters’ MAC addresses. To achieve this, the attackers had hardcoded a list of MAC addresses in the trojanized samples and this list was used to identify the actual intended targets of this massive operation. We were able to extract more than 600 unique MAC addresses from over 200 samples used in this attack. Of course, there might be other samples out there with different MAC addresses in their list.
We believe this to be a very sophisticated supply chain attack, which matches or even surpasses the Shadowpad and the CCleaner incidents in complexity and techniques. The reason that it stayed undetected for so long is partly due to the fact that the trojanized updaters were signed with legitimate certificates (eg: “ASUSTeK Computer Inc.”). The malicious updaters were hosted on the official liveupdate01s.asus[.]com and liveupdate01.asus[.]com ASUS update servers.
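Kaspersky reported that the samples identified their targets by hashed MAC address. A minimal sketch of that gating logic, with a made-up target list; the hash choice and other details here are assumptions based on public reporting, not extracted malware code:

```python
import hashlib

# Hypothetical entry standing in for the hardcoded list; the real
# samples reportedly embedded hashes of roughly 600 target MACs.
TARGET_MAC_HASHES = {
    hashlib.md5(b"00:1a:2b:3c:4d:5e").hexdigest(),
}

def is_target(mac: str) -> bool:
    # The backdoored updater stayed dormant on every machine whose MAC
    # address did not hash into the embedded list, which is why it
    # could ship to huge numbers of users while "surgically" targeting few.
    return hashlib.md5(mac.lower().encode()).hexdigest() in TARGET_MAC_HASHES
```

Hashing the list rather than storing raw MACs also made it harder for analysts to enumerate who the targets were.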
The sophistication of the attack leads to the speculation that a nation-state—and one of the cyber powers—is responsible.
As I have previously written, supply chain security is “an incredibly complex problem.” These attacks co-opt the very mechanisms we need to trust for our security. And the international nature of our industry results in an array of vulnerabilities that are very hard to secure.
Kim Zetter has a really good article on this. Check if your computer is infected here, or use this diagnostic tool from Asus.
Another news article.
NSA-Inspired Vulnerability Found in Huawei Laptops
[2019.03.29] This is an interesting story of a serious vulnerability in a Huawei driver that Microsoft found. The vulnerability is similar in style to the NSA’s DOUBLEPULSAR that was leaked by the Shadow Brokers—believed to be the Russian government—and it’s obvious that this attack copied that technique.
What is less clear is whether the vulnerability—which has been fixed—was put into the Huawei driver accidentally or on purpose.
Recovering Smartphone Typing from Microphone Sounds
[2019.04.01] Yet another side-channel attack on smartphones: “Hearing your touch: A new acoustic side channel on smartphones,” by Ilia Shumailov, Laurent Simon, Jeff Yan, and Ross Anderson.
Abstract: We present the first acoustic side-channel attack that recovers what users type on the virtual keyboard of their touch-screen smartphone or tablet. When a user taps the screen with a finger, the tap generates a sound wave that propagates on the screen surface and in the air. We found the device’s microphone(s) can recover this wave and “hear” the finger’s touch, and the wave’s distortions are characteristic of the tap’s location on the screen. Hence, by recording audio through the built-in microphone(s), a malicious app can infer text as the user enters it on their device. We evaluate the effectiveness of the attack with 45 participants in a real-world environment on an Android tablet and an Android smartphone. For the tablet, we recover 61% of 200 4-digit PIN-codes within 20 attempts, even if the model is not trained with the victim’s data. For the smartphone, we recover 9 words of size 7-13 letters with 50 attempts in a common side-channel attack benchmark. Our results suggest that it is not always sufficient to rely on isolation mechanisms such as TrustZone to protect user input. We propose and discuss hardware, operating-system and application-level mechanisms to block this attack more effectively. Mobile devices may need a richer capability model, a more user-friendly notification system for sensor usage and a more thorough evaluation of the information leaked by the underlying hardware.
Blog post.
Hacking Instagram to Get Free Meals in Exchange for Positive Reviews
[2019.04.02] This is a fascinating hack:
In today’s digital age, a large Instagram audience is considered a valuable currency. I had also heard through the grapevine that I could monetize a large following—or in my desired case—use it to have my meals paid for. So I did just that.
I created an Instagram page that showcased pictures of New York City’s skylines, iconic spots, elegant skyscrapers—you name it. The page has amassed a following of over 25,000 users in the NYC area and it’s still rapidly growing.
I reach out to restaurants in the area either via Instagram’s direct messaging or email and offer to post a positive review in return for a free entree or at least a discount. Almost every restaurant I’ve messaged came back at me with a compensated meal or a gift card. Most places have an allocated marketing budget for these types of things so they were happy to offer me a free dining experience in exchange for a promotion. I’ve ended up giving some of these meals away to my friends and family because at times I had too many queued up to use myself.
The beauty of this all is that I automated the whole thing. And I mean 100% of it. I wrote code that finds these pictures or videos, makes a caption, adds hashtags, credits where the picture or video comes from, weeds out bad or spammy posts, posts them, follows and unfollows users, likes pictures, monitors my inbox, and most importantly—both direct messages and emails restaurants about a potential promotion. Since its inception, I haven’t even really logged into the account. I spend zero time on it. It’s essentially a robot that operates like a human, but the average viewer can’t tell the difference. And as the programmer, I get to sit back and admire its (and my) work.
So much going on in this project.
How Political Campaigns Use Personal Data
[2019.04.03] Really interesting report from Tactical Tech.
Data-driven technologies are an inevitable feature of modern political campaigning. Some argue that they are a welcome addition to politics as normal and a necessary and modern approach to democratic processes; others say that they are corrosive and diminish trust in already flawed political systems. The use of these technologies in political campaigning is not going away; in fact, we can only expect their sophistication and prevalence to grow. For this reason, the techniques and methods need to be reviewed outside the dichotomy of ‘good’ or ‘bad’ and beyond the headlines of ‘disinformation campaigns’.
All the data-driven methods presented in this guide would not exist without the commercial digital marketing and advertising industry. From analysing behavioural data to A/B testing and from geotargeting to psychometric profiling, political parties are using the same techniques to sell political candidates to voters that companies use to sell shoes to consumers. The question is, is that appropriate? And what impact does it have not only on individual voters, who may or may not be persuaded, but on the political environment as a whole?
The practice of political strategists selling candidates as brands is not new. Vance Packard wrote about the ‘depth probing’ techniques of ‘political persuaders’ as early as 1957. In his book, ‘The Hidden Persuaders’, Packard described political strategies designed to sell candidates to voters ‘like toothpaste’, and how public relations directors at the time boasted that ‘scientific methods take the guesswork out of politics’. In this sense, what we have now is a logical progression of the digitisation of marketing techniques and political persuasion techniques.
Adversarial Machine Learning against Tesla’s Autopilot
[2019.04.04] Researchers have been able to fool Tesla’s autopilot in a variety of ways, including convincing it to drive into oncoming traffic. It requires the placement of stickers on the road.
Abstract: Keen Security Lab has maintained the security research work on Tesla vehicle and shared our research results on Black Hat USA 2017 and 2018 in a row. Based on the ROOT privilege of the APE (Tesla Autopilot ECU, software version 18.6.1), we did some further interesting research work on this module. We analyzed the CAN messaging functions of APE, and successfully got remote control of the steering system in a contact-less way. We used an improved optimization algorithm to generate adversarial examples of the features (autowipers and lane recognition) which make decisions purely based on camera data, and successfully achieved the adversarial example attack in the physical world. In addition, we also found a potential high-risk design weakness of the lane recognition when the vehicle is in Autosteer mode. The whole article is divided into four parts: first a brief introduction of Autopilot, after that we will introduce how to send control commands from APE to control the steering system when the car is driving. In the last two sections, we will introduce the implementation details of the autowipers and lane recognition features, as well as our adversarial example attacking methods in the physical world. In our research, we believe that we made three creative contributions:
- We proved that we can remotely gain the root privilege of APE and control the steering system.
- We proved that we can disturb the autowipers function by using adversarial examples in the physical world.
- We proved that we can mislead the Tesla car into the reverse lane with minor changes on the road.
You can see the stickers in this photo. They’re unobtrusive.
This is machine learning’s big problem, and I think solving it is a lot harder than many believe.
Former Mozilla CTO Harassed at the US Border
[2019.04.04] This is a pretty awful story of how Andreas Gal, former Mozilla CTO and US citizen, was detained and threatened at the US border. CBP agents demanded that he unlock his phone and computer.
Know your rights when you enter the US. The EFF publishes a handy guide. And if you want to encrypt your computer so that you are unable to unlock it on demand, here’s my guide. Remember not to lie to a customs officer; that’s a crime all by itself.
Unhackable Cryptography?
[2019.04.05] A recent article overhyped the release of EverCrypt, a cryptography library created using formal methods to prove security against specific attacks.
The Quanta magazine article sets off a series of “snake-oil” alarm bells. The project’s GitHub README is more measured and accurate, and illustrates what a cool project this really is. But it’s not “hacker-proof cryptographic code.”
Ghidra: NSA’s Reverse-Engineering Tool
[2019.04.08] Last month, the NSA released Ghidra, a software reverse-engineering tool. Early reactions are uniformly positive.
Hey Secret Service: Don’t Plug Suspect USB Sticks into Random Computers
[2019.04.09] I just noticed this bit from the incredibly weird story of the Chinese woman arrested at Mar-a-Lago:
Secret Service agent Samuel Ivanovich, who interviewed Zhang on the day of her arrest, testified at the hearing. He stated that when another agent put Zhang’s thumb drive into his computer, it immediately began to install files, a “very out-of-the-ordinary” event that he had never seen happen before during this kind of analysis. The agent had to immediately stop the analysis to halt any further corruption of his computer, Ivanovich testified. The analysis is ongoing but still inconclusive, he said.
This is what passes for forensics at the Secret Service? I expect better.
EDITED TO ADD (4/9): Ars Technica has more detail.
How the Anonymous Artist Banksy Authenticates His or Her Work
[2019.04.10] Interesting scheme:
It all starts off with a fairly bog standard gallery style certificate. Details of the work, the authenticating agency, a bit of embossing and a large impressive signature at the bottom. Exactly the sort of things that can be easily copied by someone on a mission to create the perfect fake.
That torn-in-half banknote though? Never mind signatures, embossing or wax seals. The Di Faced Tenner is doing all the authentication heavy lifting here.
The tear is what uniquely separates the private key, the half of the note kept secret under lock and key at Pest Control, with the public key. The public key is the half of the note attached to the authentication certificate which gets passed on with the print, and allows its authenticity to be easily verified.
We have no idea what has been written on Pest Control’s private half of the note. Which means it can’t be easily recreated, and that empowers Pest Control to keep the authoritative list of who currently owns each authenticated Banksy work.
TajMahal Spyware
[2019.04.11] Kaspersky has released details about a sophisticated nation-state spyware it calls TajMahal:
The TajMahal framework’s 80 modules, Shulmin says, comprise not only the typical keylogging and screengrabbing features of spyware, but also never-before-seen and obscure tricks. It can intercept documents in a printer queue, and keep track of “files of interest,” automatically stealing them if a USB drive is inserted into the infected machine. And that unique spyware toolkit, Kaspersky says, bears none of the fingerprints of any known nation-state hacker group.
It was found on the servers of an “embassy of a Central Asian country.” No speculation on who wrote and controls it.
More details.
New Version of Flame Malware Discovered
[2019.04.12] Flame was discovered in 2012, linked to Stuxnet, and believed to be American in origin. It has recently been linked to more modern malware through new analysis tools that find linkages between different software.
Seems that Flame did not disappear after it was discovered, as was previously thought. (Its controllers used a kill switch to disable and erase it.) It was rewritten and reintroduced.
Note that the article claims that Flame was believed to be Israeli in origin. That’s wrong; most people who have an opinion believe it is from the NSA.
Maliciously Tampering with Medical Imagery
[2019.04.12] In what I am sure is only a first in many similar demonstrations, researchers are able to add or remove cancer signs from CT scans. The results easily fool radiologists.
I don’t think the medical device industry has thought at all about data integrity and authentication issues. In a world where sensor data of all kinds is undetectably manipulatable, they’re going to have to start.
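By way of illustration, even a simple message-authentication tag computed at the scanner would make this class of tampering detectable. Here is a sketch using Python's standard library; the shared device key and function names are hypothetical, and a real deployment would more likely attach per-device asymmetric signatures to the DICOM object:

```python
import hashlib
import hmac
import secrets

# Hypothetical symmetric key provisioned at manufacture and shared
# with the hospital archive (illustration only).
DEVICE_KEY = secrets.token_bytes(32)

def sign_scan(pixels: bytes) -> bytes:
    # The scanner tags each image at acquisition time.
    return hmac.new(DEVICE_KEY, pixels, hashlib.sha256).digest()

def verify_scan(pixels: bytes, tag: bytes) -> bool:
    # Any later modification of the pixels (adding or removing a
    # tumor, say) invalidates the tag.
    expected = hmac.new(DEVICE_KEY, pixels, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)
```

The hard part isn't the cryptography; it's key management and retrofitting verification into workflows that currently trust every image implicitly.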
Research paper. Slashdot thread.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books—including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.
Copyright © 2019 by Bruce Schneier.