July 15, 2018
by Bruce Schneier
CTO, IBM Resilient
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- Important: Crypto-Gram Has Moved to MailChimp
- Thomas Dullien on Complexity and Security
- Ridiculously Insecure Smart Lock
- Are Free Societies at a Disadvantage in National Cybersecurity?
- Perverse Vulnerability from Interaction between 2-Factor Authentication and iOS AutoFill
- Algeria Shut Down the Internet to Prevent Students from Cheating on Exams
- Domain Name Stealing at Gunpoint
- The Effects of Iran’s Telegram Ban
- Secure Speculative Execution
- Bypassing Passcodes in iOS
- IEEE Statement on Strong Encryption vs. Backdoors
- Manipulative Social Media Practices
- Conservation of Threat
- Traffic Analysis of the LTE Mobile Standard
- California Passes New Privacy Law
- Beating Facial Recognition Software with Face Makeup
- The NSA’s Domestic Surveillance Centers
- PROPagate Code Injection Seen in the Wild
- Recovering Keyboard Inputs through Thermal Imaging
- Department of Commerce Report on the Botnet Threat
- WPA3
- Gas Pump Hack
- Schneier News
tl;dr: If you’re seeing Crypto-Gram for the first time in a while, it’s because I’ve changed e-mail providers. If you want to unsubscribe, click here.
Last month, I explained why I had to move Crypto-Gram to a new host, and why I chose MailChimp. Part of the reason is that MailChimp is allowing me to completely disable tracking. So there are no web bugs that track when you open Crypto-Gram, and no link tracking when you click on something.
If this is the first time you’ve seen Crypto-Gram in a while, it’s because your e-mail provider has been blocking the newsletter. Or because your spam filter has been misclassifying it. You’re on the mailing list because you subscribed some time ago, and the fact that you’re reading this now demonstrates that MailChimp is solving these problems.
Another change is that I am now sending out Crypto-Gram in HTML instead of plain text. It’s very simple, plain HTML with minimal formatting and no images. But this change allows links to appear in their natural place within the text, instead of being dumped into a long ugly list of URLs at the end. I think that’s a win. Expect some minor tweaks as I fine-tune the design.
Thank you for your understanding, and thank you to MailChimp for working with me to turn tracking off.
For many years, I have said that complexity is the worst enemy of security. At CyCon earlier this month, Thomas Dullien gave an excellent talk on the subject with far more detail than I’ve ever provided. Video. Slides.
Tapplock sells an “unbreakable” Internet-connected lock that you can open with your fingerprint. It turns out that:
- The lock broadcasts its Bluetooth MAC address in the clear, and you can calculate the unlock key from it.
- Any Tapplock account can unlock every lock.
- You can open the lock with a screwdriver.
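A rough sketch of why the first flaw is fatal: the unlock key was reportedly derived from an MD5 hash of the very MAC address the lock broadcasts, so anyone within Bluetooth range can compute it. The derivation below is illustrative; the slice offsets are assumptions, not the ones from the published write-up.

```python
import hashlib

# Toy reconstruction of the reported flaw: the unlock key and serial
# number are derived from the MD5 hash of the BLE MAC address that the
# lock broadcasts to everyone in range. Slice offsets are illustrative.
def derive_credentials(mac: str):
    digest = hashlib.md5(mac.upper().encode()).hexdigest().upper()
    key = digest[0:8]       # assumed position of the unlock key
    serial = digest[16:24]  # assumed position of the serial number
    return key, serial

# Anyone who can hear the broadcast can compute the same values:
key, serial = derive_credentials("AA:BB:CC:DD:EE:FF")
print(key, serial)
```

The point is not the exact offsets but that the key is a pure function of a value the lock announces to everyone nearby.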
Regarding the third flaw, the manufacturer has responded that “…the lock is invincible to the people who do not have a screwdriver.”
You can’t make this stuff up.
EDITED TO ADD: The quote at the end is from a different smart lock manufacturer. Apologies for that.
Jack Goldsmith and Stuart Russell just published an interesting paper, making the case that free and democratic nations are at a structural disadvantage in nation-on-nation cyberattack and defense. From a blog post:
It seeks to explain why the United States is struggling to deal with the “soft” cyber operations that have been so prevalent in recent years: cyberespionage and cybertheft, often followed by strategic publication; information operations and propaganda; and relatively low-level cyber disruptions such as denial-of-service and ransomware attacks. The main explanation is that constituent elements of U.S. society—a commitment to free speech, privacy and the rule of law; innovative technology firms; relatively unregulated markets; and deep digital sophistication—create asymmetric vulnerabilities that foreign adversaries, especially authoritarian ones, can exploit. These asymmetrical vulnerabilities might explain why the United States so often appears to be on the losing end of recent cyber operations and why U.S. attempts to develop and implement policies to enhance defense, resiliency, response or deterrence in the cyber realm have been ineffective.
I have long thought this to be true. There are defensive cybersecurity measures that a totalitarian country can take that a free, open, democratic country cannot. And there are attacks against a free, open, democratic country that just don’t matter to a totalitarian country. That makes us more vulnerable. (I don’t mean to imply—and neither do Russell and Goldsmith—that this disadvantage implies that free societies are overall worse, but it is an asymmetry that we should be aware of.)
I do worry that these disadvantages will someday become intolerable. Dan Geer often said that “the price of freedom is the probability of crime.” We are willing to pay this price because it isn’t that high. As technology makes individual and small-group actors more powerful, this price will get higher. Will there be a point in the future where free and open societies will no longer be able to survive? I honestly don’t know.
EDITED TO ADD (6/21): Jack Goldsmith also wrote this.
Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.
Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:
Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.
Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here—which is among the most common methods currently used—the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.
This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user’s phone accesses the bank’s legitimate online banking service.
This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.
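To see why the two systems interact badly: a context-free code extractor, which is roughly what any AutoFill feature must be, cannot tell a login OTP from a transaction TAN, because both are just short digit strings in an SMS. A minimal sketch; the regex and message formats are illustrative.

```python
import re

# A context-free "find the code" extractor sees only a short digit
# string, not its meaning or the surrounding salient information.
CODE_RE = re.compile(r"\b(\d{6})\b")

def extract_code(sms: str):
    m = CODE_RE.search(sms)
    return m.group(1) if m else None

otp_sms = "Your login code is 384512."
tan_sms = ("Transfer EUR 9,500 to account NO-1234: "
           "confirm with TAN 719203.")

print(extract_code(otp_sms))  # fine: it only attests identity
print(extract_code(tan_sms))  # amount and destination are dropped
```

In the second case, the amount and destination the user was supposed to verify never reach them; only the bare TAN is offered for AutoFill.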
Algeria shut the Internet down nationwide to prevent high-school students from cheating on their exams.
The solution in New South Wales, Australia, was to ban smartphones.
EDITED TO ADD (6/22): Slashdot thread.
The Center for Human Rights in Iran has released a report outlining the effects of that country’s ban on Telegram, a secure messaging app used by about half of the country.
The ban will disrupt the most important, uncensored platform for information and communication in Iran, one that is used extensively by activists, independent and citizen journalists, dissidents and international media. It will also impact electoral politics in Iran, as centrist, reformist and other relatively moderate political groups that are allowed to participate in Iran’s elections have been heavily and successfully using Telegram to promote their candidates and electoral lists during elections. State-controlled domestic apps and media will not provide these groups with such a platform, even as they continue to do so for conservative and hardline political forces in the country, significantly aiding the latter.
From a Wired article:
Researchers found that the ban has had broad effects, hindering and chilling individual speech, forcing political campaigns to turn to state-sponsored media tools, limiting journalists and activists, curtailing international interactions, and eroding businesses that grew their infrastructure and reach off of Telegram.
It’s interesting that the analysis doesn’t really center around the security properties of Telegram, but more around its ubiquity as a messaging platform in the country.
We’re starting to see research into designing speculative execution systems that avoid Spectre- and Meltdown-like security problems. Here’s one.
I don’t know if this particular design is secure. My guess is that we’re going to see several iterations of design and attack before we settle on something that works. But it’s good to see the research results emerge.
Last month, a story was going around explaining how to brute-force an iOS passcode. Basically, the trick was to connect the phone to an external keyboard and try every PIN at once:
We reported Friday on Hickey’s findings, which claimed to be able to send all combinations of a user’s possible passcode in one go, by enumerating each code from 0000 to 9999, and concatenating the results in one string with no spaces. He explained that because this doesn’t give the software any breaks, the keyboard input routine takes priority over the device’s data-erasing feature.
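The claimed input, every four-digit PIN concatenated into a single string with no separators, is trivial to construct:

```python
# Every four-digit PIN from 0000 to 9999, concatenated with no
# separators, as described in the claim.
keystring = "".join(f"{i:04d}" for i in range(10000))

print(len(keystring))   # 40000 characters
print(keystring[:12])   # 000000010002
```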
I didn’t write about it, because it seemed too good to be true. A few days later, Apple pushed back on the findings—and it seems that it doesn’t work.
This isn’t to say that no one can break into an iPhone. We know that companies like Cellebrite and Grayshift are renting/selling iPhone unlock tools to law enforcement—which means governments and criminals can do the same thing—and that Apple is releasing a new feature called “restricted mode” that may make those hacks obsolete.
Grayshift is claiming that its technology will still work.
Former Apple security engineer Braden Thomas, who now works for a company called Grayshift, warned customers who had bought his GrayKey iPhone unlocking tool that iOS 11.3 would make it a bit harder for cops to get evidence and data out of seized iPhones. A change in the beta didn’t break GrayKey, but would require cops to use GrayKey on phones within a week of them being last unlocked.
“Starting with iOS 11.3, iOS saves the last time a device has been unlocked (either with biometrics or passcode) or was connected to an accessory or computer. If a full seven days (168 hours) elapse [sic] since the last time iOS saved one of these events, the Lightning port is entirely disabled,” Thomas wrote in a blog post published in a customer-only portal, which Motherboard obtained. “You cannot use it to sync or to connect to accessories. It is basically just a charging port at this point. This is termed USB Restricted Mode and it affects all devices that support iOS 11.3.”
Whether that’s real or marketing, we don’t know.
The IEEE came out in favor of strong encryption:
IEEE supports the use of unfettered strong encryption to protect confidentiality and integrity of data and communications. We oppose efforts by governments to restrict the use of strong encryption and/or to mandate exceptional access mechanisms such as “backdoors” or “key escrow schemes” in order to facilitate government access to encrypted data. Governments have legitimate law enforcement and national security interests. IEEE believes that mandating the intentional creation of backdoors or escrow schemes—no matter how well intentioned—does not serve those interests well and will lead to the creation of vulnerabilities that would result in unforeseen effects as well as some predictable negative consequences.
The full statement is here.
The Norwegian Consumer Council has published a report on the manipulative design practices, so-called “dark patterns,” that Facebook, Google, and Microsoft use to nudge users away from privacy-friendly choices. From the executive summary:
Facebook and Google have privacy intrusive defaults, where users who want the privacy friendly option have to go through a significantly longer process. They even obscure some of these settings so that the user cannot know that the more privacy intrusive option was preselected.
The popups from Facebook, Google and Windows 10 have design, symbols and wording that nudge users away from the privacy friendly choices. Choices are worded to compel users to make certain choices, while key information is omitted or downplayed. None of them lets the user freely postpone decisions. Also, Facebook and Google threaten users with loss of functionality or deletion of the user account if the user does not choose the privacy intrusive option.
The combination of privacy intrusive defaults and the use of dark patterns, nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical. We question whether this is in accordance with the principles of data protection by default and data protection by design, and if consent given under these circumstances can be said to be explicit, informed and freely given.
Here’s some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:
To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task: to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.
As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.
This has a lot of implications in security systems where humans have to make judgments about threat and risk: TSA agents, police noticing “suspicious” activities, “see something say something” campaigns, and so on.
The academic paper.
Interesting research in using traffic analysis to learn things about encrypted traffic. It’s hard to know how critical these vulnerabilities are. They’re very hard to close without wasting a huge amount of bandwidth.
The active attacks are more interesting.
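The active attack exploits the fact that LTE user-plane traffic is encrypted with AES in counter mode but carries no integrity protection: flipping a ciphertext bit flips the same plaintext bit, which is enough to redirect a known DNS request. A toy sketch, with random bytes standing in for the AES-CTR keystream:

```python
import os

# Counter mode is a stream cipher: XOR-ing a chosen mask into the
# ciphertext XORs the same mask into the recovered plaintext.
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"DNS? 8.8.8.8    "
keystream = os.urandom(len(plaintext))  # stand-in for AES-CTR output
ciphertext = xor(plaintext, keystream)

# The attacker knows (or guesses) the plaintext and picks a new one;
# no key material is needed for this step.
target = b"DNS? 6.6.6.6    "
tampered = xor(ciphertext, xor(plaintext, target))

# The victim decrypts the tampered ciphertext to the attacker's choice:
print(xor(tampered, keystream).decode())
```

Note that no key is ever recovered; authenticated encryption is the standard defense against this kind of tampering.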
EDITED TO ADD (7/3): More information.
I have been thinking about this, and now believe the attacks are more serious than I previously wrote.
The California legislature unanimously passed the strongest data privacy law in the nation. This is great news, but I have a lot of reservations. The Internet tech companies pressed to get this law passed out of self-defense. A ballot initiative was already going to be voted on in November, one with even stronger data privacy protections. The author of that initiative agreed to pull it if the legislature passed something similar, and that’s why it did. This law doesn’t take effect until 2020, and that gives the legislature a lot of time to amend the law before it actually protects anyone’s privacy. And a conventional law is much easier to amend than a ballot initiative. Just as the California legislature gutted its net neutrality law in committee at the behest of the telcos, I expect it to do the same with this law at the behest of the Internet giants.
So: tentative hooray, I guess.
At least right now, facial recognition algorithms don’t work with Juggalo makeup.
The Intercept has a long story about the NSA’s domestic interception points.
Includes some new Snowden documents.
PROPagate is a Windows code injection technique that has now been seen in the wild:
This technique abuses the SetWindowsSubclass function—a process used to install or update subclass windows running on the system—and can be used to modify the properties of windows running in the same session. This can be used to inject code and drop files while also hiding the fact it has happened, making it a useful, stealthy attack.
It’s likely that the attackers have observed publically available posts on PROPagate in order to recreate the technique for their own malicious ends.
Researchers at the University of California, Irvine, are able to recover user passwords by way of thermal imaging. The tech is pretty straightforward, but it’s interesting to think about the types of scenarios in which it might be pulled off.
Abstract: As a warm-blooded mammalian species, we humans routinely leave thermal residues on various objects with which we come in contact. This includes common input devices, such as keyboards, that are used for entering (among other things) secret information, such as passwords and PINs. Although thermal residue dissipates over time, there is always a certain time window during which thermal energy readings can be harvested from input devices to recover recently entered, and potentially sensitive, information.
To-date, there has been no systematic investigation of thermal profiles of keyboards, and thus no efforts have been made to secure them. This serves as our main motivation for constructing a means for password harvesting from keyboard thermal emanations. Specifically, we introduce Thermanator, a new post factum insider attack based on heat transfer caused by a user typing a password on a typical external keyboard. We conduct and describe a user study that collected thermal residues from 30 users entering 10 unique passwords (both weak and strong) on 4 popular commodity keyboards. Results show that entire sets of key-presses can be recovered by non-expert users as late as 30 seconds after initial password entry, while partial sets can be recovered as late as 1 minute after entry. Furthermore, we find that Hunt-and-Peck typists are particularly vulnerable. We also discuss some Thermanator mitigation strategies.
The main take-away of this work is three-fold: (1) using external keyboards to enter (already much-maligned) passwords is even less secure than previously recognized, (2) post factum (planned or impromptu) thermal imaging attacks are realistic, and finally (3) perhaps it is time to either stop using keyboards for password entry, or abandon passwords altogether.
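As a back-of-the-envelope model (all constants here are hypothetical, not taken from the paper): if thermal residue decays roughly exponentially toward ambient, there is a predictable window during which a press stays above a camera's sensitivity threshold.

```python
import math

# Illustrative model only: excess key temperature decays exponentially
# toward ambient; a press is recoverable while it exceeds the thermal
# camera's sensitivity threshold.
def recoverable_window(excess_c: float, threshold_c: float,
                       tau_s: float) -> float:
    """Seconds during which excess_c * exp(-t / tau_s) > threshold_c."""
    if excess_c <= threshold_c:
        return 0.0
    return tau_s * math.log(excess_c / threshold_c)

# e.g. 4 C initial excess, 0.5 C camera sensitivity, 15 s time constant:
print(round(recoverable_window(4.0, 0.5, 15.0), 1))  # 31.2
```

With these made-up constants the window lands near the 30 seconds the study observed, but the model is only meant to show why such a window must exist at all.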
The US Department of Commerce has released a report on the threat of botnets and what to do about it. I note that it explicitly said that the IoT makes the threat worse, and that the solutions are largely economic.
The Departments determined that the opportunities and challenges in working toward dramatically reducing threats from automated, distributed attacks can be summarized in six principal themes.
- Automated, distributed attacks are a global problem. The majority of the compromised devices in recent noteworthy botnets have been geographically located outside the United States. To increase the resilience of the Internet and communications ecosystem against these threats, many of which originate outside the United States, we must continue to work closely with international partners.
- Effective tools exist, but are not widely used. While there remains room for improvement, the tools, processes, and practices required to significantly enhance the resilience of the Internet and communications ecosystem are widely available, and are routinely applied in selected market sectors. However, they are not part of common practices for product development and deployment in many other sectors for a variety of reasons, including (but not limited to) lack of awareness, cost avoidance, insufficient technical expertise, and lack of market incentives.
- Products should be secured during all stages of the lifecycle. Devices that are vulnerable at time of deployment, lack facilities to patch vulnerabilities after discovery, or remain in service after vendor support ends make assembling automated, distributed threats far too easy.
- Awareness and education are needed. Home users and some enterprise customers are often unaware of the role their devices could play in a botnet attack and may not fully understand the merits of available technical controls. Product developers, manufacturers, and infrastructure operators often lack the knowledge and skills necessary to deploy tools, processes, and practices that would make the ecosystem more resilient.
- Market incentives should be more effectively aligned. Market incentives do not currently appear to align with the goal of “dramatically reducing threats perpetrated by automated and distributed attacks.” Product developers, manufacturers, and vendors are motivated to minimize cost and time to market, rather than to build in security or offer efficient security updates. Market incentives must be realigned to promote a better balance between security and convenience when developing products.
- Automated, distributed attacks are an ecosystem-wide challenge. No single stakeholder community can address the problem in isolation.
The Departments identified five complementary and mutually supportive goals that, if realized, would dramatically reduce the threat of automated, distributed attacks and improve the resilience and redundancy of the ecosystem. A list of suggested actions for key stakeholders reinforces each goal. The goals are:
- Goal 1: Identify a clear pathway toward an adaptable, sustainable, and secure technology marketplace.
- Goal 2: Promote innovation in the infrastructure for dynamic adaptation to evolving threats.
- Goal 3: Promote innovation at the edge of the network to prevent, detect, and mitigate automated, distributed attacks.
- Goal 4: Promote and support coalitions between the security, infrastructure, and operational technology communities domestically and around the world.
- Goal 5: Increase awareness and education across the ecosystem.
The Wi-Fi Alliance has announced WPA3, the next generation of Wi-Fi security. This summary is as good as any other:
The first big new feature in WPA3 is protection against offline, password-guessing attacks. This is where an attacker captures data from your Wi-Fi stream, brings it back to a private computer, and guesses passwords over and over again until they find a match. With WPA3, attackers are only supposed to be able to make a single guess against that offline data before it becomes useless; they’ll instead have to interact with the live Wi-Fi device every time they want to make a guess. (And that’s harder since they need to be physically present, and devices can be set up to protect against repeat guesses.)
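The offline attack works because a WPA2 passphrase maps deterministically to the pairwise master key via PBKDF2, so candidate passphrases can be tested against a captured handshake with no further network interaction. A simplified sketch; a real attacker validates candidates against the handshake MIC rather than a known PMK:

```python
import hashlib

# WPA2 derives the pairwise master key as
# PBKDF2-HMAC-SHA1(passphrase, SSID, 4096 iterations, 32 bytes),
# making guessing a pure offline computation once traffic is captured.
def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(),
                               ssid.encode(), 4096, 32)

# Pretend this value was recovered from a recorded handshake:
captured = wpa2_pmk("correct horse", "HomeWiFi")

# The dictionary attack never touches the network again:
for guess in ["password", "letmein", "correct horse"]:
    if wpa2_pmk(guess, "HomeWiFi") == captured:
        print("found:", guess)
```

WPA3's SAE handshake is designed so that each guess requires a live exchange with the network instead.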
WPA3’s other major addition, as highlighted by the Alliance, is forward secrecy. This is a privacy feature that prevents older data from being compromised by a later attack. So if an attacker captures an encrypted Wi-Fi transmission, then cracks the password, they still won’t be able to read the older data—they’d only be able to see new information currently flowing over the network.
Note that we’re just getting the new standard now. Actual devices that implement the standard are still months away.
This is weird:
Police in Detroit are looking for two suspects who allegedly managed to hack a gas pump and steal over 600 gallons of gasoline, valued at about $1,800. The theft took place in the middle of the day and went on for about 90 minutes, with the gas station attendant unable to thwart the hackers.
The theft, reported by Fox 2 Detroit, took place at around 1pm local time on June 23 at a Marathon gas station located about 15 minutes from downtown Detroit. At least 10 cars are believed to have benefitted from the free-flowing gas pump, which still has police befuddled.
Here’s what is known about the supposed hack: Per Fox 2 Detroit, the thieves used some sort of remote device that allowed them to hijack the pump and take control away from the gas station employee. Police confirmed to the local publication that the device prevented the clerk from using the gas station’s system to shut off the individual pump.
Hard to know what’s true, but it seems like a good example of a hack against a cyber-physical system.
I’m speaking at the University of Rwanda on August 9th.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books—including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.
Copyright © 2018 by Bruce Schneier.