August 15, 2018
by Bruce Schneier
CTO, IBM Resilient
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- New Book Announcement: Click Here to Kill Everybody
- Reasonably Clever Extortion E-mail Based on Password Theft
- Installing a Credit Card Skimmer on a POS Terminal
- Defeating the iPhone Restricted Mode
- Suing South Carolina Because Its Election Machines Are Insecure
- New Report on Chinese Intelligence Cyber-Operations
- 1Password’s Travel Mode
- Nicholas Weaver on Cryptocurrencies
- On Financial Fraud
- Major Bluetooth Vulnerability
- DARPA Wants Research into Resilient Anonymous Communications
- Google Employees Use a Physical Token as Their Second Authentication Factor
- Third Annual Cybercrime Conference
- New Report on Police Digital Forensics Techniques
- Identifying People by Metadata
- The Poor Cybersecurity of US Space Assets
- Hacking a Robot Vacuum
- Backdoors in Cisco Routers
- GCHQ on Quantum Key Distribution
- Using In-Game Purchases to Launder Money
- How the US Military Can Better Keep Hackers
- Three of My Books Are Available in DRM-Free E-Book Format
- Hacking the McDonald’s Monopoly Sweepstakes
- Measuring the Rationality of Security Decisions
- SpiderOak’s Warrant Canary Died
- Detecting Phishing Sites with Machine Learning
- Don’t Fear the TSA Cutting Airport Security. Be Glad That They’re Talking about It.
- xkcd on Voting Computers
- Identifying Programmers by their Coding Style
- Google Tracks its Users Even if They Opt-Out of Tracking
- My Speaking Engagements
I am pleased to announce the publication of my latest book Click Here to Kill Everybody: Security and Survival in a Hyper-Connected World. In it, I examine how our new immersive world of physically capable computers affects our security.
I argue that this changes everything about security. Attacks are no longer just about data; they now affect life and property: cars, medical devices, thermostats, power plants, drones, and so on. All of our security assumptions presume that computers are fundamentally benign: that no matter how bad the breach or vulnerability is, it's just data. That's simply not true anymore. As automation, autonomy, and physical agency become more prevalent, the trade-offs we made for things like authentication, patching, and supply chain security no longer make any sense. The things we've done before will no longer work in the future.
This is a book about technology, and it’s also a book about policy. The regulation-free Internet that we’ve enjoyed for the past decades will not survive this new, more dangerous, world. I fear that our choice is no longer between government regulation and no government regulation; it’s between smart government regulation and stupid regulation. My aim is to discuss what a regulated Internet might look like before one is thrust upon us after a disaster.
Click Here to Kill Everybody will be published the first week of September. You can pre-order a copy from Amazon, Norton’s webpage, or anyplace else books are sold. If you want a signed copy, order it from me. (Note: don’t expect the book from me for a few weeks.)
[2018.07.16] Imagine you've gotten your hands on a file of e-mail addresses and passwords. You want to monetize it, but the site it's for isn't very valuable. How do you use it? You convince the owners of the passwords to send you money.
I recently saw a spam e-mail that ties the password to a porn site. The e-mail title contains the password, which is sure to get the recipient’s attention.
I do know, yhhaabor, is your password. You may not know me and you’re most likely thinking why you’re getting this email, right?
actually, I actually setup a malware on the adult video clips (pornographic material) web site and you know what, you visited this web site to have fun (you know what I mean). While you were watching videos, your web browser began operating as a RDP (Remote Desktop) having a key logger which provided me accessibility to your display and web camera. after that, my software obtained your entire contacts from your Messenger, social networks, and email.
What exactly did I do?
I created a double-screen video. First part shows the video you were viewing (you’ve got a fine taste ; )), and 2nd part displays the recording of your webcam.
What should you do?
Well, I believe, $2900 is a reasonable price for our little secret. You will make the payment through Bitcoin (if you don’t know this, search “how to buy bitcoin” in Google).
This is clever. The valid password establishes legitimacy. There’s a decent chance the recipient has visited porn sites, and maybe set up an account for which they can’t remember the password. The RDP attack is plausible, as is turning on the camera and downloading the contacts file.
Of course, it all fails because there isn't enough detail. If the attacker actually did all of this, they would include the name of the porn site and attach the video file.
But it's a clever attack, and one I have not seen before. If the attacker asked for an order of magnitude less money, I think they would make more.
EDITED TO ADD: Brian Krebs has written about this, too.
[2018.07.17] Watch how someone installs a credit card skimmer in just a couple of seconds. I don't know if the skimmer just records the data for later collection, or if it transmits the data back to some base station.
[2018.07.18] Recently, Apple introduced restricted mode to protect iPhones from attacks by companies like Cellebrite and Greyshift, which allow attackers to recover information from a phone without the password or fingerprint. Elcomsoft just announced that it can easily bypass it.
There is an important lesson in this: security is hard. Apple has one of the best security teams on the planet. This feature was not tossed out in a day; it was designed and implemented with a lot of thought and care. If this team could make a mistake like this, imagine how bad a security feature is when implemented by a team without this kind of expertise.
This is the reason actual cryptographers and security engineers are very skeptical when a random company announces that their product is "secure." We know that they don't have the requisite security expertise to design and implement security properly. We know they didn't take the time and care. We know that their engineers think they understand security, and designed their product to a level that they themselves couldn't break.
Getting security right is hard for the best teams in the world. It's impossible for average teams.
Note: I am an advisor to Protect Democracy on its work related to election cybersecurity, and submitted a declaration in litigation it filed, challenging President Trump’s now-defunct “election integrity” commission.
The always-interesting grugq has some insightful commentary on the group and its tactics.
Lots of detailed information in the report, but I admit that I have never heard of ProtectWise or its research team 401TRG. Independent corroboration of this information would be helpful.
Your vaults aren’t just hidden; they’re completely removed from your devices as long as Travel Mode is on. That includes every item and all your encryption keys. There are no traces left for anyone to find. So even if you’re asked to unlock 1Password by someone at the border, there’s no way for them to tell that Travel Mode is even enabled.
In 1Password Teams, Travel Mode is even cooler. If you’re a team administrator, you have total control over which secrets your employees can travel with. You can turn Travel Mode on and off for your team members, so you can ensure that company information stays safe at all times.
The way this works is important. If the scary border police demand that you unlock your 1Password vault, those passwords/keys are not there for the border police to find.
The only flaw — and this is minor — is that the system requires you to lie. When the scary border police ask you "do you have any other passwords?" or "have you enabled travel mode," you can't tell them the truth. In the US, lying to a federal officer is a felony.
I previously described a system that doesn’t require you to lie. It’s more complicated to implement, though.
This is a great feature, and I’m happy to see it implemented.
Cryptocurrencies, although a seemingly interesting idea, are simply not fit for purpose. They do not work as currencies, they are grossly inefficient, and they are not meaningfully distributed in terms of trust. Risks involving cryptocurrencies occur in four major areas: technical risks to participants, economic risks to participants, systemic risks to the cryptocurrency ecosystem, and societal risks.
I haven’t written much about cryptocurrencies, but I share Weaver’s skepticism.
EDITED TO ADD (8/2): Paul Krugman on cryptocurrencies.
That’s how we got it so wrong. We were looking for incidental breaches of technical regulations, not systematic crime. And the thing is, that’s normal. The nature of fraud is that it works outside your field of vision, subverting the normal checks and balances so that the world changes while the picture stays the same. People in financial markets have been missing the wood for the trees for as long as there have been markets.
Trust — particularly between complete strangers, with no interactions beside relatively anonymous market transactions — is the basis of the modern industrial economy. And the story of the development of the modern economy is in large part the story of the invention and improvement of technologies and institutions for managing that trust.
And as industrial society develops, it becomes easier to be a victim. In The Wealth of Nations, Adam Smith described how prosperity derived from the division of labour — the 18 distinct operations that went into the manufacture of a pin, for example. While this was going on, the modern world also saw a growing division of trust. The more a society benefits from the division of labour in checking up on things, the further you can go into a con game before you realise that you’re in one.
Libor teaches us a valuable lesson about commercial fraud — that unlike other crimes, it has a problem of denial as well as one of detection. There are very few other criminal acts where the victim not only consents to the criminal act, but voluntarily transfers the money or valuable goods to the criminal. And the hierarchies, status distinctions and networks that make up a modern economy also create powerful psychological barriers against seeing fraud when it is happening. White-collar crime is partly defined by the kind of person who commits it: a person of high status in the community, the kind of person who is always given the benefit of the doubt.
Fraudsters don’t play on moral weaknesses, greed or fear; they play on weaknesses in the system of checks and balances — the audit processes that are meant to supplement an overall environment of trust. One point that comes up again and again when looking at famous and large-scale frauds is that, in many cases, everything could have been brought to a halt at a very early stage if anyone had taken care to confirm all the facts. But nobody does confirm all the facts. There are just too bloody many of them. Even after the financial rubble has settled and the arrests been made, this is a huge problem.
In some implementations, the elliptic curve parameters are not all validated by the cryptographic algorithm implementation, which may allow a remote attacker within wireless range to inject an invalid public key to determine the session key with high probability. Such an attacker can then passively intercept and decrypt all device messages, and/or forge and inject malicious messages.
This is serious. Update your software now, and try not to think about all of the Bluetooth applications that can’t be updated.
A Google spokesperson said Security Keys now form the basis of all account access at Google.
“We have had no reported or confirmed account takeovers since implementing security keys at Google,” the spokesperson said. “Users might be asked to authenticate using their security key for many different apps/reasons. It all depends on the sensitivity of the app and the risk of the user at that point in time.”
Now Google is selling that security to its users:
On Wednesday, the company announced its new Titan security key, a device that protects your accounts by restricting two-factor authentication to the physical world. It’s available as a USB stick and in a Bluetooth variation, and like similar products by Yubico and Feitian, it utilizes the protocol approved by the FIDO alliance. That means it’ll be compatible with pretty much any service that enables users to turn on Universal 2nd Factor Authentication (U2F).
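The reason these keys stop phishing so completely is that the token signs the web origin along with the server's challenge, so a response captured on a look-alike domain never verifies for the real one. A toy sketch of that idea — HMAC standing in for the ECDSA signature a real U2F key would produce, with made-up origins:

```python
# Why origin-bound challenge signing defeats phishing: the signature
# covers the origin, so a phishing site's capture is useless elsewhere.
# HMAC is a stand-in here; real U2F keys use per-site ECDSA key pairs.
import hashlib
import hmac
import os

def token_sign(device_key: bytes, origin: str, challenge: bytes) -> bytes:
    return hmac.new(device_key, origin.encode() + challenge,
                    hashlib.sha256).digest()

def server_verify(device_key: bytes, origin: str,
                  challenge: bytes, sig: bytes) -> bool:
    return hmac.compare_digest(sig, token_sign(device_key, origin, challenge))

key = os.urandom(32)
challenge = os.urandom(16)

# The user is tricked into tapping the key on a phishing page:
phished = token_sign(key, "https://accounts-google.example", challenge)

# The real site rejects it; only a response bound to the true origin works.
assert not server_verify(key, "https://accounts.google.com", challenge, phished)
assert server_verify(key, "https://accounts.google.com", challenge,
                     token_sign(key, "https://accounts.google.com", challenge))
```

Unlike a one-time code, there is nothing the user can be tricked into typing into the wrong site.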
Over the past year, we conducted a series of interviews with federal, state, and local law enforcement officials, attorneys, service providers, and civil society groups. We also commissioned a survey of law enforcement officers from across the country to better understand the full range of difficulties they are facing in accessing and using digital evidence in their cases. Survey results indicate that accessing data from service providers — much of which is not encrypted — is the biggest problem that law enforcement currently faces in leveraging digital evidence.
This is a problem that has not received adequate attention or resources to date. An array of federal and state training centers, crime labs, and other efforts have arisen to help fill the gaps, but they are able to fill only a fraction of the need. And there is no central entity responsible for monitoring these efforts, taking stock of the demand, and providing the assistance needed. The key federal entity with an explicit mission to assist state and local law enforcement with their digital evidence needs — the National Domestic Communications Assistance Center (NDCAC) — has a budget of $11.4 million, spread among several different programs designed to distribute knowledge about service providers' policies and products, develop and share technical tools, and train law enforcement on new services and technologies, among other initiatives.
From a news article:
In addition to bemoaning the lack of guidance and help from tech companies — a quarter of survey respondents said their top issue was convincing companies to hand over suspects’ data — law enforcement officials also reported receiving barely any digital evidence training. Local police said they’d received only 10 hours of training in the past 12 months; state police received 13 and federal officials received 16. A plurality of respondents said they only received annual training. Only 16 percent said their organizations scheduled training sessions at least twice per year.
Here’s the report.
[2018.07.30] Interesting research: “You are your Metadata: Identification and Obfuscation of Social Media Users using Metadata Information,” by Beatrice Perez, Mirco Musolesi, and Gianluca Stringhini.
Abstract: Metadata are associated to most of the information we produce in our daily interactions and communication in the digital world. Yet, surprisingly, metadata are often still categorized as non-sensitive. Indeed, in the past, researchers and practitioners have mainly focused on the problem of the identification of a user from the content of a message.
In this paper, we use Twitter as a case study to quantify the uniqueness of the association between metadata and user identity and to understand the effectiveness of potential obfuscation strategies. More specifically, we analyze atomic fields in the metadata and systematically combine them in an effort to classify new tweets as belonging to an account using different machine learning algorithms of increasing complexity. We demonstrate that through the application of a supervised learning algorithm, we are able to identify any user in a group of 10,000 with approximately 96.7% accuracy. Moreover, if we broaden the scope of our search and consider the 10 most likely candidates we increase the accuracy of the model to 99.22%. We also found that data obfuscation is hard and ineffective for this type of data: even after perturbing 60% of the training data, it is still possible to classify users with an accuracy higher than 95%. These results have strong implications in terms of the design of metadata obfuscation strategies, for example for data set release, not only for Twitter, but, more generally, for most social media platforms.
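The core technique is ordinary supervised learning: build a per-user profile from metadata fields, then assign a new observation to the nearest profile. A toy version of the idea — the feature set (follower count, friend count, tweet count) is illustrative, not the paper's actual set, and a nearest-centroid rule stands in for their real classifiers:

```python
# Toy metadata identification: match an observation to the account
# whose historical metadata centroid is closest. Features and data
# are made up; the paper uses richer fields and real ML models.
import math

def centroid(samples):
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n
                 for i in range(len(samples[0])))

def identify(profiles, observation):
    """profiles: {user: [metadata feature vectors]} -> best-match user."""
    return min(profiles,
               key=lambda u: math.dist(centroid(profiles[u]), observation))

profiles = {
    "alice": [(120, 80, 3000), (125, 82, 3050)],
    "bob":   [(20000, 500, 90000), (19800, 510, 90500)],
}
assert identify(profiles, (123, 81, 3020)) == "alice"
assert identify(profiles, (19900, 505, 90200)) == "bob"
```

The paper's obfuscation result follows from the same picture: perturbing individual fields barely moves an account's centroid, so the match survives.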
[2018.07.31] The Diqee 360 robotic vacuum cleaner can be turned into a surveillance device. The attack requires physical access to the device, so in the scheme of things it’s not a big deal. But why in the world is the vacuum equipped with a microphone?
QKD protocols address only the problem of agreeing keys for encrypting data. Ubiquitous on-demand modern services (such as verifying identities and data integrity, establishing network sessions, providing access control, and automatic software updates) rely more on authentication and integrity mechanisms — such as digital signatures — than on encryption.
QKD technology cannot replace the flexible authentication mechanisms provided by contemporary public key signatures. QKD also seems unsuitable for some of the grand future challenges such as securing the Internet of Things (IoT), big data, social media, or cloud applications.
I agree with them. It’s a clever idea, but basically useless in practice. I don’t even think it’s anything more than a niche solution in a world where quantum computers have broken our traditional public-key algorithms.
Read the whole thing. It’s short.
The military is an impossible place for hackers thanks to antiquated career management, forced time away from technical positions, lack of mission, non-technical mid- and senior-level leadership, and staggering pay gaps, among other issues.
It is possible the military needs a cyber corps in the future, but by accelerating promotions, offering graduate school to newly commissioned officers, easing limited lateral entry for exceptional private-sector talent, and shortening the private/public pay gap, the military can better accommodate its most technical members now.
The model the author uses is military doctors.
[2018.08.03] Humble Bundle sells groups of e-books at ridiculously low prices, DRM free. This month, the bundles are all Wiley titles, including three of my books: Applied Cryptography, Secrets and Lies, and Cryptography Engineering. $15 gets you everything, and they’re all DRM-free.
Even better, a portion of the proceeds goes to the EFF. As a board member, I’ve seen the other side of this. It’s significant money.
[2018.08.06] Long and interesting story — now two decades old — of massive fraud perpetrated against the McDonald’s Monopoly sweepstakes. The central fraudster was the person in charge of securing the winning tickets.
[2018.08.07] Interesting research: “Dancing Pigs or Externalities? Measuring the Rationality of Security Decisions“:
Abstract: Accurately modeling human decision-making in security is critical to thinking about when, why, and how to recommend that users adopt certain secure behaviors. In this work, we conduct behavioral economics experiments to model the rationality of end-user security decision-making in a realistic online experimental system simulating a bank account. We ask participants to make a financially impactful security choice, in the face of transparent risks of account compromise and benefits offered by an optional security behavior (two-factor authentication). We measure the cost and utility of adopting the security behavior via measurements of time spent executing the behavior and estimates of the participant's wage. We find that more than 50% of our participants made rational (e.g., utility optimal) decisions, and we find that participants are more likely to behave rationally in the face of higher risk. Additionally, we find that users' decisions can be modeled well as a function of past behavior (anchoring effects), knowledge of costs, and to a lesser extent, users' awareness of risks and context (R2=0.61). We also find evidence of endowment effects, as seen in other areas of economic and psychological decision-science literature, in our digital-security setting. Finally, using our data, we show theoretically that a "one-size-fits-all" emphasis on security can lead to market losses, but that adoption by a subset of users with higher risks or lower costs can lead to market gains.
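The rationality criterion in the abstract boils down to a simple expected-utility comparison: adopting 2FA is the utility-optimal choice when its time cost, priced at the participant's wage, is less than the expected loss it prevents. A sketch with illustrative numbers (not the paper's actual parameters):

```python
# Utility-optimal 2FA adoption, roughly as the study frames it:
# adopt iff (time cost at your wage) < (risk x protected balance).
# All numbers below are illustrative, not from the paper.

def should_adopt_2fa(seconds_per_login: float, logins: int,
                     hourly_wage: float, p_compromise: float,
                     balance: float) -> bool:
    cost = (seconds_per_login * logins / 3600) * hourly_wage
    expected_loss_avoided = p_compromise * balance
    return cost < expected_loss_avoided

# High compromise risk on a $500 balance: 20 extra seconds per login pays off.
assert should_adopt_2fa(20, 100, 15.0, 0.05, 500.0)
# Negligible risk: the same time cost is no longer worth it.
assert not should_adopt_2fa(20, 100, 15.0, 0.0001, 500.0)
```

The paper's finding that people behave more rationally at higher risk matches this model: the larger the expected loss, the more obvious the comparison.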
I have never quite trusted the idea of a warrant canary. But here it seems to have worked. (Presumably, if SpiderOak wanted to replace the warrant canary with a transparency report, they would have written something explaining their decision. To have it simply disappear is what we would expect if SpiderOak were being forced to comply with a US government request for personal data.)
EDITED TO ADD (8/9): SpiderOak has posted an explanation claiming that the warrant canary did not die — it just changed.
That's obviously false, because it did die. And a change is functionally equivalent to dying; that's how warrant canaries work. So either they have received a National Security Letter and now have to pretend they did not, or they completely misunderstood what a warrant canary is and how it works. No one knows.
I have never fully trusted warrant canaries — this EFF post explains why — and this is an illustration.
A trained eye (or even a not-so-trained one) can discern when something phishy is going on with a domain or subdomain name. There are search tools, such as Censys.io, that allow humans to specifically search through the massive pile of certificate log entries for sites that spoof certain brands or functions common to identity-processing sites. But it’s not something humans can do in real time very well — which is where machine learning steps in.
StreamingPhish and the other tools apply a set of rules against the names within certificate log entries. In StreamingPhish’s case, these rules are the result of guided learning — a corpus of known good and bad domain names is processed and turned into a “classifier,” which (based on my anecdotal experience) can then fairly reliably identify potentially evil websites.
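The flavor of those learned rules is easy to illustrate with a hand-rolled stand-in: score a domain name pulled from a certificate log on features that correlate with phishing. The keyword list, weights, and thresholds below are made up for illustration; StreamingPhish learns its classifier from labeled data rather than using fixed rules like these:

```python
# Hand-rolled stand-in for a certificate-log phishing classifier:
# score domain names on brand keywords, hyphen abuse, and deep
# subdomain nesting. Terms and weights are invented for illustration.

SUSPICIOUS_TERMS = {"login": 2, "secure": 1, "verify": 2, "account": 1,
                    "paypal": 3, "appleid": 3, "bank": 2}

def phish_score(domain: str) -> int:
    name = domain.lower()
    score = sum(w for term, w in SUSPICIOUS_TERMS.items() if term in name)
    score += name.count("-")                   # many hyphens: weak signal
    score += 2 if name.count(".") >= 3 else 0  # deeply nested subdomains
    return score

assert phish_score("paypal-login-verify.account-update.example.com") >= 8
assert phish_score("example.com") == 0
```

A trained model does the same thing with thousands of weaker signals, which is why it can keep up with certificate logs in real time when humans can't.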
[2018.08.10] Last week, CNN reported that the Transportation Security Administration is considering eliminating security at U.S. airports that fly only smaller planes — 60 seats or fewer. Passengers connecting to larger planes would clear security at their destinations.
To be clear, the TSA has put forth no concrete proposal. The internal agency working group’s report obtained by CNN contains no recommendations. It’s nothing more than 20 people examining the potential security risks of the policy change. It’s not even new: The TSA considered this back in 2011, and the agency reviews its security policies every year. But commentary around the news has been strongly negative. Regardless of the idea’s merit, it will almost certainly not happen. That’s the result of politics, not security: Sen. Charles E. Schumer (D-N.Y.), one of numerous outraged lawmakers, has already penned a letter to the agency saying that “TSA documents proposing to scrap critical passenger security screenings, without so much as a metal detector in place in some airports, would effectively clear the runway for potential terrorist attacks.” He continued, “It simply boggles the mind to even think that the TSA has plans like this on paper in the first place.”
We don’t know enough to conclude whether this is a good idea, but it shouldn’t be dismissed out of hand. We need to evaluate airport security based on concrete costs and benefits, and not continue to implement security theater based on fear. And we should applaud the agency’s willingness to explore changes in the screening process.
There is already a tiered system for airport security, varying for both airports and passengers. Many people are enrolled in TSA PreCheck, allowing them to go through checkpoints faster and with less screening. Smaller airports don’t have modern screening equipment like full-body scanners or CT baggage screeners, making it impossible for them to detect some plastic explosives. Any would-be terrorist is already able to pick and choose his flight conditions to suit his plot.
Over the years, I have written many essays critical of the TSA and airport security, in general. Most of it is security theater — measures that make us feel safer without improving security. For example, the liquids ban makes no sense as implemented, because there’s no penalty for repeatedly trying to evade the scanners. The full-body scanners are terrible at detecting the explosive material PETN if it is well concealed — which is their whole point.
There are two basic kinds of terrorists. The amateurs will be deterred or detected by even basic security measures. The professionals will figure out how to evade even the most stringent measures. I’ve repeatedly said that the two things that have made flying safer since 9/11 are reinforcing the cockpit doors and persuading passengers that they need to fight back. Everything beyond that isn’t worth it.
It’s always possible to increase security by adding more onerous — and expensive — procedures. If that were the only concern, we would all be strip-searched and prohibited from traveling with luggage. Realistically, we need to analyze whether the increased security of any measure is worth the cost, in money, time and convenience. We spend $8 billion a year on the TSA, and we’d like to get the most security possible for that money.
This is exactly what that TSA working group was doing. CNN reported that the group specifically evaluated the costs and benefits of eliminating security at minor airports, saving $115 million a year with a “small (nonzero) undesirable increase in risk related to additional adversary opportunity.” That money could be used to bolster security at larger airports or to reduce threats totally removed from airports.
We need more of this kind of thinking, not less. In 2017, political scientists Mark Stewart and John Mueller published a detailed evaluation of airport security measures based on the cost to implement and the benefit in terms of lives saved. They concluded that most of what our government does either isn’t effective at preventing terrorism or is simply too expensive to justify the security it does provide. Others might disagree with their conclusions, but their analysis provides enough detailed information to have a meaningful argument.
The more we politicize security, the worse we are at it. People are generally terrible judges of risk. We fear threats in the news out of proportion with the actual dangers. We overestimate rare and spectacular risks, and underestimate commonplace ones. We fear specific "movie-plot threats" that we can bring to mind. That's why we fear flying over driving, even though the latter kills about 35,000 people each year — about a 9/11's worth of deaths each month. And it's why the idea of the TSA eliminating security at minor airports fills us with fear. We can imagine the plot unfolding, only without Bruce Willis saving the day.
Very little today is immune to politics, including the TSA. It drove most of the agency’s decisions in the early years after the 9/11 terrorist attacks. That the TSA is willing to consider politically unpopular ideas is a credit to the organization. Let’s let them perform their analyses in peace.
This essay originally appeared in the Washington Post.
Rachel Greenstadt, an associate professor of computer science at Drexel University, and Aylin Caliskan, Greenstadt's former PhD student and now an assistant professor at George Washington University, have found that code, like other forms of stylistic expression, is not anonymous. At the DefCon hacking conference Friday, the pair will present a number of studies they've conducted using machine learning techniques to de-anonymize the authors of code samples. Their work could be useful in a plagiarism dispute, for instance, but it also has privacy implications, especially for the thousands of developers who contribute open source code to the world.
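To give a feel for what "coding style" means to a classifier, here is a sketch of the simplest sort of stylistic features one could extract from source text. These layout statistics are my illustration only; the actual research uses much richer lexical and syntactic (abstract-syntax-tree) features:

```python
# Illustrative stylometric features from raw source text: indentation
# habits, line lengths, naming conventions. Real de-anonymization work
# feeds far richer AST-derived features into a trained classifier.

def style_features(source: str) -> dict:
    lines = source.splitlines()
    nonblank = [l for l in lines if l.strip()]
    n = max(len(nonblank), 1)
    return {
        "avg_line_len": sum(map(len, nonblank)) / n,
        "tab_indent_ratio": sum(l.startswith("\t") for l in nonblank) / n,
        "blank_line_ratio": (len(lines) - len(nonblank)) / max(len(lines), 1),
        "underscore_count": source.count("_"),  # snake_case naming habit
    }

f = style_features("def add_one(x):\n\treturn x + 1\n\nprint(add_one(1))\n")
assert f["underscore_count"] == 2
assert 0 < f["tab_indent_ratio"] < 1
```

Each feature is weak on its own; combined across a whole code sample, they form a fingerprint — which is exactly the privacy problem for open-source contributors.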
Google says that will prevent the company from remembering where you’ve been. Google’s support page on the subject states: “You can turn off Location History at any time. With Location History off, the places you go are no longer stored.”
That isn’t true. Even with Location History paused, some Google apps automatically store time-stamped location data without asking.
For example, Google stores a snapshot of where you are when you merely open its Maps app. Automatic daily weather updates on Android phones pinpoint roughly where you are. And some searches that have nothing to do with location, like "chocolate chip cookies," or "kids science kits," pinpoint your precise latitude and longitude — accurate to the square foot — and save it to your Google account.
On the one hand, this isn’t surprising to technologists. Lots of applications use location data. On the other hand, it’s very surprising — and counterintuitive — to everyone else. And that’s why this is a problem.
I don’t think we should pick on Google too much, though. Google is a symptom of the bigger problem: surveillance capitalism in general. As long as surveillance is the business model of the Internet, things like this are inevitable.
I’m giving three talks about my book Click Here to Kill Everybody: Security and Survival in a Hyper-connected World:
- The Ford Foundation in New York City on September 5, 2018.
- Harvard Book Store in Cambridge, Massachusetts on September 11, 2018.
- Fordham Law School in New York City on September 17, 2018.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books — including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World — as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.
Copyright © 2018 by Bruce Schneier.