February 15, 2019
by Bruce Schneier
CTO, IBM Resilient
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit Crypto-Gram’s web page.
These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.
- Alex Stamos on Content Moderation and Security
- El Chapo’s Encryption Defeated by Turning His IT Consultant
- Prices for Zero-Day Exploits Are Rising
- Evaluating the GCHQ Exceptional Access Proposal
- Clever Smartphone Malware Concealment Technique
- Hacking Construction Cranes
- The Evolution of Darknets
- Military Carrier Pigeons in the Era of Electronic Warfare
- Hacking the GCHQ Backdoor
- Japanese Government Will Hack Citizens’ IoT Devices
- iPhone FaceTime Vulnerability
- Security Analysis of the LIFX Smart Light Bulb
- Security Flaws in Children’s Smart Watches
- Public-Interest Tech at the RSA Conference
- Facebook’s New Privacy Hires
- Major Zcash Vulnerability Fixed
- Using Gmail “Dot Addresses” to Commit Fraud
- China’s AI Strategy and its Security Implications
- Blockchain and Trust
- Cyberinsurance and Acts of War
- USB Cable with Embedded Wi-Fi Controller
- Reconstructing SIGSALY
[2019.01.15] Former Facebook CISO Alex Stamos argues that increasing political pressure on social media platforms to moderate content will give them a pretext to turn all end-to-end crypto off—which would be more profitable for them and bad for society.
If we ask tech companies to fix ancient societal ills that are now reflected online with moderation, then we will end up with huge, democratically-unaccountable organizations controlling our lives in ways we never intended. And those ills will still exist below the surface.
In a daring move that placed his life in danger, the I.T. consultant eventually gave the F.B.I. his system’s secret encryption keys in 2011 after he had moved the network’s servers from Canada to the Netherlands during what he told the cartel’s leaders was a routine upgrade.
A Dutch article says that it’s a BlackBerry system.
El Chapo had his IT person install “…spyware called FlexiSPY on the ‘special phones’ he had given to his wife, Emma Coronel Aispuro, as well as to two of his lovers, including one who was a former Mexican lawmaker.” That same software was used by the FBI when his IT person turned over the keys. Yet again we learn the lesson that a backdoor can be used against you.
And it doesn’t have to be with the IT person’s permission. A good intelligence agency can use the IT person’s authorizations without his knowledge or consent. This is why the NSA hunts sysadmins.
EDITED TO ADD (2/12): Good information here.
On Monday, market-leading exploit broker Zerodium said it would pay up to $2 million for zero-click jailbreaks of Apple’s iOS, $1.5 million for one-click iOS jailbreaks, and $1 million for exploits that take over secure messaging apps WhatsApp and iMessage. Previously, Zerodium was offering $1.5 million, $1 million, and $500,000 for the same types of exploits respectively. The steeper prices indicate not only that the demand for these exploits continues to grow, but also that reliably compromising these targets is becoming increasingly hard.
Note that these prices are for offensive uses of the exploit. Zerodium—and others—sell exploits to companies that make surveillance tools and cyber-weapons for governments. Many companies have bug bounty programs for those who want the exploit used for defensive purposes—i.e., fixed—but they pay orders of magnitude less. This is a problem.
Back in 2014, Dan Geer said that the US should corner the market on software vulnerabilities:
“There is no doubt that the U.S. Government could openly corner the world vulnerability market,” said Geer, “that is, we buy them all and we make them all public. Simply announce ‘Show us a competing bid, and we’ll give you [10 times more].’ Sure, there are some who will say ‘I hate Americans; I sell only to Ukrainians,’ but because vulnerability finding is increasingly automation-assisted, the seller who won’t sell to the Americans knows that his vulns can be rediscovered in due course by someone who will sell to the Americans who will tell everybody, thus his need to sell his product before it outdates is irresistible.”
I don’t know about the 10x, but in theory he’s right. There’s no other way to solve this.
[2019.01.18] The so-called Crypto Wars have been going on for 25 years now. Basically, the FBI—and some of their peer agencies in the UK, Australia, and elsewhere—argue that the pervasive use of civilian encryption is hampering their ability to solve crimes and that they need the tech companies to make their systems susceptible to government eavesdropping. Sometimes their complaint is about communications systems, like voice or messaging apps. Sometimes it’s about end-user devices. On the other side of this debate are pretty much all the technologists working in computer security and cryptography, who argue that adding eavesdropping features fundamentally makes those systems less secure.
A recent entry in this debate is a proposal by Ian Levy and Crispin Robinson, both from the UK’s GCHQ (the British signals-intelligence agency—basically, its NSA). It’s actually a positive contribution to the discourse around backdoors; most of the time government officials broadly demand that the tech companies figure out a way to meet their requirements, without providing any details. Levy and Robinson write:
In a world of encrypted services, a potential solution could be to go back a few decades. It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved—they’re usually involved in introducing the parties to a chat or call. You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.
On the surface, this isn’t a big ask. It doesn’t affect the encryption that protects the communications. It only affects the authentication that assures people they are talking to whom they think they are talking. But it’s no less dangerous a backdoor than any other that has been proposed: It exploits a security vulnerability rather than fixing it, and it opens all users of the system to exploitation of that same vulnerability by others.
In a blog post, cryptographer Matthew Green summarized the technical problems with this GCHQ proposal. Basically, making this backdoor work requires not only changing the cloud computers that oversee communications but also changing the client program on everyone’s phone and computer. And that change makes all of those systems less secure. Levy and Robinson make a big deal of the fact that their backdoor would be targeted only against specific individuals and their communications, but it’s still a general backdoor that could be used against anybody.
The basic problem is that a backdoor is a technical capability—a vulnerability—that is available to anyone who knows about it and has access to it. Surrounding that vulnerability is a procedural system that tries to limit access to that capability. Computers, especially internet-connected computers, are inherently hackable, limiting the effectiveness of any procedures. The best defense is to not have the vulnerability at all.
That old physical eavesdropping system Levy and Robinson allude to also exploits a security vulnerability. Because telephone conversations were unencrypted as they passed through the physical wires of the phone system, the police were able to go to a switch in a phone company facility or a junction box on the street and manually attach alligator clips to a specific pair and listen in to what that phone transmitted and received. It was a vulnerability that anyone could exploit—not just the police—but was mitigated by the fact that the phone company was a monolithic monopoly, and physical access to the wires was either difficult (inside a phone company building) or obvious (on the street at a junction box).
The functional equivalent of physical eavesdropping for modern computer phone switches is a requirement of a 1994 U.S. law called CALEA—and similar laws in other countries. By law, telephone companies must engineer phone switches that the government can eavesdrop, mirroring that old physical system with computers. It is not the same thing, though. It doesn’t have those same physical limitations that make it more secure. It can be administered remotely. And it’s implemented by a computer, which makes it vulnerable to the same hacking that every other computer is vulnerable to.
This isn’t a theoretical problem; these systems have been subverted. The most public incident dates from 2004 in Greece. Vodafone Greece had phone switches with the kind of eavesdropping capability that CALEA mandates. It was turned off by default in the Greek phone system, but the NSA managed to surreptitiously turn it on and use it to eavesdrop on the Greek prime minister and more than 100 other high-ranking dignitaries.
There’s nothing distinct about a phone switch that makes it any different from other modern encrypted voice or chat systems; any remotely administered backdoor system will be just as vulnerable. Imagine that a chat program added this GCHQ backdoor. It would need a mechanism for adding additional parties to a chat from somewhere in the system—and not at the request of the people at the endpoints. It would have to suppress any messages alerting users that another party had been added to the chat. Since some chat programs, like iMessage and Signal, automatically send such messages, it would force those systems to lie to their users. Other systems would simply never implement the “tell me who is in this chat conversation” feature—which amounts to the same thing.
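To make Green’s point concrete, here is a minimal Python sketch of how a hypothetical end-to-end encrypted client handles group membership; the class, method, and parameter names are invented for illustration and taken from no real app. The ghost is simply a membership change with the announcement suppressed, and that suppression code would have to ship in every user’s client.

```python
# Hypothetical sketch of group-membership handling in an E2E chat client.
# All names are illustrative. The point: the "X joined" notice lives in
# client code, so a ghost feature means shipping a client that can lie.

class GroupChatClient:
    def __init__(self, my_user_id):
        self.my_user_id = my_user_id
        self.members = set()   # parties every message is encrypted to

    def on_membership_change(self, added, removed, announce=True):
        """Called when the provider's identity system changes the roster."""
        self.members |= set(added)
        self.members -= set(removed)
        if announce:
            # Honest clients always alert users to a new participant.
            for user in added:
                self.display_system_message(f"{user} joined the chat")

    def display_system_message(self, text):
        print(f"[system] {text}")

client = GroupChatClient("alice")
client.on_membership_change(added=["bob"], removed=[])
# The GCHQ proposal amounts to a code path like the following, triggered
# by the service provider rather than by the participants:
client.on_membership_change(added=["intercept-endpoint"], removed=[],
                            announce=False)  # the user never sees the ghost
```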
And once that’s in place, every government will try to hack it for its own purposes—just as the NSA hacked Vodafone Greece. Again, this is nothing new. In 2010, China successfully hacked the back-door mechanism Google put in place to meet law-enforcement requests. In 2015, someone—we don’t know who—hacked an NSA backdoor in a random-number generator used to create encryption keys, changing the parameters so they could also eavesdrop on the communications. There are certainly other stories that haven’t been made public.
Simply adding the feature erodes public trust. If you were a dissident in a totalitarian country trying to communicate securely, would you want to use a voice or messaging system that is known to have this sort of backdoor? Who would you bet on, especially when the cost of losing the bet might be imprisonment or worse: the company that runs the system, or your country’s government intelligence agency? If you were a senior government official, or the head of a large multinational corporation, or the security manager or a critical technician at a power plant, would you want to use this system?
Of course not.
Two years ago, there was a rumor of a WhatsApp backdoor. The details are complicated, and calling it a backdoor or a vulnerability is largely inaccurate—but the resultant confusion caused some people to abandon the encrypted messaging service.
Trust is fragile, and transparency is essential to trust. And while Levy and Robinson state that “any exceptional access solution should not fundamentally change the trust relationship between a service provider and its users,” this proposal does exactly that. Communications companies could no longer be honest about what their systems were doing, and we would have no reason to trust them if they tried.
In the end, all of these exceptional access mechanisms, whether they exploit existing vulnerabilities that should be closed or force vendors to open new ones, reduce the security of the underlying system. They shift our reliance from security technologies we know how to do well—cryptography—to computer security technologies we are much less good at. Even worse, they replace technical security measures with organizational procedures. Whether it’s a database of master keys that could decrypt an iPhone or a communications switch that orchestrates who is securely chatting with whom, it is vulnerable to attack. And it will be attacked.
The foregoing discussion is a specific example of a broader discussion that we need to have, and it’s about the attack/defense balance. Which should we prioritize? Should we design our systems to be open to attack, in which case they can be exploited by law enforcement—and others? Or should we design our systems to be as secure as possible, which means they will be better protected from hackers, criminals, foreign governments and—unavoidably—law enforcement as well?
This discussion is larger than the FBI’s ability to solve crimes or the NSA’s ability to spy. We know that foreign intelligence services are targeting the communications of our elected officials, our power infrastructure, and our voting systems. Do we really want some foreign country penetrating our lawful-access backdoor in the same way the NSA penetrated Greece’s?
I have long maintained that we need to adopt a defense-dominant strategy: We should prioritize our need for security over our need for surveillance. This is especially true in the new world of physically capable computers. Yes, it will mean that law enforcement will have a harder time eavesdropping on communications and unlocking computing devices. But law enforcement has other forensic techniques to collect surveillance data in our highly networked world. We’d be much better off increasing law enforcement’s technical ability to investigate crimes in the modern digital world than we would be weakening security for everyone. The ability to surreptitiously add ghost users to a conversation is a vulnerability, and it’s one that we would be better served by closing than exploiting.
This essay originally appeared on Lawfare.com.
EDITED TO ADD (1/30): More commentary.
Malicious apps hosted in the Google Play market are trying a clever trick to avoid detection—they monitor the motion-sensor input of an infected device before installing a powerful banking trojan to make sure it doesn’t load on emulators researchers use to detect attacks.
The thinking behind the monitoring is that sensors in real end-user devices will record motion as people use them. By contrast, emulators used by security researchers—and possibly Google employees screening apps submitted to Play—are less likely to use sensors. Two Google Play apps recently caught dropping the Anubis banking malware on infected devices would activate the payload only when motion was detected first. Otherwise, the trojan would remain dormant.
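The gating logic is simple to sketch. Below is an illustrative Python version of the heuristic; the actual droppers are Android apps, and the function names and variance threshold here are invented. The idea: collect accelerometer readings and detonate only if they show the jitter of a device in a human hand.

```python
import statistics

# Illustrative sketch of the evasion heuristic, not the actual Anubis
# dropper. An emulator that reports constant sensor values never trips
# the variance test, so the payload stays dormant under analysis.

def looks_like_real_device(accel_samples, min_variance=0.005):
    """accel_samples: accelerometer magnitudes collected over time."""
    if len(accel_samples) < 2:
        return False
    return statistics.variance(accel_samples) > min_variance  # arbitrary threshold

def maybe_drop_payload(accel_samples):
    if looks_like_real_device(accel_samples):
        print("motion detected: fetching second-stage payload")
    else:
        print("no motion: staying dormant (probably an emulator)")

maybe_drop_payload([9.81, 9.81, 9.81])      # emulator-like: stays dormant
maybe_drop_payload([9.7, 9.9, 9.6, 10.1])   # handheld jitter: triggers
```

The obvious countermeasure for analysts is to replay recorded real-device sensor traces inside the emulator.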
In our research and vulnerability discoveries, we found that weaknesses in the controllers can be (easily) taken advantage of to move full-sized machines such as cranes used in construction sites and factories. In the different attack classes that we’ve outlined, we were able to perform the attacks quickly and even switch on the controlled machine despite an operator’s having issued an emergency stop (e-stop).
The core of the problem lies in how, instead of depending on standard wireless technologies, these industrial remote controllers rely on proprietary RF protocols, which are decades old and primarily focused on safety at the expense of security. It wasn’t until the arrival of Industry 4.0, as well as the continuing adoption of the industrial internet of things (IIoT), that industries began to acknowledge the pressing need for security.
To prevent the problems of customer binding, and losing business when darknet markets go down, merchants have begun to leave the specialized and centralized platforms and instead ventured to use widely accessible technology to build their own communications and operational back-ends.
Instead of using websites on the darknet, merchants are now operating invite-only channels on widely available mobile messaging systems like Telegram. This allows the merchant to control the reach of their communication better and be less vulnerable to system take-downs. To further stabilize the connection between merchant and customer, repeat customers are given unique messaging contacts that are independent of shared channels and thus even less likely to be found and taken down. Channels are often operated by automated bots that allow customers to inquire about offers and initiate the purchase, often even allowing a fully bot-driven experience without human intervention on the merchant’s side.
The other major change is the use of “dead drops” instead of the postal system, which has proven vulnerable to tracking and interception. Now, goods are hidden in publicly accessible places like parks, and the location is given to the customer on purchase. The customer then goes to the location and picks up the goods. This means that delivery becomes asynchronous for the merchant: he can hide a lot of product in different locations for future, not-yet-known purchases. For the client, the time to delivery is significantly shorter than waiting for a letter or parcel shipped by traditional means: he has the product in his hands in a matter of hours instead of days. Furthermore, this method does not require the customer to give any personally identifiable information to the merchant, who in turn no longer has to safeguard it. Less data means less risk for everyone.
The use of dead drops also significantly reduces the merchant’s risk of being discovered through tracking within the postal system. He does not have to visit any easily surveilled post office or letter box; instead, the whole public space becomes his hiding territory.
Cryptocurrencies are still the main means of payment, but due to the higher customer binding and the merchant’s vetting process, escrows are seldom employed. Usually only multi-party transactions between customer and merchant are established, and often not even that.
Besides allowing much more secure and efficient business for both sides of the transaction, this has also led to changes in the organizational structure of merchants:
Instead of the flat hierarchies witnessed with darknet markets, merchants today employ hierarchical structures again. These consist of a procurement layer, a sales layer, and a distribution layer. The people constituting each layer usually do not know the identities of those in the higher layers, nor are they ever in personal contact with them. All interaction is digital—messaging systems and cryptocurrencies again; product moves only through dead drops.
The procurement layer purchases product wholesale and smuggles it into the region. It is then sold for cryptocurrency to select people who operate the sales layer. After that transaction, the risks of the procurement and sales layers are isolated from each other.
The sales layer divides the product into smaller units and gives the location of those dead drops to the distribution layer. The distribution layer then divides the product again and places typical sales quantities into new dead drops. The location of these dead drops is communicated to the sales layer which then sells these locations to the customers through messaging systems.
To prevent theft by the distribution layer, the sales layer randomly tests dead drops by tasking different members of the distribution layer with picking up product from a dead drop and hiding it somewhere else, after verifying its contents. Usually each unit of product is tagged with a piece of paper containing a unique secret word, which is used to prove to the sales layer that a dead drop was found. Members of the distribution layer have to post security – in the form of cryptocurrency – to the sales layer, and they lose part of that security with every dead drop that fails the testing, and with every dead drop they fail to test. So far, no reports have surfaced of violence being used to ensure the performance of members of these structures.
This concept of using messaging, cryptocurrency, and dead drops even within the merchant structure allows the members of each layer to be completely isolated from each other, knowing nothing about the higher layers at all. There is no trace to follow if a distribution-layer member is captured while servicing a dead drop. He will often not even be distinguishable from a regular customer. This makes these structures extremely secure against infiltration, takeover, and capture. They are inherently resilient.
It is because of the use of dead drops and hierarchical structures that we call this kind of organization a Dropgang.
Pigeons are certainly no substitute for drones, but they provide a low-visibility option to relay information. Considering the storage capacity of microSD memory cards, a pigeon’s organic characteristics provide front-line forces a relatively clandestine means to transport gigabytes of video, voice, or still imagery and documentation over considerable distance with zero electromagnetic emissions or obvious detectability to radar. These decidedly low-technology options prove difficult to detect and track. Pigeons cannot talk under interrogation, although they are not entirely immune to being held under suspicion of espionage. Within an urban environment, a pigeon has even greater potential to blend into the local avian population, further complicating detection.
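The bandwidth arithmetic behind that claim is worth making explicit. A back-of-the-envelope sketch, assuming figures of my own choosing rather than the author’s: one 512 GB microSD card and a pigeon averaging 80 km/h over 100 km.

```python
# Back-of-the-envelope sneakernet bandwidth. The card size, distance,
# and flight speed are assumed figures for illustration.

capacity_bits = 512 * 8 * 10**9          # one 512 GB microSD card, in bits
distance_km, speed_kmh = 100, 80
flight_seconds = distance_km / speed_kmh * 3600

print(f"{capacity_bits / flight_seconds / 10**9:.1f} Gbit/s")  # ~0.9 Gbit/s
```

That is competitive with a gigabit link, with zero RF emissions; the latency, of course, is measured in hours.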
The author points out that both France and China still maintain a small number of pigeons in case electronic communications are disrupted.
And there’s an existing RFC.
EDITED TO ADD (2/13): The Russian military is still using pigeons.
[2019.01.25] Last week, I evaluated the security of a recent GCHQ backdoor proposal for communications systems. Furthering the debate, Nate Cardozo and Seth Schoen of EFF explain how this sort of backdoor can be detected:
In fact, we think when the ghost feature is active—silently inserting a secret eavesdropping member into an otherwise end-to-end encrypted conversation in the manner described by the GCHQ authors—it could be detected (by the target as well as certain third parties) with at least four different techniques: binary reverse engineering, cryptographic side channels, network-traffic analysis, and crash log analysis. Further, crash log analysis could lead unrelated third parties to find evidence of the ghost in use, and it’s even possible that binary reverse engineering could lead researchers to find ways to disable the ghost capability on the client side. It should be obvious that none of these possibilities are desirable for law enforcement or society as a whole. And while we’ve theorized some types of mitigations that might make the ghost less detectable by particular techniques, they could also impose considerable costs to the network when deployed at the necessary scale, as well as creating new potential security risks or detection methods.
EDITED TO ADD (1/26): Good commentary on how to defeat the backdoor detection.
[2019.01.28] The Japanese government is going to run penetration tests against all the IoT devices in their country, in an effort to (1) figure out what’s insecure, and (2) help consumers secure them:
The survey is scheduled to kick off next month, when authorities plan to test the password security of over 200 million IoT devices, beginning with routers and web cameras. Devices in people’s homes and on enterprise networks will be tested alike.
The Japanese government’s decision to log into users’ IoT devices has sparked outrage in Japan. Many have argued that this is an unnecessary step, as the same results could be achieved by just sending a security alert to all users, as there’s no guarantee that the users found to be using default or easy-to-guess passwords would change their passwords after being notified in private.
However, the government’s plan has its technical merits. Many of today’s IoT and router botnets are being built by hackers who take over devices with default or easy-to-guess passwords.
Hackers can also build botnets with the help of exploits and vulnerabilities in router firmware, but the easiest way to assemble a botnet is by collecting the devices that users have failed to secure with custom passwords.
Securing these devices is often a pain, as some expose Telnet or SSH ports online without the users’ knowledge, and very few users know how to change the passwords for those services. Further, other devices also come with secret backdoor accounts that in some cases can’t be removed without a firmware update.
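The check the survey describes amounts to trying a short list of factory defaults against each device. Here is a minimal sketch of that idea for devices with an HTTP admin interface; the credential list, URL scheme, and success test are simplified assumptions, and running anything like this against devices you don’t own or aren’t authorized to test is illegal in most places.

```python
import requests

# Illustrative default-credential check for a device with an HTTP admin
# page protected by Basic auth. Real surveys also cover Telnet and SSH.

DEFAULT_CREDENTIALS = [
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "12345"),
]

def accepts_default_login(host):
    for username, password in DEFAULT_CREDENTIALS:
        try:
            r = requests.get(f"http://{host}/", auth=(username, password),
                             timeout=5)
        except requests.RequestException:
            return None                  # unreachable or not an HTTP device
        if r.status_code == 200:         # crude success test; real scanners
            return (username, password)  # inspect the response more carefully
    return None

print(accepts_default_login("192.0.2.1"))  # TEST-NET-1 address, illustration only
```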
I am interested in the results of this survey. Japan isn’t very different from other industrialized nations in this regard, so its findings should apply generally. I am less optimistic about the country’s ability to secure all of this stuff—especially before the 2020 Summer Olympics.
This is the Group FaceTime bug that let a caller hear audio from a recipient’s phone before the call was answered. It is definitely an embarrassment, and Apple was right to disable Group FaceTime until it’s fixed. But it’s hard to imagine how an adversary could operationalize this in any useful way.
New York governor Andrew M. Cuomo wrote: “The FaceTime bug is an egregious breach of privacy that puts New Yorkers at risk.” Kinda, I guess.
EDITED TO ADD (1/30): This bug/vulnerability was first discovered by a 14-year-old, whose mother tried to alert Apple with no success.
In a very short amount of time, three vulnerabilities have been discovered:
- The user’s Wi-Fi credentials have been recovered (stored in plaintext in the flash memory).
- No security settings: the device is completely open (no secure boot, debug interface left enabled, no flash encryption).
- The root certificate and RSA private key have been extracted.
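Findings like the plaintext Wi-Fi credentials typically start with dumping the device’s flash and scanning it for printable strings, the classic Unix `strings` pass. A minimal Python sketch of that first step; the dump filename and keyword list are hypothetical.

```python
import re

# A re-implementation of the Unix `strings` idiom as a first pass over a
# firmware dump. Plaintext secrets, like the Wi-Fi credentials here, show
# up directly in the output. The filename is hypothetical.

def printable_strings(blob, min_len=6):
    # Runs of printable ASCII at least min_len bytes long.
    return [m.decode() for m in re.findall(rb"[ -~]{%d,}" % min_len, blob)]

with open("lifx_flash_dump.bin", "rb") as f:
    for s in printable_strings(f.read()):
        if any(k in s.lower() for k in ("ssid", "psk", "pass", "wpa")):
            print(s)
```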
Boing Boing post.
[2019.01.31] A year ago, the Norwegian Consumer Council published an excellent security analysis of children’s GPS-connected smart watches. The security was terrible. Not only could parents track the children; anyone else could track them, too.
A recent analysis checked if anything had improved after that torrent of bad press. Short answer: no.
Guess what: a train wreck. Anyone could access the entire database, including real-time child location, name, parents’ details, etc. Not just Gator watches either—the same back end covered multiple brands and tens of thousands of watches.
The Gator web backend was passing the user level as a parameter. Changing that value to another number gave super admin access throughout the platform. The system failed to validate that the user had the appropriate permission to take admin control!
This means that an attacker could get full access to all account information and all watch information. They could view any user of the system and any device on the system, including its location. They could manipulate everything and even change users’ emails/passwords to lock them out of their watch.
In fairness, upon our reporting of the vulnerability to them, Gator got it fixed in 48 hours.
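The flaw described above is a textbook broken-access-control bug: the server trusted a privilege level supplied by the client. A minimal Python sketch of the pattern and its fix, with invented names (this is not Gator’s actual code):

```python
# Invented names; the pattern, not Gator's code. The bug: trusting a
# client-supplied privilege level instead of a server-side lookup.

ACCOUNTS = {"alice": {"role": "user"}, "carol": {"role": "super_admin"}}

def get_accounts_vulnerable(params):
    # Broken: the client claims its own privilege level.
    if params.get("user_level") == "super_admin":
        return ACCOUNTS                      # any user can see everyone
    return {params["user_id"]: ACCOUNTS[params["user_id"]]}

def get_accounts_fixed(params, authenticated_user):
    # Fixed: derive the role from the authenticated identity, server-side.
    if ACCOUNTS[authenticated_user]["role"] == "super_admin":
        return ACCOUNTS
    return {authenticated_user: ACCOUNTS[authenticated_user]}

# An ordinary user escalates simply by editing one request parameter:
print(get_accounts_vulnerable({"user_id": "alice",
                               "user_level": "super_admin"}))
# The fixed version ignores the claimed level:
print(get_accounts_fixed({"user_level": "super_admin"}, "alice"))
```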
This is a lesson in the limits of naming and shaming: publishing vulnerabilities in an effort to get companies to improve their security. If a company is specifically named, it is likely to improve the specific vulnerability described. But that is unlikely to translate into improved security practices in the future. If an industry, or product category, is named generally, nothing is likely to happen. This is one of the reasons I am a proponent of regulation.
EDITED TO ADD (2/13): The EU has acted in a similar case.
[2019.02.01] Our work in cybersecurity is inextricably intertwined with public policy and—more generally—the public interest. It’s obvious in the debates on encryption and vulnerability disclosure, but it’s also part of the policy discussions about the Internet of Things, cryptocurrencies, artificial intelligence, social media platforms, and pretty much everything else related to IT.
This societal dimension to our traditionally technical area is bringing with it a need for public-interest technologists.
Defining this term is difficult. One blog post described public-interest technologists as “technology practitioners who focus on social justice, the common good, and/or the public interest.” A group of academics in this field wrote that “public-interest technology refers to the study and application of technology expertise to advance the public interest/generate public benefits/promote the public good.”
I think of public-interest technologists as people who combine their technological expertise with a public-interest focus, either by working on tech policy (for the EFF or as a congressional staffer, as examples), working on a technology project with a public benefit (such as Tor or Signal), or working as a more traditional technologist for an organization with a public-interest focus (providing IT security for Human Rights Watch, as an example). Public-interest technology isn’t one thing; it’s many things. And not everyone likes the term. Maybe it’s not the most accurate term for what different people do, but it’s the best umbrella term that covers everyone.
It’s a growing field—one far broader than cybersecurity—and one that I am increasingly focusing my time on. I maintain a resources page for public-interest technology. (This is the single best document to read about the current state of public-interest technology, and what is still to be done.)
This year, I am bringing some of these ideas to the RSA Conference. In partnership with the Ford Foundation, I am hosting a mini-track on public-interest technology. Six sessions throughout the day on Thursday will highlight different aspects of this important work. We’ll look at public-interest technologists inside governments, as part of civil society, at universities, and in corporate environments.
- How Public-Interest Technologists Are Changing the World. This introductory panel lays the groundwork for the day to come. I’ll be joined on stage by Matt Mitchell of Tactical Tech, and we’ll discuss how public-interest technologists are already changing the world.
- Public-Interest Tech in Silicon Valley. Most of us work for technology companies, and this panel discusses public-interest technology work within companies. Mitchell Baker of Mozilla Corp. and Cindy Cohn of the EFF will lead the discussion, looking at both public-interest projects within corporations and employee activism initiatives.
- Working in Civil Society. Bringing a technological perspective into civil society can transform how organizations do their work. Through a series of lightning talks, this session examines how this transformation can happen from a variety of perspectives: exposing government surveillance, protecting journalists worldwide, preserving a free and open Internet, bringing a security focus to artificial intelligence research, protecting NGO networks, and more. For those of us in security, bringing tech tools to those who need them is core to what we do.
- Government Needs You. Government needs technologists at all levels. We’re needed on legislative staffs and at regulatory agencies in order to make effective tech policy, but we’re also needed elsewhere to implement policy more broadly. We’re needed to advise courts, testify at hearings, and serve on advisory committees. At this session, you’ll hear from public-interest technologists who have had a major impact on government from a variety of positions, and learn about ways you can get involved.
- Changing Academia. Higher education needs to incorporate a public-interest perspective in technology departments, and a technology perspective in public-policy departments. This could look like ethics courses for computer science majors, programming for law students, or joint degrees that combine technology and social science. Danny Weitzner of MIT and Latanya Sweeney of Harvard will discuss efforts to build these sorts of interdisciplinary classes, programs, and institutes.
- The Future of Public-Interest Tech. Creating an environment where public-interest technology can flourish will require a robust pipeline: more people wanting to go into this field, more places for them to go, and an improved market that matches supply with demand. In this closing session, Jenny Toomey of the Ford Foundation and I will sum up the day, discuss future directions for growing the field and its funding, highlight outstanding needs and gaps, and describe how you can get involved.
Check here for times and locations, and be sure to reserve your seat.
We all need to help. I don’t mean that we all need to quit our jobs and go work on legislative staffs; there’s a lot we can do while still maintaining our existing careers. We can advise governments and other public-interest organizations. We can agitate for the public interest inside the corporations we work for. We can speak at conferences and write opinion pieces for publication. We can teach part-time at all levels. But some of us will need to do this full-time.
There’s an interesting parallel to public-interest law, which covers everything from human-rights lawyers to public defenders. In the 1960s, that field didn’t exist. The field was deliberately created, funded by organizations like the Ford Foundation. They created a world where public-interest law is valued. Today, when the ACLU advertises for a staff attorney, paying a third to a tenth of a normal salary, it gets hundreds of applicants. Today, 20% of Harvard Law School grads go into public-interest law, while the percentage of computer science grads doing public-interest work is basically zero. This is what we need to fix.
Please stop in at my mini-track. Come for a panel that interests you, or stay for the whole day. Bring your ideas. Find me to talk about this further. Pretty much all the major policy debates of this century will have a strong technological component—and an important cybersecurity angle—and we all need to get involved.
This essay originally appeared on the RSA Conference blog.
Michael Brennan of the Ford Foundation also wrote an essay on the event.
[2019.02.04] The Wired headline sums it up nicely—“Facebook Hires Up Three of Its Biggest Privacy Critics”:
I know these people. They’re ethical, and they’re on the right side. I hope they continue to do their good work from inside Facebook.
Like all the other blockchain vulnerabilities and updates, this demonstrates the ridiculousness of the notion that code can replace people, that trust can be encompassed in the protocols, or that human governance is not necessary.
[2019.02.06] In Gmail addresses, the dots don’t matter. The account “firstname.lastname@gmail.com” maps to the exact same address as “firstnamelastname@gmail.com” and “first.name.last.name@gmail.com”—and so on. (Note: I own none of those addresses, if they are actually valid.)
This fact can be used to commit fraud:
Recently, we observed a group of BEC actors make extensive use of Gmail dot accounts to commit a large and diverse amount of fraud. Since early 2018, this group has used this fairly simple tactic to facilitate the following fraudulent activities:
- Submit 48 credit card applications at four US-based financial institutions, resulting in the approval of at least $65,000 in fraudulent credit
- Register for 14 trial accounts with a commercial sales leads service to collect targeting data for BEC attacks
- File 13 fraudulent tax returns with an online tax filing service
- Submit 12 change of address requests with the US Postal Service
- Submit 11 fraudulent Social Security benefit applications
- Apply for unemployment benefits under nine identities in a large US state
- Submit applications for FEMA disaster assistance under three identities
In each case, the scammers created multiple accounts on each website within a short period of time, modifying the placement of periods in the email address for each account. Each of these accounts is associated with a different stolen identity, but all email from these services is received by the same Gmail account. Thus, the group is able to centralize and organize their fraudulent activity around a small set of email accounts, thereby increasing productivity and making it easier to continue their fraudulent behavior.
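The defensive counterpart is straightforward: canonicalize Gmail addresses before checking for duplicate accounts. A sketch, assuming Gmail’s dot-insensitivity and its separate plus-suffix aliasing; other providers have different rules, so this applies only to Gmail domains.

```python
def canonicalize_gmail(address):
    """Collapse Gmail aliases to one canonical form for duplicate detection."""
    local, _, domain = address.lower().partition("@")
    if domain in ("gmail.com", "googlemail.com"):
        local = local.split("+", 1)[0]   # drop the +suffix alias
        local = local.replace(".", "")   # Gmail ignores dots entirely
    return f"{local}@{domain}"

applications = [
    "j.doe@gmail.com",
    "jd.oe@gmail.com",
    "j.d.o.e+cards@gmail.com",
]
# All three "different" applicants collapse to the same mailbox:
print({canonicalize_gmail(a) for a in applications})  # {'jdoe@gmail.com'}
```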
This isn’t a new trick. It has been previously documented as a way to trick Netflix users.
[2019.02.07] Gregory C. Allen at the Center for a New American Security has a new report with some interesting analysis and insights into China’s AI strategy across the commercial, government, and military sectors. There are numerous security—and national security—implications.
[2019.02.12] In his 2008 white paper that first proposed bitcoin, the anonymous Satoshi Nakamoto concluded with: “We have proposed a system for electronic transactions without relying on trust.” He was referring to blockchain, the system behind bitcoin cryptocurrency. The circumvention of trust is a great promise, but it’s just not true. Yes, bitcoin eliminates certain trusted intermediaries that are inherent in other payment systems like credit cards. But you still have to trust bitcoin—and everything about it.
Much has been written about blockchains and how they displace, reshape, or eliminate trust. But when you analyze both blockchain and trust, you quickly realize that there is much more hype than value. Blockchain solutions are often much worse than what they replace.
First, a caveat. By blockchain, I mean something very specific: the data structures and protocols that make up a public blockchain. These have three essential elements. The first is a distributed (as in multiple copies) but centralized (as in there’s only one) ledger, which is a way of recording what happened and in what order. This ledger is public, meaning that anyone can read it, and immutable, meaning that no one can change what happened in the past.
The second element is the consensus algorithm, which is a way to ensure all the copies of the ledger are the same. This is generally called mining; a critical part of the system is that anyone can participate. It is also distributed, meaning that you don’t have to trust any particular node in the consensus network. It can also be extremely expensive, both in data storage and in the energy required to maintain it. Bitcoin has the most expensive consensus algorithm the world has ever seen, by far.
Finally, the third element is the currency. This is some sort of digital token that has value and is publicly traded. Currency is a necessary element of a blockchain to align the incentives of everyone involved. Transactions involving these tokens are stored on the ledger.
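For readers who want the first two elements concrete, here is a toy Python sketch of a hash-chained ledger with a drastically simplified proof-of-work rule. It is illustrative only: real systems add signatures, Merkle trees, and a peer-to-peer network, and the difficulty here is trivial by design.

```python
import hashlib, json, time

DIFFICULTY = "0000"   # a block hash must start with these characters

def block_hash(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, transactions):
    block = {"prev": prev_hash, "tx": transactions,
             "time": time.time(), "nonce": 0}
    while not block_hash(block).startswith(DIFFICULTY):
        block["nonce"] += 1              # the deliberately expensive part
    return block

genesis = mine_block("0" * 64, ["coinbase -> alice: 50"])
second = mine_block(block_hash(genesis), ["alice -> bob: 10"])

# Immutability: tampering with history breaks every later link.
genesis["tx"] = ["coinbase -> mallory: 50"]
assert second["prev"] != block_hash(genesis)   # the edit is detectable
print("tampering detected")
```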
Private blockchains are completely uninteresting. (By this, I mean systems that use the blockchain data structure but don’t have the above three elements.) In general, they have some external limitation on who can interact with the blockchain and its features. These are not anything new; they’re distributed append-only data structures with a list of individuals authorized to add to it. Consensus protocols have been studied in distributed systems for more than 60 years. Append-only data structures have been similarly well covered. They’re blockchains in name only, and—as far as I can tell—the only reason to operate one is to ride on the blockchain hype.
All three elements of a public blockchain fit together as a single network that offers new security properties. The question is: Is it actually good for anything? It’s all a matter of trust.
Trust is essential to society. As a species, humans are wired to trust one another. Society can’t function without trust, and the fact that we mostly don’t even think about it is a measure of how well trust works.
The word “trust” is loaded with many meanings. There’s personal and intimate trust. When we say we trust a friend, we mean that we trust their intentions and know that those intentions will inform their actions. There’s also the less intimate, less personal trust—we might not know someone personally, or know their motivations, but we can trust their future actions. Blockchain enables this sort of trust: We don’t know any bitcoin miners, for example, but we trust that they will follow the mining protocol and make the whole system work.
Most blockchain enthusiasts have an unnaturally narrow definition of trust. They’re fond of catchphrases like “in code we trust,” “in math we trust,” and “in crypto we trust.” This is trust as verification. But verification isn’t the same as trust.
In 2012, I wrote a book about trust and security, Liars and Outliers. In it, I listed four very general systems our species uses to incentivize trustworthy behavior. The first two are morals and reputation. The problem is that they scale only to a certain population size. Primitive systems were good enough for small communities, but larger communities required delegation, and more formalism.
The third is institutions. Institutions have rules and laws that induce people to behave according to the group norm, imposing sanctions on those who do not. In a sense, laws formalize reputation. Finally, the fourth is security systems. These are the wide varieties of security technologies we employ: door locks and tall fences, alarm systems and guards, forensics and audit systems, and so on.
These four elements work together to enable trust. Take banking, for example. Financial institutions, merchants, and individuals are all concerned with their reputations, which prevents theft and fraud. The laws and regulations surrounding every aspect of banking keep everyone in line, including backstops that limit risks in the case of fraud. And there are lots of security systems in place, from anti-counterfeiting technologies to internet-security technologies.
In his 2018 book, Blockchain and the New Architecture of Trust, Kevin Werbach outlines four different “trust architectures.” The first is peer-to-peer trust. This basically corresponds to my morals and reputational systems: pairs of people who come to trust each other. His second is leviathan trust, which corresponds to institutional trust. You can see this working in our system of contracts, which allows parties that don’t trust each other to enter into an agreement because they both trust that a government system will help resolve disputes. His third is intermediary trust. A good example is the credit card system, which allows untrusting buyers and sellers to engage in commerce. His fourth trust architecture is distributed trust. This is emergent trust in the particular security system that is blockchain.
What blockchain does is shift some of the trust in people and institutions to trust in technology. You need to trust the cryptography, the protocols, the software, the computers and the network. And you need to trust them absolutely, because they’re often single points of failure.
When that trust turns out to be misplaced, there is no recourse. If your bitcoin exchange gets hacked, you lose all of your money. If your bitcoin wallet gets hacked, you lose all of your money. If you forget your login credentials, you lose all of your money. If there’s a bug in the code of your smart contract, you lose all of your money. If someone successfully hacks the blockchain security, you lose all of your money. In many ways, trusting technology is harder than trusting people. Would you rather trust a human legal system or the details of some computer code you don’t have the expertise to audit?
Blockchain enthusiasts point to more traditional forms of trust—bank processing fees, for example—as expensive. But blockchain trust is also costly; the cost is just hidden. For bitcoin, that’s the cost of the additional bitcoin mined, the transaction fees, and the enormous environmental waste.
Blockchain doesn’t eliminate the need to trust human institutions. There will always be a big gap that can’t be addressed by technology alone. People still need to be in charge, and there is always a need for governance outside the system. This is obvious in the ongoing debate about changing the bitcoin block size, or in fixing the DAO attack against Ethereum. There’s always a need to override the rules, and there’s always a need for the ability to make permanent rules changes. As long as hard forks are a possibility—that’s when the people in charge of a blockchain step outside the system to change it—people will need to be in charge.
Any blockchain system will have to coexist with other, more conventional systems. Modern banking, for example, is designed to be reversible. Bitcoin is not. That makes it hard to make the two compatible, and the result is often an insecurity. Steve Wozniak was scammed out of $70K in bitcoin because he forgot this.
Blockchain technology is often centralized. Bitcoin might theoretically be based on distributed trust, but in practice, that’s just not true. Just about everyone using bitcoin has to trust one of the few available wallets and use one of the few available exchanges. People have to trust the software and the operating systems and the computers everything is running on. And we’ve seen attacks against wallets and exchanges. We’ve seen Trojans and phishing and password guessing. Criminals have even used flaws in the system that people use to repair their cell phones to steal bitcoin.
Moreover, in any distributed trust system, there are backdoor methods for centralization to creep back in. With bitcoin, there are only a few miners of consequence. There’s one company that provides most of the mining hardware. There are only a few dominant exchanges. To the extent that most people interact with bitcoin, it is through these centralized systems. This also allows for attacks against blockchain-based systems.
These issues are not bugs in current blockchain applications; they’re inherent in how blockchain works. Any evaluation of the security of the system has to take the whole socio-technical system into account. Too many blockchain enthusiasts focus on the technology and ignore the rest.
To the extent that people don’t use bitcoin, it’s because they don’t trust bitcoin. That has nothing to do with the cryptography or the protocols. In fact, a system where you can lose your life savings if you forget your key or download a piece of malware is not particularly trustworthy. No amount of explaining how SHA-256 works to prevent double-spending will fix that.
Similarly, to the extent that people do use blockchains, it is because they trust them. People either own bitcoin or not based on reputation; that’s true even for speculators who own bitcoin simply because they think it will make them rich quickly. People choose a wallet for their cryptocurrency, and an exchange for their transactions, based on reputation. We even evaluate and trust the cryptography that underpins blockchains based on the algorithms’ reputation.
To see how this can fail, look at the various supply-chain security systems that are using blockchain. A blockchain isn’t a necessary feature of any of them. The reason they’re successful is that everyone has a single software platform to enter their data into. Even though the blockchain systems are built on distributed trust, people don’t necessarily accept that. For example, some companies don’t trust the IBM/Maersk system because it’s not their blockchain.
Irrational? Maybe, but that’s how trust works. It can’t be replaced by algorithms and protocols. It’s much more social than that.
Still, the idea that blockchains can somehow eliminate the need for trust persists. Recently, I received an email from a company that implemented secure messaging using blockchain. It said, in part: “Using the blockchain, as we have done, has eliminated the need for Trust.” This sentiment suggests the writer misunderstands both what blockchain does and how trust works.
Do you need a public blockchain? The answer is almost certainly no. A blockchain probably doesn’t solve the security problems you think it solves. The security problems it solves are probably not the ones you have. (Manipulating audit data is probably not your major security risk.) A false trust in blockchain can itself be a security risk. The inefficiencies, especially in scaling, are probably not worth it. I have looked at many blockchain applications, and all of them could achieve the same security properties without using a blockchain—of course, then they wouldn’t have the cool name.
Honestly, cryptocurrencies are useless. They’re only used by speculators looking for quick riches, people who don’t like government-backed currencies, and criminals who want a black-market way to exchange money.
To answer the question of whether the blockchain is needed, ask yourself: Does the blockchain change the system of trust in any meaningful way, or just shift it around? Does it just try to replace trust with verification? Does it strengthen existing trust relationships, or try to go against them? How can trust be abused in the new system, and is this better or worse than the potential abuses in the old system? And lastly: What would your system look like if you didn’t use blockchain at all?
If you ask yourself those questions, it’s likely you’ll choose solutions that don’t use public blockchain. And that’ll be a good thing—especially when the hype dissipates.
This essay previously appeared on Wired.com.
I have wanted to write this essay for over a year. The impetus to finally do it came from an invite to speak at the Hyperledger Global Forum in December. This essay is a version of the talk I wrote for that event, made more accessible to a general audience.
It seems to be the season for blockchain takedowns. James Waldo has an excellent essay in Queue. And Nicholas Weaver gave a talk at the Enigma Conference, summarized here. It’s a shortened version of this talk.
[2019.02.13] I had not heard about this case before. Zurich Insurance has refused to pay Mondelez International’s claim of $100 million in damages from NotPetya, claiming that the attack was an act of war and therefore not covered. Mondelez is suing.
Those turning to cyber insurance to manage their exposure presently face significant uncertainties about its promise. First, the scope of cyber risks vastly exceeds available coverage, as cyber perils cut across most areas of commercial insurance in an unprecedented manner: direct losses to policyholders and third-party claims (clients, customers, etc.); financial, physical and IP damages; business interruption, and so on. Yet no cyber insurance policies cover this entire spectrum. Second, the scope of cyber-risk coverage under existing policies, whether traditional general liability or property policies or cyber-specific policies, is rarely comprehensive (to cover all possible cyber perils) and often unclear (i.e., it does not explicitly pertain to all manifestations of cyber perils, or it explicitly excludes some).
But it is in the public interest for Zurich and its peers to expand their role in managing cyber risk. In its ideal state, a mature cyber insurance market could go beyond simply absorbing some of the damage of cyberattacks and play a more fundamental role in engineering and managing cyber risk. It would allow analysis of data across industries to understand risk factors and develop common metrics and scalable solutions. It would allow researchers to pinpoint sources of aggregation risk, such as weak spots in widely relied-upon software and hardware platforms and services. Through its financial levers, the insurance industry can turn these insights into action, shaping private-sector behavior and promoting best practices internationally. Such systematic efforts to improve and incentivize cyber-risk management would redress the conditions that made NotPetya possible in the first place. This, in turn, would diminish the onus on governments to retaliate against attacks.
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.
You can also read these articles on my blog, Schneier on Security.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books—including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, AccessNow, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.
Copyright © 2019 by Bruce Schneier.