Crypto-Gram

October 15, 2018

by Bruce Schneier
CTO, IBM Resilient
schneier@schneier.com
https://www.schneier.com

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

For back issues, or to subscribe, visit Crypto-Gram’s web page.

These same essays and news items appear in the Schneier on Security blog, along with a lively and intelligent comment section. An RSS feed is available.


In this issue:

  1. NSA Attacks Against Virtual Private Networks
  2. Public Shaming of Companies for Bad Security
  3. Pegasus Spyware Used in 45 Countries
  4. Security Vulnerability in ESS ExpressVote Touchscreen Voting Computer
  5. AES Resulted in a $250-Billion Economic Benefit
  6. New Findings About Prime Number Distribution Almost Certainly Irrelevant to Cryptography
  7. New Variants of Cold-Boot Attack
  8. Evidence for the Security of PKCS #1 Digital Signatures
  9. Counting People through a Wall with Wi-Fi
  10. Yet Another IoT Cybersecurity Document
  11. Major Tech Companies Finally Endorse Federal Privacy Regulation
  12. More on the Five Eyes Statement on Encryption and Backdoors
  13. Facebook Is Using Your Two-Factor Authentication Phone Number to Target Advertising
  14. Sophisticated Voice Phishing Scams
  15. Terahertz Millimeter-Wave Scanners
  16. The Effects of GDPR’s 72-Hour Notification Rule
  17. Helen Nissenbaum on Data Privacy and Consent
  18. Chinese Supply Chain Hardware Attack
  19. Conspiracy Theories around the “Presidential Alert”
  20. Detecting Credit Card Skimmers
  21. Defeating the “Deal or No Deal” Arcade Game
  22. The US National Cyber Strategy
  23. Access Now Is Looking for a Chief Security Officer
  24. Security Vulnerabilities in US Weapons Systems
  25. Another Bloomberg Story about Supply-Chain Hardware Attacks from China
  26. Security in a World of Physically Capable Computers
  27. Upcoming Speaking Engagements

NSA Attacks Against Virtual Private Networks

[2018.09.17] A 2006 document from the Snowden archives outlines successful NSA operations against “a number of ‘high potential’ virtual private networks, including those of media organization Al Jazeera, the Iraqi military and internet service organizations, and a number of airline reservation systems.”

It’s hard to believe that many of the Snowden documents are now more than a decade old.


Public Shaming of Companies for Bad Security

[2018.09.18] Troy Hunt makes some good points, with good examples.


Pegasus Spyware Used in 45 Countries

[2018.09.19] Citizen Lab has published a new report about the Pegasus spyware. From a ZDNet article:

The malware, known as Pegasus (or Trident), was created by Israeli cyber-security firm NSO Group and has been around for at least three years—when it was first detailed in a report over the summer of 2016.

The malware can operate on both Android and iOS devices, albeit it’s been mostly spotted in campaigns targeting iPhone users primarily. On infected devices, Pegasus is a powerful spyware that can do many things, such as record conversations, steal private messages, exfiltrate photos, and much much more.

From the report:

We found suspected NSO Pegasus infections associated with 33 of the 36 Pegasus operators we identified in 45 countries: Algeria, Bahrain, Bangladesh, Brazil, Canada, Cote d’Ivoire, Egypt, France, Greece, India, Iraq, Israel, Jordan, Kazakhstan, Kenya, Kuwait, Kyrgyzstan, Latvia, Lebanon, Libya, Mexico, Morocco, the Netherlands, Oman, Pakistan, Palestine, Poland, Qatar, Rwanda, Saudi Arabia, Singapore, South Africa, Switzerland, Tajikistan, Thailand, Togo, Tunisia, Turkey, the UAE, Uganda, the United Kingdom, the United States, Uzbekistan, Yemen, and Zambia. As our findings are based on country-level geolocation of DNS servers, factors such as VPNs and satellite Internet teleport locations can introduce inaccuracies.

Six of those countries are known to deploy spyware against political opposition: Bahrain, Kazakhstan, Mexico, Morocco, Saudi Arabia, and the United Arab Emirates.

Also note:

On 17 September 2018, we then received a public statement from NSO Group. The statement mentions that “the list of countries in which NSO is alleged to operate is simply inaccurate. NSO does not operate in many of the countries listed.” This statement is a misunderstanding of our investigation: the list in our report is of suspected locations of NSO infections, it is not a list of suspected NSO customers. As we describe in Section 3, we observed DNS cache hits from what appear to be 33 distinct operators, some of whom appeared to be conducting operations in multiple countries. Thus, our list of 45 countries necessarily includes countries that are not NSO Group customers. We describe additional limitations of our method in Section 4, including factors such as VPNs and satellite connections, which can cause targets to appear in other countries.

Motherboard article. Slashdot and Boing Boing posts.


Security Vulnerability in ESS ExpressVote Touchscreen Voting Computer

[2018.09.20] Of course the ESS ExpressVote voting computer will have lots of security vulnerabilities. It’s a computer, and computers have lots of vulnerabilities. This vulnerability is particularly interesting because it’s the result of a security mistake in the design process. Someone didn’t think the security through, and the result is a voter-verifiable paper audit trail that doesn’t provide the security it promises.

Here are the details:

Now there’s an even worse option than “DRE with paper trail”; I call it the “press this button if it’s OK for the machine to cheat” option. The country’s biggest vendor of voting machines, ES&S, has a line of voting machines called ExpressVote. Some of these are optical scanners (which are fine), and others are “combination” machines, basically a ballot-marking device and an optical scanner all rolled into one.

This video shows a demonstration of ExpressVote all-in-one touchscreens purchased by Johnson County, Kansas. The voter brings a blank ballot to the machine, inserts it into a slot, chooses candidates. Then the machine prints those choices onto the blank ballot and spits it out for the voter to inspect. If the voter is satisfied, she inserts it back into the slot, where it is counted (and dropped into a sealed ballot box for possible recount or audit).

So far this seems OK, except that the process is a bit cumbersome and not completely intuitive (watch the video for yourself). It still suffers from the problems I describe above: voters may not carefully review all the choices, especially in down-ballot races; counties need to buy a lot more voting machines, because voters occupy the machine for a long time (in contrast to op-scan ballots, where they occupy a cheap cardboard privacy screen).

But here’s the amazingly bad feature: “The version that we have has an option for both ways,” [Johnson County Election Commissioner Ronnie] Metsker said. “We instruct the voters to print their ballots so that they can review their paper ballots, but they’re not required to do so. If they want to press the button ‘cast ballot,’ it will cast the ballot, but if they do so they are doing so with full knowledge that they will not see their ballot card, it will instead be cast, scanned, tabulated and dropped in the secure ballot container at the backside of the machine.” [TYT Investigates, article by Jennifer Cohn, September 6, 2018]

Now it’s easy for a hacked machine to cheat undetectably! All the fraudulent vote-counting program has to do is wait until the voter chooses between “cast ballot without inspecting” and “inspect ballot before casting.” If the latter, then don’t cheat on this ballot. If the former, then change votes how it likes, and print those fraudulent votes on the paper ballot, knowing that the voter has already given up the right to look at it.
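
To see how simple the cheat is, here is a minimal sketch of the decision logic in Python. All names are hypothetical; this illustrates the attack described above, not actual ExpressVote code.

    # Hypothetical sketch of a hacked ballot-marking device's logic;
    # nothing here is real ExpressVote code.
    def tabulate(voter_choices, attacker_choices, voter_will_inspect):
        """Return the choices to print on the ballot and count."""
        if voter_will_inspect:
            # The paper will be reviewed, so cheating could be caught:
            # behave honestly on this ballot.
            return voter_choices
        # "Cast ballot" was pressed without review: the printed ballot
        # drops into the box unseen, so altered votes are undetectable.
        return attacker_choices

The machine’s honesty hinges entirely on never knowing, before it prints, whether the paper will be checked.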

A voter-verifiable paper audit trail does not require every voter to verify the paper ballot. But it does require that every voter be able to verify the paper ballot. I am continuously amazed by how bad electronic voting machines are. Yes, they’re computers. But they also seem to be designed by people who don’t understand computer (or any) security.


AES Resulted in a $250-Billion Economic Benefit

[2018.09.21] NIST has released a new study concluding that the AES encryption standard has resulted in a $250-billion worldwide economic benefit over the past 20 years. I have no idea how to even begin to assess the quality of the study and its conclusions—it’s all in the 150-page report, though—but I do like the pretty block diagram of AES on the report’s cover.


New Findings About Prime Number Distribution Almost Certainly Irrelevant to Cryptography

[2018.09.21] Lots of people are e-mailing me about this new result on the distribution of prime numbers. While interesting, it has nothing to do with cryptography. Cryptographers aren’t interested in how to find prime numbers, or even in the distribution of prime numbers. Public-key cryptography algorithms like RSA get their security from the difficulty of factoring large composite numbers that are the product of two prime numbers. That’s completely different.
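
A toy example makes the distinction concrete. Generating large primes is fast and easy; what must be hard is recovering them from their product. A minimal sketch in Python using sympy (numbers far too small for real use):

    from sympy import mod_inverse, randprime

    # Finding primes is the easy part -- this runs in milliseconds.
    p = randprime(2**127, 2**128)
    q = randprime(2**127, 2**128)
    n = p * q                               # public modulus
    e = 65537                               # public exponent
    d = mod_inverse(e, (p - 1) * (q - 1))   # private exponent

    # Security rests on the difficulty of factoring n back into p and
    # q -- not on anything about how primes are distributed.
    message = 42
    ciphertext = pow(message, e, n)
    assert pow(ciphertext, d, n) == message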


New Variants of Cold-Boot Attack

[2018.09.24] If someone has physical access to your locked—but still running—computer, they can probably break the hard drive’s encryption. This is a “cold boot” attack, and one we thought we had solved. We had not:

To carry out the attack, the F-Secure researchers first sought a way to defeat the industry-standard cold boot mitigation. The protection works by creating a simple check between an operating system and a computer’s firmware, the fundamental code that coordinates hardware and software for things like initiating booting. The operating system sets a sort of flag or marker indicating that it has secret data stored in its memory, and when the computer boots up, its firmware checks for the flag. If the computer shuts down normally, the operating system wipes the data and the flag with it. But if the firmware detects the flag during the boot process, it takes over the responsibility of wiping the memory before anything else can happen.

Looking at this arrangement, the researchers realized a problem. If they physically opened a computer and directly connected to the chip that runs the firmware and the flag, they could interact with it and clear the flag. This would make the computer think it shut down correctly and that the operating system wiped the memory, because the flag was gone, when actually potentially sensitive data was still there.

So the researchers designed a relatively simple microcontroller and program that can connect to the chip the firmware is on and manipulate the flag. From there, an attacker could move ahead with a standard cold boot attack. Though any number of things could be stored in memory when a computer is idle, Segerdahl notes that an attacker can be sure the device’s decryption keys will be among them if she is staring down a computer’s login screen, which is waiting to check any inputs against the correct ones.
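
A toy model may help fix the mechanism in mind. The flag corresponds roughly to the TCG “memory overwrite request” setting stored in firmware NVRAM; this Python sketch is illustrative, not vendor firmware code.

    class RAM:
        def __init__(self):
            self.secrets = "disk-encryption-key"

        def wipe(self):
            self.secrets = None

    def firmware_boot(nvram, ram):
        # The mitigation: if the OS flagged that secrets remain in
        # memory, firmware wipes RAM before anything else can run.
        if nvram.get("flag"):
            ram.wipe()
            nvram["flag"] = False

    # The bypass: physically reprogram the chip holding the flag so
    # the firmware skips the wipe, then cold-boot and dump memory.
    nvram, ram = {"flag": True}, RAM()
    nvram["flag"] = False            # attacker's hardware manipulation
    firmware_boot(nvram, ram)
    assert ram.secrets is not None   # the keys survived the reboot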


Evidence for the Security of PKCS #1 Digital Signatures

[2018.09.25] This is interesting research: “On the Security of the PKCS#1 v1.5 Signature Scheme”:

Abstract: The RSA PKCS#1 v1.5 signature algorithm is the most widely used digital signature scheme in practice. Its two main strengths are its extreme simplicity, which makes it very easy to implement, and that verification of signatures is significantly faster than for DSA or ECDSA. Despite the huge practical importance of RSA PKCS#1 v1.5 signatures, providing formal evidence for their security based on plausible cryptographic hardness assumptions has turned out to be very difficult. Therefore the most recent version of PKCS#1 (RFC 8017) even recommends as a replacement the more complex and less efficient scheme RSA-PSS, as it is provably secure and therefore considered more robust. The main obstacle is that RSA PKCS#1 v1.5 signatures use a deterministic padding scheme, which makes standard proof techniques not applicable.

We introduce a new technique that enables the first security proof for RSA-PKCS#1 v1.5 signatures. We prove full existential unforgeability against adaptive chosen-message attacks (EUF-CMA) under the standard RSA assumption. Furthermore, we give a tight proof under the Phi-Hiding assumption. These proofs are in the random oracle model and the parameters deviate slightly from the standard use, because we require a larger output length of the hash function. However, we also show how RSA-PKCS#1 v1.5 signatures can be instantiated in practice such that our security proofs apply.

In order to draw a more complete picture of the precise security of RSA PKCS#1 v1.5 signatures, we also give security proofs in the standard model, but with respect to weaker attacker models (key-only attacks) and based on known complexity assumptions. The main conclusion of our work is that from a provable security perspective RSA PKCS#1 v1.5 can be safely used, if the output length of the hash function is chosen appropriately.

I don’t think the protocol is “provably secure,” meaning that it cannot have any vulnerabilities. What this paper demonstrates is that there are no vulnerabilities under the model of the proof. And, more importantly, that PKCS #1 v1.5 is as secure as any of its successors like RSA-PSS and RSA Full-Domain Hash.
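
The deterministic padding at issue is easy to see with a standard library. A quick sketch using Python’s cryptography package (not from the paper; just an illustration of the difference): PKCS#1 v1.5 yields identical signatures for identical messages, while RSA-PSS randomizes each one.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    msg = b"example message"

    # PKCS#1 v1.5: deterministic padding, so repeat signatures match.
    sig = key.sign(msg, padding.PKCS1v15(), hashes.SHA256())
    assert sig == key.sign(msg, padding.PKCS1v15(), hashes.SHA256())

    # RSA-PSS: a random salt, so repeat signatures differ.
    pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                      salt_length=padding.PSS.MAX_LENGTH)
    assert key.sign(msg, pss, hashes.SHA256()) != key.sign(msg, pss, hashes.SHA256())

    # Verification raises InvalidSignature if the signature is bad.
    key.public_key().verify(sig, msg, padding.PKCS1v15(), hashes.SHA256())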


Counting People through a Wall with Wi-Fi

[2018.09.27] Interesting research:

In the team’s experiments, one WiFi transmitter and one WiFi receiver are behind walls, outside a room in which a number of people are present. The room can get very crowded with as many as 20 people zigzagging each other. The transmitter sends a wireless signal whose received signal strength (RSSI) is measured by the receiver. Using only such received signal power measurements, the receiver estimates how many people are inside the room: an estimate that closely matches the actual number. It is noteworthy that the researchers do not do any prior measurements or calibration in the area of interest; their approach has only a very short calibration phase that need not be done in the same area.

Academic paper.
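
As a rough illustration of the signal being exploited (a toy in Python, not the authors’ estimator): each person crossing the transmitter-receiver link briefly attenuates the signal, so a busier room produces more frequent RSSI dips.

    import numpy as np

    def count_dips(rssi, baseline, drop_db=5.0):
        """Count distinct dips where RSSI falls drop_db below baseline."""
        rssi = np.asarray(rssi, dtype=float)
        below = rssi < (baseline - drop_db)
        # A dip starts wherever the signal crosses the threshold downward.
        return int(np.sum(~below[:-1] & below[1:]))

    def estimate_occupancy(rssi, baseline, dips_per_person=3.0):
        # dips_per_person would come from the short calibration phase,
        # which (per the paper) need not happen in the same room.
        return count_dips(rssi, baseline) / dips_per_person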


Yet Another IoT Cybersecurity Document

[2018.09.28] This one is from NIST: “Considerations for Managing Internet of Things (IoT) Cybersecurity and Privacy Risks.” It’s still in draft.

Remember, there are many others.


Major Tech Companies Finally Endorse Federal Privacy Regulation

[2018.09.28] The major tech companies, scared that states like California might impose actual privacy regulations, have now decided that they can better lobby the federal government for much weaker national legislation that will preempt any stricter state measures.

I’m sure they’ll still do all they can to weaken the California law, but they know they’ll do better at the national level.


More on the Five Eyes Statement on Encryption and Backdoors

[2018.10.01] Earlier this month, I wrote about a statement by the Five Eyes countries about encryption and back doors. (Short summary: they like them.) One of the weird things about the statement is that it was clearly written from a law-enforcement perspective, though we normally think of the Five Eyes as a consortium of intelligence agencies.

Susan Landau examines the details of the statement, explains what’s going on, and why the statement is a lot less than what it might seem.


Facebook Is Using Your Two-Factor Authentication Phone Number to Target Advertising

[2018.10.02] From Kashmir Hill:

Facebook is not content to use the contact information you willingly put into your Facebook profile for advertising. It is also using contact information you handed over for security purposes and contact information you didn’t hand over at all, but that was collected from other people’s contact books, a hidden layer of details Facebook has about you that I’ve come to call “shadow contact information.” I managed to place an ad in front of Alan Mislove by targeting his shadow profile. This means that the junk email address that you hand over for discounts or for shady online shopping is likely associated with your account and being used to target you with ads.

Here’s the research paper. Hill again:

They found that when a user gives Facebook a phone number for two-factor authentication or in order to receive alerts about new log-ins to a user’s account, that phone number became targetable by an advertiser within a couple of weeks. So users who want their accounts to be more secure are forced to make a privacy trade-off and allow advertisers to more easily find them on the social network.


Sophisticated Voice Phishing Scams

[2018.10.02] Brian Krebs is reporting on some new and sophisticated phishing scams over the telephone.

I second his advice: “never give out any information about yourself in response to an unsolicited phone call.” Always call them back, and not using the number offered to you by the caller. Always.

EDITED TO ADD: In 2009, I wrote:

When I was growing up, children were commonly taught: “don’t talk to strangers.” Strangers might be bad, we were told, so it’s prudent to steer clear of them.

And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.

These two pieces of advice may seem to contradict each other, but they don’t. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it’s not a random choice. It’s more likely, although still unlikely, that the stranger is up to no good.

That advice is generalizable to this instance as well. The problem isn’t that someone claiming to be from your bank is asking for personal information. The problem is that they contacted you first.

Where else does this advice hold true?


Terahertz Millimeter-Wave Scanners

[2018.10.03] Interesting article on terahertz millimeter-wave scanners and their use in detecting terrorist bombers.

The heart of the device is a block of electronics about the size of a 1990s tower personal computer. It comes housed in a musician’s black case, akin to the one Spinal Tap might use on tour. At the front: a large, square white plate, the terahertz camera and, just above it, an ordinary closed-circuit television (CCTV) camera. Mounted on a shelf inside the case is a laptop that displays the CCTV image and the blobby terahertz image side by side.

An operator compares the two images as people flow past, looking for unexplained dark areas that could represent firearms or suicide vests. Most images that might be mistaken for a weapon—backpacks or a big patch of sweat on the back of a person’s shirt—are easily evaluated by observing the terahertz image alongside an unaltered video picture of the passenger.

It is up to the operator—in LA’s case, presumably a transport police officer—to query people when dark areas on the terahertz image suggest concealed large weapons or suicide vests. The device cannot see inside bodies, backpacks or shoes. “If you look at previous incidents on public transit systems, this technology would have detected those,” Sotero says, noting LA Metro worked “closely” with the TSA for over a year to test this and other technologies. “It definitely has the backing of TSA.”

How the technology works in practice depends heavily on the operator’s training. According to Evans, “A lot of tradecraft goes into understanding where the threat item is likely to be on the body.” He sees the crucial role played by the operator as giving back control to security guards and allowing them to use their common sense.

I am quoted in the article as being skeptical of the technology, particularly of how it’s deployed.


The Effects of GDPR’s 72-Hour Notification Rule

[2018.10.03] The EU’s GDPR regulation requires companies to report a breach within 72 hours. Alex Stamos, former Facebook CISO now at Stanford University, points out how this can be a problem:

Interesting impact of the GDPR 72-hour deadline: companies announcing breaches before investigations are complete.

1) Announce & cop to max possible impacted users.

2) Everybody is confused on actual impact, lots of rumors.

3) A month later truth is included in official filing.

Last week’s Facebook hack is his example.

The Twitter conversation continues as various people try to figure out if the European law allows a delay in order to work with law enforcement to catch the hackers, or if a company can report the breach privately with some assurance that it won’t accidentally leak to the public.

The other interesting impact is the foreclosing of any possible coordination with law enforcement. I once ran response for a breach of a financial institution, which wasn’t disclosed for months as the company was working with the USSS to lure the attackers into a trap. It worked.

[…]

The assumption that anything you share with an EU DPA stays confidential in the current media environment has been disproven by my personal experience.

This is a perennial problem: we can get information quickly, or we can get accurate information. It’s hard to get both at the same time.


Helen Nissenbaum on Data Privacy and Consent

[2018.10.04] This is a fantastic Q&A with Cornell Tech Professor Helen Nissenbaum on data privacy and why it’s wrong to focus on consent.

I’m not going to pull a quote, because you should read the whole thing.


Chinese Supply Chain Hardware Attack

[2018.10.04] Bloomberg is reporting about a Chinese espionage operation involving the insertion of a tiny chip into computer products made in China.

I’ve written about this threat more generally (alternate link). Supply-chain security is an insurmountably hard problem. Our IT industry is inexorably international, and anyone involved in the process can subvert the security of the end product. No one wants to even think about a US-only anything; prices would multiply many times over.

We cannot trust anyone, yet we have no choice but to trust everyone. No one is ready for the costs that solving this would entail.

EDITED TO ADD: Apple, Amazon, and others are denying that this attack is real. Stay tuned for more information.

EDITED TO ADD (10/6): TheGrugq comments. Bottom line is that we still don’t know. I think that precisely exemplifies the greater problem.

EDITED TO ADD (10/7): Both the US Department of Homeland Security and the UK National Cyber Security Centre claim to believe the tech companies. Bloomberg is standing by its story. Nicholas Weaver writes that the story is plausible.


Conspiracy Theories around the “Presidential Alert”

[2018.10.04] Noted conspiracy theorist John McAfee tweeted:

The “Presidential alerts”: they are capable of accessing the E911 chip in your phones—giving them full access to your location, microphone, camera and every function of your phone. This not a rant, this is from me, still one of the leading cybersecurity experts. Wake up people!

This is, of course, ridiculous. I don’t even know what an “E911 chip” is. And—honestly—if the NSA wanted in your phone, they would be a lot more subtle than this.

RT has picked up the story, though.

(If they just called it a “FEMA Alert,” there would be a lot less stress about the whole thing.)


Detecting Credit Card Skimmers

[2018.10.05] Interesting research paper: “Fear the Reaper: Characterization and Fast Detection of Card Skimmers”:

Abstract: Payment card fraud results in billions of dollars in losses annually. Adversaries increasingly acquire card data using skimmers, which are attached to legitimate payment devices including point of sale terminals, gas pumps, and ATMs. Detecting such devices can be difficult, and while many experts offer advice in doing so, there exists no large-scale characterization of skimmer technology to support such defenses. In this paper, we perform the first such study based on skimmers recovered by the NYPD’s Financial Crimes Task Force over a 16 month period. After systematizing these devices, we develop the Skim Reaper, a detector which takes advantage of the physical properties and constraints necessary for many skimmers to steal card data. Our analysis shows the Skim Reaper effectively detects 100% of devices supplied by the NYPD. In so doing, we provide the first robust and portable mechanism for detecting card skimmers.

Boing Boing post.
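
The physical insight behind the Skim Reaper is that most skimmers add a second magnetic read head to the card slot, so a measurement card that counts how many heads touch it during an insertion can flag tampering. A hypothetical Python sketch of that counting logic (thresholds invented for illustration; not the authors’ firmware):

    def count_read_heads(contact_positions_mm, min_gap_mm=5.0):
        """Collapse raw contact detections into distinct head positions."""
        heads = []
        for pos in sorted(contact_positions_mm):
            if not heads or pos - heads[-1] > min_gap_mm:
                heads.append(pos)
        return len(heads)

    def skimmer_suspected(contact_positions_mm):
        # A legitimate reader has one head; two or more suggests an
        # extra head added somewhere in the card path.
        return count_read_heads(contact_positions_mm) >= 2

    print(skimmer_suspected([12.0, 12.3, 48.5]))   # True: two heads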


Defeating the “Deal or No Deal” Arcade Game

[2018.10.08] Two teenagers figured out how to beat the “Deal or No Deal” arcade game by filming the computer animation and then slowing it down enough to determine where the big prize was hidden.
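
The general technique works on any game whose “random” animation is simply drawn too fast to follow: record it and step through the footage one frame at a time. A minimal sketch with OpenCV in Python (the file name is a stand-in):

    import cv2

    cap = cv2.VideoCapture("arcade_recording.mp4")   # hypothetical clip
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("frame-by-frame", frame)
        # waitKey(0) blocks until a key press, advancing one frame at a
        # time; press "q" to quit.
        if cv2.waitKey(0) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()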


The US National Cyber Strategy

[2018.10.09] Last month, the White House released the “National Cyber Strategy of the United States of America.” I generally don’t have much to say about these sorts of documents. They’re filled with broad generalities. Who can argue with:

Defend the homeland by protecting networks, systems, functions, and data;

Promote American prosperity by nurturing a secure, thriving digital economy and fostering strong domestic innovation;

Preserve peace and security by strengthening the ability of the United States, in concert with allies and partners, to deter and, if necessary, punish those who use cyber tools for malicious purposes; and

Expand American influence abroad to extend the key tenets of an open, interoperable, reliable, and secure Internet.

The devil is in the details, of course. And the strategy includes no details.

In a New York Times op-ed, Josephine Wolff argues that this new strategy, together with the more-detailed Department of Defense cyber strategy and the classified National Security Presidential Memorandum 13, represents a dangerous shift of US cybersecurity posture from defensive to offensive:

…the National Cyber Strategy represents an abrupt and reckless shift in how the United States government engages with adversaries online. Instead of continuing to focus on strengthening defensive technologies and minimizing the impact of security breaches, the Trump administration plans to ramp up offensive cyberoperations. The new goal: deter adversaries through pre-emptive cyberattacks and make other nations fear our retaliatory powers.

[…]

The Trump administration’s shift to an offensive approach is designed to escalate cyber conflicts, and that escalation could be dangerous. Not only will it detract resources and attention from the more pressing issues of defense and risk management, but it will also encourage the government to act recklessly in directing cyberattacks at targets before they can be certain of who those targets are and what they are doing.

[…]

There is no evidence that pre-emptive cyberattacks will serve as effective deterrents to our adversaries in cyberspace. In fact, every time a country has initiated an unprompted cyberattack, it has invariably led to more conflict and has encouraged retaliatory breaches rather than deterring them. Nearly every major publicly known online intrusion that Russia or North Korea has perpetrated against the United States has had significant and unpleasant consequences.

Wolff is right; this is reckless. In Click Here to Kill Everybody, I argue for a “defense dominant” strategy: that while offense is essential for defense, when the two are in conflict, it should take a back seat to defense. It’s more complicated than that, of course, and I devote a whole chapter to its implications. But as computers and the Internet become more critical to our lives and society, keeping them secure becomes more important than using them to attack others.


Access Now Is Looking for a Chief Security Officer

[2018.10.09] The international digital human rights organization Access Now (I am on the board) is looking to hire a Chief Security Officer.

I believe that, somewhere, there is a highly qualified security person who has had enough of corporate life and wants instead to make a difference in the world. If that’s you, please consider applying.


Security Vulnerabilities in US Weapons Systems

[2018.10.10] The US Government Accountability Office just published a new report: “Weapon Systems Cybersecurity: DOD Just Beginning to Grapple with Scale of Vulnerabilities” (summary here). The upshot won’t be a surprise to any of my regular readers: they’re vulnerable.

From the summary:

Automation and connectivity are fundamental enablers of DOD’s modern military capabilities. However, they make weapon systems more vulnerable to cyber attacks. Although GAO and others have warned of cyber risks for decades, until recently, DOD did not prioritize weapon systems cybersecurity. Finally, DOD is still determining how best to address weapon systems cybersecurity.

In operational testing, DOD routinely found mission-critical cyber vulnerabilities in systems that were under development, yet program officials GAO met with believed their systems were secure and discounted some test results as unrealistic. Using relatively simple tools and techniques, testers were able to take control of systems and largely operate undetected, due in part to basic issues such as poor password management and unencrypted communications. In addition, vulnerabilities that DOD is aware of likely represent a fraction of total vulnerabilities due to testing limitations. For example, not all programs have been tested and tests do not reflect the full range of threats.

It is definitely easier, and cheaper, to ignore the problem or pretend it isn’t a big deal. But that’s probably a mistake in the long run.


Another Bloomberg Story about Supply-Chain Hardware Attacks from China

[2018.10.11] Bloomberg has another story about hardware surveillance implants in equipment made in China. This implant is different from the one Bloomberg reported on last week. That story has been denied by pretty much everyone else, but Bloomberg is sticking by its story and its sources. (I linked to other commentary and analysis here.)

Again, I have no idea what’s true. The story is plausible. The denials are about what you’d expect. My one hesitation in believing this is that no one has produced a photo of the hardware implant. If these things were in servers all over the US, you’d think someone would have come up with a photograph by now.

EDITED TO ADD (10/12): Three more links worth reading.


Security in a World of Physically Capable Computers

[2018.10.12] It’s no secret that computers are insecure. Stories like the recent Facebook hack, the Equifax hack and the hacking of government agencies are remarkable for how unremarkable they really are. They might make headlines for a few days, but they’re just the newsworthy tip of a very large iceberg.

The risks are about to get worse, because computers are being embedded into physical devices and will affect lives, not just our data. Security is not a problem the market will solve. The government needs to step in and regulate this increasingly dangerous space.

The primary reason computers are insecure is that most buyers aren’t willing to pay—in money, features, or time to market—for security to be built into the products and services they want. As a result, we are stuck with hackable internet protocols, computers that are riddled with vulnerabilities and networks that are easily penetrated.

We have accepted this tenuous situation because, for a very long time, computer security has mostly been about data. Banking data stored by financial institutions might be important, but nobody dies when it’s stolen. Facebook account data might be important, but again, nobody dies when it’s stolen. Regardless of how bad these hacks are, it has historically been cheaper to accept the results than to fix the problems. But the nature of how we use computers is changing, and that comes with greater security risks.

Many of today’s new computers are not just screens that we stare at, but objects in our world with which we interact. A refrigerator is now a computer that keeps things cold; a car is now a computer with four wheels and an engine. These computers sense us and our environment, and they affect us and our environment. They talk to each other over networks, they are autonomous, and they have physical agency. They drive our cars, pilot our planes, and run our power plants. They control traffic, administer drugs into our bodies, and dispatch emergency services. These connected computers and the network that connects them—collectively known as “the internet of things”—affect the world in a direct physical manner.

We’ve already seen hacks against robot vacuum cleaners, ransomware that shut down hospitals and denied care to patients, and malware that shut down cars and power plants. These attacks will become more common, and more catastrophic. Computers fail differently than most other machines: It’s not just that they can be attacked remotely—they can be attacked all at once. It’s impossible to take an old refrigerator and infect it with a virus or recruit it into a denial-of-service botnet, and a car without an internet connection simply can’t be hacked remotely. But that computer with four wheels and an engine? It—along with all other cars of the same make and model—can be made to run off the road, all at the same time.

As the threats increase, our longstanding assumptions about security no longer work. The practice of patching a security vulnerability is a good example of this. Traditionally, we respond to the never-ending stream of computer vulnerabilities by regularly patching our systems, applying updates that fix the insecurities. This fails in low-cost devices, whose manufacturers don’t have security teams to write the patches: if you want to update your DVR or webcam for security reasons, you have to throw your old one away and buy a new one. Patching also fails in more expensive devices, and can be quite dangerous. Do we want to allow vulnerable automobiles on the streets and highways during the weeks before a new security patch is written, tested, and distributed?

Another failing assumption is the security of our supply chains. We’ve started to see political battles about government-placed vulnerabilities in computers and software from Russia and China. But supply chain security is about more than where the suspect company is located: we need to be concerned about where the chips are made, where the software is written, who the programmers are, and everything else.

Last week, Bloomberg reported that China inserted eavesdropping chips into hardware made for American companies like Amazon and Apple. The tech companies all denied the accuracy of this report, which precisely illustrates the problem. Everyone involved in the production of a computer must be trusted, because any one of them can subvert the security. As everything becomes a computer and those computers become embedded in national-security applications, supply-chain corruption will be impossible to ignore.

These are problems that the market will not fix. Buyers can’t differentiate between secure and insecure products, so sellers prefer to spend their money on features that buyers can see. The complexity of the internet and of our supply chains make it difficult to trace a particular vulnerability to a corresponding harm. The courts have traditionally not held software manufacturers liable for vulnerabilities. And, for most companies, it has generally been good business to skimp on security, rather than sell a product that costs more, does less, and is on the market a year later.

The solution is complicated, and it’s one I devoted my latest book to answering. There are technological challenges, but they’re not insurmountable—the policy issues are far more difficult. We must engage with the future of internet security as a policy issue. Doing so requires a multifaceted approach, one that requires government involvement at every step.

First, we need standards to ensure that unsafe products don’t harm others. We need to accept that the internet is global and regulations are local, and design accordingly. These standards will include some prescriptive rules for minimal acceptable security. California just enacted an Internet of Things security law that prohibits default passwords. This is just one of many security holes that need to be closed, but it’s a good start.
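
As an illustration, complying with a no-default-passwords rule generally means either shipping each unit with a unique factory credential or refusing to operate until the owner sets one. A minimal first-boot sketch in Python (all names hypothetical):

    import getpass
    import hashlib
    import secrets

    def first_boot(config: dict) -> dict:
        """Refuse to enable services until a per-device password is set."""
        if "password_hash" not in config:
            pw = getpass.getpass("Set a device password: ")
            salt = secrets.token_bytes(16)
            config["password_hash"] = (
                salt,
                hashlib.pbkdf2_hmac("sha256", pw.encode(), salt, 100_000),
            )
        return config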

We also need our standards to be flexible and easy to adapt to the needs of various companies, organizations, and industries. The National Institute of Standards and Technology’s Cybersecurity Framework is an excellent example of this, because its recommendations can be tailored to suit the individual needs and risks of organizations. The Cybersecurity Framework—which contains guidance on how to identify, prevent, respond to, and recover from security risks—is voluntary at this point, which means nobody follows it. Making it mandatory for critical industries would be a great first step. An appropriate next step would be to implement more specific standards for industries like automobiles, medical devices, consumer goods, and critical infrastructure.

Second, we need regulatory agencies to penalize companies with bad security, and a robust liability regime. The Federal Trade Commission is starting to do this, but it can do much more. It needs to make the cost of insecurity greater than the cost of security, which means that fines have to be substantial. The European Union is leading the way in this regard: they’ve passed a comprehensive privacy law, and are now turning to security and safety. The United States can and should do the same.

We need to ensure that companies are held accountable for their products and services, and that those affected by insecurity can recover damages. Traditionally, United States courts have declined to enforce liabilities for software vulnerabilities, and those affected by data breaches have been unable to prove specific harm. Here, we need statutory damages—harms spelled out in the law that don’t require any further proof.

Finally, we need to make it an overarching policy that security takes precedence over everything else. The internet is used globally, by everyone, and any improvements we make to security will necessarily help those we might prefer remain insecure: criminals, terrorists, rival governments. Here, we have no choice. The security we gain from making our computers less vulnerable far outweighs any security we might gain from leaving insecurities that we can exploit.

Regulation is inevitable. Our choice is no longer between government regulation and no government regulation, but between smart government regulation and ill-advised government regulation. Government regulation is not something to fear. Regulation doesn’t stifle innovation, and I suspect that well-written regulation will spur innovation by creating a market for security technologies.

No industry has significantly improved the security or safety of its products without the government stepping in to help. Cars, airplanes, pharmaceuticals, consumer goods, food, medical devices, workplaces, restaurants, and, most recently, financial products—all needed government regulation in order to become safe and secure.

Getting internet safety and security right will depend on people: people who are willing to take the time and expense to do the right things; people who are determined to put the best possible law and policy into place. The internet is constantly growing and evolving; we still have time for our security to adapt, but we need to act quickly, before the next disaster strikes. It’s time for the government to jump in and help. Not tomorrow, not next week, not next year, not when the next big technology company or government agency is hacked, but now.

This essay previously appeared in the New York Times. It’s basically a summary of what I talk about in my new book.


Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.


Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security technology. To subscribe, or to read back issues, see Crypto-Gram’s web page.

You can also read these articles on my blog, Schneier on Security.

Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

Bruce Schneier is an internationally renowned security technologist, called a security guru by the Economist. He is the author of 14 books—including the New York Times best-seller Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World—as well as hundreds of articles, essays, and academic papers. His newsletter and blog are read by over 250,000 people. Schneier is a fellow at the Berkman Klein Center for Internet and Society at Harvard University; a Lecturer in Public Policy at the Harvard Kennedy School; a board member of the Electronic Frontier Foundation, Access Now, and the Tor Project; and an advisory board member of EPIC and VerifiedVoting.org. He is also a special advisor to IBM Security and the CTO of IBM Resilient.

Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of IBM, IBM Security, or IBM Resilient.

Copyright © 2018 by Bruce Schneier.

Sidebar photo of Bruce Schneier by Joe MacInnis.