Blog: September 2018 Archives

Major Tech Companies Finally Endorse Federal Privacy Regulation

The major tech companies, scared that states like California might impose actual privacy regulations, have now decided that they can better lobby the federal government for much weaker national legislation that will preempt any stricter state measures.

I’m sure they’ll still do all they can to weaken the California law, but they know they’ll do better at the national level.

Posted on September 28, 2018 at 1:19 PM • 14 Comments

Counting People through a Wall with Wi-Fi

Interesting research:

In the team’s experiments, one WiFi transmitter and one WiFi receiver are behind walls, outside a room in which a number of people are present. The room can get very crowded with as many as 20 people zigzagging each other. The transmitter sends a wireless signal whose received signal strength (RSSI) is measured by the receiver. Using only such received signal power measurements, the receiver estimates how many people are inside the room—an estimate that closely matches the actual number. It is noteworthy that the researchers do not do any prior measurements or calibration in the area of interest; their approach has only a very short calibration phase that need not be done in the same area.
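As a very rough illustration of the intuition (not the paper’s actual estimator, which is far more sophisticated): the more people moving between transmitter and receiver, the more the received signal strength tends to fluctuate. A toy Python sketch with an invented calibration constant:

```python
# Toy occupancy guess from RSSI fluctuation. The per_person_std constant is
# made up purely for illustration; the real method is much more involved.
import statistics

def rough_occupancy_estimate(rssi_samples_dbm, per_person_std=0.8):
    return round(statistics.stdev(rssi_samples_dbm) / per_person_std)

print(rough_occupancy_estimate([-52, -49, -57, -44, -60, -47, -55, -50]))
```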

Academic paper.

Posted on September 27, 2018 at 7:43 AM • 24 Comments

Evidence for the Security of PKCS #1 Digital Signatures

This is interesting research: “On the Security of the PKCS#1 v1.5 Signature Scheme”:

Abstract: The RSA PKCS#1 v1.5 signature algorithm is the most widely used digital signature scheme in practice. Its two main strengths are its extreme simplicity, which makes it very easy to implement, and that verification of signatures is significantly faster than for DSA or ECDSA. Despite the huge practical importance of RSA PKCS#1 v1.5 signatures, providing formal evidence for their security based on plausible cryptographic hardness assumptions has turned out to be very difficult. Therefore the most recent version of PKCS#1 (RFC 8017) even recommends a replacement, the more complex and less efficient scheme RSA-PSS, as it is provably secure and therefore considered more robust. The main obstacle is that RSA PKCS#1 v1.5 signatures use a deterministic padding scheme, which makes standard proof techniques not applicable.

We introduce a new technique that enables the first security proof for RSA-PKCS#1 v1.5 signatures. We prove full existential unforgeability against adaptive chosen-message attacks (EUF-CMA) under the standard RSA assumption. Furthermore, we give a tight proof under the Phi-Hiding assumption. These proofs are in the random oracle model and the parameters deviate slightly from the standard use, because we require a larger output length of the hash function. However, we also show how RSA-PKCS#1 v1.5 signatures can be instantiated in practice such that our security proofs apply.

In order to draw a more complete picture of the precise security of RSA PKCS#1 v1.5 signatures, we also give security proofs in the standard model, but with respect to weaker attacker models (key-only attacks) and based on known complexity assumptions. The main conclusion of our work is that from a provable security perspective RSA PKCS#1 v1.5 can be safely used, if the output length of the hash function is chosen appropriately.

I don’t think the protocol is “provably secure,” meaning that it cannot have any vulnerabilities. What this paper demonstrates is that there are no vulnerabilities under the model of the proof. And, more importantly, that PKCS #1 v1.5 is as secure as any of its successors like RSA-PSS and RSA Full-Domain.
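For readers who want to see what the deterministic padding looks like, here is a minimal sketch of the EMSA-PKCS1-v1_5 encoding for SHA-256 (illustrative only, not a hardened implementation); the signature is then the RSA private-key operation applied to this encoded value:

```python
import hashlib

# DER-encoded DigestInfo prefix for SHA-256, as listed in RFC 8017.
SHA256_DIGESTINFO = bytes.fromhex("3031300d060960864801650304020105000420")

def emsa_pkcs1_v1_5_encode(message: bytes, em_len: int) -> bytes:
    """Deterministic padding: 0x00 0x01 FF..FF 0x00 || DigestInfo || H(m)."""
    t = SHA256_DIGESTINFO + hashlib.sha256(message).digest()
    ps_len = em_len - len(t) - 3
    if ps_len < 8:
        raise ValueError("intended encoded message length too short")
    return b"\x00\x01" + b"\xff" * ps_len + b"\x00" + t

# For a 2048-bit RSA modulus, the encoded message is 256 bytes long.
em = emsa_pkcs1_v1_5_encode(b"example message", 256)
```

Because nothing in this encoding is randomized, signing the same message twice produces the same signature, which is exactly the property that makes the usual proof techniques hard to apply.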

Posted on September 25, 2018 at 6:50 AM • 6 Comments

New Variants of Cold-Boot Attack

If someone has physical access to your locked—but still running—computer, they can probably break the hard drive’s encryption. This is a “cold boot” attack, one we thought we had solved. We have not:

To carry out the attack, the F-Secure researchers first sought a way to defeat the industry-standard cold boot mitigation. The protection works by creating a simple check between an operating system and a computer’s firmware, the fundamental code that coordinates hardware and software for things like initiating booting. The operating system sets a sort of flag or marker indicating that it has secret data stored in its memory, and when the computer boots up, its firmware checks for the flag. If the computer shuts down normally, the operating system wipes the data and the flag with it. But if the firmware detects the flag during the boot process, it takes over the responsibility of wiping the memory before anything else can happen.

Looking at this arrangement, the researchers realized a problem. If they physically opened a computer and directly connected to the chip that runs the firmware and the flag, they could interact with it and clear the flag. This would make the computer think it shut down correctly and that the operating system wiped the memory, because the flag was gone, when actually potentially sensitive data was still there.

So the researchers designed a relatively simple microcontroller and program that can connect to the chip the firmware is on and manipulate the flag. From there, an attacker could move ahead with a standard cold boot attack. Though any number of things could be stored in memory when a computer is idle, Segerdahl notes that an attacker can be sure the device’s decryption keys will be among them if she is staring down a computer’s login screen, which is waiting to check any inputs against the correct ones.
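In rough pseudocode, the mitigation and the bypass described above look something like this (all names are hypothetical; the real flag is a firmware setting stored in non-volatile memory):

```python
# Hypothetical sketch of the boot-time check and the F-Secure-style bypass.

def firmware_boot(nvram: dict):
    if nvram.get("memory_overwrite_request"):
        print("firmware: unclean shutdown flagged, wiping RAM before boot")
    else:
        print("firmware: flag clear, booting without wiping RAM")

# The OS set the flag because encryption keys are sitting in RAM.
nvram = {"memory_overwrite_request": True}

# An attacker with physical access rewrites the flag on the chip directly,
# so the firmware skips the wipe and the keys survive for a cold boot attack.
nvram["memory_overwrite_request"] = False
firmware_boot(nvram)
```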

Posted on September 24, 2018 at 6:52 AM • 29 Comments

New Findings About Prime Number Distribution Almost Certainly Irrelevant to Cryptography

Lots of people are e-mailing me about this new result on the distribution of prime numbers. While interesting, it has nothing to do with cryptography. Cryptographers aren’t interested in how to find prime numbers, or even in the distribution of prime numbers. Public-key cryptography algorithms like RSA get their security from the difficulty of factoring large composite numbers that are the product of two prime numbers. That’s completely different.
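As a toy illustration of where RSA’s security actually comes from (key sizes here are far too small for real use, and sympy is used just for convenience): knowing the factors p and q is exactly what lets you compute the private exponent.

```python
# Toy RSA: the security rests on the hardness of factoring n = p*q,
# not on how prime numbers are distributed.
from math import gcd
from sympy import randprime, mod_inverse

e = 65537
while True:
    p, q = randprime(2**15, 2**16), randprime(2**15, 2**16)
    if p != q and gcd(e, (p - 1) * (q - 1)) == 1:
        break

n = p * q
d = mod_inverse(e, (p - 1) * (q - 1))   # requires knowing p and q

m = 42
c = pow(m, e, n)            # encrypt with the public key (n, e)
assert pow(c, d, n) == m    # decrypting requires d, i.e. the factorization
```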

Posted on September 21, 2018 at 2:14 PM • 34 Comments

Security Vulnerability in ESS ExpressVote Touchscreen Voting Computer

Of course the ESS ExpressVote voting computer will have lots of security vulnerabilities. It’s a computer, and computers have lots of vulnerabilities. This particular vulnerability is interesting because it’s the result of a security mistake in the design process. Someone didn’t think the security through, and the result is a voter-verifiable paper audit trail that doesn’t provide the security it promises.

Here are the details:

Now there’s an even worse option than “DRE with paper trail”; I call it the “press this button if it’s OK for the machine to cheat” option. The country’s biggest vendor of voting machines, ES&S, has a line of voting machines called ExpressVote. Some of these are optical scanners (which are fine), and others are “combination” machines, basically a ballot-marking device and an optical scanner all rolled into one.

This video shows a demonstration of ExpressVote all-in-one touchscreens purchased by Johnson County, Kansas. The voter brings a blank ballot to the machine, inserts it into a slot, chooses candidates. Then the machine prints those choices onto the blank ballot and spits it out for the voter to inspect. If the voter is satisfied, she inserts it back into the slot, where it is counted (and dropped into a sealed ballot box for possible recount or audit).

So far this seems OK, except that the process is a bit cumbersome and not completely intuitive (watch the video for yourself). It still suffers from the problems I describe above: voters may not carefully review all the choices, especially in down-ballot races; counties need to buy a lot more voting machines, because voters occupy the machine for a long time (in contrast to op-scan ballots, where they occupy a cheap cardboard privacy screen).

But here’s the amazingly bad feature: “The version that we have has an option for both ways,” [Johnson County Election Commissioner Ronnie] Metsker said. “We instruct the voters to print their ballots so that they can review their paper ballots, but they’re not required to do so. If they want to press the button ‘cast ballot,’ it will cast the ballot, but if they do so they are doing so with full knowledge that they will not see their ballot card, it will instead be cast, scanned, tabulated and dropped in the secure ballot container at the backside of the machine.” [TYT Investigates, article by Jennifer Cohn, September 6, 2018]

Now it’s easy for a hacked machine to cheat undetectably! All the fraudulent vote-counting program has to do is wait until the voter chooses between “cast ballot without inspecting” and “inspect ballot before casting.” If the latter, then don’t cheat on this ballot. If the former, then change votes however it likes, and print those fraudulent votes on the paper ballot, knowing that the voter has already given up the right to look at it.
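A minimal sketch of the cheating logic being described, with hypothetical names, makes the problem concrete:

```python
# Hypothetical sketch of the dishonest tabulation logic described above.

def flip_some_races(selections):
    # Placeholder for whatever fraudulent changes the malware wants to make.
    return {race: "attacker's choice" for race in selections}

def record_ballot(selections, voter_will_inspect, machine_is_cheating):
    if machine_is_cheating and not voter_will_inspect:
        # The voter pressed "cast ballot" without printing first, so no one
        # will ever compare the paper record against the on-screen choices.
        selections = flip_some_races(selections)
    print("printed on ballot card:", selections)   # the paper "audit trail"
    return selections                              # what gets tabulated

record_ballot({"governor": "Candidate A"}, voter_will_inspect=False,
              machine_is_cheating=True)
```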

A voter-verifiable paper audit trail does not require every voter to verify the paper ballot. But it does require that every voter be able to verify the paper ballot. I am continuously amazed by how bad electronic voting machines are. Yes, they’re computers. But they also seem to be designed by people who don’t understand computer (or any) security.

Posted on September 20, 2018 at 6:45 AM • 25 Comments

Pegasus Spyware Used in 45 Countries

Citizen Lab has published a new report about the Pegasus spyware. From a ZDNet article:

The malware, known as Pegasus (or Trident), was created by Israeli cyber-security firm NSO Group and has been around for at least three years—when it was first detailed in a report over the summer of 2016.

The malware can operate on both Android and iOS devices, although it has mostly been spotted in campaigns targeting iPhone users. On infected devices, Pegasus is powerful spyware that can do many things, such as record conversations, steal private messages, exfiltrate photos, and much, much more.

From the report:

We found suspected NSO Pegasus infections associated with 33 of the 36 Pegasus operators we identified in 45 countries: Algeria, Bahrain, Bangladesh, Brazil, Canada, Cote d’Ivoire, Egypt, France, Greece, India, Iraq, Israel, Jordan, Kazakhstan, Kenya, Kuwait, Kyrgyzstan, Latvia, Lebanon, Libya, Mexico, Morocco, the Netherlands, Oman, Pakistan, Palestine, Poland, Qatar, Rwanda, Saudi Arabia, Singapore, South Africa, Switzerland, Tajikistan, Thailand, Togo, Tunisia, Turkey, the UAE, Uganda, the United Kingdom, the United States, Uzbekistan, Yemen, and Zambia. As our findings are based on country-level geolocation of DNS servers, factors such as VPNs and satellite Internet teleport locations can introduce inaccuracies.

Six of those countries are known to deploy spyware against political opposition: Bahrain, Kazakhstan, Mexico, Morocco, Saudi Arabia, and the United Arab Emirates.

Also note:

On 17 September 2018, we then received a public statement from NSO Group. The statement mentions that “the list of countries in which NSO is alleged to operate is simply inaccurate. NSO does not operate in many of the countries listed.” This statement is a misunderstanding of our investigation: the list in our report is of suspected locations of NSO infections, it is not a list of suspected NSO customers. As we describe in Section 3, we observed DNS cache hits from what appear to be 33 distinct operators, some of whom appeared to be conducting operations in multiple countries. Thus, our list of 45 countries necessarily includes countries that are not NSO Group customers. We describe additional limitations of our method in Section 4, including factors such as VPNs and satellite connections, which can cause targets to appear in other countries.

Motherboard article. Slashdot and Boing Boing posts.

Posted on September 19, 2018 at 5:19 AM • 11 Comments

NSA Attacks Against Virtual Private Networks

A 2006 document from the Snowden archives outlines successful NSA operations against “a number of ‘high potential’ virtual private networks, including those of media organization Al Jazeera, the Iraqi military and internet service organizations, and a number of airline reservation systems.”

It’s hard to believe that many of the Snowden documents are now more than a decade old.

Posted on September 17, 2018 at 6:12 AM • 42 Comments

Click Here to Kill Everybody Reviews and Press Mentions

It’s impossible to know all the details, but my latest book seems to be selling well. Initial reviews have been really positive: Boing Boing, Financial Times, Harris Online, Kirkus Reviews, Nature, Politico, and Virus Bulletin.

I’ve also done a bunch of interviews—either written or radio/podcast—including the Washington Post, a Reddit AMA, “The 1A” on NPR, Security Ledger, MIT Technology Review, CBC Radio, and WNYC Radio.

There have been others—like the Lawfare, Cyberlaw, and Hidden Forces podcasts—but they haven’t been published yet. I also did a book talk at Google that should appear on YouTube soon.

If you’ve bought and read the book, thank you. Please consider leaving a review on Amazon.

Posted on September 14, 2018 at 2:14 PM • 28 Comments

Quantum Computing and Cryptography

Quantum computing is a new way of computing—one that could allow humankind to perform computations that are simply impossible using today’s computing technologies. It allows for very fast searching, something that would break some of the encryption algorithms we use today. And it allows us to easily factor large numbers, something that would break the RSA cryptosystem for any key length.

This is why cryptographers are hard at work designing and analyzing “quantum-resistant” public-key algorithms. Currently, quantum computing is too nascent for cryptographers to be sure of what is secure and what isn’t. But even assuming aliens have developed the technology to its full potential, quantum computing doesn’t spell the end of the world for cryptography. Symmetric cryptography is easy to make quantum-resistant, and we’re working on quantum-resistant public-key algorithms. If public-key cryptography ends up being a temporary anomaly based on our mathematical knowledge and computational ability, we’ll still survive. And if some inconceivable alien technology can break all of cryptography, we still can have secrecy based on information theory—albeit with significant loss of capability.

At its core, cryptography relies on the mathematical quirk that some things are easier to do than to undo. Just as it’s easier to smash a plate than to glue all the pieces back together, it’s much easier to multiply two prime numbers together to obtain one large number than it is to factor that large number back into two prime numbers. Asymmetries of this kind—one-way functions and trap-door one-way functions—underlie all of cryptography.

To encrypt a message, we combine it with a key to form ciphertext. Without the key, reversing the process is more difficult. Not just a little more difficult, but astronomically more difficult. Modern encryption algorithms are so fast that they can secure your entire hard drive without any noticeable slowdown, but that encryption can’t be broken before the heat death of the universe.
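Concretely, “combine it with a key” looks something like the following with a modern authenticated cipher; this sketch assumes the widely used Python cryptography package is installed:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # the secret that makes reversal hard
nonce = os.urandom(12)                      # must be unique per message
ciphertext = AESGCM(key).encrypt(nonce, b"the message", None)
plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
assert plaintext == b"the message"
```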

With symmetric cryptography—the kind used to encrypt messages, files, and drives—that imbalance is exponential, and is amplified as the keys get larger. Adding one bit of key increases the complexity of encryption by less than a percent (I’m hand-waving here) but doubles the cost to break. So a 256-bit key might seem only twice as complex as a 128-bit key, but (with our current knowledge of mathematics) it’s 340,282,366,920,938,463,463,374,607,431,768,211,456 times harder to break.
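The arithmetic behind that number is easy to check: going from 128 to 256 bits adds 128 bits of key, and each extra bit doubles the brute-force search space.

```python
# Each added key bit doubles the brute-force cost.
extra_bits = 256 - 128
print(2 ** extra_bits)   # 340282366920938463463374607431768211456
```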

Public-key encryption (used primarily for key exchange) and digital signatures are more complicated. Because they rely on hard mathematical problems like factoring, there are more potential tricks to reverse them. So you’ll see key lengths of 2,048 bits for RSA, and 384 bits for algorithms based on elliptic curves. Here again, though, the costs to reverse the algorithms with these key lengths are beyond the current reach of humankind.

This one-wayness is based on our mathematical knowledge. When you hear about a cryptographer “breaking” an algorithm, what happened is that they’ve found a new trick that makes reversing easier. Cryptographers discover new tricks all the time, which is why we tend to use key lengths that are longer than strictly necessary. This is true for both symmetric and public-key algorithms; we’re trying to future-proof them.

Quantum computers promise to upend a lot of this. Because of the way they work, they excel at the sorts of computations necessary to reverse these one-way functions. For symmetric cryptography, this isn’t too bad. Grover’s algorithm shows that a quantum computer speeds up these attacks to effectively halve the key length. This would mean that a 256-bit key is as strong against a quantum computer as a 128-bit key is against a conventional computer; both are secure for the foreseeable future.
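The rule of thumb here is that Grover’s algorithm reduces a brute-force search of a k-bit keyspace to roughly 2^(k/2) quantum operations, which is where the “halve the key length” heuristic comes from:

```python
# Grover rule of thumb: quantum search of a k-bit keyspace costs about 2**(k/2),
# so a key offers roughly half its length in security against that attack.
def effective_strength_vs_grover(key_bits: int) -> int:
    return key_bits // 2

for k in (128, 256):
    print(f"{k}-bit key ~ {effective_strength_vs_grover(k)}-bit quantum security")
```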

For public-key cryptography, the results are more dire. Shor’s algorithm can easily break all of the commonly used public-key algorithms based on both factoring and the discrete logarithm problem. Doubling the key length increases the difficulty to break by a factor of eight. That’s not enough of a sustainable edge.
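One way to read that factor of eight: the cost of running Shor’s algorithm grows roughly with the cube of the key length (a simplifying assumption, not a precise cost model), so doubling the modulus only multiplies the attacker’s work by about 2^3 while also making legitimate operations slower.

```python
# Rough cubic-scaling model for Shor's algorithm; doubling the key length
# buys only a factor of ~8 in attacker effort under this assumption.
def relative_shor_cost(key_bits: int, baseline_bits: int = 2048) -> float:
    return (key_bits / baseline_bits) ** 3

print(relative_shor_cost(4096))   # 8.0
```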

There are a lot of caveats to those two paragraphs, the biggest of which is that quantum computers capable of doing anything like this don’t currently exist, and no one knows when—or even if—we’ll be able to build one. We also don’t know what sorts of practical difficulties will arise when we try to implement Grover’s or Shor’s algorithms for anything but toy key sizes. (Error correction on a quantum computer could easily be an insurmountable problem.) On the other hand, we don’t know what other techniques will be discovered once people start working with actual quantum computers. My bet is that we will overcome the engineering challenges, and that there will be many advances and new techniques—but they’re going to take time to discover and invent. Just as it took decades for us to get supercomputers in our pockets, it will take decades to work through all the engineering problems necessary to build large-enough quantum computers.

In the short term, cryptographers are putting considerable effort into designing and analyzing quantum-resistant algorithms, and those are likely to remain secure for decades. This is a necessarily slow process, as both good cryptanalysis and transitioning standards take time. Luckily, we have time. Practical quantum computing seems to always remain “ten years in the future,” which means no one has any idea.

After that, though, there is always the possibility that those algorithms will fall to aliens with better quantum techniques. I am less worried about symmetric cryptography, where Grover’s algorithm is basically an upper limit on quantum improvements, than I am about public-key algorithms based on number theory, which feel more fragile. It’s possible that quantum computers will someday break all of them, even those that today are quantum resistant.

If that happens, we will face a world without strong public-key cryptography. That would be a huge blow to security and would break a lot of stuff we currently do, but we could adapt. In the 1980s, Kerberos was an all-symmetric authentication and encryption system. More recently, the GSM cellular standard does both authentication and key distribution—at scale—with only symmetric cryptography. Yes, those systems have centralized points of trust and failure, but it’s possible to design other systems that use both secret splitting and secret sharing to minimize that risk. (Imagine that a pair of communicants get a piece of their session key from each of five different key servers.) The ubiquity of communications also makes things easier today. We can use out-of-band protocols where, for example, your phone helps you create a key for your computer. We can use in-person registration for added security, maybe at the store where you buy your smartphone or initialize your Internet service. Advances in hardware may also help to secure keys in this world. I’m not trying to design anything here, only to point out that there are many design possibilities. We know that cryptography is all about trust, and we have a lot more techniques to manage trust than we did in the early years of the Internet. Some important properties like forward secrecy will be blunted and far more complex, but as long as symmetric cryptography still works, we’ll still have security.
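The “piece of their session key from each of five different key servers” idea can be as simple as XOR-based secret splitting; this is an illustrative sketch, not a protocol proposal:

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Each of five key servers contributes one random 32-byte share; the session
# key is the XOR of all of them, so no single server learns it on its own.
shares = [secrets.token_bytes(32) for _ in range(5)]
session_key = reduce(xor_bytes, shares)
```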

It’s a weird future. Maybe the whole idea of number-theory-based encryption, which is what our modern public-key systems are, is a temporary detour based on our incomplete model of computing. Now that our model has expanded to include quantum computing, we might end up back where we were in the late 1970s and early 1980s: symmetric cryptography, code-based cryptography, Merkle hash signatures. That would be both amusing and ironic.

Yes, I know that quantum key distribution is a potential replacement for public-key cryptography. But come on—does anyone expect a system that requires specialized communications hardware and cables to be useful for anything but niche applications? The future is mobile, always-on, embedded computing devices. Any security for those will necessarily be software only.

There’s one more future scenario to consider, one that doesn’t require a quantum computer. While there are several mathematical theories that underpin the one-wayness we use in cryptography, proving the validity of those theories is in fact one of the great open problems in computer science. Just as it is possible for a smart cryptographer to find a new trick that makes it easier to break a particular algorithm, we might imagine aliens with sufficient mathematical theory to break all encryption algorithms. To us, today, this is ridiculous. Public-key cryptography is all number theory, and potentially vulnerable to more mathematically inclined aliens. Symmetric cryptography is so much nonlinear muddle, so easy to make more complex, and so easy to increase key length, that this future is unimaginable. Consider an AES variant with a 512-bit block and key size, and 128 rounds. Unless mathematics is fundamentally different than our current understanding, that’ll be secure until computers are made of something other than matter and occupy something other than space.

But if the unimaginable happens, that would leave us with cryptography based solely on information theory: one-time pads and their variants. This would be a huge blow to security. One-time pads might be theoretically secure, but in practical terms they are unusable for anything other than specialized niche applications. Today, only crackpots try to build general-use systems based on one-time pads—and cryptographers laugh at them, because they replace algorithm design problems (easy) with key management and physical security problems (much, much harder). In our alien-ridden science-fiction future, we might have nothing else.
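A one-time pad itself really is this simple, which is exactly why all the difficulty lives in key management: the key must be truly random, as long as the message, never reused, and delivered securely.

```python
import secrets

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # fresh, random, and as long as the message
ciphertext = bytes(m ^ k for m, k in zip(message, key))
recovered = bytes(c ^ k for c, k in zip(ciphertext, key))
assert recovered == message
```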

Against these godlike aliens, cryptography will be the only technology we can be sure of. Our nukes might refuse to detonate and our fighter jets might fall out of the sky, but we will still be able to communicate securely using one-time pads. There’s an optimism in that.

This essay originally appeared in IEEE Security and Privacy.

Posted on September 14, 2018 at 6:15 AM • 53 Comments

Security Risks of Government Hacking

Some of us—myself included—have proposed lawful government hacking as an alternative to backdoors. A new report from the Center for Internet and Society looks at the security risks of allowing government hacking. They include:

  • Disincentive for vulnerability disclosure
  • Cultivation of a market for surveillance tools
  • Attackers co-opt hacking tools over which governments have lost control
  • Attackers learn of vulnerabilities through government use of malware
  • Government incentives to push for less-secure software and standards
  • Government malware affects innocent users

These risks are real, but I think they’re much less than the risks of mandating backdoors for everyone. From the report’s conclusion:

Government hacking is often lauded as a solution to the “going dark” problem. It is too dangerous to mandate encryption backdoors, but targeted hacking of endpoints could ensure investigators access to the same or similar necessary data with less risk. Vulnerabilities will never affect everyone, contingent as they are on software, network configuration, and patch management. Backdoors, however, mean everybody is vulnerable and a security failure fails catastrophically. In addition, backdoors are often secret, while eventually, vulnerabilities will typically be disclosed and patched.

The key to minimizing the risks is to ensure that law enforcement (or whoever) report all vulnerabilities discovered through the normal process, and use them for lawful hacking during the period between reporting and patching. Yes, that’s a big ask, but the alternatives are worse.

This is the canonical lawful hacking paper.

Posted on September 13, 2018 at 9:08 AM • 40 Comments

Security Vulnerability in Smart Electric Outlets

A security vulnerability in Belkin’s Wemo Insight “smartplugs” allows hackers to not only take over the plug, but use it as a jumping-off point to attack everything else on the network.

From the Register:

The bug underscores the primary risk posed by IoT devices and connected appliances. Because they are commonly built by bolting on network connectivity to existing appliances, many IoT devices have little in the way of built-in network security.

Even when security measures are added to the devices, the third-party hardware used to make the appliances “smart” can itself contain security flaws or bad configurations that leave the device vulnerable.

“IoT devices are frequently overlooked from a security perspective; this may be because many are used for seemingly innocuous purposes such as simple home automation,” the McAfee researchers wrote.

“However, these devices run operating systems and require just as much protection as desktop computers.”

I’ll bet you anything that the plug cannot be patched, and that the vulnerability will remain until people throw them away.

Boing Boing post. McAfee’s original security bulletin.

Posted on September 12, 2018 at 6:19 AM • 49 Comments

Using Hacked IoT Devices to Disrupt the Power Grid

This is really interesting research: “BlackIoT: IoT Botnet of High Wattage Devices Can Disrupt the Power Grid”:

Abstract: We demonstrate that an Internet of Things (IoT) botnet of high wattage devices—such as air conditioners and heaters—gives a unique ability to adversaries to launch large-scale coordinated attacks on the power grid. In particular, we reveal a new class of potential attacks on power grids called the Manipulation of demand via IoT (MadIoT) attacks that can leverage such a botnet in order to manipulate the power demand in the grid. We study five variations of the MadIoT attacks and evaluate their effectiveness via state-of-the-art simulators on real-world power grid models. These simulation results demonstrate that the MadIoT attacks can result in local power outages and in the worst cases, large-scale blackouts. Moreover, we show that these attacks can rather be used to increase the operating cost of the grid to benefit a few utilities in the electricity market. This work sheds light upon the interdependency between the vulnerability of the IoT and that of the other networks such as the power grid whose security requires attention from both the systems security and power engineering communities.

I have been collecting examples of surprising vulnerabilities that result when we connect things to each other. This is a good example of that.
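Back-of-the-envelope arithmetic shows why high-wattage devices matter here; the numbers below are illustrative, not taken from the paper:

```python
# Illustrative demand-manipulation arithmetic (made-up numbers).
compromised_devices = 100_000
watts_per_device = 1_500            # e.g., a space heater or air conditioner
added_demand_mw = compromised_devices * watts_per_device / 1_000_000
print(f"{added_demand_mw:.0f} MW of synchronized extra load")   # 150 MW
```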

Wired article.

Posted on September 11, 2018 at 6:25 AM • 32 Comments

Five-Eyes Intelligence Services Choose Surveillance Over Security

The Five Eyes—the intelligence consortium of the rich English-speaking countries (the US, Canada, the UK, Australia, and New Zealand)—have issued a “Statement of Principles on Access to Evidence and Encryption” where they claim their needs for surveillance outweigh everyone’s needs for security and privacy.

…the increasing use and sophistication of certain encryption designs present challenges for nations in combatting serious crimes and threats to national and global security. Many of the same means of encryption that are being used to protect personal, commercial and government information are also being used by criminals, including child sex offenders, terrorists and organized crime groups to frustrate investigations and avoid detection and prosecution.

Privacy laws must prevent arbitrary or unlawful interference, but privacy is not absolute. It is an established principle that appropriate government authorities should be able to seek access to otherwise private information when a court or independent authority has authorized such access based on established legal standards. The same principles have long permitted government authorities to search homes, vehicles, and personal effects with valid legal authority.

The increasing gap between the ability of law enforcement to lawfully access data and their ability to acquire and use the content of that data is a pressing international concern that requires urgent, sustained attention and informed discussion on the complexity of the issues and interests at stake. Otherwise, court decisions about legitimate access to data are increasingly rendered meaningless, threatening to undermine the systems of justice established in our democratic nations.

To put it bluntly, this is reckless and shortsighted. I’ve repeatedly written about why this can’t be done technically, and why trying results in insecurity. But there’s a greater principle at stake here: we need to decide, as nations and as a society, to put defense first. We need a “defense dominant” strategy for securing the Internet and everything attached to it.

This is important. Our national security depends on the security of our technologies. Demanding that technology companies add backdoors to computers and communications systems puts us all at risk. We need to understand that these systems are critical to our society and—now that they can affect the world in a direct physical manner—to our lives and property as well.

This is what I just wrote, in Click Here to Kill Everybody:

There is simply no way to secure US networks while at the same time leaving foreign networks open to eavesdropping and attack. There’s no way to secure our phones and computers from criminals and terrorists without also securing the phones and computers of those criminals and terrorists. On the generalized worldwide network that is the Internet, anything we do to secure its hardware and software secures it everywhere in the world. And everything we do to keep it insecure similarly affects the entire world.

This leaves us with a choice: either we secure our stuff, and as a side effect also secure their stuff; or we keep their stuff vulnerable, and as a side effect keep our own stuff vulnerable. It’s actually not a hard choice. An analogy might bring this point home. Imagine that every house could be opened with a master key, and this was known to the criminals. Fixing those locks would also mean that criminals’ safe houses would be more secure, but it’s pretty clear that this downside would be worth the trade-off of protecting everyone’s house. With the Internet+ increasing the risks from insecurity dramatically, the choice is even more obvious. We must secure the information systems used by our elected officials, our critical infrastructure providers, and our businesses.

Yes, increasing our security will make it harder for us to eavesdrop, and attack, our enemies in cyberspace. (It won’t make it impossible for law enforcement to solve crimes; I’ll get to that later in this chapter.) Regardless, it’s worth it. If we are ever going to secure the Internet+, we need to prioritize defense over offense in all of its aspects. We’ve got more to lose through our Internet+ vulnerabilities than our adversaries do, and more to gain through Internet+ security. We need to recognize that the security benefits of a secure Internet+ greatly outweigh the security benefits of a vulnerable one.

We need to have this debate at the level of national security. Putting spy agencies in charge of this trade-off is wrong, and will result in bad decisions.

Cory Doctorow has a good reaction.

Slashdot post.

Posted on September 6, 2018 at 6:41 AM • 158 Comments

Using a Smartphone's Microphone and Speakers to Eavesdrop on Passwords

It’s amazing that this is even possible: “SonarSnoop: Active Acoustic Side-Channel Attacks”:

Abstract: We report the first active acoustic side-channel attack. Speakers are used to emit human inaudible acoustic signals and the echo is recorded via microphones, turning the acoustic system of a smart phone into a sonar system. The echo signal can be used to profile user interaction with the device. For example, a victim’s finger movements can be inferred to steal Android phone unlock patterns. In our empirical study, the number of candidate unlock patterns that an attacker must try to authenticate herself to a Samsung S4 Android phone can be reduced by up to 70% using this novel acoustic side-channel. Our approach can be easily applied to other application scenarios and device types. Overall, our work highlights a new family of security threats.

News article.

Posted on September 5, 2018 at 6:05 AM • 27 Comments

New Book Announcement: Click Here to Kill Everybody

I am pleased to announce the publication of my latest book: Click Here to Kill Everybody: Security and Survival in a Hyper-connected World. In it, I examine how our new immersive world of physically capable computers affects our security.

I argue that this changes everything about security. Attacks are no longer just about data; they now affect life and property: cars, medical devices, thermostats, power plants, drones, and so on. All of our security assumptions presume that computers are fundamentally benign: that no matter how bad the breach or vulnerability is, it’s just data. That’s simply not true anymore. As automation, autonomy, and physical agency become more prevalent, the trade-offs we made for things like authentication, patching, and supply chain security no longer make any sense. The things we’ve done before will no longer work in the future.

This is a book about technology, and it’s also a book about policy. The regulation-free Internet that we’ve enjoyed for the past decades will not survive this new, more dangerous, world. I fear that our choice is no longer between government regulation and no government regulation; it’s between smart government regulation and stupid regulation. My aim is to discuss what a regulated Internet might look like before one is thrust upon us after a disaster.

Click Here to Kill Everybody is available starting today. You can order a copy from Amazon, Barnes & Noble, Books-a-Million, Norton’s webpage, or anyplace else books are sold. If you’re going to buy it, please do so this week. First-week sales matter in this business.

Reviews so far from the Financial Times, Nature, and Kirkus.

Posted on September 4, 2018 at 6:20 AM • 54 Comments
