Blog: February 2017 Archives

Adm. Rogers Talks about Buying Cyberweapons

At a talk last week, Adm. Mike Rogers, the head of US Cyber Command and the NSA, discussed the US buying cyberweapons from arms manufacturers.

“In the application of kinetic functionality—weapons—we go to the private sector and say, ‘Build this thing we call a [joint directed-attack munition], a [Tomahawk land-attack munition].’ Fill in the blank,” he said.

“On the offensive side, to date, we have done almost all of our weapons development internally. And part of me goes—five to ten years from now is that a long-term sustainable model? Does that enable you to access fully the capabilities resident in the private sector? I’m still trying to work my way through that, intellectually.”

Businesses already flog exploits, security vulnerability details, spyware, and similar stuff to US intelligence agencies, and Rogers is clearly considering stepping that trade up a notch.

Already, Third World countries are buying from cyberweapons arms manufacturers. My guess is that he’s right and the US will be doing that in the future, too.

Posted on February 27, 2017 at 2:28 PM • 20 Comments

A Survey of Propaganda

This is an excellent survey article on modern propaganda techniques, how they work, and how we might defend ourselves against them.

Cory Doctorow summarizes the techniques on BoingBoing:

…in Russia, it’s about flooding the channel with a mix of lies and truth, crowding out other stories; in China, it’s about suffocating arguments with happy-talk distractions, and for trolls like Milo Yiannopoulos, it’s weaponizing hate, outraging people so they spread your message to the small, diffused minority of broken people who welcome your message and would otherwise be uneconomical to reach.

As to defense: “Debunking doesn’t work: provide an alternative narrative.”

Posted on February 27, 2017 at 6:24 AM • 82 Comments

SHA-1 Collision Found

The first collision in the SHA-1 hash function has been found.

This is not a surprise. We’ve all expected this for over a decade, watching computing power increase. This is why NIST selected Keccak as SHA-3 in 2012.
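
To make the migration concrete, here is a minimal Python sketch, using only the standard hashlib module, that hashes the same arbitrary example message with SHA-1 and with the newer SHA-2 and SHA-3 functions; code still hard-wired to SHA-1 can usually be switched this way.

    import hashlib

    message = b"example document"
    # SHA-1: 160-bit digest; collisions have now been demonstrated.
    print(hashlib.sha1(message).hexdigest())
    # SHA-256: part of the SHA-2 family, still considered secure.
    print(hashlib.sha256(message).hexdigest())
    # SHA3-256: the Keccak-based SHA-3 function selected by NIST.
    print(hashlib.sha3_256(message).hexdigest())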

EDITED TO ADD (2/24): Website for the collision. (Yes, this brute-force example has its own website.)

EDITED TO ADD (3/7): This 2012 cost estimate was pretty accurate.

Posted on February 23, 2017 at 3:29 PM • 32 Comments

NSA Using Cyberattack for Defense

These days, it’s rare that we learn something new from the Snowden documents. But Ben Buchanan found something interesting. The NSA penetrates enemy networks in order to enhance our defensive capabilities.

The data the NSA collected by penetrating BYZANTINE CANDOR’s networks had concrete forward-looking defensive value. It included information on the adversary’s “future targets,” including “bios of senior White House officials, [cleared defense contractor] employees, [United States government] employees” and more. It also included access to the “source code and [the] new tools” the Chinese used to conduct operations. The computers penetrated by the NSA also revealed information about the exploits in use. In effect, the intelligence gained from the operation, once given to network defenders and fed into automated systems, was enough to guide and enhance the United States’ defensive efforts.

This case alludes to important themes in network defense. It shows the persistence of talented adversaries, the creativity of clever defenders, the challenge of getting actionable intelligence on the threat, and the need for network architecture and defenders capable of acting on that information. But it also highlights an important point that is too often overlooked: not every intrusion is in service of offensive aims. There are genuinely defensive reasons for a nation to launch intrusions against another nation’s networks.

[…]

Other Snowden files show what the NSA can do when it gathers this data, describing an interrelated and complex set of United States programs to collect intelligence and use it to better protect its networks. The NSA’s internal documents call this “foreign intelligence in support of dynamic defense.” The gathered information can “tip” malicious code the NSA has placed on servers and computers around the world. Based on this tip, one of the NSA’s nodes can act on the information, “inject[ing a] response onto the Internet towards [the] target.” There are a variety of responses that the NSA can inject, including resetting connections, delivering malicious code, and redirecting internet traffic.

Similarly, if the NSA can learn about the adversary’s “tools and tradecraft” early enough, it can develop and deploy “tailored countermeasures” to blunt the intended effect. The NSA can then try to discern the intent of the adversary and use its countermeasure to mitigate the attempted intrusion. The signals intelligence agency feeds information about the incoming threat to an automated system deployed on networks that the NSA protects. This system has a number of capabilities, including blocking the incoming traffic outright, sending unexpected responses back to the adversary, slowing the traffic down, and “permitting the activity to appear [to the adversary] to complete without disclosing that it did not reach [or] affect the intended target.”

These defensive capabilities appear to be actively in use by the United States against a wide range of threats. NSA documents indicate that the agency uses the system to block twenty-eight major categories of threats as of 2011. This includes action against significant adversaries, such as China, as well as against non-state actors. Documents provide a number of success stories. These include the thwarting of a BYZANTINE HADES intrusion attempt that targeted four high-ranking American military leaders, including the Chief of Naval Operations and the Chairman of the Joint Chiefs of Staff; the NSA’s network defenders saw the attempt coming and successfully prevented any negative effects. The files also include examples of successful defense against Anonymous and against several other code-named entities.

I recommend Buchanan’s book: The Cybersecurity Dilemma: Hacking, Trust and Fear Between Nations.

Posted on February 22, 2017 at 6:21 AM • 103 Comments

German Government Classifies Doll as Illegal Spyware

This is interesting:

The My Friend Cayla doll, which is manufactured by the US company Genesis Toys and distributed in Europe by Guildford-based Vivid Toy Group, allows children to access the internet via speech recognition software, and to control the toy via an app.

But Germany’s Federal Network Agency announced this week that it classified Cayla as an “illegal espionage apparatus”. As a result, retailers and owners could face fines if they continue to stock it or fail to permanently disable the doll’s wireless connection.

Under German law it is illegal to manufacture, sell or possess surveillance devices disguised as another object.

Another article.

Posted on February 20, 2017 at 6:55 AM • 76 Comments

IoT Attack Against a University Network

Verizon’s Data Breach Digest 2017 describes an attack against an unnamed university by attackers who hacked a variety of IoT devices and had them spam network targets and slow them down:

Analysis of the university firewall identified over 5,000 devices making hundreds of Domain Name Service (DNS) look-ups every 15 minutes, slowing the institution’s entire network and restricting access to the majority of internet services.

In this instance, all of the DNS requests were attempting to look up seafood restaurants—and it wasn’t because thousands of students all had an overwhelming urge to eat fish—but because devices on the network had been instructed to repeatedly carry out this request.

“We identified that this was coming from their IoT network, their vending machines and their light sensors were actually looking for seafood domains; 5,000 discrete systems and they were nearly all in the IoT infrastructure,” says Laurance Dine, managing principal of investigative response at Verizon.
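
For readers who want a sense of how a defender might spot this kind of behavior, here is a hedged Python sketch that counts DNS queries per source device in 15-minute windows from a firewall log; the log format (timestamp, source IP, queried name) and the threshold are hypothetical, not Verizon’s.

    from collections import Counter
    from datetime import datetime

    WINDOW_SECONDS = 15 * 60

    def suspicious_sources(log_lines, threshold=100):
        # Each line is assumed to look like:
        # "2017-01-10T12:00:05 10.1.2.3 some-seafood-site.example"
        counts = Counter()
        for line in log_lines:
            timestamp, source_ip, _qname = line.split()
            bucket = int(datetime.fromisoformat(timestamp).timestamp()) // WINDOW_SECONDS
            counts[(bucket, source_ip)] += 1
        # Devices making hundreds of look-ups per window stand out the way the
        # university's vending machines and light sensors did.
        return {key: n for key, n in counts.items() if n >= threshold}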

The actual Verizon document doesn’t appear to be available online yet, but there is an advance version that only discusses the incident above, available here.

Posted on February 17, 2017 at 8:30 AM • 18 Comments

Duqu Malware Techniques Used by Cybercriminals

Duqu 2.0 is a really impressive piece of malware, related to Stuxnet and probably written by the NSA. One of its security features is that it stays resident in its host’s memory without ever writing persistent files to the system’s drives. Now, this same technique is being used by criminals:

Now, fileless malware is going mainstream, as financially motivated criminal hackers mimic their nation-sponsored counterparts. According to research Kaspersky Lab plans to publish Wednesday, networks belonging to at least 140 banks and other enterprises have been infected by malware that relies on the same in-memory design to remain nearly invisible. Because infections are so hard to spot, the actual number is likely much higher. Another trait that makes the infections hard to detect is the use of legitimate and widely used system administrative and security tools—including PowerShell, Metasploit, and Mimikatz—to inject the malware into computer memory.

[…]

The researchers first discovered the malware late last year, when a bank’s security team found a copy of Meterpreter—an in-memory component of Metasploit—residing inside the physical memory of a Microsoft domain controller. After conducting a forensic analysis, the researchers found that the Meterpreter code was downloaded and injected into memory using PowerShell commands. The infected machine also used Microsoft’s NETSH networking tool to transport data to attacker-controlled servers. To obtain the administrative privileges necessary to do these things, the attackers also relied on Mimikatz. To reduce the evidence left in logs or hard drives, the attackers stashed the PowerShell commands into the Windows registry.
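
Since the attackers hid their PowerShell commands in the registry rather than on disk, one defensive response is to hunt for unusually long, encoded-looking registry values. Here is a minimal Python sketch of that idea using the standard winreg module; the registry path and thresholds are illustrative, and a real hunt would sweep the usual autorun and service locations.

    import re
    import winreg

    SUSPICIOUS_LENGTH = 500
    BASE64_RUN = re.compile(r"[A-Za-z0-9+/=]{200,}")

    def long_encoded_values(hive=winreg.HKEY_LOCAL_MACHINE,
                            path=r"SOFTWARE\ExampleVendor"):  # hypothetical key
        hits = []
        with winreg.OpenKey(hive, path) as key:
            _subkeys, value_count, _modified = winreg.QueryInfoKey(key)
            for i in range(value_count):
                name, data, value_type = winreg.EnumValue(key, i)
                if value_type in (winreg.REG_SZ, winreg.REG_EXPAND_SZ) and isinstance(data, str):
                    # Flag values long enough, or base64-like enough, to hold a script.
                    if len(data) > SUSPICIOUS_LENGTH or BASE64_RUN.search(data):
                        hits.append((name, len(data)))
        return hits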

BoingBoing post.

Posted on February 16, 2017 at 10:28 AM • 29 Comments

Research into the Root Causes of Terrorism

Interesting article in Science discussing field research on how people are radicalized to become terrorists.

The potential for research that can overcome existing constraints can be seen in recent advances in understanding violent extremism and, partly, in interdiction and prevention. Most notable is waning interest in simplistic root-cause explanations of why individuals become violent extremists (e.g., poverty, lack of education, marginalization, foreign occupation, and religious fervor), which cannot accommodate the richness and diversity of situations that breed terrorism or support meaningful interventions. A more tractable line of inquiry is how people actually become involved in terror networks (e.g., how they radicalize and are recruited, move to action, or come to abandon cause and comrades).

Reports from The Soufan Group, the International Center for the Study of Radicalisation (King’s College London), and the Combating Terrorism Center (U.S. Military Academy) indicate that approximately three-fourths of those who join the Islamic State or al-Qaeda do so in groups. These groups often involve preexisting social networks and typically cluster in particular towns and neighborhoods. This suggests that much recruitment does not need direct personal appeals by organization agents or individual exposure to social media (which would entail a more dispersed recruitment pattern). Fieldwork is needed to identify the specific conditions under which these processes play out. Natural growth models of terrorist networks then might be based on an epidemiology of radical ideas in host social networks rather than built in the abstract then fitted to data and would allow for a public health, rather than strictly criminal, approach to violent extremism.

Such considerations have implications for countering terrorist recruitment. The present USG focus is on “counternarratives,” intended as alternative to the “ideologies” held to motivate terrorists. This strategy treats ideas as disembodied from the human conditions in which they are embedded and given life as animators of social groups. In their stead, research and policy might better focus on personalized “counterengagement,” addressing and harnessing the fellowship, passion, and purpose of people within specific social contexts, as ISIS and al-Qaeda often do. This focus stands in sharp contrast to reliance on negative mass messaging and sting operations to dissuade young people in doubt through entrapment and punishment (the most common practice used in U.S. law enforcement) rather than through positive persuasion and channeling into productive life paths. At the very least, we need field research in communities that is capable of capturing evidence to reveal which strategies are working, failing, or backfiring.

Posted on February 15, 2017 at 6:31 AM • 128 Comments

Survey Data on Americans and Cybersecurity

Pew Research just published their latest research data on Americans and their views on cybersecurity:

This survey finds that a majority of Americans have directly experienced some form of data theft or fraud, that a sizeable share of the public thinks that their personal data have become less secure in recent years, and that many lack confidence in various institutions to keep their personal data safe from misuse. In addition, many Americans are failing to follow digital security best practices in their own personal lives, and a substantial majority expects that major cyberattacks will be a fact of life in the future.

Here’s the full report.

Posted on February 14, 2017 at 6:48 AM • 22 Comments

Hacking Back

There’s a really interesting paper from George Washington University on hacking back: “Into the Gray Zone: The Private Sector and Active Defense against Cyber Threats.”

I’ve never been a fan of hacking back. There’s a reason we no longer issue letters of marque or allow private entities to commit crimes, and hacking back is a form of vigilante justice. But the paper makes a lot of good points.

Here are three older papers on the topic.

Posted on February 13, 2017 at 6:40 AM • 31 Comments

CSIS's Cybersecurity Agenda

The Center for Strategic and International Studies (CSIS) published “From Awareness to Action: A Cybersecurity Agenda for the 45th President” (press release here). There’s a lot I agree with—and some things I don’t—but these paragraphs struck me as particularly insightful:

The Obama administration made significant progress but suffered from two conceptual problems in its cybersecurity efforts. The first was a belief that the private sector would spontaneously generate the solutions needed for cybersecurity and minimize the need for government action. The obvious counter to this is that our problems haven’t been solved. There is no technological solution to the problem of cybersecurity, at least any time soon, so turning to technologists was unproductive. The larger national debate over the role of government made it difficult to balance public and private-sector responsibility and created a sense of hesitancy, even timidity, in executive branch actions.

The second was a misunderstanding of how the federal government works. All White Houses tend to float above the bureaucracy, but this one compounded the problem with its desire to bring high-profile business executives into government. These efforts ran counter to what is needed to manage a complex bureaucracy where greatly differing rules, relationships, and procedures determine the success of any initiative. Unlike the private sector, government decisionmaking is more collective, shaped by external pressures both bureaucratic and political, and rife with assorted strictures on resources and personnel.

Posted on February 10, 2017 at 12:01 PM • 10 Comments

De-Anonymizing Browser History Using Social-Network Data

Interesting research: “De-anonymizing Web Browsing Data with Social Networks”:

Abstract: Can online trackers and network adversaries de-anonymize web browsing data readily available to them? We show—theoretically, via simulation, and through experiments on real user data—that de-identified web browsing histories can be linked to social media profiles using only publicly available data. Our approach is based on a simple observation: each person has a distinctive social network, and thus the set of links appearing in one’s feed is unique. Assuming users visit links in their feed with higher probability than a random user, browsing histories contain tell-tale marks of identity. We formalize this intuition by specifying a model of web browsing behavior and then deriving the maximum likelihood estimate of a user’s social profile. We evaluate this strategy on simulated browsing histories, and show that given a history with 30 links originating from Twitter, we can deduce the corresponding Twitter profile more than 50% of the time. To gauge the real-world effectiveness of this approach, we recruited nearly 400 people to donate their web browsing histories, and we were able to correctly identify more than 70% of them. We further show that several online trackers are embedded on sufficiently many websites to carry out this attack with high accuracy. Our theoretical contribution applies to any type of transactional data and is robust to noisy observations, generalizing a wide range of previous de-anonymization attacks. Finally, since our attack attempts to find the correct Twitter profile out of over 300 million candidates, it is—to our knowledge—the largest scale demonstrated de-anonymization to date.
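
The core of the paper’s attack is a simple maximum-likelihood calculation. Here is a toy Python sketch of that idea, under the stated assumption that a user clicks links from their own feed with probability p_in and other links with a much smaller probability p_out; the feeds, history, and probabilities below are made-up illustrations, not the paper’s data.

    import math

    P_IN, P_OUT = 0.3, 0.01  # illustrative values only

    candidate_feeds = {
        "alice": {"nyt.example/a", "eff.example/b", "arxiv.example/c"},
        "bob": {"espn.example/x", "nyt.example/a", "reddit.example/y"},
    }
    browsing_history = ["nyt.example/a", "eff.example/b", "arxiv.example/c"]

    def log_likelihood(feed, history):
        # Each visited link contributes log(p_in) if it appears in the candidate's
        # feed and log(p_out) otherwise; the candidate with the highest total wins.
        return sum(math.log(P_IN if url in feed else P_OUT) for url in history)

    best_match = max(candidate_feeds,
                     key=lambda user: log_likelihood(candidate_feeds[user], browsing_history))
    print(best_match)  # "alice"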

Posted on February 10, 2017 at 8:25 AM • 28 Comments

Security and Privacy Guidelines for the Internet of Things

Lately, I have been collecting IoT security and privacy guidelines. Here’s everything I’ve found:

  1. “Internet of Things (IoT) Security and Privacy Recommendations,” Broadband Internet Technical Advisory Group, Nov 2016.
  2. “IoT Security Guidance,” Open Web Application Security Project (OWASP), May 2016.
  3. “Strategic Principles for Securing the Internet of Things (IoT),” US Department of Homeland Security, Nov 2016.
  4. “Security,” OneM2M Technical Specification, Aug 2016.
  5. “Security Solutions,” OneM2M Technical Specification, Aug 2016.
  6. “IoT Security Guidelines Overview Document,” GSM Association, Feb 2016.
  7. “IoT Security Guidelines For Service Ecosystems,” GSM Association, Feb 2016.
  8. “IoT Security Guidelines for Endpoint Ecosystems,” GSM Association, Feb 2016.
  9. “IoT Security Guidelines for Network Operators,” GSM Association, Feb 2016.
  10. “Establishing Principles for Internet of Things Security,” IoT Security Foundation, undated.
  11. “IoT Design Manifesto,” www.iotmanifesto.com, May 2015.
  12. “NYC Guidelines for the Internet of Things,” City of New York, undated.
  13. “IoT Security Compliance Framework,” IoT Security Foundation, 2016.
  14. “Principles, Practices and a Prescription for Responsible IoT and Embedded Systems Development,” IoTIAP, Nov 2016.
  15. “IoT Trust Framework,” Online Trust Alliance, Jan 2017.
  16. “Five Star Automotive Cyber Safety Framework,” I am the Cavalry, Feb 2015.
  17. “Hippocratic Oath for Connected Medical Devices,” I am the Cavalry, Jan 2016.
  18. “Industrial Internet of Things Volume G4: Security Framework,” Industrial Internet Consortium, 2016.
  19. “Future-proofing the Connected World: 13 Steps to Developing Secure IoT Products,” Cloud Security Alliance, 2016.

Other, related, items:

  1. “We All Live in the Computer Now,” The Netgain Partnership, Oct 2016.
  2. “Comments of EPIC to the FTC on the Privacy and Security Implications of the Internet of Things,” Electronic Privacy Information Center, Jun 2013.
  3. “Internet of Things Software Update Workshop (IoTSU),” Internet Architecture Board, Jun 2016.
  4. “Multistakeholder Process; Internet of Things (IoT) Security Upgradability and Patching,” National Telecommunications & Information Administration, Jan 2017.

They all largely say the same things: avoid known vulnerabilities, don’t have insecure defaults, make your systems patchable, and so on.

My guess is that everyone knows that IoT regulation is coming, and is either trying to impose self-regulation to forestall government action or establish principles to influence government action. It’ll be interesting to see how the next few years unfold.

If there are any IoT security or privacy guideline documents that I’m missing, please tell me in the comments.

EDITED TO ADD: Documents added to the list, above.

Posted on February 9, 2017 at 7:14 AM • 37 Comments

Predicting a Slot Machine's PRNG

Wired is reporting on a new slot machine hack. A Russian group has reverse-engineered a particular brand of slot machine—from Austrian company Novomatic—and can simulate and predict the pseudo-random number generator.

The cell phones from Pechanga, combined with intelligence from investigations in Missouri and Europe, revealed key details. According to Willy Allison, a Las Vegas-based casino security consultant who has been tracking the Russian scam for years, the operatives use their phones to record about two dozen spins on a game they aim to cheat. They upload that footage to a technical staff in St. Petersburg, who analyze the video and calculate the machine’s pattern based on what they know about the model’s pseudorandom number generator. Finally, the St. Petersburg team transmits a list of timing markers to a custom app on the operative’s phone; those markers cause the handset to vibrate roughly 0.25 seconds before the operative should press the spin button.

“The normal reaction time for a human is about a quarter of a second, which is why they do that,” says Allison, who is also the founder of the annual World Game Protection Conference. The timed spins are not always successful, but they result in far more payouts than a machine normally awards: Individual scammers typically win more than $10,000 per day. (Allison notes that those operatives try to keep their winnings on each machine to less than $1,000, to avoid arousing suspicion.) A four-person team working multiple casinos can earn upwards of $250,000 in a single week.

The easy solution is to use a random-number generator that accepts local entropy, like Fortuna. But there’s probably no way to easily reprogram those old machines.
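
To see why recorded spins are enough, here is a toy Python sketch: a linear congruential generator whose entire future follows from a single observed output, contrasted with a generator that draws on operating-system entropy. The constants are textbook values, not Novomatic’s actual algorithm.

    import secrets

    class ToyLCG:
        # Classic LCG constants; the internal state is simply the last output.
        M, A, C = 2**31 - 1, 1103515245, 12345

        def __init__(self, seed):
            self.state = seed

        def next(self):
            self.state = (self.A * self.state + self.C) % self.M
            return self.state

    machine = ToyLCG(seed=20170208)  # seed unknown to the attacker
    observed = machine.next()        # one spin captured on video

    clone = ToyLCG(seed=0)
    clone.state = observed           # attacker reconstructs the state
    assert [machine.next() for _ in range(5)] == [clone.next() for _ in range(5)]

    # A generator that folds in local entropy (as Fortuna does) cannot be cloned
    # from past outputs this way.
    unpredictable_spin = secrets.randbelow(1000)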

Posted on February 8, 2017 at 6:48 AM • 29 Comments

Cryptkeeper Bug

The Linux encryption app Cryptkeeper has a rather stunning security bug: the single-character decryption key “p” decrypts everything:

The flawed version is in Debian 9 (Stretch), currently in testing, but not in Debian 8 (Jessie). The bug appears to be a result of a bad interaction with the encfs encrypted filesystem’s command line interface: Cryptkeeper invokes encfs and attempts to enter paranoia mode with a simulated ‘p’ keypress—instead, it sets passwords for folders to just that letter.

In 2013, I wrote an essay about how an organization might go about designing a perfect backdoor. This one seems much more like a bad mistake than deliberate action. It’s just too dumb, and too obvious. If anyone actually used Cryptkeeper, it would have been discovered long ago.
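
As a rough reconstruction of the failure mode, here is a self-contained Python sketch, assuming (as reported) that Cryptkeeper drove encfs’s interactive setup by writing a canned “p”, then the password, to the child’s stdin. The child process below is a stand-in for encfs, not the real tool.

    import subprocess
    import sys
    import textwrap

    # A stand-in setup tool that no longer shows the paranoia-mode menu and
    # instead reads the password first, so the caller's "p" becomes the password.
    FAKE_ENCFS_SETUP = textwrap.dedent("""
        import sys
        password = sys.stdin.readline().rstrip("\\n")
        print("volume created; password is now: " + repr(password))
    """)

    def create_volume(password: bytes) -> str:
        canned_answers = b"p\n" + password + b"\n"  # "p" was meant to pick paranoia mode
        result = subprocess.run([sys.executable, "-c", FAKE_ENCFS_SETUP],
                                input=canned_answers, capture_output=True)
        return result.stdout.decode()

    print(create_volume(b"correct horse battery staple"))
    # Prints: volume created; password is now: 'p'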

Posted on February 7, 2017 at 9:50 AM • 60 Comments

Hacker Leaks Cellebrite's Phone-Hacking Tools

In January we learned that a hacker broke into Cellebrite’s network and stole 900GB of data. Now the hacker has dumped some of Cellebrite’s phone-hacking tools on the Internet.

In their README, the hacker notes much of the iOS-related code is very similar to that used in the jailbreaking scene—a community of iPhone hackers that typically breaks into iOS devices and releases its code publicly for free.

Jonathan Zdziarski, a forensic scientist, agreed that some of the iOS files were nearly identical to tools created and used by the jailbreaking community, including patched versions of Apple’s firmware designed to break security mechanisms on older iPhones. A number of the configuration files also reference “limera1n,” the name of a piece of jailbreaking software created by infamous iPhone hacker Geohot. He said he wouldn’t call the released files “exploits” however.

Zdziarski also said that other parts of the code were similar to a jailbreaking project called QuickPwn, but that the code had seemingly been adapted for forensic purposes. For example, some of the code in the dump was designed to brute force PIN numbers, which may be unusual for a normal jailbreaking piece of software.

“If, and it’s a big if, they used this in UFED or other products, it would indicate they ripped off software verbatim from the jailbreak community and used forensically unsound and experimental software in their supposedly scientific and forensically validated products,” Zdziarski continued.

If you remember, Cellebrite was the company that supposedly helped the FBI break into the San Bernardino terrorist iPhone. (I say “supposedly,” because the evidence is unclear.) We do know that they provide this sort of forensic assistance to countries like Russia, Turkey, and the UAE—as well as to many US jurisdictions.

As Cory Doctorow points out:

…suppressing disclosure of security vulnerabilities in commonly used tools does not prevent those vulnerabilities from being independently discovered and weaponized—it just means that users, white-hat hackers and customers are kept in the dark about lurking vulnerabilities, even as they are exploited in the wild, which only end up coming to light when they are revealed by extraordinary incidents like this week’s dump.

We are all safer when vulnerabilities are reported and fixed, not when they are hoarded and used in secret.

Slashdot thread.

Posted on February 6, 2017 at 6:30 AM • 34 Comments

Security and the Internet of Things

Last year, on October 21, your digital video recorder—or at least a DVR like yours—knocked Twitter off the internet. Someone used your DVR, along with millions of insecure webcams, routers, and other connected devices, to launch an attack that started a chain reaction, resulting in Twitter, Reddit, Netflix, and many other sites going off the internet. You probably didn’t realize that your DVR had that kind of power. But it does.

All computers are hackable. This has as much to do with the computer market as it does with the technologies. We prefer our software full of features and inexpensive, at the expense of security and reliability. That your computer can affect the security of Twitter is a market failure. The industry is filled with market failures that, until now, have been largely ignorable. As computers continue to permeate our homes, cars, and businesses, these market failures will no longer be tolerable. Our only solution will be regulation, and that regulation will be foisted on us by a government desperate to “do something” in the face of disaster.

In this article I want to outline the problems, both technical and political, and point to some regulatory solutions. Regulation might be a dirty word in today’s political climate, but security is the exception to our small-government bias. And as the threats posed by computers become greater and more catastrophic, regulation will be inevitable. So now’s the time to start thinking about it.

We also need to reverse the trend to connect everything to the internet. And if we risk harm and even death, we need to think twice about what we connect and what we deliberately leave uncomputerized.

If we get this wrong, the computer industry will look like the pharmaceutical industry, or the aircraft industry. But if we get this right, we can maintain the innovative environment of the internet that has given us so much.

**********

We no longer have things with computers embedded in them. We have computers with things attached to them.

Your modern refrigerator is a computer that keeps things cold. Your oven, similarly, is a computer that makes things hot. An ATM is a computer with money inside. Your car is no longer a mechanical device with some computers inside; it’s a computer with four wheels and an engine. Actually, it’s a distributed system of over 100 computers with four wheels and an engine. And, of course, your phones became full-power general-purpose computers in 2007, when the iPhone was introduced.

We wear computers: fitness trackers and computer-enabled medical devices—and, of course, we carry our smartphones everywhere. Our homes have smart thermostats, smart appliances, smart door locks, even smart light bulbs. At work, many of those same smart devices are networked together with CCTV cameras, sensors that detect customer movements, and everything else. Cities are starting to embed smart sensors in roads, streetlights, and sidewalk squares, and to deploy smart energy grids and smart transportation networks. A nuclear power plant is really just a computer that produces electricity, and—like everything else we’ve just listed—it’s on the internet.

The internet is no longer a web that we connect to. Instead, it’s a computerized, networked, and interconnected world that we live in. This is the future, and what we’re calling the Internet of Things.

Broadly speaking, the Internet of Things has three parts. There are the sensors that collect data about us and our environment: smart thermostats, street and highway sensors, and those ubiquitous smartphones with their motion sensors and GPS location receivers. Then there are the “smarts” that figure out what the data means and what to do about it. This includes all the computer processors on these devices and—increasingly—in the cloud, as well as the memory that stores all of this information. And finally, there are the actuators that affect our environment. The point of a smart thermostat isn’t to record the temperature; it’s to control the furnace and the air conditioner. Driverless cars collect data about the road and the environment to steer themselves safely to their destinations.

You can think of the sensors as the eyes and ears of the internet. You can think of the actuators as the hands and feet of the internet. And you can think of the stuff in the middle as the brain. We are building an internet that senses, thinks, and acts.

This is the classic definition of a robot. We’re building a world-size robot, and we don’t even realize it.

To be sure, it’s not a robot in the classical sense. We think of robots as discrete autonomous entities, with sensors, brain, and actuators all together in a metal shell. The world-size robot is distributed. It doesn’t have a singular body, and parts of it are controlled in different ways by different people. It doesn’t have a central brain, and it has nothing even remotely resembling a consciousness. It doesn’t have a single goal or focus. It’s not even something we deliberately designed. It’s something we have inadvertently built out of the everyday objects we live with and take for granted. It is the extension of our computers and networks into the real world.

This world-size robot is actually more than the Internet of Things. It’s a combination of several decades-old computing trends: mobile computing, cloud computing, always-on computing, huge databases of personal information, the Internet of Things—or, more precisely, cyber-physical systems—autonomy, and artificial intelligence. And while it’s still not very smart, it’ll get smarter. It’ll get more powerful and more capable through all the interconnections we’re building.

It’ll also get much more dangerous.

**********

Computer security has been around for almost as long as computers have been. And while it’s true that security wasn’t part of the design of the original internet, it’s something we have been trying to achieve since its beginning.

I have been working in computer security for over 30 years: first in cryptography, then more generally in computer and network security, and now in general security technology. I have watched computers become ubiquitous, and have seen firsthand the problems—and solutions—of securing these complex machines and systems. I’m telling you all this because what used to be a specialized area of expertise now affects everything. Computer security is now everything security. There’s one critical difference, though: The threats have become greater.

Traditionally, computer security is divided into three categories: confidentiality, integrity, and availability. For the most part, our security concerns have centered on confidentiality. We’re concerned about our data and who has access to it—the world of privacy and surveillance, of data theft and misuse.

But threats come in many forms. Availability threats: computer viruses that delete our data, or ransomware that encrypts our data and demands payment for the unlock key. Integrity threats: hackers who can manipulate data entries can do things ranging from changing grades in a class to changing the amount of money in bank accounts. Some of these threats are pretty bad. Hospitals have paid tens of thousands of dollars to criminals whose ransomware encrypted critical medical files. JPMorgan Chase spends half a billion dollars a year on cybersecurity.

Today, the integrity and availability threats are much worse than the confidentiality threats. Once computers start affecting the world in a direct and physical manner, there are real risks to life and property. There is a fundamental difference between crashing your computer and losing your spreadsheet data, and crashing your pacemaker and losing your life. This isn’t hyperbole; recently researchers found serious security vulnerabilities in St. Jude Medical’s implantable heart devices. Give the internet hands and feet, and it will have the ability to punch and kick.

Take a concrete example: modern cars, those computers on wheels. The steering wheel no longer turns the axles, nor does the accelerator pedal change the speed. Every move you make in a car is processed by a computer, which does the actual controlling. A central computer controls the dashboard. There’s another in the radio. The engine has 20 or so computers. These are all networked, and increasingly autonomous.

Now, let’s start listing the security threats. We don’t want car navigation systems to be used for mass surveillance, or the microphone for mass eavesdropping. We might want it to be used to determine a car’s location in the event of a 911 call, and possibly to collect information about highway congestion. We don’t want people to hack their own cars to bypass emissions-control limitations. We don’t want manufacturers or dealers to be able to do that, either, as Volkswagen did for years. We can imagine wanting to give police the ability to remotely and safely disable a moving car; that would make high-speed chases a thing of the past. But we definitely don’t want hackers to be able to do that. We definitely don’t want them disabling the brakes in every car without warning, at speed. As we make the transition from driver-controlled cars to cars with various driver-assist capabilities to fully driverless cars, we don’t want any of those critical components subverted. We don’t want someone to be able to accidentally crash your car, let alone do it on purpose. And equally, we don’t want them to be able to manipulate the navigation software to change your route, or the door-lock controls to prevent you from opening the door. I could go on.

That’s a lot of different security requirements, and the effects of getting them wrong range from illegal surveillance to extortion by ransomware to mass death.

**********

Our computers and smartphones are as secure as they are because companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support large, dedicated teams because they make a huge amount of money, either directly or indirectly, from their software—and, in part, compete on its security. Unfortunately, this isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

At a recent hacker conference, a security researcher analyzed 30 home routers and was able to break into half of them, including some of the most popular and common brands. The denial-of-service attacks that forced popular websites like Reddit and Twitter off the internet last October were enabled by vulnerabilities in devices like webcams and digital video recorders. In August, two security researchers demonstrated a ransomware attack on a smart thermostat.

Even worse, most of these devices don’t have any way to be patched. Companies like Microsoft and Apple continuously deliver security patches to your computers. Some home routers are technically patchable, but in a complicated way that only an expert would attempt. And the only way for you to update the firmware in your hackable DVR is to throw it away and buy a new one.

The market can’t fix this because neither the buyer nor the seller cares. The owners of the webcams and DVRs used in the denial-of-service attacks don’t care. Their devices were cheap to buy, they still work, and they don’t know any of the victims of the attacks. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution, because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

**********

Security is an arms race between attacker and defender. Technology perturbs that arms race by changing the balance between attacker and defender. Understanding how this arms race has unfolded on the internet is essential to understanding why the world-size robot we’re building is so insecure, and how we might secure it. To that end, I have five truisms, born from what we’ve already learned about computer and internet security. They will soon affect the security arms race everywhere.

Truism No. 1: On the internet, attack is easier than defense.

There are many reasons for this, but the most important is the complexity of these systems. More complexity means more people involved, more parts, more interactions, more mistakes in the design and development process, more of everything where hidden insecurities can be found. Computer-security experts like to speak about the attack surface of a system: all the possible points an attacker might target and that must be secured. A complex system means a large attack surface. The defender has to secure the entire attack surface. The attacker just has to find one vulnerability—one unsecured avenue for attack—and gets to choose how and when to attack. It’s simply not a fair battle.

There are other, more general, reasons why attack is easier than defense. Attackers have a natural agility that defenders often lack. They don’t have to worry about laws, and often not about morals or ethics. They don’t have a bureaucracy to contend with, and can more quickly make use of technical innovations. Attackers also have a first-mover advantage. As a society, we’re generally terrible at proactive security; we rarely take preventive security measures until an attack actually happens. So more advantages go to the attacker.

Truism No. 2: Most software is poorly written and insecure.

If complexity isn’t enough, we compound the problem by producing lousy software. Well-written software, like the kind found in airplane avionics, is both expensive and time-consuming to produce. We don’t want that. For the most part, poorly written software has been good enough. We’d all rather live with buggy software than pay the prices good software would require. We don’t mind if our games crash regularly, or our business applications act weird once in a while. Because software has been largely benign, it hasn’t mattered. This has permeated the industry at all levels. At universities, we don’t teach how to code well. Companies don’t reward quality code in the same way they reward fast and cheap. And we consumers don’t demand it.

But poorly written software is riddled with bugs, sometimes as many as one per 1,000 lines of code. Some of them are inherent in the complexity of the software, but most are programming mistakes. Not all bugs are vulnerabilities, but some are.

Truism No. 3: Connecting everything to each other via the internet will expose new vulnerabilities.

The more we network things together, the more vulnerabilities on one thing will affect other things. On October 21, vulnerabilities in a wide variety of embedded devices were all harnessed together to create what hackers call a botnet. This botnet was used to launch a distributed denial-of-service attack against a company called Dyn. Dyn provided a critical internet function for many major internet sites. So when Dyn went down, so did all those popular websites.

These chains of vulnerabilities are everywhere. In 2012, journalist Mat Honan suffered a massive personal hack because of one of them. A vulnerability in his Amazon account allowed hackers to get into his Apple account, which allowed them to get into his Gmail account. And in 2013, the Target Corporation was hacked by someone stealing credentials from its HVAC contractor.

Vulnerabilities like these are particularly hard to fix, because no one system might actually be at fault. It might be the insecure interaction of two individually secure systems.

Truism No. 4: Everybody has to stop the best attackers in the world.

One of the most powerful properties of the internet is that it allows things to scale. This is true for our ability to access data or control systems or do any of the cool things we use the internet for, but it’s also true for attacks. In general, fewer attackers can do more damage because of better technology. It’s not just that these modern attackers are more efficient, it’s that the internet allows attacks to scale to a degree impossible without computers and networks.

This is fundamentally different from what we’re used to. When securing my home against burglars, I am only worried about the burglars who live close enough to my home to consider robbing me. The internet is different. When I think about the security of my network, I have to be concerned about the best attacker possible, because he’s the one who’s going to create the attack tool that everyone else will use. The attacker that discovered the vulnerability used to attack Dyn released the code to the world, and within a week there were a dozen attack tools using it.

Truism No. 5: Laws inhibit security research.

The Digital Millennium Copyright Act is a terrible law that fails at its purpose of preventing widespread piracy of movies and music. To make matters worse, it contains a provision that has critical side effects. According to the law, it is a crime to bypass security mechanisms that protect copyrighted work, even if that bypassing would otherwise be legal. Since all software can be copyrighted, it is arguably illegal to do security research on these devices and to publish the result.

Although the exact contours of the law are arguable, many companies are using this provision of the DMCA to threaten researchers who expose vulnerabilities in their embedded systems. This instills fear in researchers, and has a chilling effect on research, which means two things: (1) Vendors of these devices are more likely to leave them insecure, because no one will notice and they won’t be penalized in the market, and (2) security engineers don’t learn how to do security better.

Unfortunately, companies generally like the DMCA. The provisions against reverse-engineering spare them the embarrassment of having their shoddy security exposed. The law also allows them to build proprietary systems that lock out competition. (This is an important one. Right now, your toaster cannot force you to only buy a particular brand of bread. But because of this law and an embedded computer, your Keurig coffee maker can force you to buy a particular brand of coffee.)

**********

In general, there are two basic paradigms of security. We can either try to secure something well the first time, or we can make our security agile. The first paradigm comes from the world of dangerous things: from planes, medical devices, buildings. It’s the paradigm that gives us secure design and secure engineering, security testing and certifications, professional licensing, detailed preplanning and complex government approvals, and long times-to-market. It’s security for a world where getting it right is paramount because getting it wrong means people dying.

The second paradigm comes from the fast-moving and heretofore largely benign world of software. In this paradigm, we have rapid prototyping, on-the-fly updates, and continual improvement. In this paradigm, new vulnerabilities are discovered all the time and security disasters regularly happen. Here, we stress survivability, recoverability, mitigation, adaptability, and muddling through. This is security for a world where getting it wrong is okay, as long as you can respond fast enough.

These two worlds are colliding. They’re colliding in our cars—literally—in our medical devices, our building control systems, our traffic control systems, and our voting machines. And although these paradigms are wildly different and largely incompatible, we need to figure out how to make them work together.

So far, we haven’t done very well. We still largely rely on the first paradigm for the dangerous computers in cars, airplanes, and medical devices. As a result, there are medical systems that can’t have security patches installed because that would invalidate their government approval. In 2015, Chrysler recalled 1.4 million cars to fix a software vulnerability. In September 2016, Tesla remotely sent a security patch to all of its Model S cars overnight. Tesla sure sounds like it’s doing things right, but what vulnerabilities does this remote patch feature open up?

**********

Until now we’ve largely left computer security to the market. Because the computer and network products we buy and use are so lousy, an enormous after-market industry in computer security has emerged. Governments, companies, and people buy the security they think they need to secure themselves. We’ve muddled through well enough, but the market failures inherent in trying to secure this world-size robot will soon become too big to ignore.

Markets alone can’t solve our security problems. Markets are motivated by profit and short-term goals at the expense of society. They can’t solve collective-action problems. They won’t be able to deal with economic externalities, like the vulnerabilities in DVRs that resulted in Twitter going offline. And we need a counterbalancing force to corporate power.

This all points to policy. While the details of any computer-security system are technical, getting the technologies broadly deployed is a problem that spans law, economics, psychology, and sociology. And getting the policy right is just as important as getting the technology right because, for internet security to work, law and technology have to work together. This is probably the most important lesson of Edward Snowden’s NSA disclosures. We already knew that technology can subvert law. Snowden demonstrated that law can also subvert technology. Both fail unless each works. It’s not enough to just let technology do its thing.

Any policy changes to secure this world-size robot will mean significant government regulation. I know it’s a sullied concept in today’s world, but I don’t see any other possible solution. It’s going to be especially difficult for the internet, whose permissionless nature is one of its best features and the underpinning of its most world-changing innovations. But I don’t see how that can continue when the internet can affect the world in a direct and physical manner.

**********

I have a proposal: a new government regulatory agency. Before dismissing it out of hand, please hear me out.

We have a practical problem when it comes to internet regulation. There’s no government structure to tackle this at a systemic level. Instead, there’s a fundamental mismatch between the way government works and the way this technology works that makes dealing with this problem impossible at the moment.

Government operates in silos. In the U.S., the FAA regulates aircraft. The NHTSA regulates cars. The FDA regulates medical devices. The FCC regulates communications devices. The FTC protects consumers in the face of “unfair” or “deceptive” trade practices. Even worse, who regulates data can depend on how it is used. If data is used to influence a voter, it’s the Federal Election Commission’s jurisdiction. If that same data is used to influence a consumer, it’s the FTC’s. Use those same technologies in a school, and the Department of Education is now in charge. Robotics will have its own set of problems, and no one is sure how that is going to be regulated. Each agency has a different approach and different rules. They have no expertise in these new issues, and they are not quick to expand their authority for all sorts of reasons.

Compare that with the internet. The internet is a freewheeling system of integrated objects and networks. It grows horizontally, demolishing old technological barriers so that people and systems that never previously communicated now can. Already, apps on a smartphone can log health information, control your energy use, and communicate with your car. That’s a set of functions that crosses jurisdictions of at least four different government agencies, and it’s only going to get worse.

Our world-size robot needs to be viewed as a single entity with millions of components interacting with each other. Any solutions here need to be holistic. They need to work everywhere, for everything. Whether we’re talking about cars, drones, or phones, they’re all computers.

This has lots of precedent. Many new technologies have led to the formation of new government regulatory agencies. Trains did, cars did, airplanes did. Radio led to the formation of the Federal Radio Commission, which became the FCC. Nuclear power led to the formation of the Atomic Energy Commission, which eventually became the Department of Energy. The reasons were the same in every case. New technologies need new expertise because they bring with them new challenges. Governments need a single agency to house that new expertise, because its applications cut across several preexisting agencies. It’s less that the new agency needs to regulate—although that’s often a big part of it—and more that governments recognize the importance of the new technologies.

The internet has famously eschewed formal regulation, instead adopting a multi-stakeholder model of academics, businesses, governments, and other interested parties. My hope is that we can keep the best of this approach in any regulatory agency, looking to models like the new U.S. Digital Service or the 18F office inside the General Services Administration. Both of those organizations are dedicated to providing digital government services, both have collected significant expertise by bringing people in from outside of government, and both have learned how to work closely with existing agencies. Any internet regulatory agency will similarly need to engage in a high level of collaborative regulation—both a challenge and an opportunity.

I don’t think any of us can predict the totality of the regulations we need to ensure the safety of this world, but here are a few. We need government to ensure companies follow good security practices: testing, patching, secure defaults—and we need to be able to hold companies liable when they fail to do these things. We need government to mandate strong personal data protections, and limitations on data collection and use. We need to ensure that responsible security research is legal and well-funded. We need to enforce transparency in design, some sort of code escrow in case a company goes out of business, and interoperability between devices of different manufacturers, to counterbalance the monopolistic effects of interconnected technologies. Individuals need the right to take their data with them. And internet-enabled devices should retain some minimal functionality if disconnected from the internet.

I’m not the only one talking about this. I’ve seen proposals for a National Institutes of Health analogue for cybersecurity. University of Washington law professor Ryan Calo has proposed a Federal Robotics Commission. I think it needs to be broader: maybe a Department of Technology Policy.

Of course there will be problems. There’s a lack of expertise in these issues inside government. There’s a lack of willingness in government to do the hard regulatory work. Industry is worried about any new bureaucracy: both that it will stifle innovation by regulating too much and that it will be captured by industry and regulate too little. A domestic regulatory agency will have to deal with the fundamentally international nature of the problem.

But government is the entity we use to solve problems like this. Government has the scope, scale, and balance of interests to address the problems. It’s the institution we’ve built to adjudicate competing social interests and internalize market externalities. Left to its own devices, the market simply can’t. That we’re currently in the middle of an era of low government trust, where many of us can’t imagine government doing anything positive in an area like this, is to our detriment.

Here’s the thing: Governments will get involved, regardless. The risks are too great, and the stakes are too high. Government already regulates dangerous physical systems like cars and medical devices. And nothing motivates the U.S. government like fear. Remember 2001? A nominally small-government Republican president created the Office of Homeland Security 11 days after the terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for over a decade. A fatal disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important, and complex; and they’re coming. We can’t afford to ignore these issues until it’s too late.

We also need to start disconnecting systems. If we cannot secure complex systems to the level required by their real-world capabilities, then we must not build a world where everything is computerized and interconnected.

There are other models. We can enable local communications only. We can set limits on collected and stored data. We can deliberately design systems that don’t interoperate with each other. We can deliberately fetter devices, reversing the current trend of turning everything into a general-purpose computer. And, most important, we can move toward less centralization and more distributed systems, which is how the internet was first envisioned.

This might be a heresy in today’s race to network everything, but large, centralized systems are not inevitable. The technical elites are pushing us in that direction, but they really don’t have any good supporting arguments other than the profits of their ever-growing multinational corporations.

But this will change. It will change not only because of security concerns, it will also change because of political concerns. We’re starting to chafe under the worldview of everything producing data about us and what we do, and that data being available to both governments and corporations. Surveillance capitalism won’t be the business model of the internet forever. We need to change the fabric of the internet so that evil governments don’t have the tools to create a horrific totalitarian state. And while good laws and regulations in Western democracies are a great second line of defense, they can’t be our only line of defense.

My guess is that we will soon reach a high-water mark of computerization and connectivity, and that afterward we will make conscious decisions about what and how we decide to interconnect. But we’re still in the honeymoon phase of connectivity. Governments and corporations are punch-drunk on our data, and the rush to connect everything is driven by an even greater desire for power and market share. One of the presentations released by Edward Snowden contained the NSA mantra: “Collect it all.” A similar mantra for the internet today might be: “Connect it all.”

The inevitable backlash will not be driven by the market. It will be deliberate policy decisions that put the safety and welfare of society above individual corporations and industries. It will be deliberate policy decisions that prioritize the security of our systems over the demands of the FBI to weaken them in order to make their law-enforcement jobs easier. It’ll be hard policy for many to swallow, but our safety will depend on it.

**********

The scenarios I’ve outlined, both the technological and economic trends that are causing them and the political changes we need to make to start to fix them, come from my years of working in internet-security technology and policy. All of this is informed by an understanding of both technology and policy. That turns out to be critical, and there aren’t enough people who understand both.

This brings me to my final plea: We need more public-interest technologists.

Over the past couple of decades, we’ve seen examples of getting internet-security policy badly wrong. I’m thinking of the FBI’s “going dark” debate about its insistence that computer devices be designed to facilitate government access, the “vulnerability equities process” about when the government should disclose and fix a vulnerability versus when it should use it to attack other systems, the debacle over paperless touch-screen voting machines, and the DMCA that I discussed above. If you watched any of these policy debates unfold, you saw policy-makers and technologists talking past each other.

Our world-size robot will exacerbate these problems. The historical divide between Washington and Silicon Valley—the mistrust of governments by tech companies and the mistrust of tech companies by governments—is dangerous.

We have to fix this. Getting IoT security right depends on the two sides working together and, even more important, having people who are experts in each working on both. We need technologists to get involved in policy, and we need policy-makers to get involved in technology. We need people who are experts in making both technology and technological policy. We need technologists on congressional staffs, inside federal agencies, working for NGOs, and as part of the press. We need to create a viable career path for public-interest technologists, much as there already is one for public-interest attorneys. We need courses, and degree programs in colleges, for people interested in careers in public-interest technology. We need fellowships in organizations that need these people. We need technology companies to offer sabbaticals for technologists wanting to go down this path. We need an entire ecosystem that supports people bridging the gap between technology and law. That career path needs to ensure that even though people in this field won’t make as much as they would in a high-tech start-up, they will still have rewarding careers. The security of our computerized and networked future—meaning the security of ourselves, our families, homes, businesses, and communities—depends on it.

This plea is bigger than security, actually. Pretty much all of the major policy debates of this century will have a major technological component. Whether it’s weapons of mass destruction, robots drastically affecting employment, climate change, food safety, or the increasing ubiquity of ever-shrinking drones, understanding the policy means understanding the technology. Our society desperately needs technologists working on the policy. The alternative is bad policy.

**********

The world-size robot is less designed than created. It’s coming without any forethought or architecting or planning; most of us are completely unaware of what we’re building. In fact, I am not convinced we can actually design any of this. When we try to design complex sociotechnical systems like this, we are regularly surprised by their emergent properties. The best we can do is observe and channel these properties as best we can.

Market thinking sometimes makes us lose sight of the human choices and autonomy at stake. Before we get controlled—or killed—by the world-size robot, we need to rebuild confidence in our collective governance institutions. Law and policy may not seem as cool as digital tech, but they’re also places of critical innovation. They’re where we collectively bring about the world we want to live in.

While I might sound like a Cassandra, I’m actually optimistic about our future. Our society has tackled bigger problems than this one. It takes work and it’s not easy, but we eventually find our way clear to make the hard choices necessary to solve our real problems.

The world-size robot we’re building can only be managed responsibly if we start making real choices about the interconnected world we live in. Yes, we need security systems as robust as the threat landscape. But we also need laws that effectively regulate these dangerous technologies. And, more generally, we need to make moral, ethical, and political decisions on how those systems should work. Until now, we’ve largely left the internet alone. We gave programmers a special right to code cyberspace as they saw fit. This was okay because cyberspace was separate and relatively unimportant: That is, it didn’t matter. Now that that’s changed, we can no longer give programmers and the companies they work for this power. Those moral, ethical, and political decisions need, somehow, to be made by everybody. We need to link people with the same zeal that we are currently linking machines. “Connect it all” must be countered with “connect us all.”

This essay previously appeared in New York Magazine.

Posted on February 1, 2017 at 8:05 AM • 126 Comments
