Blog: November 2016 Archives

Hacking and the 2016 Presidential Election

Was the 2016 presidential election hacked? It’s hard to tell. There were no obvious hacks on Election Day, but new reports have raised the question of whether voting machines were tampered with in three states that Donald Trump won this month: Wisconsin, Michigan and Pennsylvania.

The researchers behind these reports include voting rights lawyer John Bonifaz and J. Alex Halderman, the director of the University of Michigan Center for Computer Security and Society, both respected in the community. They have been talking with Hillary Clinton’s campaign, but their analysis is not yet public.

According to a report in New York magazine, the share of votes received by Clinton was significantly lower in precincts that used a particular type of voting machine: The magazine story suggested that Clinton had received 7 percent fewer votes in Wisconsin counties that used electronic machines, which could be hacked, than in counties that used paper ballots. That is exactly the sort of result we would expect to see if there had been some sort of voting machine hack. There are many different types of voting machines, and attacks against one type would not work against the others. So a voting anomaly correlated to machine type could be a red flag, although Trump did better across the entire Midwest than pre-election polls expected, and there are also some correlations between voting machine type and the demographics of the various precincts. Even Halderman wrote early Wednesday morning that “the most likely explanation is that the polls were systematically wrong, rather than that the election was hacked.”
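To see why the correlation alone proves little, consider how an analyst would test it: regress precinct-level vote share on machine type while also controlling for demographics, and check whether the machine-type effect survives. Below is a minimal sketch of that analysis; the data, the confound, and the column names are entirely synthetic, invented for illustration.

```python
# Hypothetical sketch: does machine type predict vote share once demographics
# are controlled for? All data below is synthetic, invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500  # synthetic precincts

pct_urban = rng.uniform(0, 100, n)
pct_college = rng.uniform(10, 60, n)
# Hypothetical confound: rural precincts are likelier to use electronic machines
electronic = (pct_urban + rng.normal(0, 30, n) < 50).astype(int)
# Vote share here is driven only by demographics, never by machine type
clinton_share = 20 + 0.3 * pct_urban + 0.5 * pct_college + rng.normal(0, 5, n)

df = pd.DataFrame({"clinton_share": clinton_share, "electronic": electronic,
                   "pct_urban": pct_urban, "pct_college": pct_college})

# The naive gap (the kind of number the magazine story cited) looks alarming...
print(df.groupby("electronic")["clinton_share"].mean())

# ...but the machine-type coefficient vanishes once demographics enter the model.
fit = smf.ols("clinton_share ~ electronic + pct_urban + pct_college", df).fit()
print(fit.params["electronic"], fit.pvalues["electronic"])
```

In this synthetic example the raw gap between machine types is real but entirely explained by the demographic covariates, which is exactly the spurious-correlation possibility the researchers have to rule out.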

What the allegations, and the ripples they’re causing on social media, really show is how fundamentally untrustworthy our hodgepodge election system is.

Accountability is a major problem for US elections. The candidates are the ones required to petition for recounts, and we throw the matter into the courts when we can’t figure it out. This all happens after an election, and because the battle lines have already been drawn, the process is intensely political. Unlike many other countries, we don’t have an independent body empowered to investigate these matters. There is no government agency empowered to verify these researchers’ claims, even if it would be merely to reassure voters that the election count was accurate.

Instead, we have a patchwork of voting systems: different rules, different machines, different standards. I’ve seen arguments that there is security in this setup (an attacker can’t broadly attack the entire country), but the downsides of this system are much more critical. National standards would significantly improve our voting process.

Further investigation of the claims raised by the researchers would help settle this particular question. Unfortunately, time is of the essence, underscoring another problem with how we conduct elections. For anything to happen, Clinton has to call for a recount and investigation. She has until Friday to do it in Wisconsin, until Monday in Pennsylvania and until next Wednesday in Michigan. I don’t expect the research team to have any better data before then. Without changes to the system, we’re telling future hackers that they can be successful as long as they’re able to hide their attacks for a few weeks until after the recount deadlines pass.

Computer forensics investigations are not easy, and they’re not quick. They require access to the machines. They involve analysis of Internet traffic. If we suspect a foreign country like Russia, the National Security Agency will analyze what they’ve intercepted from that country. This could easily take weeks, perhaps even months. And in the end, we might not even get a definitive answer. And even if we do end up with evidence that the voting machines were hacked, we don’t have rules about what to do next.

Although winning those three states would flip the election, I predict Clinton will do nothing (her campaign, after all, has reportedly been aware of the researchers’ work for nearly a week). Not because she does not believe the researchers (although she might not), but because she doesn’t want to throw the post-election process into turmoil by starting a highly politicized process whose eventual outcome will have little to do with computer forensics and a lot to do with which party has more power in the three states.

But we only have two years until the next national elections, and it’s time to start fixing things if we don’t want to be wondering the same things about hackers in 2018. The risks are real: Electronic voting machines that don’t use a paper ballot are vulnerable to hacking.

Clinton supporters are seizing on this story as their last lifeline of hope. I sympathize with them. When I wrote about vote-hacking the day after the election, I said: “Elections serve two purposes. First, and most obvious, they are how we choose a winner. But second, and equally important, they convince the loser—and all the supporters—that he or she lost.” If the election system fails to do the second, we risk undermining the legitimacy of our democratic process. Clinton’s supporters deserve to know whether this apparent statistical anomaly is the result of a hack against our election system or a spurious correlation. They deserve an election that is demonstrably fair and accurate. Our patchwork, ad hoc system means they may never feel confident in the outcome. And that will further erode the trust we have in our election systems.

This essay previously appeared in the Washington Post.

EDITED TO ADD: Green Party candidate Jill Stein is calling for a recount in the three states. I have no idea if a recount includes forensic analysis to ensure that the machines were not hacked, but I doubt it. It would be funny if it weren’t all so horrible.

Also, here’s an article from 538.com arguing that demographics explain all the discrepancies.

Posted on November 25, 2016 at 10:00 AM

Securing Communications in a Trump Administration

Susan Landau has an excellent essay on why it’s more important than ever to have backdoor-free encryption on our computer and communications systems.

Protecting the privacy of speech is crucial for preserving our democracy. We live at a time when tracking an individual—a journalist, a member of the political opposition, a citizen engaged in peaceful protest—or listening to their communications is far easier than at any time in human history. Political leaders on both sides now have a responsibility to work for securing communications and devices. This means supporting not only the laws protecting free speech and the accompanying communications, but also the technologies to do so: end-to-end encryption and secured devices; it also means soundly rejecting all proposals for front-door exceptional access. Prior to the election there were strong, sound security arguments for rejecting such proposals. The privacy arguments have now, suddenly, become critically important as well. Threatened authoritarianism means that we need technological protections for our private communications every bit as much as we need the legal ones we presently have.

Unfortunately, the trend is moving in the other direction. The UK just passed the Investigatory Powers Act, giving police and intelligence agencies incredibly broad surveillance powers with very little oversight. And Bits of Freedom just reported that “Croatia, Italy, Latvia, Poland and Hungary all want an EU law to be created to help their law enforcement authorities access encrypted information and share data with investigators in other countries.”

Posted on November 23, 2016 at 2:01 PM

Dumb Security Survey Questions

According to a Harris poll, 39% of Americans would give up sex for a year in exchange for perfect computer security:

According to an online survey among over 2,000 U.S. adults conducted by Harris Poll on behalf of Dashlane, the leader in online identity and password management, nearly four in ten Americans (39%) would sacrifice sex for one year if it meant they never had to worry about being hacked, having their identity stolen, or their accounts breached. With a new hack or breach making news almost daily, people are constantly being reminded about the importance of secure passwords, yet some are still not following proper password protocol.

Does anyone think that this hypothetical survey question means anything? What, are they bored at Harris? Oh, I see. This is a paid survey by a computer company looking for some publicity.

Four in 10 people (41%) would rather give up their favorite food for a month than go through the password reset process for all their online accounts.

I guess it’s more fun to ask these questions than to poll the election.

Posted on November 21, 2016 at 6:04 AM

Smartphone Secretly Sends Private Data to China

This is pretty amazing:

International customers and users of disposable or prepaid phones are the people most affected by the software. But the scope is unclear. The Chinese company that wrote the software, Shanghai Adups Technology Company, says its code runs on more than 700 million phones, cars and other smart devices. One American phone manufacturer, BLU Products, said that 120,000 of its phones had been affected and that it had updated the software to eliminate the feature.

Kryptowire, the security firm that discovered the vulnerability, said the Adups software transmitted the full contents of text messages, contact lists, call logs, location information and other data to a Chinese server.

On one hand, the phone secretly sends private user data to China. On the other hand, it only costs $50.

Posted on November 18, 2016 at 2:22 PM

Using Wi-Fi to Detect Hand Motions and Steal Passwords

This is impressive research: “When CSI Meets Public WiFi: Inferring Your Mobile Phone Password via WiFi Signals”:

Abstract: In this study, we present WindTalker, a novel and practical keystroke inference framework that allows an attacker to infer the sensitive keystrokes on a mobile device through WiFi-based side-channel information. WindTalker is motivated from the observation that keystrokes on mobile devices will lead to different hand coverage and the finger motions, which will introduce a unique interference to the multi-path signals and can be reflected by the channel state information (CSI). The adversary can exploit the strong correlation between the CSI fluctuation and the keystrokes to infer the user’s number input. WindTalker presents a novel approach to collect the target’s CSI data by deploying a public WiFi hotspot. Compared with the previous keystroke inference approach, WindTalker neither deploys external devices close to the target device nor compromises the target device. Instead, it utilizes the public WiFi to collect user’s CSI data, which is easy-to-deploy and difficult-to-detect. In addition, it jointly analyzes the traffic and the CSI to launch the keystroke inference only for the sensitive period where password entering occurs. WindTalker can be launched without the requirement of visually seeing the smart phone user’s input process, backside motion, or installing any malware on the tablet. We implemented Windtalker on several mobile phones and performed a detailed case study to evaluate the practicality of the password inference towards Alipay, the largest mobile payment platform in the world. The evaluation results show that the attacker can recover the key with a high successful rate.

That “high successful rate” is 81.7%.
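The attack’s core is ordinary template matching: each key press deforms the CSI waveform in a repeatable way, so the attacker records labeled examples, averages them into per-key templates, and labels fresh keystrokes by correlation. Here is a toy sketch of that idea on purely synthetic traces; the real system adds traffic analysis, segmentation, and heavy filtering.

```python
# Toy sketch of CSI keystroke template matching, on synthetic data only.
# A real attack needs careful segmentation, filtering, and per-victim training.
import numpy as np

rng = np.random.default_rng(1)
T = 128            # samples per keystroke window
keys = list("0123456789")

# Pretend each key produces a characteristic CSI fluctuation (invented shapes)
true_shapes = {k: np.sin(np.linspace(0, (i + 1) * np.pi, T))
               for i, k in enumerate(keys)}

def observe(key):
    """One noisy CSI window for a single keystroke."""
    return true_shapes[key] + rng.normal(0, 0.3, T)

# Training phase: average a few labeled windows into per-key templates
templates = {k: np.mean([observe(k) for _ in range(20)], axis=0) for k in keys}

def classify(window):
    """Label a window by highest Pearson correlation against the templates."""
    return max(keys, key=lambda k: np.corrcoef(window, templates[k])[0, 1])

# Attack phase: infer a PIN from fresh observations
pin = "314159"
guessed = "".join(classify(observe(d)) for d in pin)
print(pin, guessed)  # recovery is easy on this synthetic toy
```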

News article.

Posted on November 18, 2016 at 6:40 AM

Hacking Password-Protected Computers via the USB Port

PoisonTap is an impressive hacking tool that can compromise computers via the USB port, even when they are password-protected. What’s interesting is the chain of vulnerabilities the tool exploits. No individual vulnerability is a problem, but together they create a big problem.

Kamkar’s trick works by chaining together a long, complex series of seemingly innocuous software security oversights that only together add up to a full-blown threat. When PoisonTap—a tiny $5 Raspberry Pi microcomputer loaded with Kamkar’s code and attached to a USB adapter—is plugged into a computer’s USB drive, it starts impersonating a new ethernet connection. Even if the computer is already connected to Wifi, PoisonTap is programmed to tell the victim’s computer that any IP address accessed through that connection is actually on the computer’s local network rather than the internet, fooling the machine into prioritizing its network connection to PoisonTap over that of the Wifi network.

With that interception point established, the malicious USB device waits for any request from the user’s browser for new web content; if you leave your browser open when you walk away from your machine, chances are there’s at least one tab in your browser that’s still periodically loading new bits of HTTP data like ads or news updates. When PoisonTap sees that request, it spoofs a response and feeds your browser its own payload: a page that contains a collection of iframes—a technique for invisibly loading content from one website inside another—that consist of carefully crafted versions of virtually every popular website address on the internet. (Kamkar pulled his list from web-popularity ranking service Alexa’s top one million sites.)

As it loads that long list of site addresses, PoisonTap tricks your browser into sharing any cookies it’s stored from visiting them, and writes all of that cookie data to a text file on the USB stick. Sites use cookies to check if a visitor has recently logged into the page, allowing visitors to avoid doing so repeatedly. So that list of cookies allows any hacker who walks away with the PoisonTap and its stored text file to access the user’s accounts on those sites.

There’s more. Here’s another article with more details. Also note that HTTPS is a protection.
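The cookie-harvesting step itself is mundane. Here is a sketch, not Kamkar’s actual code, of the kind of page PoisonTap serves: one hidden iframe per popular domain, each of which makes the browser attach whatever cookies it holds for that site to a plain-HTTP request the fake network can read. The `http://` scheme is the point; cookies marked `Secure`, and sites that enforce HTTPS, stay protected.

```python
# Sketch of PoisonTap's iframe-spray page (illustrative, not Kamkar's code).
# Each iframe makes the browser send its stored cookies for that domain over
# plain HTTP, where the attacker-controlled "network" can read them.
popular_sites = ["example.com", "example.org", "example.net"]  # stand-ins for the Alexa list

iframes = "\n".join(
    f'<iframe src="http://{domain}/" width="0" height="0" '
    f'style="visibility:hidden"></iframe>'
    for domain in popular_sites
)
attack_page = f"<html><body>{iframes}</body></html>"
print(attack_page)
```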

Yesterday, I testified about this at a joint hearing of the Subcommittee on Communications and Technology, and the Subcommittee on Commerce, Manufacturing, and Trade—both part of the Committee on Energy and Commerce of the US House of Representatives. Here’s the video; my testimony starts around 1:10:10.

The topic was the Dyn attacks and the Internet of Things. I talked about different market failures that will affect security on the Internet of Things. One of them was this problem of emergent vulnerabilities. I worry that as we continue to connect things to the Internet, we’re going to be seeing a lot of these sorts of attacks: chains of tiny vulnerabilities that combine into a massive security risk. It’ll be hard to defend against these types of attacks. If no one product or process is to blame, no one has responsibility to fix the problem. So I gave a mostly Republican audience a pro-regulation message. They were surprisingly polite and receptive.

Posted on November 17, 2016 at 8:22 AM

Mass Spectrometry for Surveillance

Yet another way to collect personal data on people without their knowledge or consent: “Lifestyle chemistries from phones for individual profiling”:

Abstract: Imagine a scenario where personal belongings such as pens, keys, phones, or handbags are found at an investigative site. It is often valuable to the investigative team that is trying to trace back the belongings to an individual to understand their personal habits, even when DNA evidence is also available. Here, we develop an approach to translate chemistries recovered from personal objects such as phones into a lifestyle sketch of the owner, using mass spectrometry and informatics approaches. Our results show that phones’ chemistries reflect a personalized lifestyle profile. The collective repertoire of molecules found on these objects provides a sketch of the lifestyle of an individual by highlighting the type of hygiene/beauty products the person uses, diet, medical status, and even the location where this person may have been. These findings introduce an additional form of trace evidence from skin-associated lifestyle chemicals found on personal belongings. Such information could help a criminal investigator narrowing down the owner of an object found at a crime scene, such as a suspect or missing person.

News article.

Posted on November 16, 2016 at 7:40 AM

Election Security

It’s over. The voting went smoothly. As of the time of writing, there are no serious fraud allegations, nor credible evidence that anyone tampered with voting rolls or voting machines. And most important, the results are not in doubt.

While we may breathe a collective sigh of relief about that, we can’t ignore the issue until the next election. The risks remain.

As computer security experts have been saying for years, our newly computerized voting systems are vulnerable to attack by both individual hackers and government-sponsored cyberwarriors. It is only a matter of time before such an attack happens.

Electronic voting machines can be hacked, and those machines that do not include a paper ballot that can verify each voter’s choice can be hacked undetectably. Voting rolls are also vulnerable; they are all computerized databases whose entries can be deleted or changed to sow chaos on Election Day.

The largely ad hoc system in states for collecting and tabulating individual voting results is vulnerable as well. While the difference between theoretical if demonstrable vulnerabilities and an actual attack on Election Day is considerable, we got lucky this year. Not just presidential elections are at risk, but state and local elections, too.

To be very clear, this is not about voter fraud. The risks of ineligible people voting, or people voting twice, have been repeatedly shown to be virtually nonexistent, and “solutions” to this problem are largely voter-suppression measures. Election fraud, however, is both far more feasible and much more worrisome.

Here’s my worry. On the day after an election, someone claims that a result was hacked. Maybe one of the candidates points to a wide discrepancy between the most recent polls and the actual results. Maybe an anonymous person announces that he hacked a particular brand of voting machine, describing in detail how. Or maybe it’s a system failure during Election Day: voting machines recording significantly fewer votes than there were voters, or zero votes for one candidate or another. (These are not theoretical occurrences; they have both happened in the United States before, though because of error, not malice.)

We have no procedures for how to proceed if any of these things happen. There’s no manual, no national panel of experts, no regulatory body to steer us through this crisis. How do we figure out if someone hacked the vote? Can we recover the true votes, or are they lost? What do we do then?

First, we need to do more to secure our elections system. We should declare our voting systems to be critical national infrastructure. This is largely symbolic, but it demonstrates a commitment to secure elections and makes funding and other resources available to states.

We need national security standards for voting machines, and funding for states to procure machines that comply with those standards. Voting-security experts can deal with the technical details, but such machines must include a paper ballot that provides a record verifiable by voters. The simplest and most reliable way to do that is already practiced in 37 states: optical-scan paper ballots, marked by the voters, counted by computer but recountable by hand. And we need a system of pre-election and postelection security audits to increase confidence in the system.
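One concrete shape such a postelection audit can take is a ballot-polling risk-limiting audit: keep sampling random ballots until a wrong reported outcome becomes statistically implausible, or escalate to a full hand count. Here is a minimal sketch of the standard BRAVO-style sequential test; the shares and risk limit are illustrative.

```python
# Sketch of a ballot-polling risk-limiting audit (BRAVO-style sequential test).
# The reported winner share and the simulated "truth" below are illustrative.
import random

random.seed(2)
alpha = 0.05            # risk limit: <=5% chance of confirming a wrong outcome
reported_share = 0.55   # reported winner's share of the two-candidate vote

def audit(true_share, max_draws=100_000):
    """Draw random ballots until the outcome is confirmed or we give up."""
    T = 1.0  # likelihood ratio against the "winner actually tied or lost" hypothesis
    for n in range(1, max_draws + 1):
        if random.random() < true_share:      # sampled ballot: reported winner
            T *= reported_share / 0.5
        else:                                 # sampled ballot: reported loser
            T *= (1 - reported_share) / 0.5
        if T >= 1 / alpha:
            return f"outcome confirmed after {n} ballots"
    return "not confirmed; escalate to a full hand count"

print(audit(true_share=0.55))  # honest result: confirms after a few hundred draws
print(audit(true_share=0.48))  # wrong result: the audit almost surely escalates
```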

Second, election tampering, either by a foreign power or by a domestic actor, is inevitable, so we need detailed procedures to follow—both technical procedures to figure out what happened, and legal procedures to figure out what to do—that will efficiently get us to a fair and equitable election resolution. There should be a board of independent computer-security experts to unravel what happened, and a board of independent election officials, either at the Federal Election Commission or elsewhere, empowered to determine and put in place an appropriate response.

In the absence of such impartial measures, people rush to defend their candidate and their party. Florida in 2000 was a perfect example. What could have been a purely technical issue of determining the intent of every voter became a battle for who would win the presidency. The debates about hanging chads and spoiled ballots and how broad the recount should be were contested by people angling for a particular outcome. In the same way, after a hacked election, partisan politics will place tremendous pressure on officials to make decisions that override fairness and accuracy.

That is why we need to agree on policies to deal with future election fraud. We need procedures to evaluate claims of voting-machine hacking. We need a fair and robust vote-auditing process. And we need all of this in place before an election is hacked and battle lines are drawn.

In response to Florida, the Help America Vote Act of 2002 required each state to publish its own guidelines on what constitutes a vote. Some states—Indiana, in particular—set up a “war room” of public and private cybersecurity experts ready to help if anything did occur. While the Department of Homeland Security is assisting some states with election security, and the F.B.I. and the Justice Department made some preparations this year, the approach is too piecemeal.

Elections serve two purposes. First, and most obvious, they are how we choose a winner. But second, and equally important, they convince the loser—and all the supporters—that he or she lost. To achieve the first purpose, the voting system must be fair and accurate. To achieve the second one, it must be shown to be fair and accurate.

We need to have these conversations before something happens, when everyone can be calm and rational about the issues. The integrity of our elections is at stake, which means our democracy is at stake.

This essay previously appeared in the New York Times.

Posted on November 15, 2016 at 7:09 AM

Fake HP Printer That's Actually a Cellular Eavesdropping Device

Julian Oliver has designed and built a cellular eavesdropping device that’s disguised as an old HP printer.

Masquerading as a regular cellular service provider, Stealth Cell Tower surreptitiously catches phones and sends them SMSs written to appear to be from someone who knows the recipient. It does this without needing to know any phone numbers.

With each response to these messages, a transcript is printed revealing the captured message sent, alongside the victim’s unique IMSI number and other identifying information. Every now and again the printer also randomly calls phones in the environment and on answering, Stevie Wonder’s 1984 classic hit I Just Called To Say I Love You is heard.

Okay, so it’s more of a conceptual art piece than an actual piece of eavesdropping equipment, but it still makes the point.

News article. BoingBoing post.

Posted on November 14, 2016 at 1:12 PM

Automatically Identifying Government Secrets

Interesting research: “Using Artificial Intelligence to Identify State Secrets,” by Renato Rocha Souza, Flavio Codeco Coelho, Rohan Shah, and Matthew Connelly.

Abstract: Whether officials can be trusted to protect national security information has become a matter of great public controversy, reigniting a long-standing debate about the scope and nature of official secrecy. The declassification of millions of electronic records has made it possible to analyze these issues with greater rigor and precision. Using machine-learning methods, we examined nearly a million State Department cables from the 1970s to identify features of records that are more likely to be classified, such as international negotiations, military operations, and high-level communications. Even with incomplete data, algorithms can use such features to identify 90% of classified cables with <11% false positives. But our results also show that there are longstanding problems in the identification of sensitive information. Error analysis reveals many examples of both overclassification and underclassification. This indicates both the need for research on inter-coder reliability among officials as to what constitutes classified material and the opportunity to develop recommender systems to better manage both classification and declassification.
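The machinery behind this is ordinary supervised text classification. A toy sketch of such a pipeline follows; the six-cable corpus is invented, where the study used nearly a million real cables and richer metadata features such as sender, recipient, and subject.

```python
# Toy sketch of classifying cables as classified vs. unclassified from text.
# The corpus here is invented; the paper used ~1M real State Department cables.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

cables = [
    "negotiating position for arms limitation talks",
    "military operation readiness assessment attached",
    "ambassador lunch schedule and embassy maintenance",
    "visa application processing backlog report",
    "high level communication from the secretary on negotiations",
    "routine administrative circular on travel vouchers",
]
labels = [1, 1, 0, 0, 1, 0]  # 1 = classified, 0 = unclassified

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(cables, labels)

# Score a new cable; a real deployment tunes the decision threshold to trade
# false positives against missed secrets (the paper reports 90% at <11% FP).
test = ["minutes of sensitive negotiations with military attaches"]
print(clf.predict_proba(test))  # [P(unclassified), P(classified)]
```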

Posted on November 11, 2016 at 1:18 PM

Fooling Facial Recognition Systems

This is some interesting research. You can fool facial recognition systems by wearing glasses printed with elements of other people’s faces.

Mahmood Sharif, Sruti Bhagavatula, Lujo Bauer, and Michael K. Reiter, “Accessorize to a Crime: Real and Stealthy Attacks on State-of-the-Art Face Recognition”:

ABSTRACT: Machine learning is enabling a myriad innovations, including new algorithms for cancer diagnosis and self-driving cars. The broad use of machine learning makes it important to understand the extent to which machine-learning algorithms are subject to attack, particularly when used in applications where physical security or safety is at risk. In this paper, we focus on facial biometric systems, which are widely used in surveillance and access control. We define and investigate a novel class of attacks: attacks that are physically realizable and inconspicuous, and allow an attacker to evade recognition or impersonate another individual. We develop a systematic method to automatically generate such attacks, which are realized through printing a pair of eyeglass frames. When worn by the attacker whose image is supplied to a state-of-the-art face-recognition algorithm, the eyeglasses allow her to evade being recognized or to impersonate another individual. Our investigation focuses on white-box face-recognition systems, but we also demonstrate how similar techniques can be used in black-box scenarios, as well as to avoid face detection.
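Mechanically, this is constrained adversarial-example generation: optimize the image pixels, but only inside an eyeglass-frame mask, until the classifier reports the target identity. Below is a heavily simplified sketch with a dummy network and random data; the paper additionally enforces printability and smoothness so the frames can be physically manufactured.

```python
# Sketch of a masked adversarial attack: perturb only the "eyeglass" pixels.
# The model and data are dummies; the paper optimizes against real
# face-recognition networks and adds printability/smoothness constraints.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(  # stand-in for a face-recognition network
    nn.Conv2d(3, 8, 5), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(8, 10),  # 10 hypothetical identities
)

face = torch.rand(1, 3, 64, 64)        # attacker's face image
mask = torch.zeros_like(face)
mask[:, :, 20:28, 8:56] = 1.0          # crude eyeglass-frame region
target = torch.tensor([7])             # identity to impersonate

delta = torch.zeros_like(face, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for step in range(200):
    adv = (face + delta * mask).clamp(0, 1)  # perturbation lives only in the mask
    loss = loss_fn(model(adv), target)       # push prediction toward the target
    opt.zero_grad()
    loss.backward()
    opt.step()

adv = (face + delta * mask).clamp(0, 1)
print("predicted identity:", model(adv).argmax(dim=1).item())
```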

News articles.

Posted on November 11, 2016 at 7:31 AM

Regulation of the Internet of Things

Late last month, popular websites like Twitter, Pinterest, Reddit and PayPal went down for most of a day. The distributed denial-of-service attack that caused the outages, and the vulnerabilities that made the attack possible, were as much a failure of market and policy as of technology. If we want to secure our increasingly computerized and connected world, we need more government involvement in the security of the “Internet of Things” and increased regulation of what are now critical and life-threatening technologies. It’s no longer a question of if, it’s a question of when.

First, the facts. Those websites went down because their domain name provider—a company named Dyn—was forced offline. We don’t know who perpetrated that attack, but it could have easily been a lone hacker. Whoever it was launched a distributed denial-of-service attack against Dyn by exploiting a vulnerability in large numbers—possibly millions—of Internet-of-Things devices like webcams and digital video recorders, then recruiting them all into a single botnet. The botnet bombarded Dyn with traffic, so much that it went down. And when it went down, so did dozens of websites.

Your security on the Internet depends on the security of millions of Internet-enabled devices, designed and sold by companies you’ve never heard of to consumers who don’t care about your security.

The technical reason these devices are insecure is complicated, but there is a market failure at work. The Internet of Things is bringing computerization and connectivity to many tens of millions of devices worldwide. These devices will affect every aspect of our lives, because they’re things like cars, home appliances, thermostats, light bulbs, fitness trackers, medical devices, smart streetlights and sidewalk squares. Many of these devices are low-cost, designed and built offshore, then rebranded and resold. The teams building these devices don’t have the security expertise we’ve come to expect from the major computer and smartphone manufacturers, simply because the market won’t stand for the additional costs that would require. These devices don’t get security updates like our more expensive computers, and many don’t even have a way to be patched. And, unlike our computers and phones, they stay around for years and decades.

An additional market failure illustrated by the Dyn attack is that neither the seller nor the buyer of those devices cares about fixing the vulnerability. The owners of those devices don’t care. They wanted a webcam—or thermostat, or refrigerator—with nice features at a good price. Even after they were recruited into this botnet, they still work fine—you can’t even tell they were used in the attack. The sellers of those devices don’t care: They’ve already moved on to selling newer and better models. There is no market solution because the insecurity primarily affects other people. It’s a form of invisible pollution.

And, like pollution, the only solution is to regulate. The government could impose minimum security standards on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing companies like Dyn to sue them if their devices are used in DDoS attacks. The details would need to be carefully scoped, but either of these options would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

It’s true that this is a domestic solution to an international problem and that there’s no U.S. regulation that will affect, say, an Asian-made product sold in South America, even though that product could still be used to take down U.S. websites. But the main costs in making software come from development. If the United States and perhaps a few other major markets implement strong Internet-security regulations on IoT devices, manufacturers will be forced to upgrade their security if they want to sell to those markets. And any improvements they make in their software will be available in their products wherever they are sold, simply because it makes no sense to maintain two different versions of the software. This is truly an area where the actions of a few countries can drive worldwide change.

Regardless of what you think about regulation vs. market solutions, I believe there is no choice. Governments will get involved in the IoT, because the risks are too great and the stakes are too high. Computers are now able to affect our world in a direct and physical manner.

Security researchers have demonstrated the ability to remotely take control of Internet-enabled cars. They’ve demonstrated ransomware against home thermostats and exposed vulnerabilities in implanted medical devices. They’ve hacked voting machines and power plants. In one recent paper, researchers showed how a vulnerability in smart light bulbs could be used to start a chain reaction, resulting in them all being controlled by the attackers—that’s every one in a city. Security flaws in these things could mean people dying and property being destroyed.

Nothing motivates the U.S. government like fear. Remember 2001? A small-government Republican president created the Department of Homeland Security in the wake of the 9/11 terrorist attacks: a rushed and ill-thought-out decision that we’ve been trying to fix for more than a decade. A fatal IoT disaster will similarly spur our government into action, and it’s unlikely to be well-considered and thoughtful action. Our choice isn’t between government involvement and no government involvement. Our choice is between smarter government involvement and stupider government involvement. We have to start thinking about this now. Regulations are necessary, important and complex—and they’re coming. We can’t afford to ignore these issues until it’s too late.

In general, the software market demands that products be fast and cheap and that security be a secondary consideration. That was okay when software didn’t matter—it was okay that your spreadsheet crashed once in a while. But a software bug that literally crashes your car is another thing altogether. The security vulnerabilities in the Internet of Things are deep and pervasive, and they won’t get fixed if the market is left to sort it out for itself. We need to proactively discuss good regulatory solutions; otherwise, a disaster will impose bad ones on us.

This essay previously appeared in the Washington Post.

Posted on November 10, 2016 at 6:06 AM

Whistleblower Investigative Report on NSA Suite B Cryptography

The NSA has been abandoning secret and proprietary cryptographic algorithms in favor of commercial public algorithms, generally known as “Suite B.” In 2010, an NSA employee filed some sort of whistleblower complaint, alleging that this move is both insecure and wasteful. The US DoD Inspector General investigated and wrote a report in 2011.

The report—slightly redacted and declassified—found that there was no wrongdoing. But the report is an interesting window into the NSA’s system of algorithm selection and testing (pages 5 and 6), as well as how they investigate whistleblower complaints.

Posted on November 9, 2016 at 12:00 PM

Self-Propagating Smart Light Bulb Worm

This is exactly the sort of Internet-of-Things attack that has me worried:

“IoT Goes Nuclear: Creating a ZigBee Chain Reaction” by Eyal Ronen, Colin O’Flynn, Adi Shamir and Achi-Or Weingarten.

Abstract: Within the next few years, billions of IoT devices will densely populate our cities. In this paper we describe a new type of threat in which adjacent IoT devices will infect each other with a worm that will spread explosively over large areas in a kind of nuclear chain reaction, provided that the density of compatible IoT devices exceeds a certain critical mass. In particular, we developed and verified such an infection using the popular Philips Hue smart lamps as a platform. The worm spreads by jumping directly from one lamp to its neighbors, using only their built-in ZigBee wireless connectivity and their physical proximity. The attack can start by plugging in a single infected bulb anywhere in the city, and then catastrophically spread everywhere within minutes, enabling the attacker to turn all the city lights on or off, permanently brick them, or exploit them in a massive DDOS attack. To demonstrate the risks involved, we use results from percolation theory to estimate the critical mass of installed devices for a typical city such as Paris whose area is about 105 square kilometers: The chain reaction will fizzle if there are fewer than about 15,000 randomly located smart lights in the whole city, but will spread everywhere when the number exceeds this critical mass (which had almost certainly been surpassed already).

To make such an attack possible, we had to find a way to remotely yank already installed lamps from their current networks, and to perform over-the-air firmware updates. We overcame the first problem by discovering and exploiting a major bug in the implementation of the Touchlink part of the ZigBee Light Link protocol, which is supposed to stop such attempts with a proximity test. To solve the second problem, we developed a new version of a side channel attack to extract the global AES-CCM key that Philips uses to encrypt and authenticate new firmware. We used only readily available equipment costing a few hundred dollars, and managed to find this key without seeing any actual updates. This demonstrates once again how difficult it is to get security right even for a large company that uses standard cryptographic techniques to protect a major product.
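The percolation claim is easy to reproduce in miniature: scatter N lamps uniformly over the city’s area, link any pair within radio range, and see whether a giant connected component appears. A sketch follows; the 100-meter lamp-to-lamp range is my assumption (it happens to reproduce the paper’s ~15,000-lamp threshold), and the paper’s model is more careful.

```python
# Monte Carlo sketch of the ZigBee worm's percolation threshold: does one
# infected lamp reach most of the city? The radio range is an assumption.
import numpy as np
from scipy.spatial import cKDTree

AREA_KM2 = 105.0   # roughly Paris, as in the paper
RANGE_KM = 0.1     # assumed ~100 m lamp-to-lamp ZigBee reach

def largest_component_fraction(n_lamps, seed=0):
    rng = np.random.default_rng(seed)
    side = np.sqrt(AREA_KM2)
    pts = rng.uniform(0, side, (n_lamps, 2))    # random lamp positions
    pairs = cKDTree(pts).query_pairs(RANGE_KM)  # lamp pairs in radio range

    parent = list(range(n_lamps))               # union-find over the radio graph
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]       # path compression
            x = parent[x]
        return x
    for a, b in pairs:
        parent[find(a)] = find(b)

    roots = [find(i) for i in range(n_lamps)]
    _, counts = np.unique(roots, return_counts=True)
    return counts.max() / n_lamps

for n in (5_000, 15_000, 30_000):
    print(n, f"{largest_component_fraction(n):.2f}")
# Below the critical mass the infection fizzles in a small cluster; above it,
# a single plugged-in bulb can reach most of the lamps in the city.
```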

EDITED TO ADD: BoingBoing post. Slashdot thread.

Posted on November 9, 2016 at 6:54 AM

Lessons From the Dyn DDoS Attack

A week ago Friday, someone took down numerous popular websites in a massive distributed denial-of-service (DDoS) attack against the domain name provider Dyn. DDoS attacks are neither new nor sophisticated. The attacker sends a massive amount of traffic, causing the victim’s system to slow to a crawl and eventually crash. There are more or less clever variants, but basically, it’s a datapipe-size battle between attacker and victim. If the defender has a larger capacity to receive and process data, he or she will win. If the attacker can throw more data than the victim can process, he or she will win.

The attacker can build a giant data cannon, but that’s expensive. It is much smarter to recruit millions of innocent computers on the internet. This is the “distributed” part of the DDoS attack, and pretty much how it’s worked for decades. Cybercriminals infect innocent computers around the internet and recruit them into a botnet. They then target that botnet against a single victim.

You can imagine how it might work in the real world. If I can trick tens of thousands of others to order pizzas to be delivered to your house at the same time, I can clog up your street and prevent any legitimate traffic from getting through. If I can trick many millions, I might be able to crush your house from the weight. That’s a DDoS attack: it’s simple brute force.

As you’d expect, DDoSers have various motives. The attacks started out as a way to show off, then quickly transitioned to a method of intimidation, or a way of just getting back at someone you didn’t like. More recently, they’ve become vehicles of protest. In 2013, the hacker group Anonymous petitioned the White House to recognize DDoS attacks as a legitimate form of protest. Criminals have used these attacks as a means of extortion, although one group found that just the fear of attack was enough. Military agencies are also thinking about DDoS as a tool in their cyberwar arsenals. A 2007 DDoS attack against Estonia was blamed on Russia and widely called an act of cyberwar.

The DDoS attack against Dyn two weeks ago was nothing new, but it illustrated several important trends in computer security.

These attack techniques are broadly available. Fully capable DDoS attack tools are available for free download. Criminal groups offer DDoS services for hire. The particular attack technique used against Dyn was first used a month earlier. It’s called Mirai, and since the source code was released four weeks ago, over a dozen botnets have incorporated the code.

The Dyn attacks were probably not originated by a government. The perpetrators were most likely hackers mad at Dyn for helping Brian Krebs identify, and the FBI arrest, two Israeli hackers who were running a DDoS-for-hire ring. Recently I have written about probing DDoS attacks against internet infrastructure companies that appear to be perpetrated by a nation-state. But, honestly, we don’t know for sure.

This is important. Software spreads capabilities. The smartest attacker needs to figure out the attack and write the software. After that, anyone can use it. There’s not even much of a difference between government and criminal attacks. In December 2014, there was a legitimate debate in the security community as to whether the massive attack against Sony had been perpetrated by a nation-state with a $20 billion military budget or a couple of guys in a basement somewhere. The internet is the only place where we can’t tell the difference. Everyone uses the same tools, the same techniques and the same tactics.

These attacks are getting larger. The Dyn DDoS attack set a record at 1.2 Tbps. The previous record holder was the attack against cybersecurity journalist Brian Krebs a month prior at 620 Gbps. This is much larger than required to knock the typical website offline. A year ago, it was unheard of. Now it occurs regularly.

The botnets attacking Dyn and Brian Krebs consisted largely of unsecure Internet of Things (IoT) devices: webcams, digital video recorders, routers and so on. This isn’t new, either. We’ve already seen internet-enabled refrigerators and TVs used in DDoS botnets. But again, the scale is bigger now. In 2014, the news was hundreds of thousands of IoT devices; the Dyn attack used millions. Analysts expect the IoT to increase the number of things on the internet by a factor of 10 or more. Expect these attacks to similarly increase.
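The scale arithmetic is simple: modest per-device bandwidth turns into record floods once a botnet reaches into the millions. A back-of-the-envelope sketch, with the per-device figure assumed rather than measured:

```python
# Back-of-the-envelope: how many IoT devices does a 1.2 Tbps flood need?
# The per-device upstream bandwidth is an assumption, not a measured figure.
attack_bps = 1.2e12       # the Dyn attack's reported peak
per_device_bps = 1e6      # assume ~1 Mbps upstream per webcam/DVR

devices_needed = attack_bps / per_device_bps
print(f"{devices_needed:,.0f} devices")  # ~1.2 million: botnet-of-things scale
```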

The problem is that these IoT devices are unsecure and likely to remain that way. The economics of internet security don’t trickle down to the IoT. Commenting on the Krebs attack last month, I wrote:

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: They’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: It’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

To be fair, one company that made some of the unsecure things used in these attacks recalled its unsecure webcams. But this is more of a publicity stunt than anything else. I would be surprised if the company got many devices back. We already know that the reputational damage from having your unsecure software made public isn’t large and doesn’t last. At this point, the market still largely rewards sacrificing security in favor of price and time-to-market.

DDoS prevention works best deep in the network, where the pipes are the largest and the capability to identify and block the attacks is the most evident. But the backbone providers have no incentive to do this. They don’t feel the pain when the attacks occur and they have no way of billing for the service when they provide it. So they let the attacks through and force the victims to defend themselves. In many ways, this is similar to the spam problem. It, too, is best dealt with in the backbone, but similar economics dump the problem onto the endpoints.

We’re unlikely to get any regulation forcing backbone companies to clean up either DDoS attacks or spam, just as we are unlikely to get any regulations forcing IoT manufacturers to make their systems secure. This is me again:

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

That leaves the victims to pay. This is where we are in much of computer security. Because the hardware, software and networks we use are so unsecure, we have to pay an entire industry to provide after-the-fact security.

There are solutions you can buy. Many companies offer DDoS protection, although they’re generally calibrated to the older, smaller attacks. We can safely assume that they’ll up their offerings, although the cost might be prohibitive for many users. Understand your risks. Buy mitigation if you need it, but understand its limitations. Know the attacks are possible and will succeed if large enough. And the attacks are getting larger all the time. Prepare for that.

This essay previously appeared on the SecurityIntelligence website.

Posted on November 8, 2016 at 6:25 AM

Firefox Removing Battery Status API

Firefox is removing the battery status API, citing privacy concerns. Here’s the paper that described those concerns:

Abstract. We highlight privacy risks associated with the HTML5 Battery Status API. We put special focus on its implementation in the Firefox browser. Our study shows that websites can discover the capacity of users’ batteries by exploiting the high precision readouts provided by Firefox on Linux. The capacity of the battery, as well as its level, expose a fingerprintable surface that can be used to track web users in short time intervals. Our analysis shows that the risk is much higher for old or used batteries with reduced capacities, as the battery capacity may potentially serve as a tracking identifier. The fingerprintable surface of the API could be drastically reduced without any loss in the API’s functionality by reducing the precision of the readings. We propose minor modifications to Battery Status API and its implementation in the Firefox browser to address the privacy issues presented in the study. Our bug report for Firefox was accepted and a fix is deployed.

W3C is updating the spec. Here’s a battery tracker found in the wild.

Posted on November 7, 2016 at 12:59 PM

Teaching a Neural Network to Encrypt

Researchers have trained a neural network to encrypt its communications.

In their experiment, computers were able to make their own form of encryption using machine learning, without being taught specific cryptographic algorithms. The encryption was very basic, especially compared to our current human-designed systems. Even so, it is still an interesting step for neural nets, which the authors state “are generally not meant to be great at cryptography”.

This story is more about AI and neural networks than it is about cryptography. The algorithm isn’t any good, but is a perfect example of what I’ve heard called “Schneier’s Law”: Anyone can design a cipher that they themselves cannot break.
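For the curious, the paper’s setup is a three-player adversarial game: Alice encrypts with a shared key, Bob decrypts with the key, and Eve tries to decrypt from the ciphertext alone, with Alice and Bob trained to keep Bob accurate while pushing Eve back to chance. Here is a heavily simplified sketch of that training loop; the sizes and losses are toy versions of the paper’s, which uses convolutional “mix and transform” networks.

```python
# Toy sketch of adversarial neural cryptography (the setup described in the
# paper, heavily simplified): Alice/Bob learn to communicate; Eve learns to spy.
import torch
import torch.nn as nn

torch.manual_seed(0)
N = 16  # bits per plaintext and per key (toy size)

def net(n_in, n_out):
    return nn.Sequential(nn.Linear(n_in, 4 * n_in), nn.ReLU(),
                         nn.Linear(4 * n_in, n_out), nn.Tanh())

alice, bob, eve = net(2 * N, N), net(2 * N, N), net(N, N)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e = torch.optim.Adam(eve.parameters())
l1 = nn.L1Loss()

def batch(size=256):
    return (torch.randint(0, 2, (size, N)).float() * 2 - 1,   # plaintext in ±1
            torch.randint(0, 2, (size, N)).float() * 2 - 1)   # shared key in ±1

for step in range(3000):
    # Eve trains on fixed ciphertexts, without ever seeing the key
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_e = l1(eve(c), p)
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()

    # Alice and Bob train so Bob decrypts well while Eve does no better than
    # chance (random guessing on ±1 bits gives an expected L1 distance of 1.0)
    p, k = batch()
    c = alice(torch.cat([p, k], dim=1))
    loss_ab = l1(bob(torch.cat([c, k], dim=1)), p) + (1.0 - l1(eve(c), p)) ** 2
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

p, k = batch()
c = alice(torch.cat([p, k], dim=1))
print("Bob L1 error:", l1(bob(torch.cat([c, k], dim=1)), p).item())  # small
print("Eve L1 error:", l1(eve(c), p).item())  # near 1.0 means chance level
```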

Research paper. Note that the researchers work at Google.

Posted on November 3, 2016 at 6:05 AM

Another Shadow Brokers Leak

There’s another leak of NSA hacking tools and data from the Shadow Brokers. This one includes a list of hacked sites.

According to analyses from researchers here and here, Monday’s dump contains 352 distinct IP addresses and 306 domain names that purportedly have been hacked by the NSA. The timestamps included in the leak indicate that the servers were targeted between August 22, 2000 and August 18, 2010. The addresses include 32 .edu domains and nine .gov domains. In all, the targets were located in 49 countries, with the top 10 being China, Japan, Korea, Spain, Germany, India, Taiwan, Mexico, Italy, and Russia. Vitali Kremez, a senior intelligence analyst at security firm Flashpoint, also provides useful analysis here.

The dump also includes various other pieces of data. Chief among them are configuration settings for an as-yet unknown toolkit used to hack servers running Unix operating systems. If valid, the list could be used by various organizations to uncover a decade’s worth of attacks that until recently were closely guarded secrets. According to this spreadsheet, the servers were mostly running Solaris, an operating system from Sun Microsystems that was widely used in the early 2000s. Linux and FreeBSD are also shown.

The data is old, but you can see if you’ve been hacked.
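Checking is just a set intersection against whatever host list you extract from the dump. A sketch; the file name `shadowbrokers_hosts.txt` and its one-host-per-line layout are assumptions you would adapt to the dump’s actual format.

```python
# Sketch: check your own hosts against the leaked target list.
# "shadowbrokers_hosts.txt" and its one-host-per-line layout are assumptions;
# adapt the parsing to however the dump you obtained is formatted.
my_assets = {"203.0.113.7", "mail.example.edu", "www.example.gov"}

with open("shadowbrokers_hosts.txt") as f:
    leaked = {line.strip().lower() for line in f if line.strip()}

hits = my_assets & leaked
print("compromised:", sorted(hits) if hits else "none found in the list")
```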

Honestly, I am surprised by this release. I thought that the original Shadow Brokers dump was everything. Now that we know they held things back, there could easily be more releases.

EDITED TO ADD (11/6): More on the NSA targets. Note that the Hague-based Organization for the Prohibition of Chemical Weapons is on the list, hacked in 2000.

Posted on November 1, 2016 at 2:10 PM
