Blog: July 2016 Archives

How Altruism Might Have Evolved

I spend a lot of time in my book Liars and Outliers on cooperating versus defecting. Cooperating is good for the group at the expense of the individual. Defecting is good for the individual at the expense of the group. Given that natural selection acts on individuals, there has been a lot of controversy over how altruism might have evolved.

Here’s one possible answer: it’s favored by chance:

The key insight is that the total size of population that can be supported depends on the proportion of cooperators: more cooperation means more food for all and a larger population. If, due to chance, there is a random increase in the number of cheats then there is not enough food to go around and total population size will decrease. Conversely, a random decrease in the number of cheats will allow the population to grow to a larger size, disproportionally benefitting the cooperators. In this way, the cooperators are favoured by chance, and are more likely to win in the long term.

Dr George Constable, soon to join the University of Bath from Princeton, uses the analogy of flipping a coin, where heads wins £20 but tails loses £10:

“Although the odds [of] winning or losing are the same, winning is more good than losing is bad. Random fluctuations in cheat numbers are exploited by the cooperators, who benefit more than they lose out.”
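To see the mechanism in action, here is a toy birth-and-death simulation in Python. It is a minimal sketch, not the researchers' actual model: the two types are demographically identical (no cost to cooperating), only cooperators enlarge the carrying capacity, and all parameter values are illustrative.

```python
import random

def cooperator_fixation_rate(trials=1000, c0=10, d0=10, k0=20.0, eps=0.5):
    """Fraction of runs in which cooperators take over the population."""
    wins = 0
    for _ in range(trials):
        c, d = c0, d0                    # cooperators, cheats
        while c > 0 and d > 0:
            n = c + d
            k = k0 + eps * c             # cooperators raise the carrying capacity
            # Identical per-capita birth rates; deaths rise with crowding.
            b_c, b_d, d_c, d_d = c, d, c * n / k, d * n / k
            r = random.uniform(0, b_c + b_d + d_c + d_d)
            if r < b_c:
                c += 1                   # cooperator birth
            elif r < b_c + b_d:
                d += 1                   # cheat birth
            elif r < b_c + b_d + d_c:
                c -= 1                   # cooperator death
            else:
                d -= 1                   # cheat death
        wins += (c > 0)
    return wins / trials

print(cooperator_fixation_rate())  # should land above 0.5: drift favors cooperators
```

With no selective advantage at all, a 50/50 start in a fixed-size population would fix cooperators exactly half the time; here the coupling between composition and population size should tilt the odds toward them, which is the effect the quote describes.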

EDITED TO ADD (8/12): Journal article.

Related article.

Posted on July 29, 2016 at 12:23 PM • 42 Comments

The Security of Our Election Systems

Russia was behind the hacks into the Democratic National Committee’s computer network that led to the release of thousands of internal emails just before the party’s convention began, U.S. intelligence agencies have reportedly concluded.

The FBI is investigating. WikiLeaks promises there is more data to come. The political nature of this cyberattack means that Democrats and Republicans are trying to spin this as much as possible. Even so, we have to accept that someone is attacking our nation’s computer systems in an apparent attempt to influence a presidential election. This kind of cyberattack targets the very core of our democratic process. And it points to the possibility of an even worse problem in November: that our election systems and our voting machines could be vulnerable to a similar attack.

If the intelligence community has indeed ascertained that Russia is to blame, our government needs to decide what to do in response. This is difficult because the attacks are politically partisan, but it is essential. If foreign governments learn that they can influence our elections with impunity, this opens the door for future manipulations, both document thefts and dumps like this one that we see and more subtle manipulations that we don’t see.

Retaliation is politically fraught and could have serious consequences, but this is an attack against our democracy. We need to confront Russian President Vladimir Putin in some way (politically, economically or in cyberspace) and make it clear that we will not tolerate this kind of interference by any government. Regardless of your political leanings this time, there’s no guarantee the next country that tries to manipulate our elections will share your preferred candidates.

Even more important, we need to secure our election systems before autumn. If Putin’s government has already used a cyberattack to attempt to help Trump win, there’s no reason to believe he won’t do it again, especially now that Trump is inviting the “help.”

Over the years, more and more states have moved to electronic voting machines and have flirted with Internet voting. These systems are insecure and vulnerable to attack.

But while computer security experts like me have sounded the alarm for many years, states have largely ignored the threat, and the machine manufacturers have thrown up enough obfuscating babble that election officials are largely mollified.

We no longer have time for that. We must ignore the machine manufacturers’ spurious claims of security, create tiger teams to test the machines’ and systems’ resistance to attack, drastically increase their cyber-defenses and take them offline if we can’t guarantee their security online.

Longer term, we need to return to election systems that are secure from manipulation. This means voting machines with voter-verified paper audit trails, and no Internet voting. I know it’s slower and less convenient to stick to the old-fashioned way, but the security risks are simply too great.

There are other ways to attack our election system on the Internet besides hacking voting machines or changing vote tallies: deleting voter records, hijacking candidate or party websites, targeting and intimidating campaign workers or donors. There have already been multiple instances of political doxing (publishing personal information and documents about a person or organization) and we could easily see more of it in this election cycle. We need to take these risks much more seriously than before.

Government interference with foreign elections isn’t new, and in fact, that’s something the United States itself has repeatedly done in recent history. Using cyberattacks to influence elections is newer but has been done before, too, most notably in Latin America. Hacking of voting machines isn’t new, either. But what is new is a foreign government interfering with a U.S. national election on a large scale. Our democracy cannot tolerate it, and we as citizens cannot accept it.

Last April, the Obama administration issued an executive order outlining how we as a nation respond to cyberattacks against our critical infrastructure. While our election technology was not explicitly mentioned, our political process is certainly critical. And while our election systems are a hodgepodge of separate state-run systems, together their security affects every one of us. After everyone has voted, it is essential that both sides believe the election was fair and the results accurate. Otherwise, the election has no legitimacy.

Election security is now a national security issue; federal officials need to take the lead, and they need to do it quickly.

This essay originally appeared in the Washington Post.

Posted on July 29, 2016 at 6:29 AM • 102 Comments

Real-World Security and the Internet of Things

Disaster stories involving the Internet of Things are all the rage. They feature cars (both driven and driverless), the power grid, dams, and tunnel ventilation systems. A particularly vivid and realistic one, near-future fiction published last month in New York Magazine, described a cyberattack on New York that involved hacking of cars, the water system, hospitals, elevators, and the power grid. In these stories, thousands of people die. Chaos ensues. While some of these scenarios overhype the mass destruction, the individual risks are all real. And traditional computer and network security isn’t prepared to deal with them.

Classic information security is a triad: confidentiality, integrity, and availability. You’ll see it called “CIA,” which admittedly is confusing in the context of national security. But basically, the three things I can do with your data are steal it (confidentiality), modify it (integrity), or prevent you from getting it (availability).

So far, Internet threats have largely been about confidentiality. These can be expensive; one survey estimated that data breaches cost an average of $3.8 million each. They can be embarrassing, as in the theft of celebrity photos from Apple’s iCloud in 2014 or the Ashley Madison breach in 2015. They can be damaging, as when the government of North Korea stole tens of thousands of internal documents from Sony or when hackers stole data about 83 million customer accounts from JPMorgan Chase, both in 2014. They can even affect national security, as in the case of the Office of Personnel Management data breach by—presumably—China in 2015.

On the Internet of Things, integrity and availability threats are much worse than confidentiality threats. It’s one thing if your smart door lock can be eavesdropped upon to know who is home. It’s another thing entirely if it can be hacked to allow a burglar to open the door—or prevent you from opening your door. A hacker who can deny you control of your car, or take over control, is much more dangerous than one who can eavesdrop on your conversations or track your car’s location.

With the advent of the Internet of Things and cyber-physical systems in general, we’ve given the Internet hands and feet: the ability to directly affect the physical world. What used to be attacks against data and information have become attacks against flesh, steel, and concrete.

Today’s threats include hackers crashing airplanes by hacking into computer networks, and remotely disabling cars, either when they’re turned off and parked or while they’re speeding down the highway. We’re worried about manipulated counts from electronic voting machines, frozen water pipes through hacked thermostats, and remote murder through hacked medical devices. The possibilities are practically endless. The Internet of Things will allow for attacks we can’t even imagine.

The increased risks come from three things: software control of systems, interconnections between systems, and automatic or autonomous systems. Let’s look at them in turn:

Software Control. The Internet of Things is a result of everything turning into a computer. This gives us enormous power and flexibility, but it brings insecurities with it as well. As more things come under software control, they become vulnerable to all the attacks we’ve seen against computers. But because many of these things are both inexpensive and long-lasting, many of the patch and update systems that work with computers and smartphones won’t work. Right now, the only way to patch most home routers is to throw them away and buy new ones. And the security that comes from replacing your computer and phone every few years won’t work with your refrigerator and thermostat: on the average, you replace the former every 15 years, and the latter approximately never. A recent Princeton survey found 500,000 insecure devices on the Internet. That number is about to explode.

Interconnections. As these systems become interconnected, vulnerabilities in one lead to attacks against others. Already we’ve seen Gmail accounts compromised through vulnerabilities in Samsung smart refrigerators, hospital IT networks compromised through vulnerabilities in medical devices, and Target Corporation hacked through a vulnerability in its HVAC system. Systems are filled with externalities that affect other systems in unforeseen and potentially harmful ways. What might seem benign to the designers of a particular system becomes harmful when it’s combined with some other system. Vulnerabilities on one system cascade into other systems, and the result is a vulnerability that no one saw coming and no one bears responsibility for fixing. The Internet of Things will make exploitable vulnerabilities much more common. It’s simple mathematics: n systems interacting pairwise produce n(n-1)/2 interactions. If 100 systems are all interacting with each other, that’s about 5,000 interactions and 5,000 potential vulnerabilities resulting from those interactions. If 300 systems are all interacting with each other, that’s 45,000 interactions. 1,000 systems: about 500,000 interactions. Most of them will be benign or uninteresting, but some of them will be very damaging.
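Those counts are just the number of pairwise combinations. A quick check in Python:

```python
from math import comb

# Pairwise interactions among n systems: n choose 2 = n*(n-1)/2.
for n in (100, 300, 1000):
    print(n, comb(n, 2))   # 100 -> 4950, 300 -> 44850, 1000 -> 499500
```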

Autonomy. Increasingly, our computer systems are autonomous. They buy and sell stocks, turn the furnace on and off, regulate electricity flow through the grid, and—in the case of driverless cars—automatically pilot multi-ton vehicles to their destinations. Autonomy is great for all sorts of reasons, but from a security perspective it means that attacks can take effect immediately, automatically, and ubiquitously. The more we remove humans from the loop, the faster attacks can do their damage and the more we lose our ability to rely on actual smarts to notice something is wrong before it’s too late.

We’re building systems that are increasingly powerful, and increasingly useful. The necessary side effect is that they are increasingly dangerous. A single vulnerability forced Chrysler to recall 1.4 million vehicles in 2015. We’re used to computers being attacked at scale—think of the large-scale virus infections from the last decade—but we’re not prepared for this happening to everything else in our world.

Governments are taking notice. Last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers testified before Congress, warning of these threats. They both believe we’re vulnerable.

This is how it was phrased in the DNI’s 2015 Worldwide Threat Assessment: “Most of the public discussion regarding cyber threats has focused on the confidentiality and availability of information; cyber espionage undermines confidentiality, whereas denial-of-service operations and data-deletion attacks undermine availability. In the future, however, we might also see more cyber operations that will change or manipulate electronic information in order to compromise its integrity (i.e. accuracy and reliability) instead of deleting it or disrupting access to it. Decision-making by senior government officials (civilian and military), corporate executives, investors, or others will be impaired if they cannot trust the information they are receiving.”

The DNI 2016 threat assessment included something similar: “Future cyber operations will almost certainly include an increased emphasis on changing or manipulating data to compromise its integrity (i.e., accuracy and reliability) to affect decision making, reduce trust in systems, or cause adverse physical effects. Broader adoption of IoT devices and AI—in settings such as public utilities and healthcare—will only exacerbate these potential effects.”

Security engineers are working on technologies that can mitigate much of this risk, but many solutions won’t be deployed without government involvement. This is not something that the market can solve. Like data privacy, the risks and solutions are too technical for most people and organizations to understand; companies are motivated to hide the insecurity of their own systems from their customers, their users, and the public; the interconnections can make it impossible to connect data breaches with resultant harms; and the interests of the companies often don’t match the interests of the people.

Governments need to play a larger role: setting standards, policing compliance, and implementing solutions across companies and networks. And while the White House Cybersecurity National Action Plan says some of the right things, it doesn’t nearly go far enough, because so many of us are phobic of any government-led solution to anything.

The next president will probably be forced to deal with a large-scale Internet disaster that kills multiple people. I hope he or she responds with both the recognition of what government can do that industry can’t, and the political will to make it happen.

This essay previously appeared on Vice Motherboard.

BoingBoing post.

EDITED TO ADD (8/11): An essay that agrees with me.

Posted on July 28, 2016 at 5:51 AM • 83 Comments

Detecting When a Smartphone Has Been Compromised

Andrew “bunnie” Huang and Edward Snowden have designed a smartphone case that detects unauthorized transmissions by the phone. Paper. Three news articles.

Looks like a clever design. Of course, it has to be outside the device; otherwise, it could be compromised along with the device. Note that this is still in the research design stage; there are no public prototypes.

Posted on July 27, 2016 at 1:09 PM • 42 Comments

The NSA and "Intelligence Legalism"

Interesting law journal paper: “Intelligence Legalism and the National Security Agency’s Civil Liberties Gap,” by Margo Schlanger:

Abstract: This paper examines the National Security Agency, its compliance with legal constraints and its respect for civil liberties. But even if perfect compliance could be achieved, it is too paltry a goal. A good oversight system needs its institutions not just to support and enforce compliance but also to design good rules. Yet as will become evident, the offices that make up the NSA’s compliance system are nearly entirely compliance offices, not policy offices; they work to improve compliance with existing rules, but not to consider the pros and cons of more individually-protective rules and try to increase privacy or civil liberties where the cost of doing so is acceptable. The NSA and the administration in which it sits have thought of civil liberties and privacy only in compliance terms. That is, they have asked only “Can we (legally) do X?” and not “Should we do X?” This preference for the can question over the should question is part and parcel, I argue, of a phenomenon I label “intelligence legalism,” whose three crucial and simultaneous features are imposition of substantive rules given the status of law rather than policy; some limited court enforcement of those rules; and empowerment of lawyers. Intelligence legalism has been a useful corrective to the lawlessness that characterized surveillance prior to intelligence reform, in the late 1970s. But I argue that it gives systematically insufficient weight to individual liberty, and that its relentless focus on rights, and compliance, and law has obscured the absence of what should be an additional focus on interests, or balancing, or policy. More is needed; additional attention should be directed both within the NSA and by its overseers to surveillance policy, weighing the security gains from surveillance against the privacy and civil liberties risks and costs. That attention will not be a panacea, but it can play a useful role in filling the civil liberties gap intelligence legalism creates.

This is similar to what I wrote in Data and Goliath:

There are two levels of oversight. The first is strategic: are the rules we’re imposing the correct ones? For example, the NSA can implement its own procedures to ensure that it’s following the rules, but it should not get to decide what rules it should follow….

The other kind of oversight is tactical: are the rules being followed? Mechanisms for this kind of oversight include procedures, audits, approvals, troubleshooting protocols, and so on. The NSA, for example, trains its analysts in the regulations governing their work, audits systems to ensure that those regulations are actually followed, and has instituted reporting and disciplinary procedures for occasions when they’re not.

It’s not enough that the NSA makes sure there is a plausible legal interpretation that authorizes what they do. We need to make sure that their understanding of the law is shared with the outside world, and that what they’re doing is a good idea.

EDITED TO ADD: The paper is from 2014. Also worth reading are these two related essays.

Posted on July 27, 2016 at 6:47 AM • 9 Comments

Russian Hack of the DNC

Amazingly enough, the preponderance of the evidence points to Russia as the source of the DNC leak. I was going to summarize the evidence, but Thomas Rid did a great job here. Much of that is based on June’s forensic analysis by CrowdStrike, which I wrote about here. More analysis here.

Jack Goldsmith discusses the political implications.

The FBI is investigating. It’s not unreasonable to expect the NSA has some additional intelligence on this attack, similar to what they had on the North Korea attack on Sony.

EDITED TO ADD (7/27): More on the FBI’s investigation. Another summary of the evidence pointing to Russia.

Posted on July 26, 2016 at 1:40 PM

Tracking the Owner of Kickass Torrents

Here’s the story of how it was done. First, a fake ad on torrent listings linked the site to a Latvian bank account, an e-mail address, and a Facebook page.

Using basic website-tracking services, Der-Yeghiayan was able to uncover (via a reverse DNS search) the hosts of seven apparent KAT website domains: kickasstorrents.com, kat.cr, kickass.to, kat.ph, kastatic.com, thekat.tv and kickass.cr. This dug up two Chicago IP addresses, which were used as KAT name servers for more than four years. Agents were then able to legally gain a copy of the server’s access logs (explaining why it was federal authorities in Chicago that eventually charged Vaulin with his alleged crimes).

Using similar tools, Homeland Security investigators also performed something called a WHOIS lookup on a domain that redirected people to the main KAT site. A WHOIS search can provide the name, address, email and phone number of a website registrant. In the case of kickasstorrents.biz, that was Artem Vaulin from Kharkiv, Ukraine.

Der-Yeghiayan was able to link the email address found in the WHOIS lookup to an Apple email address that Vaulin purportedly used to operate KAT. It’s this Apple account that appears to tie all of the pieces of Vaulin’s alleged involvement together.

On July 31st 2015, records provided by Apple show that the me.com account was used to purchase something on iTunes. The logs show that the same IP address was used on the same day to access the KAT Facebook page. After KAT began accepting Bitcoin donations in 2012, $72,767 was moved into a Coinbase account in Vaulin’s name. That Bitcoin wallet was registered with the same me.com email address.
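Both kinds of lookup described above are easy to reproduce. Here is a minimal sketch using Python’s standard library plus the ubiquitous whois command-line client; the IP address and domain are placeholders, not the ones from the investigation:

```python
import socket
import subprocess

# Reverse DNS: fetch the PTR record that maps an IP address back to a
# hostname. 203.0.113.7 is a documentation-range placeholder.
try:
    hostname, _aliases, _ips = socket.gethostbyaddr("203.0.113.7")
    print("reverse DNS:", hostname)
except socket.herror:
    print("no PTR record for that address")

# WHOIS: registration details for a domain (registrant name, address,
# email, phone), obtained by shelling out to the standard whois client.
record = subprocess.run(["whois", "example.org"],
                        capture_output=True, text=True).stdout
print(record)
```

The investigative work, of course, was not in running the lookups but in noticing which artifacts (name servers, a redirect domain, a me.com address) could be tied together.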

Another article.

Posted on July 26, 2016 at 6:42 AM • 34 Comments

The Economist on Hacking the Financial System

The Economist has an article on the potential hacking of the global financial system, either for profit or to cause mayhem. It’s reasonably balanced.

So how might such an attack unfold? Step one, several months before mayhem is unleashed, is to get into the system. Financial institutions have endless virtual doors that could be used to trespass, but one of the easiest to force is still the front door. By getting someone who works at an FMI or a partner company to click on a corrupt link through a “phishing” attack (an attempt to get hold of sensitive information by masquerading as someone trustworthy), or stealing their credentials when they use public Wi-Fi, hackers can impersonate them and install malware to watch over employees’ shoulders and see how the institution’s system functions. This happened in the Carbanak case: hackers installed a “RAT” (remote-access tool) to make videos of employees’ computers.

Step two is to study the system and set up booby traps. Once in, the gang quietly observes the quirks and defences of the system in order to plan the perfect attack from within; hackers have been known to sit like this for years. Provided they are not detected, they pick their places to plant spyware or malware that can be activated at the click of a button.

Step three is the launch. One day, preferably when there is already distracting market turmoil, they unleash a series of attacks on, say, multiple clearing houses.

The attackers might start with small changes, tweaking numbers in transactions as they are processed (Bank A gets credited $1,000, for example, but on the other side of the transaction Bank B is debited $0, or $900 or $100,000). As lots of erroneous payments travel the globe, and as it becomes clear that these are not just “glitches”, eventually the entire system would be deemed unreliable. Unsure how much money they have, banks could not settle their books when markets close. Settlement is a legally defined, binding moment. Regulators and central banks would become agitated if they could not see how solvent the nation’s banks were at the end of the financial day.

In many aspects of our society, as attackers become more powerful the potential for catastrophe increases. We need to ensure that the likelihood of catastrophe remains low.

Posted on July 25, 2016 at 6:10 AM • 34 Comments

Cyberweapons vs. Nuclear Weapons

Good essay pointing out the absurdity of comparing cyberweapons with nuclear weapons.

On the surface, the analogy is compelling. Like nuclear weapons, the most powerful cyberweapons—malware capable of permanently damaging critical infrastructure and other key assets of society—are potentially catastrophically destructive, have short delivery times across vast distances, and are nearly impossible to defend against. Moreover, only the most technically competent of states appear capable of wielding cyberweapons to strategic effect right now, creating the temporary illusion of an exclusive cyber club. To some leaders who matured during the nuclear age, these tempting similarities and the pressing nature of the strategic cyberthreat provide firm justification to use nuclear deterrence strategies in cyberspace. Indeed, Cold War-style cyberdeterrence is one of the foundational cornerstones of the 2015 U.S. Department of Defense Cyber Strategy.

However, dive a little deeper and the analogy becomes decidedly less convincing. At the present time, strategic cyberweapons simply do not share the three main deterrent characteristics of nuclear weapons: the sheer destructiveness of a single weapon, the assuredness of that destruction, and a broad debate over the use of such weapons.

Posted on July 22, 2016 at 11:08 AM • 28 Comments

Detecting Spoofed Messages Using Clock Skew

Two researchers are working on a system to detect spoofed messages sent to automobiles by fingerprinting the clock skew of the various computer components within the car, and then detecting when those skews are off. It’s a clever system, with applications outside of automobiles (and isn’t new).

To perform that fingerprinting, they use a weird characteristic of all computers: tiny timing errors known as “clock skew.” Taking advantage of the fact that those errors are different in every computer­—including every computer inside a car­—the researchers were able to assign a fingerprint to each ECU based on its specific clock skew. The CIDS’ device then uses those fingerprints to differentiate between the ECUs, and to spot when one ECU impersonates another, like when a hacker corrupts the vehicle’s radio system to spoof messages that are meant to come from a brake pedal or steering system.

Paper: “Fingerprinting Electronic Control Units for Vehicle Intrusion Detection,” by Kyong-Tak Cho and Kang G. Shin.

Abstract: As more software modules and external interfaces are getting added on vehicles, new attacks and vulnerabilities are emerging. Researchers have demonstrated how to compromise in-vehicle Electronic Control Units (ECUs) and control the vehicle maneuver. To counter these vulnerabilities, various types of defense mechanisms have been proposed, but they have not been able to meet the need of strong protection for safety-critical ECUs against in-vehicle network attacks. To mitigate this deficiency, we propose an anomaly-based intrusion detection system (IDS), called Clock-based IDS (CIDS). It measures and then exploits the intervals of periodic in-vehicle messages for fingerprinting ECUs. The thus-derived fingerprints are then used for constructing a baseline of ECUs’ clock behaviors with the Recursive Least Squares (RLS) algorithm. Based on this baseline, CIDS uses Cumulative Sum (CUSUM) to detect any abnormal shifts in the identification errors—a clear sign of intrusion. This allows quick identification of in-vehicle network intrusions with a low false-positive rate of 0.055%. Unlike state-of-the-art IDSs, if an attack is detected, CIDS’s fingerprinting of ECUs also facilitates a root-cause analysis; identifying which ECU mounted the attack. Our experiments on a CAN bus prototype and on real vehicles have shown CIDS to be able to detect a wide range of in-vehicle network attacks.
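The abstract’s two ingredients, estimating each ECU’s clock skew from message timing and watching for shifts with CUSUM, can be sketched in a few lines. This is a schematic illustration only, not the authors’ CIDS implementation; in particular, it uses a batch least-squares fit where the paper uses Recursive Least Squares:

```python
import numpy as np

def estimate_skew(arrivals, period):
    """Clock skew, estimated as the slope of the accumulated offset between
    observed arrival times of a periodic message and its ideal schedule."""
    arrivals = np.asarray(arrivals, dtype=float)
    ideal = np.arange(len(arrivals)) * period
    offset = (arrivals - arrivals[0]) - ideal
    slope, _intercept = np.polyfit(ideal, offset, 1)  # batch least squares
    return slope

def cusum_alarms(errors, drift=0.0, threshold=5.0):
    """One-sided CUSUM over identification errors: flag indices where the
    cumulative deviation crosses the threshold (a possible intrusion)."""
    s, alarms = 0.0, []
    for i, e in enumerate(errors):
        s = max(0.0, s + e - drift)
        if s > threshold:
            alarms.append(i)
            s = 0.0                                   # reset after alarming
    return alarms
```

An ECU whose fingerprinted skew suddenly stops matching the messages claiming to come from it is either broken or being impersonated.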

Posted on July 20, 2016 at 7:26 AM • 33 Comments

Stealing Money from ISPs Through Premium Rate Calls

I think the best hacks are the ones that are obvious once they’re explained, but no one has thought of them before. Here’s an example:

Instagram ($2000), Google ($0) and Microsoft ($500) were vulnerable to direct money theft via premium phone number calls. They all offer services to supply users with a token via a computer-voiced phone call, but neglected to properly verify whether supplied phone numbers were legitimate, non-premium numbers. This allowed a dedicated attacker to steal thousands of EUR/USD/GBP/… . Microsoft was exceptionally vulnerable to mass exploitation by supporting virtually unlimited concurrent calls to one premium number. The vulnerabilities were submitted to the respective Bug Bounty programs and properly resolved.
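The missing check is not hard. Here is a sketch using the open-source phonenumbers library (a Python port of Google’s libphonenumber); the exact policy of which number types to refuse is my assumption, not taken from the bug reports:

```python
import phonenumbers
from phonenumbers import PhoneNumberType

def safe_to_dial(raw_number, default_region="US"):
    """Refuse automated verification calls to caller-pays destinations."""
    try:
        num = phonenumbers.parse(raw_number, default_region)
    except phonenumbers.NumberParseException:
        return False
    if not phonenumbers.is_valid_number(num):
        return False
    # Premium-rate and shared-cost numbers bill the calling party;
    # never let a token-delivery robot dial one.
    return phonenumbers.number_type(num) not in (
        PhoneNumberType.PREMIUM_RATE,
        PhoneNumberType.SHARED_COST,
    )
```

Rate-limiting concurrent calls per number would have blunted the mass-exploitation angle as well.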

News articles. Slashdot threads.

Posted on July 19, 2016 at 6:21 AM • 13 Comments

Futuristic Cyberattack Scenario

This is a piece of near-future fiction about a cyberattack on New York, including hacking of cars, the water system, hospitals, elevators, and the power grid. Although it is definitely a movie-plot attack, all the individual pieces are plausible and will certainly happen individually and separately.

Worth reading—it’s probably the best example of this sort of thing to date.

Posted on July 18, 2016 at 6:27 AM • 38 Comments

Security Effectiveness of the Israeli West Bank Barrier

Interesting analysis:

Abstract: Objectives—Informed by situational crime prevention (SCP) this study evaluates the effectiveness of the “West Bank Barrier” that the Israeli government began to construct in 2002 in order to prevent suicide bombing attacks.

Methods—Drawing on crime wave models of past SCP research, the study uses a time series of terrorist attacks and fatalities and their location in respect to the Barrier, which was constructed in different sections over different periods of time, between 1999 and 2011.

Results—The Barrier together with associated security activities was effective in preventing suicide bombings and other attacks and fatalities with little if any apparent displacement. Changes in terrorist behavior likely resulted from the construction of the Barrier, not from other external factors or events.

Conclusions—In some locations, terrorists adapted to changed circumstances by committing more opportunistic attacks that require less planning. Fatalities and attacks were also reduced on the Palestinian side of the Barrier, producing an expected “diffusion of benefits” though the amount of reduction was considerably more than in past SCP studies. The defensive roles of the Barrier and offensive opportunities it presents are identified as possible explanations. The study highlights the importance of SCP in crime and counter-terrorism policy.

Unfortunately, the whole paper is behind a paywall.

Note: This is not a political analysis of the net positive and negative effects of the wall, just a security analysis. Of course any full analysis needs to take the geopolitics into account. The comment section is not the place for this broader discussion.

Posted on July 14, 2016 at 5:58 AM • 71 Comments

Visiting a Website against the Owner's Wishes Is Now a Federal Crime

While we’re on the subject of terrible 9th Circuit Court rulings:

The U.S. Court of Appeals for the 9th Circuit has handed down a very important decision on the Computer Fraud and Abuse Act…. Its reasoning appears to be very broad. If I’m reading it correctly, it says that if you tell people not to visit your website, and they do it anyway knowing you disapprove, they’re committing a federal crime of accessing your computer without authorization.

Posted on July 13, 2016 at 2:10 PM • 64 Comments

Password Sharing Is Now a Crime

In a truly terrible ruling, the US 9th Circuit Court ruled that using someone else’s password with their permission but without the permission of the site owner is a federal crime.

The argument McKeown made is that the employee who shared the password with Nosal “had no authority from Korn/Ferry to provide her password to former employees.”

At issue is language in the CFAA that makes it illegal to access a computer system “without authorization.” McKeown said that “without authorization” is “an unambiguous, non-technical term that, given its plain and ordinary meaning, means accessing a protected computer without permission.” The question that legal scholars, groups such as the Electronic Frontier Foundation, and dissenting judge Stephen Reinhardt ask is an important one: Authorization from who?

Reinhardt argues that Nosal’s use of the database was unauthorized by the firm, but was authorized by the former employee who shared it with him. For you and me, this case means that unless Netflix specifically authorizes you to share your password with your friend, you’re breaking federal law.

The EFF:

While the majority opinion said that the facts of this case “bear little resemblance” to the kind of password sharing that people often do, Judge Reinhardt’s dissent notes that it fails to provide an explanation of why that is. Using an analogy in which a woman uses her husband’s user credentials to access his bank account to pay bills, Judge Reinhardt noted: “So long as the wife knows that the bank does not give her permission to access its servers in any manner, she is in the same position as Nosal and his associates.” As a result, although the majority says otherwise, the court turned anyone who has ever used someone else’s password without the approval of the computer owner into a potential felon.

The Computer Fraud and Abuse Act has been a disaster for many reasons, this being one of them. There will be an appeal of this ruling.

Posted on July 13, 2016 at 11:07 AM • 54 Comments

Google's Post-Quantum Cryptography

News has been bubbling about an announcement by Google that it’s starting to experiment with public-key cryptography that’s resistant to cryptanalysis by a quantum computer. Specifically, it’s experimenting with the New Hope algorithm.

It’s certainly interesting that Google is thinking about this, and probably okay that it’s available in the Canary version of Chrome, but this algorithm is by no means ready for operational use. Secure public-key algorithms are very hard to create, and this one has not had nearly enough analysis to be trusted. Lattice-based public-key cryptosystems such as New Hope are particularly subtle—and we cryptographers are still learning a lot about how they can be broken.

Targets are important in cryptography, and Google has turned New Hope into a good one. Consider this an opportunity to advance our cryptographic knowledge, not an offer of a more-secure encryption option. And this is the right time for this area of research, before quantum computers make discrete-logarithm and factoring algorithms obsolete.

Posted on July 12, 2016 at 12:53 PM • 34 Comments

Report on the Vulnerabilities Equities Process

I have written before on the vulnerabilities equities process (VEP): the system by which the US government decides whether to disclose and fix a computer vulnerability or keep it secret and use it offensively. Ari Schwartz and Rob Knake, both former Directors for Cybersecurity Policy at the White House National Security Council, have written a report describing the process as we know it, with policy recommendations for improving it.

Basically, their recommendations are focused on improving the transparency, oversight, and accountability (three things I repeatedly recommend) of the process. In summary:

  • The President should issue an Executive Order mandating government-wide compliance with the VEP.
  • Make the general criteria used to decide whether or not to disclose a vulnerability public.
  • Clearly define the VEP.
  • Make sure any undisclosed vulnerabilities are reviewed periodically.
  • Ensure that the government has the right to disclose any vulnerabilities it purchases.
  • Transfer oversight of the VEP from the NSA to the DHS.
  • Issue an annual report on the VEP.
  • Expand Congressional oversight of the VEP.
  • Mandate oversight by other independent bodies inside the Executive Branch.
  • Expand funding for both offensive and defensive vulnerability research.

These all seem like good ideas to me. This is a complex issue, one I wrote about in Data and Goliath (pages 146-50), and one that’s only going to get more important in the Internet of Things.

News article.

Posted on July 11, 2016 at 12:15 PM • 25 Comments

Anonymization and the Law

Interesting paper: “Anonymization and Risk,” by Ira S. Rubinstein and Woodrow Hartzog:

Abstract: Perfect anonymization of data sets has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adopt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law is focused on whether an individual can be identified within a given set. We argue that the better locus of data release policy is on the process of minimizing the risk of reidentification and sensitive attribute disclosure. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been “anonymized.” It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk.

Posted on July 11, 2016 at 6:31 AM • 2 Comments

Researchers Discover Tor Nodes Designed to Spy on Hidden Services

Two researchers have discovered over 100 Tor nodes that are spying on hidden services. Cory Doctorow explains:

These nodes—ordinary nodes, not exit nodes—sorted through all the traffic that passed through them, looking for anything bound for a hidden service, which allowed them to discover hidden services that had not been advertised. These nodes then attacked the hidden services by making connections to them and trying common exploits against the server-software running on them, seeking to compromise and take them over.

The researchers used “honeypot” .onion servers to find the spying computers: these honeypots were .onion sites that the researchers set up in their own lab and then connected to repeatedly over the Tor network, thus seeding many Tor nodes with the information of the honions’ existence. They didn’t advertise the honions’ existence in any other way and there was nothing of interest at these sites, and so when the sites logged new connections, the researchers could infer that they were being contacted by a system that had spied on one of their Tor network circuits.

This attack was already understood as a theoretical problem for the Tor project, which had recently undertaken a rearchitecting of the hidden service system that would prevent it from taking place.

No one knows who is running the spying nodes: they could be run by criminals, governments, private suppliers of “infowar” weapons to governments, independent researchers, or other scholars (though scholarly research would not normally include attempts to hack the servers once they were discovered).

The Tor project is working on redesigning its system to block this attack.
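The detection logic is simple enough to sketch: because a honeypot onion address is never advertised, any connection at all is evidence that something along the circuit snooped it. A minimal stand-in below, assuming tor is separately configured (via HiddenServiceDir/HiddenServicePort) to forward the onion service’s port 80 to local port 5000:

```python
import datetime
import socket

# Listener behind an unadvertised .onion ("honion"). tor forwards the
# service's port 80 to this socket; since the address was never published,
# every connection logged here implies a snooping relay leaked it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 5000))
srv.listen(5)
print("honion listener running")
while True:
    conn, _addr = srv.accept()
    print(datetime.datetime.utcnow().isoformat() + "Z", "unexpected visit")
    conn.close()
```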

Vice Motherboard article. Defcon talk announcement.

Posted on July 8, 2016 at 7:01 AM • 22 Comments

Hijacking Someone's Facebook Account with a Fake Passport Copy

BBC has the story. The problem is that a scan of a passport is much easier to forge than an actual passport. This is a truly hard problem: how do you give people the ability to get back into their accounts after they’ve lost their credentials, while at the same time prohibiting hackers from using the same mechanism to hijack accounts? Demanding an easy-to-forge copy of a hard-to-forge document isn’t a good solution.

Posted on July 7, 2016 at 1:27 PM • 49 Comments

The Difficulty of Routing around Internet Surveillance States

Interesting research: “Characterizing and Avoiding Routing Detours Through Surveillance States,” by Anne Edmundson, Roya Ensafi, Nick Feamster, and Jennifer Rexford.

Abstract: An increasing number of countries are passing laws that facilitate the mass surveillance of Internet traffic. In response, governments and citizens are increasingly paying attention to the countries that their Internet traffic traverses. In some cases, countries are taking extreme steps, such as building new Internet Exchange Points (IXPs), which allow networks to interconnect directly, and encouraging local interconnection to keep local traffic local. We find that although many of these efforts are extensive, they are often futile, due to the inherent lack of hosting and route diversity for many popular sites. By measuring the country-level paths to popular domains, we characterize transnational routing detours. We find that traffic is traversing known surveillance states, even when the traffic originates and ends in a country that does not conduct mass surveillance. Then, we investigate how clients can use overlay network relays and the open DNS resolver infrastructure to prevent their traffic from traversing certain jurisdictions. We find that 84% of paths originating in Brazil traverse the United States, but when relays are used for country avoidance, only 37% of Brazilian paths traverse the United States. Using the open DNS resolver infrastructure allows Kenyan clients to avoid the United States on 17% more paths. Unfortunately, we find that some of the more prominent surveillance states (e.g., the U.S.) are also some of the least avoidable countries.
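The paper’s country-level path measurement can be crudely approximated at home: run a traceroute and geolocate each hop. A rough sketch, assuming the system traceroute tool and a local copy of MaxMind’s free GeoLite2-Country database; hop-by-hop geolocation is imprecise, which is why the authors use more careful techniques:

```python
import re
import subprocess

import geoip2.database
import geoip2.errors

# GeoLite2-Country.mmdb must be downloaded separately from MaxMind.
reader = geoip2.database.Reader("GeoLite2-Country.mmdb")
trace = subprocess.run(["traceroute", "-n", "example.com"],
                       capture_output=True, text=True).stdout

countries = []
for ip in re.findall(r"\b(?:\d{1,3}\.){3}\d{1,3}\b", trace):
    try:
        iso = reader.country(ip).country.iso_code
    except geoip2.errors.AddressNotFoundError:
        continue                      # private or unmapped hop
    if iso and (not countries or countries[-1] != iso):
        countries.append(iso)

print(" -> ".join(countries))         # the jurisdictions your packets visit
```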

Posted on July 7, 2016 at 6:47 AM • 21 Comments

Good Article on Airport Security

The New York Times wrote a good piece comparing airport security around the world, and pointing out that moving the security perimeter doesn’t make any difference if the attack can occur just outside the perimeter. Mark Stewart has the good quote:

“Perhaps the most cost-effective measure is policing and intelligence—to stop them before they reach the target,” Mr. Stewart said.

Sounds like something I would say.

Posted on July 6, 2016 at 9:45 AM • 45 Comments

Intellectual Property as National Security

Interesting research: Debora Halbert, “Intellectual property theft and national security: Agendas and assumptions“:

Abstract: About a decade ago, intellectual property started getting systematically treated as a national security threat to the United States. The scope of the threat is broadly conceived to include hacking, trade secret theft, file sharing, and even foreign students enrolling in American universities. In each case, the national security of the United States is claimed to be at risk, not just its economic competitiveness. This article traces the U.S. government’s efforts to establish and articulate intellectual property theft as a national security issue. It traces the discourse on intellectual property as a security threat and its place within the larger security dialogue of cyberwar and cybersecurity. It argues that the focus on the theft of intellectual property as a security issue helps justify enhanced surveillance and control over the Internet and its future development. Such a framing of intellectual property has consequences for how we understand information exchange on the Internet and for the future of U.S. diplomatic relations around the globe.

EDITED TO ADD (7/6): Preliminary version, no paywall.

Posted on July 5, 2016 at 10:54 AM • 37 Comments
