More Spectre/Meltdown-Like Attacks

Back in January, we learned about a class of microprocessor vulnerabilities that turn various performance and efficiency shortcuts into attack vectors. I wrote that the first two attacks would be just the start:

It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.

We saw several variants over the year. And now researchers have discovered seven more.

Researchers say they've discovered the seven new CPU attacks while performing "a sound and extensible systematization of transient execution attacks" -- a catch-all term the research team used to describe attacks on the various internal mechanisms that a CPU uses to process data, such as the speculative execution process, the CPU's internal caches, and other internal execution stages.

The research team says they've successfully demonstrated all seven attacks with proof-of-concept code. Experiments to confirm six other Meltdown attacks did not succeed, according to a graph published by the researchers.
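
For a sense of what these attacks exploit, here's the textbook Spectre variant 1 gadget in C -- a sketch of the general bounds-check-bypass pattern, not code from the new paper; all names and sizes are illustrative:

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    size_t array1_size = 16;
    uint8_t array2[256 * 512];  /* probe array: one cache line per byte value */

    void victim(size_t x) {
        if (x < array1_size) {
            /* If the branch predictor guesses "taken" for an out-of-bounds x,
               the CPU speculatively reads array1[x] -- potentially a secret
               byte elsewhere in memory -- and uses it to index array2. The
               bad load is rolled back architecturally, but the cache line it
               touched stays warm; timing accesses to array2 recovers the byte. */
            volatile uint8_t tmp = array2[array1[x] * 512];
            (void)tmp;
        }
    }

The secret leaks through the cache rather than through any architecturally visible value, which is why these bugs are so hard to patch in software.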

Microprocessor designers have spent the year rethinking the security of their architectures. My guess is that they have a lot more rethinking to do.

Posted on November 14, 2018 at 3:30 PM

Upcoming Speaking Engagements

This is a current list of where and when I am scheduled to speak:


The list is maintained on this page.

Posted on November 14, 2018 at 8:03 AM

Oracle and "Responsible Disclosure"

I've been writing about "responsible disclosure" for over a decade; here's an essay from 2007. Basically, it's a tacit agreement between researchers and software vendors. Researchers agree to withhold their work until software companies fix the vulnerabilities, and software vendors agree not to harass researchers and to fix the vulnerabilities quickly.

When that agreement breaks down, things go bad quickly. This story is about a researcher who published an Oracle zero-day because Oracle has a history of harassing researchers and ignoring vulnerabilities.

Software vendors might not like responsible disclosure, but it's the best solution we have. Making it illegal to publish vulnerabilities without the vendor's consent means that they won't get fixed quickly -- and everyone will be less secure. It also means less security research.

This will become even more critical with software that affects the world in a direct physical manner, like cars and airplanes. Responsible disclosure makes us safer, but it only works if software vendors take the vulnerabilities seriously and fix them quickly. Without any regulations that enforce that, the threat of disclosure is the only incentive we can impose on software vendors.

Posted on November 14, 2018 at 6:46 AM

New IoT Security Regulations

Due to ever-evolving technological advances, manufacturers are connecting consumer goods -- from toys to light bulbs to major appliances -- to the Internet at breakneck speeds. This is the Internet of Things, and it's a security nightmare.

The Internet of Things fuses products with communications technology to make daily life easier. Think Amazon's Alexa, which not only answers questions and plays music but allows you to control your home's lights and thermostat. Or the current generation of implanted pacemakers, which can both receive commands and send information to doctors over the Internet.

But as with nearly all innovation, there are risks involved. And for products born out of the Internet of Things, this means the risk of having personal information stolen or devices being overtaken and controlled remotely. For devices that affect the world in a direct physical manner -- cars, pacemakers, thermostats -- the risks include loss of life and property.

Developing more advanced security features and building them into these products would prevent many of these hacks. The problem is that there is no monetary incentive for companies to invest in the cybersecurity measures needed to keep their products secure. Consumers will buy products without proper security features, unaware that their information is vulnerable. And current liability laws make it hard to hold companies accountable for shoddy software security.

It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating the Internet of Things, or "IoT" devices sold in the state -- and the effects will soon be felt worldwide.

California's new SB 327 law, which will take effect in January 2020, requires all "connected devices" to have a "reasonable security feature." The good news is that the term "connected devices" is broadly defined to include just about everything connected to the Internet. The not-so-good news is that "reasonable security" is defined so vaguely that companies trying to avoid compliance can argue that the law is unenforceable.

The legislation requires that security features be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. California's attorney general will interpret the law and define the specifics, which will surely be the subject of much lobbying by tech companies.

There's just one specific in the law that's not subject to the attorney general's interpretation: default passwords are not allowed. This is a good thing; they are a terrible security practice. But it's just one of dozens of awful "security" measures commonly found in IoT devices.
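
Banning shared default passwords is also cheap to comply with. Here's a minimal sketch of what compliance could look like at the factory -- generating a unique random password per device. The alphabet, length, and entropy source are my illustrative choices, not anything the law specifies:

    #include <stdio.h>
    #include <stddef.h>

    #define PW_LEN 16

    int main(void) {
        /* Visually ambiguous characters (0, O, 1, l, I, etc.) omitted. */
        static const char alphabet[] =
            "ABCDEFGHJKMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789";
        unsigned char raw[PW_LEN];
        char password[PW_LEN + 1];

        FILE *urandom = fopen("/dev/urandom", "rb");
        if (!urandom)
            return 1;
        if (fread(raw, 1, sizeof raw, urandom) != sizeof raw) {
            fclose(urandom);
            return 1;
        }
        fclose(urandom);

        /* Map random bytes onto the alphabet. The slight modulo bias is
           fine for a sketch; production code should reject-sample. */
        for (size_t i = 0; i < PW_LEN; i++)
            password[i] = alphabet[raw[i] % (sizeof alphabet - 1)];
        password[PW_LEN] = '\0';

        /* On a real provisioning line, this string would be stored on the
           device and printed on its label -- never reused across units. */
        printf("%s\n", password);
        return 0;
    }

The hard part isn't the code; it's making per-unit provisioning a step in the manufacturing process.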

This law is not a panacea. But we have to start somewhere, and it is a start.

Though the legislation covers only the state of California, its effects will reach much further. All of us -- in the United States or elsewhere -- are likely to benefit because of the way software is written and sold.

Automobile manufacturers sell their cars worldwide, but they are customized for local markets. The car you buy in the United States is different from the same model sold in Mexico, because the local environmental laws are not the same and manufacturers optimize engines based on where the product will be sold. The economics of building and selling automobiles easily allows for this differentiation.

But software is different. Once California forces minimum security standards on IoT devices, manufacturers will have to rewrite their software to comply. At that point, it won't make sense to have two versions: one for California and another for everywhere else. It's much easier to maintain the single, more secure version and sell it everywhere.

The European General Data Protection Regulation (GDPR), which implemented the annoying warnings and agreements that pop up on websites, is another example of a law that extends well beyond physical borders. You might have noticed an increase in websites that force you to acknowledge you've read and agreed to the website's privacy policies. This is because it is tricky to differentiate between users who are subject to the protections of the GDPR -- people physically in the European Union, and EU citizens wherever they are -- and those who are not. It's easier to extend the protection to everyone.

Once this kind of sorting is possible, companies will, in all likelihood, return to their profitable surveillance capitalism practices on those who are still fair game. Surveillance is still the primary business model of the Internet, and companies want to spy on us and our activities as much as they can so they can sell us more things and monetize what they know about our behavior.

Insecurity is profitable only if you can get away with it worldwide. Once you can't, you might as well make a virtue of necessity. So everyone will benefit from the California regulation, as they would from similar security regulations enacted in any market around the world large enough to matter, just as everyone will benefit from the portion of GDPR compliance that involves data security.

Most importantly, laws like these spur innovations in cybersecurity. Right now, we have a market failure. Because the courts have traditionally not held software manufacturers liable for vulnerabilities, and because consumers don't have the expertise to differentiate between a secure product and an insecure one, manufacturers have prioritized low prices, getting devices out on the market quickly, and additional features over security.

But once a government steps in and imposes more stringent security regulations, companies have an incentive to meet those standards as quickly, cheaply, and effectively as possible. This means more security innovation, because now there's a market for new ideas and new products. We've seen this pattern again and again in safety and security engineering, and we'll see it with the Internet of Things as well.

IoT devices are more dangerous than our traditional computers because they sense the world around us, and affect that world in a direct physical manner. Increasing the cybersecurity of these devices is paramount, and it's heartening to see both individual states and the European Union step in where the US federal government is abdicating responsibility. But we need more, and soon.

This essay previously appeared on CNN.com.

Posted on November 13, 2018 at 7:04 AM

Hiding Secret Messages in Fingerprints

This is a fun steganographic application: hiding a message in a fingerprint image.

Can't see any real use for it, but that's okay.
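
The generic version of the trick -- least-significant-bit embedding, sketched below in C for a raw 8-bit grayscale pixel buffer -- is simple; the paper's fingerprint-specific construction is presumably more subtle:

    #include <stddef.h>
    #include <stdint.h>

    /* One message bit per pixel, MSB of each message byte first.
       The caller must ensure n_pixels >= 8 * msg_len. */

    void lsb_embed(uint8_t *pixels, size_t n_pixels,
                   const uint8_t *msg, size_t msg_len) {
        for (size_t i = 0; i < msg_len * 8 && i < n_pixels; i++) {
            uint8_t bit = (msg[i / 8] >> (7 - i % 8)) & 1;
            pixels[i] = (uint8_t)((pixels[i] & 0xFE) | bit);  /* rewrite low bit */
        }
    }

    void lsb_extract(const uint8_t *pixels, size_t n_pixels,
                     uint8_t *msg, size_t msg_len) {
        for (size_t i = 0; i < msg_len * 8 && i < n_pixels; i++) {
            if (i % 8 == 0)
                msg[i / 8] = 0;
            msg[i / 8] |= (uint8_t)((pixels[i] & 1) << (7 - i % 8));
        }
    }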

Posted on November 12, 2018 at 6:17 AM

Friday Squid Blogging: Australian Fisherman Gets Inked

Pretty good video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Posted on November 9, 2018 at 4:07 PM

The Pentagon Is Publishing Foreign Nation-State Malware

This is a new thing:

The Pentagon has suddenly started uploading malware samples from APTs and other nation-state sources to the website VirusTotal, which is essentially a malware zoo that's used by security pros and antivirus/malware detection engines to gain a better understanding of the threat landscape.

This feels like an example of the US's new strategy of actively harassing foreign government actors. By making their malware public, the US is forcing them to continually find and use new vulnerabilities.

EDITED TO ADD (11/13): This is another good article. And here is some background on the malware.

Posted on November 9, 2018 at 1:52 PM

Privacy and Security of Data at Universities

Interesting paper: "Open Data, Grey Data, and Stewardship: Universities at the Privacy Frontier," by Christine Borgman:

Abstract: As universities recognize the inherent value in the data they collect and hold, they encounter unforeseen challenges in stewarding those data in ways that balance accountability, transparency, and protection of privacy, academic freedom, and intellectual property. Two parallel developments in academic data collection are converging: (1) open access requirements, whereby researchers must provide access to their data as a condition of obtaining grant funding or publishing results in journals; and (2) the vast accumulation of "grey data" about individuals in their daily activities of research, teaching, learning, services, and administration. The boundaries between research and grey data are blurring, making it more difficult to assess the risks and responsibilities associated with any data collection. Many sets of data, both research and grey, fall outside privacy regulations such as HIPAA, FERPA, and PII. Universities are exploiting these data for research, learning analytics, faculty evaluation, strategic decisions, and other sensitive matters. Commercial entities are besieging universities with requests for access to data or for partnerships to mine them. The privacy frontier facing research universities spans open access practices, uses and misuses of data, public records requests, cyber risk, and curating data for privacy protection. This Article explores the competing values inherent in data stewardship and makes recommendations for practice by drawing on the pioneering work of the University of California in privacy and information security, data governance, and cyber risk.

Posted on November 9, 2018 at 6:04 AM

iOS 12.1 Vulnerability

This is really just to point out that computer security is hard:

Almost as soon as Apple released iOS 12.1 on Tuesday, a Spanish security researcher discovered a bug that exploits group Facetime calls to give anyone access to an iPhone user's contact information with no need for a passcode.

[...]

A bad actor would need physical access to the phone that they are targeting and would have a few options for viewing the victim's contact information. They would need to either call the phone from another iPhone or have the phone call itself. Once the call connects, they would need to:

  • Select the Facetime icon
  • Select "Add Person"
  • Select the plus icon
  • Scroll through the contacts and use 3D Touch on a name to view all contact information that's stored.

Making the phone call itself without entering a passcode can be accomplished by either telling Siri the phone number or, if they don't know the number, saying "call my phone." We tested this with both the owner's voice and a stranger's voice; in both cases, Siri initiated the call.

Posted on November 8, 2018 at 6:35 AM

Consumer Reports Reviews Wireless Home-Security Cameras

Consumer Reports is starting to evaluate the security of IoT devices. As part of that, it's reviewing wireless home-security cameras.

It found significant security vulnerabilities in D-Link cameras:

In contrast, D-Link doesn't store video from the DCS-2630L in the cloud. Instead, the camera has its own onboard web server, which can deliver video to the user in different ways.

Users can view the video using an app, mydlink Lite. The video is encrypted as it travels from the camera through D-Link's corporate servers and ultimately to the user's phone. Users can also access the same encrypted video feed through a company web page, mydlink.com. Those are both secure methods of accessing the video.

But the D-Link camera also lets you bypass the D-Link corporate servers and access the video directly through a web browser on a laptop or other device. If you do this, the web server on the camera doesn't encrypt the video.

If you set up this kind of remote access, the camera and its unencrypted video are open to the web. They could be discovered by anyone who finds or guesses the camera's IP address -- and if you haven't set a strong password, a hacker might find it easy to gain access.
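
"Finds or guesses the camera's IP address" is a low bar. Here's a sketch in C of the kind of trivial probe anyone can run to see whether an address answers unencrypted HTTP; the port and request are illustrative, and a real camera may use others:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <netdb.h>
    #include <sys/socket.h>

    /* Returns 1 if the host answers a plaintext HTTP request on port 80. */
    static int speaks_plain_http(const char *host) {
        struct addrinfo hints = {0}, *res;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo(host, "80", &hints, &res) != 0)
            return 0;

        int ok = 0;
        int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd >= 0 && connect(fd, res->ai_addr, res->ai_addrlen) == 0) {
            static const char req[] = "HEAD / HTTP/1.0\r\n\r\n";
            char buf[16] = {0};
            if (write(fd, req, sizeof req - 1) > 0 &&
                read(fd, buf, sizeof buf - 1) > 0)
                ok = (strncmp(buf, "HTTP/", 5) == 0);  /* unencrypted reply */
        }
        if (fd >= 0)
            close(fd);
        freeaddrinfo(res);
        return ok;
    }

    int main(int argc, char **argv) {
        if (argc == 2 && speaks_plain_http(argv[1]))
            printf("%s answers plaintext HTTP\n", argv[1]);
        return 0;
    }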

The real news is that Consumer Reports is able to put pressure on device manufacturers:

In response to a Consumer Reports query, D-Link said that security would be tightened through updates this fall. Consumer Reports will evaluate those updates once they are available.

This is the sort of sustained pressure we need on IoT device manufacturers.

Boing Boing link.

EDITED TO ADD (11/13): In related news, the US Federal Trade Commission is suing D-Link because their routers are so insecure. The lawsuit was filed in January 2017.

Posted on November 7, 2018 at 6:39 AM
