I understand his frustration, but this is extreme:
When police asked Cryptopay what could have motivated Salonen to send the company a pipe bomb, or rather two pipe bombs (which is what investigators found when they picked apart the explosive package), the only thing the company could think of was that it had declined his request for a password change.
A fair point, as it's never a good idea to send a new password in an email. A password-reset link is safer all round, although it's not clear if Cryptopay offered this option to Salonen.
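The reset-link flow is safer because the secret never travels in cleartext email: the emailed link carries a random, short-lived, single-use token, and the server stores only a hash of it. A minimal sketch of that flow, with hypothetical names and an in-memory store standing in for a real database:

```python
import hashlib
import secrets
import time

# Pending resets; a real app would use a database.
# Maps SHA-256(token) -> (user_id, expiry_timestamp).
pending_resets = {}

TOKEN_TTL_SECONDS = 15 * 60  # reset links should expire quickly

def create_reset_token(user_id: str) -> str:
    """Generate a single-use reset token; only its hash is stored."""
    token = secrets.token_urlsafe(32)          # unguessable random value
    digest = hashlib.sha256(token.encode()).hexdigest()
    pending_resets[digest] = (user_id, time.time() + TOKEN_TTL_SECONDS)
    return token  # embedded in the emailed link, e.g. /reset?token=...

def redeem_reset_token(token: str):
    """Return the user_id if the token is valid and unexpired, else None."""
    digest = hashlib.sha256(token.encode()).hexdigest()
    entry = pending_resets.pop(digest, None)   # single use: removed on lookup
    if entry is None:
        return None
    user_id, expiry = entry
    if time.time() > expiry:
        return None
    return user_id
```

Hashing the stored token means that even a leaked reset table can't be replayed, and popping the entry on lookup makes each link single-use.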
Both the US Drug Enforcement Administration (DEA) and Immigration and Customs Enforcement (ICE) are hiding surveillance cameras in streetlights.
According to government procurement data, the DEA has paid a Houston, Texas company called Cowboy Streetlight Concealments LLC roughly $22,000 since June 2018 for "video recording and reproducing equipment." ICE paid out about $28,000 to Cowboy Streetlight Concealments over the same period of time.
It's unclear where the DEA and ICE streetlight cameras have been installed, or where the next deployments will take place. ICE offices in Dallas, Houston, and San Antonio have provided funding for recent acquisitions from Cowboy Streetlight Concealments; the DEA's most recent purchases were funded by the agency's Office of Investigative Technology, which is located in Lorton, Virginia.
Fifty thousand dollars doesn't buy a lot of streetlight surveillance cameras, so either this is a pilot program or there are a lot more procurements elsewhere that we don't know about.
A new study finds that credit card fraud has not declined since the introduction of chip cards in the US. The majority of stolen card information comes from hacked point-of-sale terminals.
The reasons seem to be twofold. One, the US uses chip-and-signature instead of chip-and-PIN, obviating the most critical security benefit of the chip. And two, US merchants still accept magnetic stripe cards, meaning that thieves can steal credentials from a chip card and create a working cloned mag stripe card.
Boing Boing post.
Back in January, we learned about a class of vulnerabilities against microprocessors that leverages various performance and efficiency shortcuts for attack. I wrote that the first two attacks would be just the start:
It shouldn't be surprising that microprocessor designers have been building insecure hardware for 20 years. What's surprising is that it took 20 years to discover it. In their rush to make computers faster, they weren't thinking about security. They didn't have the expertise to find these vulnerabilities. And those who did were too busy finding normal software vulnerabilities to examine microprocessors. Security researchers are starting to look more closely at these systems, so expect to hear about more vulnerabilities along these lines.
Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown. There will be vulnerabilities that will allow attackers to manipulate or delete data across processes, potentially fatal in the computers controlling our cars or implanted medical devices. These will be similarly impossible to fix, and the only strategy will be to throw our devices away and buy new ones.
Researchers say they've discovered seven new CPU attacks while performing "a sound and extensible systematization of transient execution attacks" -- a catch-all term the research team used to describe attacks on the various internal mechanisms that a CPU uses to process data, such as the speculative execution process, the CPU's internal caches, and other internal execution stages.
The research team says they've successfully demonstrated all seven attacks with proof-of-concept code. Experiments to confirm six other Meltdown attacks did not succeed, according to a graph published by the researchers.
Microprocessor designers have spent the year rethinking the security of their architectures. My guess is that they have a lot more rethinking to do.
This is a current list of where and when I am scheduled to speak:
- I'm speaking at Kiwicon in Wellington, New Zealand on November 16, 2018.
- I'm appearing on IBM Resilient's End of Year Review webinar on "The Top Cyber Security Trends in 2018 and Predictions for the Year Ahead," December 6, 2018 at 12:00 PM EST.
- I'm giving a talk on "Securing a World of Physically Capable Computers" at MIT on December 6, 2018.
- I'm speaking at The Digital Society Conference 2018: Empowering Ecosystems on December 11, 2018.
- I'm speaking at the University of Basel in Basel, Switzerland on December 12, 2018.
- I'm speaking at the Hyperledger Forum in Basel, Switzerland on December 13, 2018.
- I'm speaking at the OECD Global Forum on Digital Security for Prosperity in Paris, France on December 14, 2018.
The list is maintained on this page.
I've been writing about "responsible disclosure" for over a decade; here's an essay from 2007. Basically, it's a tacit agreement between researchers and software vendors. Researchers agree to withhold their work until software companies fix the vulnerabilities, and software vendors agree not to harass researchers and to fix the vulnerabilities quickly.
When that agreement breaks down, things go bad quickly. This story is about a researcher who published an Oracle zero-day because Oracle has a history of harassing researchers and ignoring vulnerabilities.
Software vendors might not like responsible disclosure, but it's the best solution we have. Making it illegal to publish vulnerabilities without the vendor's consent means that they won't get fixed quickly -- and everyone will be less secure. It also means less security research.
This will become even more critical with software that affects the world in a direct physical manner, like cars and airplanes. Responsible disclosure makes us safer, but it only works if software vendors take the vulnerabilities seriously and fix them quickly. Without any regulations that enforce that, the threat of disclosure is the only incentive we can impose on software vendors.
Due to ever-evolving technological advances, manufacturers are connecting consumer goods -- from toys to light bulbs to major appliances -- to the Internet at breakneck speeds. This is the Internet of Things, and it's a security nightmare.
The Internet of Things fuses products with communications technology to make daily life more effortless. Think Amazon's Alexa, which not only answers questions and plays music but allows you to control your home's lights and thermostat. Or the current generation of implanted pacemakers, which can both receive commands and send information to doctors over the Internet.
But like nearly all innovation, there are risks involved. And for products born out of the Internet of Things, this means the risk of having personal information stolen or devices being overtaken and controlled remotely. For devices that affect the world in a direct physical manner -- cars, pacemakers, thermostats -- the risks include loss of life and property.
Developing more advanced security features and building them into these products would prevent hacks. The problem is that there is no monetary incentive for companies to invest in the cybersecurity measures needed to keep their products secure. Consumers will buy products without proper security features, unaware that their information is vulnerable. And current liability laws make it hard to hold companies accountable for shoddy software security.
It falls upon lawmakers to create laws that protect consumers. While the US government is largely absent in this area of consumer protection, the state of California has recently stepped in and started regulating Internet of Things, or "IoT," devices sold in the state -- and the effects will soon be felt worldwide.
California's new SB 327 law, which will take effect in January 2020, requires all "connected devices" to have a "reasonable security feature." The good news is that the term "connected devices" is broadly defined to include just about everything connected to the Internet. The not-so-good news is that "reasonable security" is defined so vaguely that companies trying to avoid compliance can argue the law is unenforceable.
The legislation requires that security features must be able to protect the device and the information on it from a variety of threats and be appropriate to both the nature of the device and the information it collects. California's attorney general will interpret the law and define the specifics, which will surely be the subject of much lobbying by tech companies.
There's just one specific in the law that's not subject to the attorney general's interpretation: default passwords are not allowed. This is a good thing; they are a terrible security practice. But it's just one of dozens of awful "security" measures commonly found in IoT devices.
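The usual compliant alternative to a shared default password is a credential that is unique to each unit, for example derived at the factory from the device's serial number and a manufacturer secret, then printed on the device label. A hedged sketch of that idea (the function name, secret, and label format are all hypothetical):

```python
import hashlib
import hmac

def per_device_password(factory_secret: bytes, serial_number: str) -> str:
    """Derive a unique, printable default password for one device.

    HMAC ties the password to the device's serial number, so no two
    units ship with the same credential, and the factory needs no
    per-device lookup table -- just the secret key.
    """
    digest = hmac.new(factory_secret, serial_number.encode(),
                      hashlib.sha256).digest()
    # Encode ~60 bits of the digest as a readable label, e.g. "k3v9-x2m1-q8r4".
    alphabet = "abcdefghjkmnpqrstuvwxyz23456789"  # avoids ambiguous chars
    n = int.from_bytes(digest[:8], "big")
    chars = []
    for _ in range(12):
        n, r = divmod(n, len(alphabet))
        chars.append(alphabet[r])
    return "-".join("".join(chars[i:i + 4]) for i in range(0, 12, 4))
```

The other route SB 327 allows is forcing the user to choose a new password before the device is first used; either way, the shared factory default disappears.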
This law is not a panacea. But we have to start somewhere, and it is a start.
Though the legislation covers only the state of California, its effects will reach much further. All of us -- in the United States or elsewhere -- are likely to benefit because of the way software is written and sold.
Automobile manufacturers sell their cars worldwide, but they are customized for local markets. The car you buy in the United States is different from the same model sold in Mexico, because the local environmental laws are not the same and manufacturers optimize engines based on where the product will be sold. The economics of building and selling automobiles easily allows for this differentiation.
But software is different. Once California forces minimum security standards on IoT devices, manufacturers will have to rewrite their software to comply. At that point, it won't make sense to have two versions: one for California and another for everywhere else. It's much easier to maintain the single, more secure version and sell it everywhere.
The European General Data Protection Regulation (GDPR), which implemented the annoying warnings and agreements that pop up on websites, is another example of a law that extends well beyond physical borders. You might have noticed an increase in websites that force you to acknowledge you've read and agreed to the website's privacy policies. This is because it is tricky to differentiate between users who are subject to the protections of the GDPR -- people physically in the European Union, and EU citizens wherever they are -- and those who are not. It's easier to extend the protection to everyone.
Once this kind of sorting is possible, companies will, in all likelihood, return to their profitable surveillance capitalism practices on those who are still fair game. Surveillance is still the primary business model of the Internet, and companies want to spy on us and our activities as much as they can so they can sell us more things and monetize what they know about our behavior.
Insecurity is profitable only if you can get away with it worldwide. Once you can't, you might as well make a virtue out of necessity. So everyone will benefit from the California regulation, as they would from similar security regulations enacted in any market around the world large enough to matter, just like everyone will benefit from the portion of GDPR compliance that involves data security.
Most importantly, laws like these spur innovations in cybersecurity. Right now, we have a market failure. Because the courts have traditionally not held software manufacturers liable for vulnerabilities, and because consumers don't have the expertise to differentiate between a secure product and an insecure one, manufacturers have prioritized low prices, quick time to market, and additional features over security.
But once a government steps in and imposes more stringent security regulations, companies have an incentive to meet those standards as quickly, cheaply, and effectively as possible. This means more security innovation, because now there's a market for new ideas and new products. We've seen this pattern again and again in safety and security engineering, and we'll see it with the Internet of Things as well.
IoT devices are more dangerous than our traditional computers because they sense the world around us, and affect that world in a direct physical manner. Increasing the cybersecurity of these devices is paramount, and it's heartening to see both individual states and the European Union step in where the US federal government is abdicating responsibility. But we need more, and soon.
This essay previously appeared on CNN.com.
Pretty good video.
As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.
Read my blog posting guidelines here.
Photo of Bruce Schneier by Per Ervland.
Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.