Blog: January 2017 Archives

IoT Ransomware against Austrian Hotel

Attackers held an Austrian hotel network for ransom, demanding $1,800 in bitcoin to unlock the network. Among other things, the locked network wouldn't allow any of the guests to open their hotel room doors.

I expect IoT ransomware to become a major area of crime in the next few years. How long before we see this tactic used against cars? Against home thermostats? Within the year is my guess. And as long as the ransom price isn't too onerous, people will pay.

EDITED TO ADD: There seems to be a lot of confusion about exactly what the ransomware did. Early reports said that hotel guests were locked inside their rooms, which is of course ridiculous. Now some reports are saying that no one was locked out of their rooms.

EDITED TO ADD (2/13): More information.

Posted on January 31, 2017 at 8:49 AM • 34 Comments

New Rules on Data Privacy for Non-US Citizens

Last week, President Trump signed an executive order affecting the privacy rights of non-US citizens with respect to data residing in the US.

Here's the relevant text:

Privacy Act. Agencies shall, to the extent consistent with applicable law, ensure that their privacy policies exclude persons who are not United States citizens or lawful permanent residents from the protections of the Privacy Act regarding personally identifiable information.

At issue is the EU-US Privacy Shield, which is the voluntary agreement among the US government, US companies, and the EU that makes it possible for US companies to store Europeans' data without having to follow all EU privacy requirements.

Interpretations of what this means are all over the place: from extremely serious, to more measured, to don't worry and we still have PPD-28.

This is clearly still in flux. And, like pretty much everything so far in the Trump administration, we have no idea where this is headed.

Posted on January 30, 2017 at 6:04 AM • 85 Comments

Friday Squid Blogging: Squid Fossils from the Early Jurassic

New fossil bed discovered in Alberta:

The finds at the site include 16 vampyropods -- relatives of the vampire squid -- with ink sacs and fine details of their muscles preserved in exquisite detail.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on January 27, 2017 at 4:37 PM • 212 Comments

Research into Twitter Bots

There are a lot of them.

In a world where the number of fans, friends, followers, and likers is social currency -- and where the number of reposts is a measure of popularity -- this kind of gaming of the system is inevitable.

EDITED TO ADD (2/13): Here's the original paper.

Posted on January 27, 2017 at 6:18 AM • 16 Comments

Duress Codes for Fingerprint Access Control

Mike Specter has an interesting idea for making biometric access-control systems more secure: add a duress code. For example, you might configure your iPhone so that either thumb or forefinger unlocks the device, but your left middle finger disables the fingerprint mechanism and your right middle finger permanently wipes the phone. The former is useful in the US, where being compelled to divulge your password is a 5th Amendment violation but being forced to place your finger on the fingerprint reader is not; the latter is useful in other countries, where coercion techniques are much more severe.
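The core of the idea is a dispatch table keyed on which enrolled finger matched, not just whether any finger matched. Here's a minimal sketch of that logic; the finger names and action strings are illustrative, not any vendor's API:

```python
# Hypothetical sketch of a duress-code scheme for fingerprint access
# control: each enrolled finger maps to an action, not just "unlock".

ACTIONS = {
    "right_thumb":  "unlock",
    "right_index":  "unlock",
    "left_middle":  "disable_biometrics",  # fall back to passcode only
    "right_middle": "wipe_device",         # irreversible duress action
}

def handle_fingerprint(matched_finger):
    """Dispatch on which enrolled finger matched the sensor;
    unrecognized fingers are simply rejected."""
    return ACTIONS.get(matched_finger, "reject")
```

The point of the design is that an adversary watching you place a finger on the reader can't tell an unlock gesture from a duress gesture.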

Posted on January 26, 2017 at 2:03 PM • 65 Comments

Security Risks of the President's Android Phone

Reports are that President Trump is still using his old Android phone. There are security risks here, but they are not the obvious ones.

I'm not concerned about the data. Anything he reads on that screen is coming from the insecure network that we all use, and any e-mails, texts, Tweets, and whatever are going out to that same network. But this is a consumer device, and it's going to have security vulnerabilities. He's at risk from everybody, ranging from lone hackers to the better-funded intelligence agencies of the world. And while the risk of a forged e-mail is real -- it could easily move the stock market -- the bigger risk is eavesdropping. That Android has a microphone, which means that it can be turned into a room bug without anyone's knowledge. That's my real fear.

I commented in this story.

EDITED TO ADD (1/27): Nicholas Weaver comments.

Posted on January 26, 2017 at 7:06 AM • 183 Comments

Capturing Pattern-Lock Authentication

Interesting research -- "Cracking Android Pattern Lock in Five Attempts":

Abstract: Pattern lock is widely used as a mechanism for authentication and authorization on Android devices. In this paper, we demonstrate a novel video-based attack to reconstruct Android lock patterns from video footage filmed using a mobile phone camera. Unlike prior attacks on pattern lock, our approach does not require the video to capture any content displayed on the screen. Instead, we employ a computer vision algorithm to track the fingertip movements to infer the pattern. Using the geometry information extracted from the tracked fingertip motions, our approach is able to accurately identify a small number of (often one) candidate patterns to be tested by an adversary. We thoroughly evaluated our approach using 120 unique patterns collected from 215 independent users, by applying it to reconstruct patterns from video footage filmed using smartphone cameras. Experimental results show that our approach can break over 95% of the patterns in five attempts before the device is automatically locked by the Android system. We discovered that, in contrast to many people's belief, complex patterns do not offer stronger protection under our attacking scenarios. This is demonstrated by the fact that we are able to break all but one of the complex patterns (with a 97.5% success rate) as opposed to 60% of the simple patterns in the first attempt. Since our threat model is common in day-to-day lives, our work calls for the community to revisit the risks of using Android pattern lock to protect sensitive information.
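Some context for the five-attempt figure: the full 3x3 pattern keyspace is small to begin with. This quick enumeration is my own sketch (not code from the paper) of Android's rule that a stroke may not jump over an unvisited dot:

```python
# Count valid Android 3x3 pattern-lock sequences (lengths 4 through 9).
# Rule: a stroke from dot a to dot b may pass over the dot between them
# only if that dot has already been visited.

def count_patterns():
    def mid(a, b):
        # Return the grid dot lying halfway between a and b, if any.
        ra, ca = divmod(a, 3)
        rb, cb = divmod(b, 3)
        if (ra + rb) % 2 == 0 and (ca + cb) % 2 == 0:
            m = ((ra + rb) // 2) * 3 + (ca + cb) // 2
            if m not in (a, b):
                return m
        return None

    def dfs(current, visited, length):
        total = 1 if length >= 4 else 0  # patterns need >= 4 dots
        for nxt in range(9):
            if nxt in visited:
                continue
            m = mid(current, nxt)
            if m is not None and m not in visited:
                continue  # would jump over an unvisited dot
            total += dfs(nxt, visited | {nxt}, length + 1)
        return total

    return sum(dfs(start, {start}, 1) for start in range(9))

print(count_patterns())  # 389112
```

Fewer than 400,000 possibilities total -- so an attack that narrows the candidates to a handful has effectively reduced the keyspace to nothing.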

News article.

Posted on January 25, 2017 at 6:18 AM • 11 Comments

How the Media Influences Our Fear of Terrorism

Good article that crunches the data and shows that the press's coverage of terrorism is disproportional to its comparative risk.

This isn't new. I've written about it before, and wrote about it more generally when I wrote about the psychology of risk, fear, and security. Basically, the issue is the availability heuristic: we tend to infer the probability of something by how easily we can bring examples of it to mind. So if we can think of a lot of lion attacks in our community, we infer that the risk is high. If we can't think of many lion attacks, we infer that the risk is low. While this is a perfectly reasonable heuristic when living in small family groups in the East African highlands in 100,000 BC, it fails in the face of modern media. The media makes the rare seem more common by spending a lot of time talking about it. It's not the media's fault. By definition, news is "something that hardly ever happens." But when the coverage of terrorist deaths exceeds the coverage of homicides, we have a tendency to mistakenly inflate the risk of the former while discounting the risk of the latter.

Our brains aren't very good at probability and risk analysis. We tend to exaggerate spectacular, strange and rare events, and downplay ordinary, familiar and common ones. We think rare risks are more common than they are. We fear them more than probability indicates we should.

There is a lot of psychological research that tries to explain this, but one of the key findings is this: People tend to base risk analysis more on stories than on data. Stories engage us at a much more visceral level, especially stories that are vivid, exciting or personally involving.

If a friend tells you about getting mugged in a foreign country, that story is more likely to affect how safe you feel traveling to that country than reading a page of abstract crime statistics will.

Novelty plus dread plus a good story equals overreaction.

It's not just murders. It's flying vs. driving: the former is much safer, but accidents are so much more spectacular when they occur.

Posted on January 24, 2017 at 6:31 AM • 138 Comments

Obama's Legacy in Internet Security

NextGov has a nice article summarizing President Obama's accomplishments in Internet security: what he did, what he didn't do, and how it turned out.

Posted on January 23, 2017 at 6:55 AM • 33 Comments

Friday Squid Blogging: Know Your Cephalopods

This graphic shows the important difference between arms and tentacles.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on January 20, 2017 at 4:19 PM • 171 Comments

New White House Privacy Report

Two days ago, the White House released a report on privacy: "Privacy in our Digital Lives: Protecting Individuals and Promoting Innovation." The report summarizes things the administration has done, and lists future challenges:

Areas for Further Attention

  1. Technology will pose new consumer privacy and security challenges.
  2. Emerging technology may simultaneously create new challenges and opportunities for law enforcement and national security.
  3. The digital economy is making privacy a global value.
  4. Consumers' voices are being heard -- and must continue to be heard -- in the regulatory process.
  5. The Federal Government benefits from hiring more privacy professionals.
  6. Transparency is vital for earning and retaining public trust.
  7. Privacy is a bipartisan issue.

I especially like the framing of privacy as a right. From President Obama's introduction:

Privacy is more than just, as Justice Brandeis famously proclaimed, the "right to be let alone." It is the right to have our most personal information be kept safe by others we trust. It is the right to communicate freely and to do so without fear. It is the right to associate freely with others, regardless of the medium. In an age where so many of our thoughts, words, and movements are digitally recorded, privacy cannot simply be an abstract concept in our lives; privacy must be an embedded value.

The conclusion:

For the past 240 years, the core of our democracy -- the values that have helped propel the United States of America -- have remained largely the same. We are still a people founded on the beliefs of equality and economic prosperity for all. The fierce independence that encouraged us to break from an oppressive king is the same independence found in young women and men across the country who strive to make their own path in this world and create a life unique unto themselves. So long as that independence is encouraged, so long as it is fostered by the ability to transcend past data points and by the ability to speak and create free from intrusion, the United States will continue to lead the world. Privacy is necessary to our economy, free expression, and the digital free flow of data because it is fundamental to ourselves.

Privacy, as a right that has been enjoyed by past generations, must be protected in our digital ecosystem so that future generations are given the same freedoms to engage, explore, and create the future we all seek.

I know; rhetoric is easy, policy is hard. But we can't change policy without a changed rhetoric.

EDITED TO ADD: The document was originally on the whitehouse.gov website, but was deleted in the Trump transition. It's now available at the Obama archives site.

Posted on January 20, 2017 at 9:51 AM • 121 Comments

Heartbeat as Biometric Password

There's research in using a heartbeat as a biometric password. No details in the article. My guess is that there isn't nearly enough entropy in the reproducible biometric, but I might be surprised. The article's suggestion to use it as a password for health records seems especially problematic. "I'm sorry, but we can't access the patient's health records because he's having a heart attack."

I wrote about this before here.

Posted on January 19, 2017 at 6:22 AM • 41 Comments

WhatsApp Security Vulnerability

Back in March, Rolf Weber wrote about a potential vulnerability in the WhatsApp protocol that would allow Facebook to defeat perfect forward secrecy by forcibly changing users' keys, allowing it -- or more likely, the government -- to eavesdrop on encrypted messages.

It seems that this vulnerability is real:

WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been re-sent. This re-encryption and rebroadcasting effectively allows WhatsApp to intercept and read users' messages.

The security loophole was discovered by Tobias Boelter, a cryptography and security researcher at the University of California, Berkeley. He told the Guardian: "If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access due to the change in keys."

The vulnerability is not inherent to the Signal protocol. Open Whisper Systems' messaging app, Signal, the app used and recommended by whistleblower Edward Snowden, does not suffer from the same vulnerability. If a recipient changes the security key while offline, for instance, a sent message will fail to be delivered and the sender will be notified of the change in security keys without automatically resending the message.

WhatsApp's implementation automatically resends an undelivered message with a new key without warning the user in advance or giving them the ability to prevent it.

Note that it's an attack against current and future messages, and not something that would allow the government to reach into the past. In that way, it is no more troubling than the government hacking your mobile phone and reading your WhatsApp conversations that way.

An unnamed "WhatsApp spokesperson" said that they implemented the encryption this way for usability:

In WhatsApp's implementation of the Signal protocol, we have a "Show Security Notifications" setting (option under Settings > Account > Security) that notifies you when a contact's security code has changed. We know the most common reasons this happens are because someone has switched phones or reinstalled WhatsApp. This is because in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people's messages are delivered, not lost in transit.

He's technically correct. This is not a backdoor. This really isn't even a flaw. It's a design decision that put usability ahead of security in this particular instance. Moxie Marlinspike, creator of Signal and the code base underlying WhatsApp's encryption, said as much:

Under normal circumstances, when communicating with a contact who has recently changed devices or reinstalled WhatsApp, it might be possible to send a message before the sending client discovers that the receiving client has new keys. The recipient's device immediately responds, and asks the sender to reencrypt the message with the recipient's new identity key pair. The sender displays the "safety number has changed" notification, reencrypts the message, and delivers it.

The WhatsApp clients have been carefully designed so that they will not re-encrypt messages that have already been delivered. Once the sending client displays a "double check mark," it can no longer be asked to re-send that message. This prevents anyone who compromises the server from being able to selectively target previously delivered messages for re-encryption.

The fact that WhatsApp handles key changes is not a "backdoor," it is how cryptography works. Any attempt to intercept messages in transit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system.

The only question it might be reasonable to ask is whether these safety number change notifications should be "blocking" or "non-blocking." In other words, when a contact's key changes, should WhatsApp require the user to manually verify the new key before continuing, or should WhatsApp display an advisory notification and continue without blocking the user.

Given the size and scope of WhatsApp's user base, we feel that their choice to display a non-blocking notification is appropriate. It provides transparent and cryptographically guaranteed confidence in the privacy of a user's communication, along with a simple user experience. The choice to make these notifications "blocking" would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully.
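The blocking vs. non-blocking choice Marlinspike describes can be sketched as follows. This is illustrative pseudologic for the design trade-off, not WhatsApp's or Signal's actual code:

```python
# Sketch of the two policies for handling an undelivered message when
# the recipient's identity key changes (illustrative, hypothetical API).

def on_key_change(undelivered_message, new_key, blocking=False, notify=True):
    """Decide what to do with an undelivered message after a key change."""
    if blocking:
        # Signal-style: hold the message until the user manually
        # verifies the new safety number.
        return {"status": "held", "needs_verification": True}
    # WhatsApp-style: re-encrypt with the new key and resend
    # automatically, optionally showing a non-blocking notice.
    result = {"status": "resent", "key": new_key}
    if notify:
        result["notice"] = "safety number changed"
    return result
```

The non-blocking path is what lets a server-triggered key change go through silently when notifications are turned off -- which is precisely the trade-off being argued about here.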

How serious this is depends on your threat model. If you are worried about the US government -- or any other government that can pressure Facebook -- snooping on your messages, then this is a small vulnerability. If not, then it's nothing to worry about.

Slashdot thread. Hacker News thread. BoingBoing post. More here.

EDITED TO ADD (1/24): Zeynep Tufekci takes the Guardian to task for their reporting on this vulnerability. (Note: I signed on to her letter.)

EDITED TO ADD (2/13): The vulnerability explained by the person who discovered it.

This is a good explanation of the security/usability trade-off that's at issue here.

Posted on January 17, 2017 at 6:09 AM • 123 Comments

Cloudflare's Experience with a National Security Letter

Interesting post on Cloudflare's experience with receiving a National Security Letter.

News article.

Posted on January 16, 2017 at 6:40 AM • 31 Comments

Friday Squid Blogging: 1874 Giant Squid Attack

This article discusses a giant squid attack on a schooner off the coast of Sri Lanka in 1874.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on January 13, 2017 at 4:52 PM • 185 Comments

A Comment on the Trump Dossier

Imagine that you are someone in the CIA, concerned about the future of America. You have this Russian dossier on Donald Trump, which you have some evidence might be true. The smartest thing you can do is to leak it to the public. By doing so, you are eliminating any leverage Russia has over Trump and probably reducing the effectiveness of any other blackmail material any government might have on Trump. I believe you do this regardless of whether you ultimately believe the document's findings or not, and regardless of whether you support or oppose Trump. It's simple game theory.

This document is particularly safe to release. Because it's not a classified report of the CIA, leaking it is not a crime. And you release it now, before Trump becomes president, because doing so afterwards becomes much more dangerous.

MODERATION NOTE: Please keep comments focused on this particular point. More general comments, especially uncivil comments, will be deleted.

Posted on January 13, 2017 at 11:58 AM • 185 Comments

Internet Filtering in Authoritarian Regimes

Interesting research: Sebastian Hellmeier, "The Dictator's Digital Toolkit: Explaining Variation in Internet Filtering in Authoritarian Regimes," Politics & Policy, 2016 (full paper is behind a paywall):

Abstract: Following its global diffusion during the last decade, the Internet was expected to become a liberation technology and a threat for autocratic regimes by facilitating collective action. Recently, however, autocratic regimes took control of the Internet and filter online content. Building on the literature concerning the political economy of repression, this article argues that regime characteristics, economic conditions, and conflict in bordering states account for variation in Internet filtering levels among autocratic regimes. Using OLS-regression, the article analyzes the determinants of Internet filtering as measured by the Open Net Initiative in 34 autocratic regimes. The results show that monarchies, regimes with higher levels of social unrest, regime changes in neighboring countries, and less oppositional competition in the political arena are more likely to filter the Internet. The article calls for a systematic data collection to analyze the causal mechanisms and the temporal dynamics of Internet filtering.

Posted on January 13, 2017 at 6:48 AM • 22 Comments

NSA Given More Ability to Share Raw Intelligence Data

President Obama has changed the rules regarding raw intelligence, allowing the NSA to share raw data with the US's other 16 intelligence agencies.

The new rules significantly relax longstanding limits on what the N.S.A. may do with the information gathered by its most powerful surveillance operations, which are largely unregulated by American wiretapping laws. These include collecting satellite transmissions, phone calls and emails that cross network switches abroad, and messages between people abroad that cross domestic network switches.

The change means that far more officials will be searching through raw data. Essentially, the government is reducing the risk that the N.S.A. will fail to recognize that a piece of information would be valuable to another agency, but increasing the risk that officials will see private information about innocent people.

Here are the new procedures.

This rule change has been in the works for a while. Here are two blog posts from April discussing the then-proposed changes.

From a privacy perspective, this feels like a really bad idea to me.

Posted on January 12, 2017 at 12:07 PM • 29 Comments

Twofish Power Analysis Attack

New paper: "A Simple Power Analysis Attack on the Twofish Key Schedule." This shouldn't be a surprise; these attacks are devastating if you don't take steps to mitigate them.

The general issue is that if an attacker has physical control of the computer performing the encryption, it is very hard to secure the encryption inside that computer. I wrote a paper about this back in 1999.

Posted on January 12, 2017 at 6:28 AM • 24 Comments

Law Enforcement Access to IoT Data

In the first of what will undoubtedly be a large number of battles between companies that make IoT devices and the police, Amazon is refusing to comply with a warrant demanding data on what its Echo device heard at a crime scene.

The particulars of the case are weird. Amazon's Echo does not constantly record; it only listens for its name. So it's unclear that there is any evidence to be turned over. But this general issue isn't going away. We are all under ubiquitous surveillance, but it is surveillance by the companies that control the Internet-connected devices in our lives. The rules by which police and intelligence agencies get access to that data will come under increasing pressure for change.

Related: A newscaster discussed Amazon's Echo on the news, causing devices in the same room as tuned-in televisions to order unwanted products. This year, the same technology is coming to LG appliances such as refrigerators.

Posted on January 11, 2017 at 6:22 AM • 50 Comments

FDA Recommendations on Medical-Device Cybersecurity

The FDA has issued a report giving medical-device manufacturers guidance on computer and network security. There's nothing particularly new or interesting; it reads like standard security advice: write secure software, patch bugs, and so on.

Note that these are "non-binding recommendations," so I'm really not sure why they bothered.

EDITED TO ADD (1/13): Why they bothered.

Posted on January 10, 2017 at 7:15 AM • 32 Comments

Classifying Elections as "Critical Infrastructure"

I am co-author on a paper discussing whether elections should be classified as "critical infrastructure" in the US, based on experiences in other countries:

Abstract: With the Russian government hack of the Democratic National Convention email servers, and further leaks expected over the coming months that could influence an election, the drama of the 2016 U.S. presidential race highlights an important point: Nefarious hackers do not just pose a risk to vulnerable companies, cyber attacks can potentially impact the trajectory of democracies. Yet, to date, a consensus has not been reached as to the desirability and feasibility of reclassifying elections, in particular voting machines, as critical infrastructure due in part to the long history of local and state control of voting procedures. This Article takes on the debate in the U.S. using the 2016 elections as a case study but puts the issue in a global context with in-depth case studies from South Africa, Estonia, Brazil, Germany, and India. Governance best practices are analyzed by reviewing these differing approaches to securing elections, including the extent to which trend lines are converging or diverging. This investigation will, in turn, help inform ongoing minilateral efforts at cybersecurity norm building in the critical infrastructure context, which are considered here for the first time in the literature through the lens of polycentric governance.

The paper was speculative, but now it's official. The U.S. election has been classified as critical infrastructure. I am tentatively in favor of this, but what really matters is what happens now. What does this mean? What sorts of increased security will election systems get? Will we finally get rid of computerized touch-screen voting?

EDITED TO ADD (1/16): This is a good article.

Posted on January 10, 2017 at 6:02 AM • 104 Comments

Attributing the DNC Hacks to Russia

President Barack Obama's public accusation of Russia as the source of the hacks in the US presidential election and the leaking of sensitive e-mails through WikiLeaks and other sources has opened up a debate on what constitutes sufficient evidence to attribute an attack in cyberspace. The answer is both complicated and inherently tied up in political considerations.

The administration is balancing political considerations and the inherent secrecy of electronic espionage with the need to justify its actions to the public. These issues will continue to plague us as more international conflict plays out in cyberspace.

It's true that it's easy for an attacker to hide who he is in cyberspace. We are unable to positively identify particular pieces of hardware and software around the world. We can't verify the identity of someone sitting in front of a keyboard through computer data alone. Internet data packets don't come with return addresses, and it's easy for attackers to disguise their origins. For decades, hackers have used techniques such as jump hosts, VPNs, Tor and open relays to obscure their origin, and in many cases they work. I'm sure that many national intelligence agencies route their attacks through China, simply because everyone knows lots of attacks come from China.

On the other hand, there are techniques that can identify attackers with varying degrees of precision. It's rarely just one thing, and you'll often hear the term "constellation of evidence" to describe how a particular attacker is identified. It's analogous to traditional detective work. Investigators collect clues and piece them together with known modes of operation. They look for elements that resemble other attacks and elements that are anomalies. The clues might involve ones and zeros, but the techniques go back to Sir Arthur Conan Doyle.

The University of Toronto-based organization Citizen Lab routinely attributes attacks against the computers of activists and dissidents to particular Third World governments. It took months to identify China as the source of the 2012 attacks against the New York Times. While it was uncontroversial to say that Russia was the source of a cyberattack against Estonia in 2007, no one knew if those attacks were authorized by the Russian government -- until the attackers explained themselves. And it was the Internet security company CrowdStrike that first attributed the attacks against the Democratic National Committee to Russian intelligence agencies in June, based on multiple pieces of evidence gathered from its forensic investigation.

Attribution is easier if you are monitoring broad swaths of the Internet. This gives the National Security Agency a singular advantage in the attribution game. The problem, of course, is that the NSA doesn't want to publish what it knows.

Regardless of what the government knows and how it knows it, the decision of whether to make attribution evidence public is another matter. When Sony was attacked, many security experts -- myself included -- were skeptical of both the government's attribution claims and the flimsy evidence associated with it. I only became convinced when the New York Times ran a story about the government's attribution, which talked about both secret evidence inside the NSA and human intelligence assets inside North Korea. In contrast, when the Office of Personnel Management was breached in 2015, the US government decided not to accuse China publicly, either because it didn't want to escalate the political situation or because it didn't want to reveal any secret evidence.

The Obama administration has been more public about its evidence in the DNC case, but it has not been entirely public.

It's one thing for the government to know who attacked it. It's quite another for it to convince the public who attacked it. As attribution increasingly relies on secret evidence -- as it did with North Korea's attack on Sony in 2014 and almost certainly does regarding Russia and the previous election -- the government is going to have to face the choice of making previously secret evidence public and burning sources and methods, or keeping it secret and facing perfectly reasonable skepticism.

If the government is going to take public action against a cyberattack, it needs to make its evidence public. But releasing secret evidence might get people killed, and it would make any future confidentiality assurances we make to human sources completely non-credible. This problem isn't going away; secrecy helps the intelligence community, but it wounds our democracy.

The constellation of evidence attributing the attacks against the DNC, and subsequent release of information, is comprehensive. It's possible that there was more than one attack. It's possible that someone not associated with Russia leaked the information to WikiLeaks, although we have no idea where that someone else would have obtained the information. We know that the Russian actors who hacked the DNC -- both the FSB, Russia's principal security agency, and the GRU, Russia's military intelligence unit -- are also attacking other political networks around the world.

In the end, though, attribution comes down to whom you believe. When Citizen Lab writes a report outlining how a United Arab Emirates human rights defender was targeted with a cyberattack, we have no trouble believing that it was the UAE government. When Google identifies China as the source of attacks against Gmail users, we believe it just as easily.

Obama decided not to make the accusation public before the election so as not to be seen as influencing the election. Now, afterward, there are political implications in accepting that Russia hacked the DNC in an attempt to influence the US presidential election. But no amount of evidence can convince the unconvinceable.

The most important thing we can do right now is deter any country from trying this sort of thing in the future, and the political nature of the issue makes that harder. Right now, we've told the world that others can get away with manipulating our election process as long as they can keep their efforts secret until after one side wins. Obama has promised both secret retaliations and public ones. We need to hope they're enough.

This essay previously appeared on CNN.com.

EDITED TO ADD: The ODNI released a declassified report on the Russian attacks. Here's a New York Times article on the report.

And last week there were Senate hearings on this issue.

EDITED TO ADD: A Washington Post article talks about some of the intelligence behind the assessment.

EDITED TO ADD (1/10): The UK connection.

Posted on January 9, 2017 at 5:53 AM251 Comments

Friday Squid Blogging: Simple Grilled Squid Recipe

Easy recipe from America's Test Kitchen.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Posted on January 6, 2017 at 4:29 PM210 Comments

An SQL Injection Attack Is a Legal Company Name in the UK

Someone just registered their company name as ; DROP TABLE "COMPANIES";-- LTD.

Reddit thread. Obligatory xkcd comic.
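For the curious, here's a minimal sketch of why that name is dangerous to sloppy code, using Python's sqlite3 module and an illustrative COMPANIES table. The injection only fires against software that splices the name into SQL unquoted and executes the result as a script; a parameterized query neutralizes it entirely:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE "COMPANIES" (name TEXT)')

company = '; DROP TABLE "COMPANIES";-- LTD'

# Vulnerable pattern: splicing the name into SQL text and running it
# with executescript(), which executes multiple statements. The spliced
# text becomes:
#   INSERT INTO "COMPANIES" VALUES (NULL); DROP TABLE "COMPANIES";-- LTD
naive = 'INSERT INTO "COMPANIES" VALUES (NULL)' + company
conn.executescript(naive)  # the embedded DROP TABLE runs

tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print(tables)  # [] -- the COMPANIES table is gone

# Safe pattern: a parameterized query treats the name as inert data.
conn.execute('CREATE TABLE "COMPANIES" (name TEXT)')
conn.execute('INSERT INTO "COMPANIES" VALUES (?)', (company,))
print(conn.execute('SELECT name FROM "COMPANIES"').fetchone()[0])
```

The `-- LTD` at the end is itself an SQL comment, which is why the registered name ends that way: it swallows whatever trailing SQL the vulnerable template would otherwise append.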

Posted on January 4, 2017 at 3:17 PM22 Comments

Are We Becoming More Moral Faster Than We're Becoming More Dangerous?

In The Better Angels of Our Nature, Steven Pinker convincingly makes the point that by pretty much every measure you can think of, violence has declined on our planet over the long term. More generally, "the world continues to improve in just about every way." He's right, but there are two important caveats.

One, he is talking about the long term. The trend lines are uniformly positive across the centuries and mostly positive across the decades, but go up and down year to year. While this is an important development for our species, most of us care about changes year to year -- and we can't make any predictions about whether this year will be better or worse than last year in any individual measurement.

The second caveat is both more subtle and more important. In 2013, I wrote about how technology empowers attackers. By this measure, the world is getting more dangerous:

Because the damage attackers can cause becomes greater as technology becomes more powerful. Guns become more harmful, explosions become bigger, malware becomes more pernicious... and so on. A single attacker, or small group of attackers, can cause more destruction than ever before.

This is exactly why the whole post-9/11 weapons-of-mass-destruction debate was so overwrought: Terrorists are scary, terrorists flying airplanes into buildings are even scarier, and the thought of a terrorist with a nuclear bomb is absolutely terrifying.

Pinker's trends are based both on increased societal morality and better technology, and both are based on averages: the average person with the average technology. My increased attack capability trend is based on those two trends as well, but on the outliers: the most extreme person with the most extreme technology. Pinker's trends are noisy, but over the long term they're strongly linear. Mine seem to be exponential.

When Pinker expresses optimism that the overall trends he identifies will continue into the future, he's making a bet. He's betting that his trend lines and my trend lines won't cross. That is, that our society's gradual improvement in overall morality will continue to outpace the potentially exponentially increasing ability of the extreme few to destroy everything. I am less optimistic:

But the problem isn't that these security measures won't work -- even as they shred our freedoms and liberties -- it's that no security is perfect.

Because sooner or later, the technology will exist for a hobbyist to explode a nuclear weapon, print a lethal virus from a bio-printer, or turn our electronic infrastructure into a vehicle for large-scale murder. We'll have the technology eventually to annihilate ourselves in great numbers, and sometime after, that technology will become cheap enough to be easy.

As it gets easier for one member of a group to destroy the entire group, and as the group size gets larger, the odds of someone in the group doing it approach certainty. Our global interconnectedness means that our group size encompasses everyone on the planet, and since government hasn't kept up, we have to worry about the weakest-controlled member of the weakest-controlled country. Is this a fundamental limitation of technological advancement, one that could end civilization? First our fears grip us so strongly that, thinking about the short term, we willingly embrace a police state in a desperate attempt to keep ourselves safe; then someone goes off and destroys us anyway?
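The "approaches certainty" point is simple arithmetic. If each of n actors independently has a tiny probability p of attempting and succeeding at a catastrophic attack, the chance that at least one of them does is 1 - (1 - p)^n, which climbs toward 1 as n grows. A sketch with deliberately made-up numbers (the per-person probability here is illustrative, not an estimate):

```python
def p_at_least_one(p: float, n: int) -> float:
    """Probability that at least one of n independent actors,
    each with per-period probability p, carries out the attack."""
    return 1 - (1 - p) ** n

# A one-in-a-billion chance per person looks negligible -- until the
# "group" is a large fraction of the planet.
p = 1e-9
for n in (1_000_000, 100_000_000, 7_000_000_000):
    print(f"n={n:>13,}: {p_at_least_one(p, n):.4f}")
```

With these illustrative numbers, a million people yields a risk of about a tenth of a percent, while seven billion pushes it above 99 percent. The argument in the text is that technology shrinks the skill needed (raising p) while interconnectedness grows the exposed group (raising n), and both changes push the product toward certainty.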

Clearly we're not at the point yet where any of these disaster scenarios have come to pass, and Pinker rightly expresses skepticism when he says that historical doomsday scenarios have so far never come to pass. But that's the thing about exponential curves; it's hard to predict the future from the past. So either I have discovered a fundamental problem with any intelligent individualistic species and have therefore explained the Fermi Paradox, or there is some other factor in play that will ensure that the two trend lines won't cross.

Posted on January 4, 2017 at 7:42 AM74 Comments

Class Breaks

There's a concept from computer security known as a class break. It's a particular security vulnerability that breaks not just one system, but an entire class of systems. Examples might be a vulnerability in a particular operating system that allows an attacker to take remote control of every computer that runs that operating system, or a vulnerability in Internet-enabled digital video recorders and webcams that allows an attacker to recruit those devices into a massive botnet.

It's a particular way computer systems can fail, exacerbated by the characteristics of computers and software. It only takes one smart person to figure out how to attack the system. Once he does that, he can write software that automates his attack. He can do it over the Internet, so he doesn't have to be near his victim. He can automate his attack so it works while he sleeps. And then he can pass the ability to someone -- or to lots of people -- without the skill. This changes the nature of security failures, and completely upends how we need to defend against them.

An example: Picking a mechanical door lock requires both skill and time. Each lock is a new job, and success at one lock doesn't guarantee success with another of the same design. Electronic door locks, like the ones you now find in hotel rooms, have different vulnerabilities. An attacker can find a flaw in the design that allows him to create a key card that opens every door. If he publishes his attack software, not just the attacker but anyone can now open every lock. And if those locks are connected to the Internet, attackers could potentially open door locks remotely -- they could open every door lock remotely at the same time. That's a class break.

It's how computer systems fail, but it's not how we think about failures. We still think about automobile security in terms of individual car thieves manually stealing cars. We don't think of hackers remotely taking control of cars over the Internet, or remotely disabling every car at once. We think about voting fraud as unauthorized individuals trying to vote. We don't think about a single person or organization remotely manipulating thousands of Internet-connected voting machines.

In a sense, class breaks are not a new concept in risk management. It's the difference between home burglaries and fires, which happen occasionally to different houses in a neighborhood over the course of the year, and floods and earthquakes, which either happen to everyone in the neighborhood or to no one. Insurance companies can handle both types of risk, but they are inherently different. The increasing computerization of everything is moving us from a burglary/fire risk model to a flood/earthquake model, in which a given threat either affects everyone in town or doesn't happen at all.
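That difference between independent and correlated risk is easy to simulate. A sketch with made-up numbers (1,000 houses, a 2% chance of each event, simulated over 1,000 years):

```python
import random

random.seed(1)
HOUSES = 1_000

def burglary_year() -> int:
    # Independent risk: each house has its own 2% chance of a break-in.
    return sum(random.random() < 0.02 for _ in range(HOUSES))

def flood_year() -> int:
    # Correlated risk: with 2% probability, the whole town floods.
    return HOUSES if random.random() < 0.02 else 0

burglaries = [burglary_year() for _ in range(1000)]
floods = [flood_year() for _ in range(1000)]

# Expected losses per year are identical (about 20 houses either way),
# but the distributions are nothing alike: burglaries hit roughly 20
# houses every single year, while floods hit zero houses most years
# and all 1,000 houses in the occasional bad one.
print("worst burglary year:", max(burglaries))
print("worst flood year:   ", max(floods))
```

A class break moves computer-system risk from the first distribution to the second: the average loss rate can look unchanged while the worst case becomes total.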

But there's a key difference between floods/earthquakes and class breaks in computer systems: the former are random natural phenomena, while the latter are human-directed. Floods don't change their behavior to maximize their damage based on the types of defenses we build. Attackers do that to computer systems. Attackers examine our systems, looking for class breaks. And once one of them finds one, they'll exploit it again and again until the vulnerability is fixed.

As we move into the world of the Internet of Things, where computers permeate our lives at every level, class breaks will become increasingly important. The combination of automation and action at a distance will give attackers more power and leverage than they have ever had before. Security notions like the precautionary principle -- where the potential for harm is so great that we err on the side of not deploying a new technology without proof of security -- will become more important in a world where an attacker can open all of the door locks or hack all of the power plants. It's not an inherently less secure world, but it's a differently secure world. It's a world where driverless cars are much safer than people-driven cars, until suddenly they're not. We need to build systems that assume the possibility of class breaks -- and that maintain security despite them.

This essay originally appeared on Edge.org as part of their annual question. This year it was: "What scientific term or concept ought to be more widely known?"

Posted on January 3, 2017 at 6:50 AM46 Comments

Photocopier Security

A modern photocopier is basically a computer with a scanner and printer attached. This computer has a hard drive, and scans of images are regularly stored on that drive. This means that when a photocopier is thrown away, that hard drive is filled with pages that the machine copied over its lifetime. As you might expect, some of those pages will contain sensitive information.

This 2011 report was written by the Inspector General of the National Archives and Records Administration (NARA). It found that the organization did nothing to safeguard its photocopiers.

Our audit found that opportunities exist to strengthen controls to ensure photocopier hard drives are protected from potential exposure. Specifically, we found the following weaknesses.

  • NARA lacks appropriate controls to ensure all photocopiers across the agency are accounted for and that any hard drives residing on these machines are tracked and properly sanitized or destroyed prior to disposal.

  • There are no policies documenting security measures to be taken for photocopiers utilized for general use, nor are there procedures to ensure photocopier hard drives are sanitized or destroyed prior to disposal or at the end of the lease term.

  • Photocopier lease agreements and contracts do not include a "keep disk" or similar clause as required by NARA's IT Security Methodology for Media Protection Policy version 5.1.

I don't mean to single this organization out. Pretty much no one thinks about this security threat.
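For illustration, here's a file-level sketch of the overwrite-before-delete idea (the function name is mine). Real photocopier hard drives call for full-disk sanitization or physical destruction per guidance like NIST SP 800-88 -- per-file overwrites can be defeated by journaling filesystems and flash wear-leveling -- but the principle is the same:

```python
import os

def overwrite_and_delete(path: str, passes: int = 1) -> None:
    """Overwrite a file's contents with zeros before unlinking it,
    so the stored scan data isn't left recoverable on disk."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(b"\x00" * size)
            f.flush()
            os.fsync(f.fileno())  # push the overwrite to the device
    os.remove(path)

# Demo: a stand-in for a stored scan on the copier's drive.
with open("scan_0001.tmp", "wb") as f:
    f.write(b"sensitive page image bytes")
overwrite_and_delete("scan_0001.tmp")
print(os.path.exists("scan_0001.tmp"))  # False
```

The organizational point stands regardless of tooling: the NARA findings above are about the absence of any such procedure, not about which wiping method to use.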

Posted on January 2, 2017 at 6:12 AM43 Comments

Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.