Blog: October 2016 Archives

Eavesdropping on Typing Over Voice-Over-IP

Interesting research: “Don’t Skype & Type! Acoustic Eavesdropping in Voice-Over-IP”:

Abstract: Acoustic emanations of computer keyboards represent a serious privacy issue. As demonstrated in prior work, spectral and temporal properties of keystroke sounds might reveal what a user is typing. However, previous attacks assumed relatively strong adversary models that are not very practical in many real-world settings. Such strong models assume: (i) adversary’s physical proximity to the victim, (ii) precise profiling of the victim’s typing style and keyboard, and/or (iii) significant amount of victim’s typed information (and its corresponding sounds) available to the adversary.

In this paper, we investigate a new and practical keyboard acoustic eavesdropping attack, called Skype & Type (S&T), which is based on Voice-over-IP (VoIP). S&T relaxes prior strong adversary assumptions. Our work is motivated by the simple observation that people often engage in secondary activities (including typing) while participating in VoIP calls. VoIP software can acquire acoustic emanations of pressed keystrokes (which might include passwords and other sensitive information) and transmit them to others involved in the call. In fact, we show that very popular VoIP software (Skype) conveys enough audio information to reconstruct the victim’s input: keystrokes typed on the remote keyboard. In particular, our results demonstrate that, given some knowledge of the victim’s typing style and the keyboard, the attacker attains top-5 accuracy of 91.7% in guessing a random key pressed by the victim. (The accuracy goes down to a still alarming 41.89% if the attacker is oblivious to both the typing style and the keyboard.) Finally, we provide evidence that the Skype & Type attack is robust to various VoIP issues (e.g., Internet bandwidth fluctuations and presence of voice over keystrokes), thus confirming the feasibility of this attack.
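For readers curious what this class of attack looks like in practice, here is a minimal sketch of generic acoustic keystroke classification. It is not the authors' pipeline; it assumes you already have short, labeled WAV clips of individual keystrokes, and the directory layout, sample rate, feature choice, and classifier below are all illustrative assumptions:

    # Minimal sketch of acoustic keystroke classification (illustrative only).
    # Assumes a directory of short WAV clips named like "a_001.wav", where the
    # first character of the filename is the key that was pressed.
    import glob, os
    import numpy as np
    import librosa
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    def features(path):
        y, sr = librosa.load(path, sr=44100)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # 13 coefficients per frame
        return mfcc.mean(axis=1)                             # average over the clip

    paths = glob.glob("keystrokes/*.wav")                     # hypothetical dataset
    X = np.array([features(p) for p in paths])
    labels = np.array([os.path.basename(p)[0] for p in paths])

    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)

    # Top-5 accuracy: is the true key among the five most probable guesses?
    proba = clf.predict_proba(X_te)
    top5 = np.argsort(proba, axis=1)[:, -5:]
    hits = [label in clf.classes_[row] for row, label in zip(top5, y_te)]
    print("top-5 accuracy:", np.mean(hits))

Even a toy setup like this shows the shape of the problem; the paper's contribution is demonstrating that the attack still works over Skype's compressed audio and with far weaker assumptions about profiling the victim.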

News article.

Posted on October 28, 2016 at 5:24 AM • 15 Comments

Hardware Bit-Flipping Attacks in Practice

A year and a half ago, I wrote about hardware bit-flipping attacks, which were then largely theoretical. Now, they can be used to root Android phones:

The breakthrough has the potential to make millions of Android phones vulnerable, at least until a security fix is available, to a new form of attack that seizes control of core parts of the operating system and neuters key security defenses. Equally important, it demonstrates that the new class of exploit, dubbed Rowhammer, can have malicious and far-reaching effects on a much wider number of devices than was previously known, including those running ARM chips.

Previously, some experts believed Rowhammer attacks that altered specific pieces of security-sensitive data weren’t reliable enough to pose a viable threat because exploits depended on chance hardware faults or advanced memory-management features that could be easily adapted to repel the attacks. But the new proof-of-concept attack developed by an international team of academic researchers is challenging those assumptions.

An app containing the researchers’ rooting exploit requires no user permissions and doesn’t rely on any vulnerability in Android to work. Instead, their attack exploits a hardware vulnerability, using a Rowhammer exploit that alters crucial bits of data in a way that completely roots name brand Android devices from LG, Motorola, Samsung, OnePlus, and possibly other manufacturers.

[…]

Drammer was devised by many of the same researchers behind Flip Feng Shui, and it adopts many of the same approaches. Still, it represents a significant improvement over Flip Feng Shui because it’s able to alter specific pieces of security-sensitive data using standard memory management interfaces built into the Android OS. Using crucial information about the layout of Android memory chips gleaned from a side channel the researchers discovered in ARM processors, Drammer is able to carry out what the researchers call a deterministic attack, meaning one that can reliably target security-sensitive data. The susceptibility of Android devices to Rowhammer exploits likely signals a similar vulnerability in memory chips used in iPhones and other mobile devices as well.
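If it is not obvious why one flipped bit is such a big deal, here is a toy numerical illustration. This is not the Drammer exploit and has nothing to do with Android internals; it just models a simplified page-table-style entry (the layout and numbers are made up) to show that flipping a single bit in the frame-number field silently redirects the mapping to a completely different physical frame, which is exactly the kind of change an attacker can arrange to point at memory they control:

    # Toy illustration (not the Drammer exploit): why a single flipped bit matters.
    # A simplified 32-bit "page-table entry" keeps a physical frame number in its
    # upper bits; flipping any one of those bits makes the entry reference a
    # different frame entirely.
    FRAME_SHIFT = 12                  # assume 4 KiB pages: low 12 bits are flags

    def frame_of(pte: int) -> int:
        return pte >> FRAME_SHIFT     # frame number referenced by this toy entry

    victim_pte = (0x1A2B3 << FRAME_SHIFT) | 0x027   # frame 0x1A2B3 plus flag bits

    for bit in range(FRAME_SHIFT, 32):               # flip each frame-number bit
        flipped = victim_pte ^ (1 << bit)
        print(f"bit {bit:2d}: frame 0x{frame_of(victim_pte):05X} -> 0x{frame_of(flipped):05X}")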

Here’s the paper.

And here’s the project’s website.

Posted on October 27, 2016 at 2:23 PM • 16 Comments

Malicious AI

It’s not hard to imagine the criminal possibilities of automation, autonomy, and artificial intelligence. But the imaginings are becoming mainstream—and the future isn’t too far off.

Along similar lines, computers are able to predict court verdicts. My guess is that the real use here isn’t to predict actual court verdicts, but for well-paid defense teams to test various defensive tactics.

Posted on October 26, 2016 at 6:38 AM • 43 Comments

UK Admitting "Offensive Cyber" Against ISIS/Daesh

I think this might be the first time it has been openly acknowledged:

Sir Michael Fallon, the defence secretary, has said Britain is using cyber warfare in the bid to retake Mosul from Islamic State. Speaking at an international conference on waging war through advanced technology, Fallon made it clear Britain was unleashing its cyber capability on IS, also known as Daesh. Asked if the UK was launching cyber attacks in the bid to take the northern Iraqi city from IS, he replied:

I’m not going into operational specifics, but yes, you know we are conducting military operations against Daesh as part of the international coalition, and I can confirm that we are using offensive cyber for the first time in this campaign.

Posted on October 24, 2016 at 2:12 PM • 38 Comments

How Different Stakeholders Frame Security

Josephine Wolff examines different Internet governance stakeholders and how they frame security debates.

Her conclusion:

The tensions that arise around issues of security among different groups of internet governance stakeholders speak to the many tangled notions of what online security is and whom it is meant to protect that are espoused by the participants in multistakeholder governance forums. What makes these debates significant and unique in the context of internet governance is not that the different stakeholders often disagree (indeed, that is a common occurrence), but rather that they disagree while all using the same vocabulary of security to support their respective stances. Government stakeholders advocate for limitations on WHOIS privacy/proxy services in order to aid law enforcement and protect their citizens from crime and fraud. Civil society stakeholders advocate against those limitations in order to aid activists and minorities and protect those online users from harassment. Both sides would claim that their position promotes a more secure internet and a more secure society—and in a sense, both would be right, except that each promotes a differently secure internet and society, protecting different classes of people and behaviour from different threats.

While vague notions of security may be sufficiently universally accepted as to appear in official documents and treaties, the specific details of individual decisions—such as the implementation of dotless domains, changes to the WHOIS database privacy policy, and proposals to grant government greater authority over how their internet traffic is routed—require stakeholders to disentangle the many different ideas embedded in that language. For the idea of security to truly foster cooperation and collaboration as a boundary object in internet governance circles, the participating stakeholders will have to more concretely agree on what their vision of a secure internet is and how it will balance the different ideas of security espoused by different groups. Alternatively, internet governance stakeholders may find it more useful to limit their discussions on security, as a whole, and try to force their discussions to focus on more specific threats and issues within that space as a means of preventing themselves from succumbing to a façade of agreement without grappling with the sources of disagreement that linger just below the surface.

The intersection of multistakeholder internet governance and definitional issues of security is striking because of the way that the multistakeholder model both reinforces and takes advantage of the ambiguity surrounding the idea of security explored in the security studies literature. That ambiguity is a crucial component of maintaining a functional multistakeholder model of governance because it lends itself well to high-level agreements and discussions, contributing to the sense of consensus building across stakeholders. At the same time, gathering those different stakeholders together to decide specific issues related to the internet and its infrastructure brings to a fore the vast variety of definitions of security they employ and forces them to engage in security-versus-security fights, with each trying to promote their own particular notion of security. Security has long been a contested concept, but rarely do these contestations play out as directly and dramatically as in the multistakeholder arena of internet governance, where all parties are able to face off on what really constitutes security in a digital world.

We certainly saw this in the “going dark” debate: for example, the FBI vs. Apple over iPhone security.

Posted on October 24, 2016 at 6:03 AM • 21 Comments

DDoS Attacks against Dyn

Yesterday’s DDoS attacks against Dyn are being reported everywhere.

I have received a gazillion press requests, but I am traveling in Australia and Asia and have had to decline most of them. That’s okay, really, because we don’t know much of anything about the attacks.

If I had to guess, though, I don’t think it’s China. I think it’s more likely related to the DDoS attacks against Brian Krebs than the probing attacks against the Internet infrastructure, despite how prescient that essay seems right now. And, no, I don’t think China is going to launch a preemptive attack on the Internet.

Posted on October 22, 2016 at 8:47 AM • 67 Comments

Friday Squid Blogging: Which Squid Can I Eat?

Interesting article listing the squid species that can still be ethically eaten.

The problem, of course, is that on a restaurant menu it’s just labeled “squid.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

EDITED TO ADD: By “ethically,” I meant that the article discusses which species can be sustainably caught. The article does not address the moral issues of eating squid—and other cephalopods—in the first place.

Posted on October 21, 2016 at 4:00 PM • 182 Comments

President Obama Talks About AI Risk, Cybersecurity, and More

Interesting interview:

Obama: Traditionally, when we think about security and protecting ourselves, we think in terms of armor or walls. Increasingly, I find myself looking to medicine and thinking about viruses, antibodies. Part of the reason why cybersecurity continues to be so hard is because the threat is not a bunch of tanks rolling at you but a whole bunch of systems that may be vulnerable to a worm getting in there. It means that we’ve got to think differently about our security, make different investments that may not be as sexy but may actually end up being as important as anything.

What I spend a lot of time worrying about are things like pandemics. You can’t build walls in order to prevent the next airborne lethal flu from landing on our shores. Instead, what we need to be able to do is set up systems to create public health systems in all parts of the world, click triggers that tell us when we see something emerging, and make sure we’ve got quick protocols and systems that allow us to make vaccines a lot smarter. So if you take a public health model, and you think about how we can deal with, you know, the problems of cybersecurity, a lot may end up being really helpful in thinking about the AI threats.

Posted on October 20, 2016 at 6:16 AM • 33 Comments

Security Lessons from a Power Saw

Lance Spitzner looks at the safety features of a power saw and tries to apply them to Internet security:

By the way, here are some of the key safety features that are built into the DeWalt Mitre Saw. Notice in all three of these the human does not have to do anything special, just use the device. This is how we need to think from a security perspective.

  • Safety Cover: There is a plastic safety cover that protects the entire rotating blade. The only time the blade is actually exposed is when you lower the saw to actually cut into the wood. The moment you start to raise the blade after cutting, the plastic cover protects everything again. This means to hurt yourself you have to manually lower the blade with one hand then insert your hand into the cutting blade zone.
  • Power Switch: Actually, there is no power switch. Instead, after the saw is plugged in, to activate the saw you have to depress a lever. Let the lever go and the saw stops. This means if you fall, slip, blackout, have a heart attack or any other type of accident and let go of the lever, the saw automatically stops. In other words, the saw always fails to the off (safe) position.
  • Shadow: The saw has a light that projects a shadow of the cutting blade precisely on the wood where the blade will cut. No guessing where the blade is going to cut.

Safety is like security: you cannot eliminate risk. But I feel this is a great example of how security can learn from others about how to take people into account.
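That second bullet, a system that “always fails to the off (safe) position,” has a direct software analogue: make the dangerous operation run only while something is actively held, so that letting go, crashing, or hitting an error all land you in the safe state. A minimal sketch of the pattern (the class and names are made up for illustration, not taken from Spitzner's post):

    # Minimal fail-closed ("dead-man switch") sketch: the risky operation runs
    # only while the caller actively holds the enable; releasing it, or any
    # error, stops it. Names are illustrative.
    import contextlib

    class Saw:
        def __init__(self):
            self.running = False

        @contextlib.contextmanager
        def lever_held(self):
            """The blade spins only inside this block and always stops on exit."""
            self.running = True
            try:
                yield self
            finally:
                self.running = False          # fail to the safe state, even on error

        def cut(self, board: str):
            if not self.running:
                raise RuntimeError("blade is stopped")
            print(f"cutting {board}")

    saw = Saw()
    with saw.lever_held():
        saw.cut("2x4")
    # Outside the block (lever released, exception raised, whatever happened),
    # the blade is guaranteed off:
    assert saw.running is False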

Posted on October 19, 2016 at 6:45 AM • 45 Comments

Intelligence Oversight and How It Can Fail

Former NSA attorneys John DeLong and Susan Hennessey have written a fascinating article describing a particular incident of oversight failure inside the NSA. Technically, the story hinges on a definitional difference between the NSA’s and the FISA court’s meanings of the word “archived.” (For the record, I would have defaulted to the NSA’s interpretation, which feels more accurate technically.) But while the story is worth reading, what’s especially interesting are the broader issues about how a nontechnical judiciary can provide oversight over a very technical data collection-and-analysis organization—especially if the oversight must largely be conducted in secret.

From the article:

Broader root cause analysis aside, the BR FISA debacle made clear that the specific matter of shared legal interpretation needed to be addressed. Moving forward, the government agreed that NSA would coordinate all significant legal interpretations with DOJ. That sounds like an easy solution, but making it meaningful in practice is highly complex. Consider this example: a court order might require that “all collected data must be deleted after two years.” NSA engineers must then make a list for the NSA attorneys:

  1. What does deleted mean? Does it mean make inaccessible to analysts or does it mean forensically wipe off the system so data is gone forever? Or does it mean something in between?
  2. What about backup systems used solely for disaster recovery? Does the data need to be removed there, too, within two years, even though it’s largely inaccessible and typically there is a planned delay to account for mistakes in the operational system?
  3. When does the timer start?
  4. What’s the legally-relevant unit of measurement for timestamp computation—a day, an hour, a second, a millisecond?
  5. If a piece of data is deleted one second after two years, is that an incident of noncompliance? What about a delay of one day? ….
  6. What about various system logs that simply record the fact that NSA had a data object, but no significant details of the actual object? Do those logs need to be deleted too? If so, how soon?
  7. What about hard copy printouts?

And that is only a tiny sample of the questions that need to be answered for that small sentence fragment. Put yourself in the shoes of an NSA attorney: which of these questions—in particular the answers—require significant interpretations to be coordinated with DOJ and which determinations can be made internally?

Now put yourself in the shoes of a DOJ attorney who receives from an NSA attorney a subset of this list for advice and counsel. Which questions are truly significant from your perspective? Are there any questions here that are so significant they should be presented to the Court so that the government can be sufficiently confident that the Court understands how the two-year rule is really being interpreted and applied?
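To make the engineers' dilemma concrete, here is a small, hypothetical sketch of what pinning down “all collected data must be deleted after two years” could look like. Each field corresponds to one of the questions above, and every default is an interpretation someone would have to approve; none of this describes any actual NSA system or policy:

    # Hypothetical sketch: turning "delete after two years" into explicit,
    # reviewable parameters. Every default below is an interpretation that an
    # attorney would have to sign off on; none of it describes a real system.
    from dataclasses import dataclass
    from datetime import datetime, timedelta

    @dataclass
    class RetentionPolicy:
        retention: timedelta = timedelta(days=2 * 365)
        deletion_means: str = "forensic_wipe"         # or merely "analyst_inaccessible"?
        clock_starts_at: str = "collection_time"      # or ingest time, or first query?
        granularity: timedelta = timedelta(days=1)    # a day? a second? a millisecond?
        backup_grace: timedelta = timedelta(days=30)  # extra time for DR backups?
        covers_system_logs: bool = False              # do bare-fact logs count as data?
        covers_hard_copy: bool = True

        def is_compliant(self, collected: datetime, deleted: datetime,
                         is_backup: bool = False) -> bool:
            deadline = collected + self.retention
            if is_backup:
                deadline += self.backup_grace
            # Anything gone within one "granularity" of the deadline passes.
            return deleted <= deadline + self.granularity

    policy = RetentionPolicy()
    print(policy.is_compliant(datetime(2014, 1, 1), datetime(2016, 1, 2)))  # True
    print(policy.is_compliant(datetime(2014, 1, 1), datetime(2016, 3, 1)))  # False

Which of those defaults counts as a significant legal interpretation that has to go to DOJ, and which is an internal engineering call? That is exactly the question the article is about.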

In many places I have separated different kinds of oversight: are we doing things right versus are we doing the right things? This is very much about the first: is the NSA complying with the rules the courts impose on them? I believe that the NSA tries very hard to follow the rules it’s given, while at the same time being very aggressive about how it interprets any kind of ambiguities and using its nonadversarial relationship with its overseers to its advantage.

The only possible solution I can see to all of this is more public scrutiny. Secrecy is toxic here.

Posted on October 18, 2016 at 2:29 PM • 51 Comments

Virtual Kidnapping

This is a harrowing story of a scam artist who convinced a mother that her daughter had been kidnapped. More stories are here. It’s unclear whether these virtual kidnappers use data about their victims or just call people at random and hope to get lucky. Still, it’s a new criminal use of smartphones and ubiquitous information.

Reminds me of the scammers who call low-wage workers at retail establishments late at night and convince them to do outlandish and occasionally dangerous things.

Posted on October 17, 2016 at 6:28 AM • 20 Comments

Cybersecurity Issues for the Next Administration

On today’s Internet, too much power is concentrated in too few hands. In the early days of the Internet, individuals were empowered. Now governments and corporations hold the balance of power. If we are to leave a better Internet for the next generations, governments need to rebalance Internet power more towards the individual. This means several things.

First, less surveillance. Surveillance has become the business model of the Internet, and an aspect that is appealing to governments worldwide. While computers make it easier to collect data, and networks to aggregate it, governments should do more to ensure that any surveillance is exceptional, transparent, regulated and targeted. It’s a tall order; governments such as that of the US need to overcome their own mass-surveillance desires, and at the same time implement regulations to fetter the ability of Internet companies to do the same.

Second, less censorship. The early days of the Internet were free of censorship, but no more. Many countries censor their Internet for a variety of political and moral reasons, and many large social networking platforms do the same thing for business reasons. Turkey censors anti-government political speech; many countries censor pornography. Facebook has censored both nudity and videos of police brutality. Governments need to commit to the free flow of information, and to make it harder for others to censor.

Third, less propaganda. One of the side-effects of free speech is erroneous speech. This naturally corrects itself when everybody can speak, but an Internet with centralized power is one that invites propaganda. For example, both China and Russia actively use propagandists to influence public opinion on social media. The more governments can do to counter propaganda in all forms, the better off we all are.

And fourth, less use control. Governments need to ensure that our Internet systems are open and not closed, that neither totalitarian governments nor large corporations can limit what we do on them. This includes limits on which apps you can run on your smartphone, and on what you can do with the digital files you purchase or with the data collected by the digital devices you own. Controls inhibit innovation: technical, business, and social.

Solutions require both corporate regulation and international cooperation. They require Internet governance to remain in the hands of the global community of engineers, companies, civil society groups, and Internet users. They require governments to be agile in the face of an ever-evolving Internet. And they’ll result in more power and control for the individual and less for powerful institutions. That’s how we built an Internet that enshrined the best of our societies, and that’s how we’ll keep it for future generations.

This essay previously appeared on Time.com, in a section about issues for the next president. It was supposed to appear in the print magazine, but was preempted by Donald Trump coverage.

Posted on October 14, 2016 at 6:20 AM • 39 Comments

Indiana's Voter Registration Data Is Frighteningly Insecure

You can edit anyone’s information you want:

The question, boiled down, was haunting: Want to see how easy it would be to get into someone’s voter registration and make changes to it? The offer from Steve Klink—a Lafayette-based public consultant who works mainly with Indiana public school districts—was to use my voter registration record as a case study.

Only with my permission, of course.

“I will not require any information from you,” he texted. “Which is the problem.”

Turns out he didn’t need anything from me. He sent screenshots of every step along the way, as he navigated from the “Update My Voter Registration” tab at the Indiana Statewide Voter Registration System maintained since 2010 at www.indianavoters.com to the blank screen that cleared the way for changes to my name, address, age and more.

The only magic involved was my driver’s license number, one of two log-in options to make changes online. And that was contained in a copy of every county’s voter database, a public record already in the hands of political parties, campaigns, media and, according to Indiana open access laws, just about anyone who wants the beefy spreadsheet.

Posted on October 11, 2016 at 2:04 PM • 32 Comments

Security Economics of the Internet of Things

Brian Krebs is a popular reporter on the cybersecurity beat. He regularly exposes cybercriminals and their tactics, and consequently is regularly a target of their ire. Last month, he wrote about an online attack-for-hire service that resulted in the arrest of the two proprietors. In the aftermath, his site was taken down by a massive DDoS attack.

In many ways, this is nothing new. Distributed denial-of-service attacks are a family of attacks that cause websites and other Internet-connected systems to crash by overloading them with traffic. The “distributed” part means that other insecure computers on the Internet—sometimes in the millions—are recruited to a botnet to unwittingly participate in the attack. The tactics are decades old; DDoS attacks are perpetrated by lone hackers trying to be annoying, criminals trying to extort money, and governments testing their tactics. There are defenses, and there are companies that offer DDoS mitigation services for hire.

Basically, it’s a size vs. size game. If the attackers can cobble together a fire hose of data bigger than the defender’s capability to cope with, they win. If the defenders can increase their capability in the face of attack, they win.
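A rough back-of-the-envelope calculation shows why the IoT changes that game. The numbers below are assumptions made for the sake of arithmetic, not measurements from the Krebs attack:

    # Back-of-the-envelope arithmetic for the "size vs. size" game.
    # All numbers are illustrative assumptions, not measurements of any real attack.
    bots = 150_000                # compromised IoT devices in the botnet
    upstream_mbps = 5             # average upstream bandwidth each can sustain
    attack_gbps = bots * upstream_mbps / 1000
    print(f"attack traffic: ~{attack_gbps:,.0f} Gbps")      # ~750 Gbps

    mitigation_gbps = 500         # assumed scrubbing capacity on the defender's side
    print("defender overwhelmed" if attack_gbps > mitigation_gbps else "defender holds")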

What was new about the Krebs attack was both the massive scale and the particular devices the attackers recruited. Instead of using traditional computers for their botnet, they used CCTV cameras, digital video recorders, home routers, and other embedded computers attached to the Internet as part of the Internet of Things.

Much has been written about how the IoT is wildly insecure. In fact, the software used to attack Krebs was simple and amateurish. What this attack demonstrates is that the economics of the IoT mean that it will remain insecure unless government steps in to fix the problem. This is a market failure that can’t get fixed on its own.

Our computers and smartphones are as secure as they are because there are teams of security engineers working on the problem. Companies like Microsoft, Apple, and Google spend a lot of time testing their code before it’s released, and quickly patch vulnerabilities when they’re discovered. Those companies can support such teams because those companies make a huge amount of money, either directly or indirectly, from their software—and, in part, compete on its security. This isn’t true of embedded systems like digital video recorders or home routers. Those systems are sold at a much lower margin, and are often built by offshore third parties. The companies involved simply don’t have the expertise to make them secure.

Even worse, most of these devices don’t have any way to be patched. Even though the source code to the botnet that attacked Krebs has been made public, we can’t update the affected devices. Microsoft delivers security patches to your computer once a month. Apple does it just as regularly, but not on a fixed schedule. But the only way for you to update the firmware in your home router is to throw it away and buy a new one.

The security of our computers and phones also comes from the fact that we replace them regularly. We buy new laptops every few years. We get new phones even more frequently. This isn’t true for all of the embedded IoT systems. They last for years, even decades. We might buy a new DVR every five or ten years. We replace our refrigerator every 25 years. We replace our thermostat approximately never. Already the banking industry is dealing with the security problems of Windows 95 embedded in ATMs. This same problem is going to occur all over the Internet of Things.

The market can’t fix this because neither the buyer nor the seller cares. Think of all the CCTV cameras and DVRs used in the attack against Brian Krebs. The owners of those devices don’t care. Their devices were cheap to buy, they still work, and they don’t even know Brian. The sellers of those devices don’t care: they’re now selling newer and better models, and the original buyers only cared about price and features. There is no market solution because the insecurity is what economists call an externality: it’s an effect of the purchasing decision that affects other people. Think of it kind of like invisible pollution.

What this all means is that the IoT will remain insecure unless government steps in and fixes the problem. When we have market failures, government is the only solution. The government could impose security regulations on IoT manufacturers, forcing them to make their devices secure even though their customers don’t care. They could impose liabilities on manufacturers, allowing people like Brian Krebs to sue them. Any of these would raise the cost of insecurity and give companies incentives to spend money making their devices secure.

Of course, this would only be a domestic solution to an international problem. The Internet is global, and attackers can just as easily build a botnet out of IoT devices from Asia as from the United States. Long term, we need to build an Internet that is resilient against attacks like this. But that’s a long time coming. In the meantime, you can expect more attacks that leverage insecure IoT devices.

This essay previously appeared on Vice Motherboard.

Slashdot thread.

Here are some of the things that are vulnerable.

EDITED TO ADD (10/17): DARPA is looking for IoT-security ideas from the private sector.

Posted on October 10, 2016 at 10:26 AM • 71 Comments

NSA Contractor Arrested for Stealing Classified Information

The NSA has another contractor who stole classified documents. It’s a weird story: “But more than a month later, the authorities cannot say with certainty whether Mr. Martin leaked the information, passed them on to a third party or whether he simply downloaded them.” So maybe a potential leaker. Or a spy. Or just a document collector.

My guess is that there are many leakers inside the US government, even more than what’s on this list from last year.

EDITED TO ADD (10/7): More information.

Posted on October 7, 2016 at 6:07 AM • 28 Comments

Yahoo Scanned Everyone's E-mails for the NSA

News here and here.

Other companies have been quick to deny that they did the same thing, but I generally don’t believe those carefully worded statements about what they have and haven’t done. We do know that the NSA uses bribery, coercion, threat, legal compulsion, and outright theft to get what they want. We just don’t know which one they use in which case.

EDITED TO ADD (10/7): More news. This and this, too.

EDITED TO ADD (10/17): A related story.

Posted on October 6, 2016 at 1:58 PM • 72 Comments

Quantum Tokens for Digital Signatures

This paper wins “best abstract” award: “Quantum Tokens for Digital Signatures,” by Shalev Ben-David and Or Sattath:

Abstract: The fisherman caught a quantum fish. “Fisherman, please let me go,” begged the fish, “and I will grant you three wishes.” The fisherman agreed. The fish gave the fisherman a quantum computer, three quantum signing tokens and his classical public key.

The fish explained: “to sign your three wishes, use the tokenized signature scheme on this quantum computer, then show your valid signature to the king, who owes me a favor.”

The fisherman used one of the signing tokens to sign the document “give me a castle!” and rushed to the palace. The king executed the classical verification algorithm using the fish’s public key, and since it was valid, the king complied.

The fisherman’s wife wanted to sign ten wishes using their two remaining signing tokens. The fisherman did not want to cheat, and secretly sailed to meet the fish. “Fish, my wife wants to sign ten more wishes.”

But the fish was not worried: “I have learned quantum cryptography following the previous story (The Fisherman and His Wife by the brothers Grimm). The quantum tokens are consumed during the signing. Your polynomial wife cannot even sign four wishes using the three signing tokens I gave you.”

“How does it work?” wondered the fisherman.

“Have you heard of quantum money? These are quantum states which can be easily verified but are hard to copy. This tokenized quantum signature scheme extends Aaronson and Christiano’s quantum money scheme, which is why the signing tokens cannot be copied.”

“Does your scheme have additional fancy properties?” the fisherman asked.

“Yes, the scheme has other security guarantees: revocability, testability and everlasting security. Furthermore, if you’re at the sea and your quantum phone has only classical reception, you can use this scheme to transfer the value of the quantum money to shore,” said the fish, and swam his way.

Posted on October 6, 2016 at 7:03 AM • 11 Comments

Is WhatsApp Hacked?

Forbes is reporting that the Israeli cyberweapons arms manufacturer Wintego has a man-in-the-middle exploit against WhatsApp.

It’s a weird story. I’m not sure how they do it, but something doesn’t sound right.

Another possibility is that CatchApp is malware thrust onto a device over Wi-Fi that specifically targets WhatsApp. But it’s almost certain the product cannot crack the latest standard of WhatsApp cryptography, said Matthew Green, a cryptography expert and assistant professor at the Johns Hopkins Information Security Institute. Green, who has been impressed by the quality of the Signal code, added: “They would have to defeat both the encryption to and from the server and the end-to-end Signal encryption. That does not seem feasible at all, even with a Wi-Fi access point.

“I would bet mundanely the password stuff is just plain phishing. You go to some site, it asks for your Google account, you type it in without looking closely at the address bar.

“But the WhatsApp stuff manifestly should not be vulnerable like that. Interesting.”

Neither WhatsApp nor the crypto whizz behind Signal, Moxie Marlinspike, were happy to comment unless more specific details were revealed about the tool’s capability. Either Wintego is embellishing what its real capability is, or it has a set of exploits that the rest of the world doesn’t yet know about.
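Green's point about having to defeat two separate layers is easy to see in miniature. Here is a toy sketch using PyNaCl, with made-up parties; it is not WhatsApp's or Signal's actual protocol, just an illustration of why stripping the transport hop, say at a hostile Wi-Fi access point, still leaves you holding end-to-end ciphertext:

    # Toy two-layer encryption sketch (PyNaCl). Not WhatsApp/Signal's real protocol;
    # it only shows why breaking the transport layer is not enough when the payload
    # is separately end-to-end encrypted for the recipient.
    from nacl.public import PrivateKey, Box

    alice, bob, server = PrivateKey.generate(), PrivateKey.generate(), PrivateKey.generate()

    # Inner layer: end-to-end, Alice -> Bob.
    inner = Box(alice, bob.public_key).encrypt(b"meet at noon")

    # Outer layer: transport, Alice -> server (standing in for TLS to the server).
    outer = Box(alice, server.public_key).encrypt(bytes(inner))

    # An attacker who somehow defeats the transport layer (here, by being the
    # server) recovers only the inner ciphertext...
    stripped = Box(server, alice.public_key).decrypt(outer)
    print(stripped == bytes(inner))                          # True: still ciphertext

    # ...while Bob, who holds the matching private key, reads the plaintext.
    print(Box(bob, alice.public_key).decrypt(bytes(inner)))  # b'meet at noon'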

Posted on October 4, 2016 at 1:47 PM • 31 Comments

The Culture of Cybersecurity

Interesting survey of the cybersecurity culture in Norway.

96% of all Norwegians are online, more than 90% embrace new technology, and 6 of 10 feel capable of judging what is safe to do online. Still, cyber-crime costs Norway approximately 19 billion NKR annually. At the same time, 73.9% argue that the Internet will not be safer even if their personal computer is secure. We have also found that a majority of Norwegians accept that their online activities may be monitored by the authorities. But less than half the population believe the police are capable of helping them if they are subject to cybercrime, and 4 of 10 see cyber activists (e.g. Anonymous) playing a role in the fight against cybercrime and cyberwar. 44% of the participants in this study say that they have refrained from using an online service after they have learned about threats or security incidents. This should obviously influence digitalization policy.

Lots of details in the report.

Posted on October 3, 2016 at 6:23 AM • 24 Comments

Security Design: Stop Trying to Fix the User

Every few years, a researcher replicates a security study by littering USB sticks around an organization’s grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as “teachable moments” for others. “If only everyone was more security aware and had more security training,” they say, “the Internet would be a much safer place.”

Enough of that. The problem isn’t the users: it’s that we’ve designed our computer systems’ security so badly that we demand the user do all of these counterintuitive things. Why can’t users choose easy-to-remember passwords? Why can’t they click on links in emails with wild abandon? Why can’t they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we’ve thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This “either/or” thinking results in systems that are neither usable nor secure.

Our industry is littered with examples. First: security warnings. Despite researchers’ good intentions, these warnings just inure people to them. I’ve read dozens of studies about how to get people to pay attention to security warnings. We can tweak their wording, highlight them in red, and jiggle them on the screen, but nothing works because users know the warnings are invariably meaningless. They don’t see “the certificate has expired; are you sure you want to go to this webpage?” They see, “I’m an annoying message preventing you from reading a webpage. Click here to get rid of me.”

Next: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users realize this: they store those passwords in their browsers, or they never even bother trying to remember them, using the “I forgot my password” link as a way to bypass the system completely—effectively falling back on the security of their e-mail account.

And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing website. Then everyone wants to know how to train the user not to click on suspicious links. But you can’t train users not to click on links when you’ve spent the past two decades teaching them that links are there to be clicked.

We must stop trying to fix the user to achieve security. We’ll never get there, and research toward those goals just obscures the real problems. Usable security does not mean “getting people to do what we want.” It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users’ security goals without—as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it—“stress of mind, or knowledge of a long series of rules.”

I’ve been saying this for years. Security usability guru (and one of the guest editors of this issue) M. Angela Sasse has been saying it even longer. People—and developers—are finally starting to listen. Many security updates happen automatically so users don’t have to remember to manually update their systems. Opening a Word or Excel document inside Google Docs isolates it from the user’s system so they don’t have to worry about embedded malware. And programs can run in sandboxes that don’t compromise the entire computer. We’ve come a long way, but we have a lot further to go.

“Blame the victim” thinking is older than the Internet, of course. But that doesn’t make it right. We owe it to our users to make the Information Age a safe place for everyone—not just those with “security awareness.”

This essay previously appeared in the Sep/Oct issue of IEEE Security & Privacy.

EDITED TO ADD (10/13): Commentary.

Posted on October 3, 2016 at 6:12 AM • 116 Comments
