Entries Tagged "usability"

Choosing Secure Passwords

As insecure as passwords generally are, they’re not going away anytime soon. Every year you have more and more passwords to deal with, and every year they get easier and easier to break. You need a strategy.

The best way to explain how to choose a good password is to explain how they’re broken. The general attack model is what’s known as an offline password-guessing attack. In this scenario, the attacker gets a file of encrypted passwords from somewhere people want to authenticate to. His goal is to turn that encrypted file into unencrypted passwords he can use to authenticate himself. He does this by guessing passwords, and then seeing if they’re correct. He can try guesses as fast as his computer will process them—and he can parallelize the attack—and gets immediate confirmation if he guesses correctly. Yes, there are ways to foil this attack, and that’s why we can still have four-digit PINs on ATM cards, but it’s the correct model for breaking passwords.
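The guessing loop itself is trivial, which is the point. A minimal sketch in Python, where the "leaked" file is fabricated for illustration and MD5 stands in for whatever fast hash the file happens to use:

```python
import hashlib

def crack(leaked_hashes, guesses):
    """Offline attack: hash each guess and check it against the leaked file."""
    leaked = set(leaked_hashes)
    found = {}
    for guess in guesses:
        digest = hashlib.md5(guess.encode()).hexdigest()
        if digest in leaked:
            found[digest] = guess  # immediate confirmation, no server involved
    return found

# A made-up "leaked" password file:
leaked = [hashlib.md5(p.encode()).hexdigest() for p in ["letmein", "tlpWENT2m"]]
print(crack(leaked, ["123456", "letmein", "password"]))
```

Nothing rate-limits this loop; it runs as fast as the attacker's hardware can hash, and it parallelizes trivially across machines.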

There are commercial programs that do password cracking, sold primarily to police departments. There are also hacker tools that do the same thing. And they’re really good.

The effectiveness of password cracking depends on two largely independent things: power and efficiency.

Power is simply computing power. As computers have become faster, they’re able to test more passwords per second; one program advertises eight million per second. These crackers might run for days, on many machines simultaneously. For a high-profile police case, they might run for months.

Efficiency is the ability to guess passwords cleverly. It doesn’t make sense to run through every eight-letter combination from “aaaaaaaa” to “zzzzzzzz” in order. That’s more than 200 billion possible passwords, most of them very unlikely. Password crackers try the most common passwords first.

A typical password consists of a root plus an appendage. The root isn’t necessarily a dictionary word, but it’s usually something pronounceable. An appendage is either a suffix (90% of the time) or a prefix (10% of the time). One cracking program I saw started with a dictionary of about 1,000 common passwords, things like “letmein,” “temp,” “123456,” and so on. Then it tested them each with about 100 common suffix appendages: “1,” “4u,” “69,” “abc,” “!,” and so on. It recovered about a quarter of all passwords with just these 100,000 combinations.

Crackers use different dictionaries: English words, names, foreign words, phonetic patterns and so on for roots; two digits, dates, single symbols and so on for appendages. They run the dictionaries with various capitalizations and common substitutions: “$” for “s”, “@” for “a,” “1” for “l” and so on. This guessing strategy quickly breaks about two-thirds of all passwords.
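The root-plus-appendage strategy is easy to sketch. Here is a toy generator, with tiny word lists standing in for the multi-million-entry dictionaries real crackers use:

```python
# Toy version of the root + appendage guessing strategy.
ROOTS = ["letmein", "temp", "password", "monkey"]
SUFFIXES = ["", "1", "4u", "69", "abc", "!"]
PREFIXES = ["", "1", "my"]
SUBS = str.maketrans({"s": "$", "a": "@", "l": "1"})  # common substitutions

def guesses():
    for root in ROOTS:
        # each root is tried plain, capitalized, and with substitutions
        for variant in (root, root.capitalize(), root.translate(SUBS)):
            for suffix in SUFFIXES:          # suffixes: ~90% of appendages
                yield variant + suffix
            for prefix in PREFIXES:          # prefixes: ~10%
                yield prefix + variant

candidates = list(guesses())
print(len(candidates), candidates[:6])
```

Even this toy emits "p@$$word" and "letmein1"; scale the lists up and the two-thirds figure stops being surprising.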

Modern password crackers combine different words from their dictionaries:

What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”

This is why the oft-cited XKCD scheme for generating passwords—string together individual words like “correcthorsebatterystaple”—is no longer good advice. The password crackers are on to this trick.

The attacker will feed any personal information he has access to about the password creator into the password crackers. A good password cracker will test names and addresses from the address book, meaningful dates, and any other personal information it has. Postal codes are common appendages. If it can, the guesser will index the target hard drive and create a dictionary that includes every printable string, including deleted files. If you ever saved an e-mail with your password, or kept it in an obscure file somewhere, or if your program ever stored it in memory, this process will grab it. And it will speed the process of recovering your password.

Last year, Ars Technica gave three experts a 16,000-entry encrypted password file, and asked them to break as many as possible. The winner got 90% of them, the loser 62%—in a few hours. It’s the same sort of thing we saw in 2012, 2007, and earlier. If there’s any new news, it’s that this kind of thing is getting easier faster than people think.

Pretty much anything that can be remembered can be cracked.

There’s still one scheme that works. Back in 2008, I described the “Schneier scheme”:

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence—something personal.

Here are some examples:

  • WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
  • Wow…doestcst = Wow, does that couch smell terrible.
  • Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
  • uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personally memorable tricks for turning that sentence into a lengthy password. Of course, the site has to accept all of those non-alphanumeric characters and an arbitrarily long password. Otherwise, it’s much harder.
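The mechanical first step of the scheme can be sketched, though only the skeleton: the personal, unpredictable modifications are what make the result secure, and by definition they can't be captured in a shared script:

```python
def initials(sentence):
    """First letter of each word: only the starting skeleton of the scheme."""
    return "".join(word[0] for word in sentence.split())

print(initials("This little piggy went to market"))  # -> Tlpwtm
```

Note how far "Tlpwtm" is from "tlpWENT2m": the case changes, the kept word "WENT," and "to" becoming "2" are the idiosyncratic tricks that a cracker's dictionary won't contain.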

Even better is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to create and store them. Password Safe includes a random password generation function. Tell it how many characters you want—twelve is my default—and it’ll give you passwords like y.)v_|.7)7Bl, B3h4_[%}kgv), and QG6,FN4nFAm_. The program supports cut and paste, so you’re not actually typing those characters very much. I’m recommending Password Safe for Windows because I wrote the first version, know the person currently in charge of the code, and trust its security. There are ports of Password Safe to other OSs, but I had nothing to do with those. There are also other password managers out there, if you want to shop around.
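A random generator of that kind is a few lines with Python's standard `secrets` module (the character set here is illustrative; a password manager's own generation policies are configurable):

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length=12):
    """Draw each character from a cryptographically secure RNG,
    the way a password manager's generator does."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())  # e.g. y.)v_|.7)7Bl, different every run
```

The point of `secrets` over `random` is that its output is not predictable from previous output, which is exactly the property a password needs.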

There’s more to passwords than simply choosing a good one:

  1. Never reuse a password you care about. Even if you choose a secure password, the site it’s for could leak it because of its own incompetence. You don’t want someone who gets your password for one application or site to be able to use it for another.
  2. Don’t bother updating your password regularly. Sites that require 90-day—or whatever—password upgrades do more harm than good. Unless you think your password might be compromised, don’t change it.
  3. Beware the “secret question.” You don’t want a backup system for when you forget your password to be easier to break than your password. Really, it’s smart to use a password manager. Or to write your passwords down on a piece of paper and secure that piece of paper.
  4. If a site offers two-factor authentication, seriously consider using it. It’s almost certainly a security improvement.

This essay previously appeared on BoingBoing.

Posted on March 3, 2014 at 7:48 AM

Telepathwords: A New Password Strength Estimator

Telepathwords is a pretty clever research project that tries to evaluate password strength. It’s different from normal strength meters, and I think better.

Telepathwords tries to predict the next character of your passwords by using knowledge of:

  • common passwords, such as those made public as a result of security breaches
  • common phrases, such as those that appear frequently on web pages or in common search queries
  • common password-selection behaviors, such as the use of sequences of adjacent keys
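The flavor of the idea, though not Telepathwords' actual model, can be shown with a toy next-character predictor trained on a list of common passwords (the list here is a stand-in for real breach corpora):

```python
from collections import Counter, defaultdict

# Toy next-character predictor: learn, from common passwords,
# which character most often follows each prefix.
COMMON = ["password", "password1", "letmein", "123456", "1234567"]

def build_model(corpus):
    nexts = defaultdict(Counter)
    for pw in corpus:
        for i in range(len(pw)):
            nexts[pw[:i]][pw[i]] += 1
    return nexts

def predict(model, typed):
    """Most likely next character after what the user has typed so far."""
    counts = model.get(typed)
    return counts.most_common(1)[0][0] if counts else None

model = build_model(COMMON)
print(predict(model, "passwor"))  # -> d (predictable, so the password is weak)
```

If the system can guess your next character, so can a cracker; an unpredictable next character is what a strong password looks like to this model.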

Password-strength evaluators have generally been pretty poor, regularly assessing weak passwords as strong (and vice versa). I like seeing new research in this area.

Posted on December 6, 2013 at 6:19 AM

Human-Machine Trust Failures

I jacked a visitor’s badge from the Eisenhower Executive Office Building in Washington, DC, last month. The badges are electronic; they’re enabled when you check in at building security. You’re supposed to wear it on a chain around your neck at all times and drop it through a slot when you leave.

I kept the badge. I used my body as a shield, and the chain made a satisfying noise when it hit bottom. The guard let me through the gate.

The person after me had problems, though. Some part of the system knew something was wrong, and wouldn’t let her out. Eventually, the guard had to manually override something.

My point in telling this story is not to demonstrate how I beat the EEOB’s security—I’m sure the badge was quickly deactivated and showed up in some missing-badge log next to my name—but to illustrate how security vulnerabilities can result from human/machine trust failures. Something went wrong between when I went through the gate and when the person after me did. The system knew it but couldn’t adequately explain it to the guards. The guards knew it but didn’t know the details. Because the failure occurred when the person after me tried to leave the building, they assumed she was the problem. And when they cleared her of wrongdoing, they blamed the system.

In any hybrid security system, the human portion needs to trust the machine portion. To do so, both must understand the expected behavior for every state—how the system can fail and what those failures look like. The machine must be able to communicate its state and have the capacity to alert the humans when an expected state transition doesn’t happen as expected. Things will go wrong, either by accident or as the result of an attack, and the humans are going to need to troubleshoot the system in real time—that requires understanding on both parts. Each time things go wrong, and the machine portion doesn’t communicate well, the human portion trusts it a little less.

This problem is not specific to security systems, but inducing this sort of confusion is a good way to attack systems. When the attackers understand the system—especially the machine part—better than the humans in the system do, they can create a failure to exploit. Many social engineering attacks fall into this category. Failures also happen the other way. We’ve all experienced trust without understanding, when the human part of the system defers to the machine, even though it makes no sense: “The computer is always right.”

Humans and machines have different strengths. Humans are flexible and can do creative thinking in ways that machines cannot. But they’re easily fooled. Machines are more rigid and can handle state changes and process flows much better than humans can. But they’re bad at dealing with exceptions. If humans are to serve as security sensors, they need to understand what is being sensed. (That’s why “if you see something, say something” fails so often.) If a machine automatically processes input, it needs to clearly flag anything unexpected.

The more machine security is automated, and the more the machine is expected to enforce security without human intervention, the greater the impact of a successful attack. If this sounds like an argument for interface simplicity, it is. The machine design will be necessarily more complicated: more resilience, more error handling, and more internal checking. But the human/computer communication needs to be clear and straightforward. That’s the best way to give humans the trust and understanding they need in the machine part of any security system.

This essay previously appeared in IEEE Security & Privacy.

Posted on September 5, 2013 at 8:32 AM

Hacking Consumer Devices

Last weekend, a Texas couple apparently discovered that the electronic baby monitor in their children’s bedroom had been hacked. According to a local TV station, the couple said they heard an unfamiliar voice coming from the room, went to investigate and found that someone had taken control of the camera monitor remotely and was shouting profanity-laden abuse. The child’s father unplugged the monitor.

What does this mean for the rest of us? How secure are consumer electronic systems, now that they’re all attached to the Internet?

The answer is not very, and it’s been this bad for many years. Security vulnerabilities have been found in all types of webcams, cameras of all sorts, implanted medical devices, cars, and even smart toilets—not to mention yachts, ATMs, industrial control systems and military drones.

All of these things have long been hackable. Those of us who work in security are often amazed that most people don’t know about it.

Why are they hackable? Because security is very hard to get right. It takes expertise, and it takes time. Most companies don’t care because most customers buying security systems and smart appliances don’t know enough to care. Why should a baby monitor manufacturer spend all sorts of money making sure its security is good when the average customer won’t even notice?

Even worse, that consumer will look at two competing baby monitors—a more expensive one with better security, and a cheaper one with minimal security—and buy the cheaper. Without the expertise to make an informed buying decision, cheaper wins.

A lot of hacks happen because the users don’t configure or install their devices properly, but that’s really the fault of the manufacturer. These are supposed to be consumer devices, not specialized equipment for security experts only.

This sort of thing is true in other aspects of society, and we have a variety of mechanisms to deal with it. Government regulation is one of them. For example, few of us can differentiate real pharmaceuticals from snake oil, so the FDA regulates what can be sold and what sorts of claims vendors can make. Independent product testing is another. You and I might not be able to tell a well-made car from a poorly-made one at a glance, but we can both read the reports from a variety of testing agencies.

Computer security has resisted these mechanisms, both because the industry changes so quickly and because this sort of testing is hard and expensive. But the effect is that we’re all being sold a lot of insecure consumer products with embedded computers. And as these computers get connected to the Internet, the problems will get worse.

The moral here isn’t that your baby monitor could be hacked. The moral is that pretty much every “smart” everything can be hacked, and because consumers don’t care, the market won’t fix the problem.

This essay previously appeared on CNN.com. I wrote it in about half an hour, on request, and I’m not really happy with it. I should have talked more about the economics of good security, as well as the economics of hacking. The point is that we don’t have to worry about hackers smart enough to figure out these vulnerabilities, but those dumb hackers who just use software tools written and distributed by the smart hackers. Ah well, next time.

Posted on August 23, 2013 at 6:00 AM

How Apple Continues to Make Security Invisible

Interesting article:

Apple is famously focused on design and human experience as their top guiding principles. When it comes to security, that focus created a conundrum. Security is all about placing obstacles in the way of attackers, but (despite the claims of security vendors) those same obstacles can get in the way of users, too.

[…]

For many years, Apple tended to choose good user experience at the expense of leaving users vulnerable to security risks. That strategy worked for a long time, in part because Apple’s comparatively low market share made its products less attractive targets. But as Apple products began to gain in popularity, many of us in the security business wondered how Apple would adjust its security strategies to its new position in the spotlight.

As it turns out, the company not only handled that change smoothly, it has embraced it. Despite a rocky start, Apple now applies its impressive design sensibilities to security, playing the game its own way and in the process changing our expectations for security and technology.

EDITED TO ADD (7/11): iOS security white paper.

Posted on July 5, 2013 at 1:33 PM

Finding Sociopaths on Facebook

On his blog, Scott Adams suggests that it might be possible to identify sociopaths based on their interactions on social media.

My hypothesis is that science will someday be able to identify sociopaths and terrorists by their patterns of Facebook and Internet use. I’ll bet normal people interact with Facebook in ways that sociopaths and terrorists couldn’t duplicate.

Anyone can post fake photos and acquire lots of friends who are actually acquaintances. But I’ll bet there are so many patterns and tendencies of “normal” use on Facebook that a terrorist wouldn’t be able to successfully fake it.

Okay, but so what? Imagine you had such an amazingly accurate test…then what? Do we investigate those who test positive, even though there’s no suspicion that they’ve actually done anything? Do we follow them around? Subject them to additional screening at airports? Throw them in jail because we know the streets will be safer because of it? Do we want to live in a Minority Report world?

The problem isn’t just that such a system is wrong, it’s that the mathematics of testing makes this sort of thing pretty ineffective in practice. It’s called the “base rate fallacy.” Suppose you have a test that’s 90% accurate in identifying both sociopaths and non-sociopaths. If you assume that 4% of people are sociopaths, then the chance of someone who tests positive actually being a sociopath is only 27%. (For every thousand people tested, 90% of the 40 sociopaths will test positive, but so will 10% of the 960 non-sociopaths: 36 true positives against 96 false positives.) You have to postulate a test with an amazing 99% accuracy—only a 1% false positive rate—even to have an 80% chance of someone testing positive actually being a sociopath.
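The base-rate arithmetic is worth checking directly. A few lines of Python, assuming the test is equally accurate on both groups:

```python
def positive_predictive_value(accuracy, base_rate):
    """P(sociopath | positive test), assuming the test is equally
    accurate on sociopaths and non-sociopaths."""
    true_pos = accuracy * base_rate
    false_pos = (1 - accuracy) * (1 - base_rate)
    return true_pos / (true_pos + false_pos)

print(round(positive_predictive_value(0.90, 0.04), 2))  # -> 0.27
print(round(positive_predictive_value(0.99, 0.04), 2))  # -> 0.8
```

A rare condition plus an imperfect test means most positives are false positives, no matter how good the test sounds in isolation.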

This fallacy isn’t new. It’s the same thinking that caused us to intern Japanese-Americans during World War II, stop people in their cars because they’re black, and frisk them at airports because they’re Muslim. It’s the same thinking behind massive NSA surveillance programs like PRISM. It’s one of the things that scares me about police DNA databases.

Many authors have written stories about thoughtcrime. Who has written about genecrime?

BTW, if you want to meet an actual sociopath, I recommend this book (review here) and this blog.

Posted on June 19, 2013 at 11:19 AM

Trust in IT

Ignore the sensationalist headline. This article is a good summary of the need for trust in IT, and provides some ideas for how to enable more of it.

Virtually everything we work with on a day-to-day basis is built by someone else. Avoiding insanity requires trusting those who designed, developed and manufactured the instruments of our daily existence.

All these other industries we rely on have evolved codes of conduct, regulations, and ultimately laws to ensure minimum quality, reliability and trust. In this light, I find the modern technosphere’s complete disdain for obtaining and retaining trust baffling, arrogant and at times enraging.

Posted on June 11, 2013 at 6:21 AM

A Really Good Article on How Easy it Is to Crack Passwords

Ars Technica gave three experts a 16,000-entry encrypted password file, and asked them to break as many of the passwords as possible. The winner got 90% of them, the loser 62%—in a few hours.

The list of “plains,” as many crackers refer to deciphered hashes, contains the usual list of commonly used passcodes that are found in virtually every breach involving consumer websites. “123456,” “1234567,” and “password” are there, as is “letmein,” “Destiny21,” and “pizzapizza.” Passwords of this ilk are hopelessly weak. Despite the additional tweaking, “p@$$word,” “123456789j,” “letmein1!,” and “LETMEin3” are equally awful….

As big as the word lists that all three crackers in this article wielded—close to 1 billion strong in the case of Gosney and Steube—none of them contained “Coneyisland9/,” “momof3g8kids,” or the more than 10,000 other plains that were revealed with just a few hours of effort. So how did they do it? The short answer boils down to two variables: the website’s unfortunate and irresponsible use of MD5 and the use of non-randomized passwords by the account holders.
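The "irresponsible use of MD5" point can be made concrete: an unsalted fast hash lets an attacker test each guess against every account at once, while a salted, deliberately slow KDF does not. A sketch with Python's standard library, where the cost parameters are illustrative rather than a tuning recommendation:

```python
import hashlib
import os

password = b"momof3g8kids"

# Unsalted MD5: identical passwords hash identically, and a modern
# GPU rig can test billions of these per second.
weak = hashlib.md5(password).hexdigest()

# Salted scrypt: each account gets a random salt, so guessing work
# can't be shared across accounts, and each guess is deliberately
# expensive in both time and memory.
salt = os.urandom(16)
strong = hashlib.scrypt(password, salt=salt, n=2**14, r=8, p=1, maxmem=2**25)

print(weak)
print(salt.hex(), strong.hex())
```

With MD5 the attacker pays one fast hash per guess across the whole file; with salted scrypt they pay a slow, memory-hard computation per guess per account.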

The article goes on to explain how dictionary attacks work, how well they do, and the sorts of passwords they find.

Steube was able to crack “momof3g8kids” because he had “momof3g” in his 111 million dict and “8kids” in a smaller dict.

“The combinator attack got it! It’s cool,” he said. Then referring to the oft-cited xkcd comic, he added: “This is an answer to the batteryhorsestaple thing.”

What was remarkable about all three cracking sessions were the types of plains that got revealed. They included passcodes such as “k1araj0hns0n,” “Sh1a-labe0uf,” “Apr!l221973,” “Qbesancon321,” “DG091101%,” “@Yourmom69,” “ilovetofunot,” “windermere2313,” “tmdmmj17,” and “BandGeek2014.” Also included in the list: “all of the lights” (yes, spaces are allowed on many sites), “i hate hackers,” “allineedislove,” “ilovemySister31,” “iloveyousomuch,” “Philippians4:13,” “Philippians4:6-7,” and “qeadzcwrsfxv1331.” “gonefishing1125” was another password Steube saw appear on his computer screen. Seconds after it was cracked, he noted, “You won’t ever find it using brute force.”
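The combinator attack Steube describes is conceptually simple: concatenate every entry of one wordlist with every entry of another. A toy sketch, with three-entry lists standing in for dictionaries that are hundreds of millions of entries long:

```python
from itertools import product

# Toy combinator attack: join every entry of one wordlist with every
# entry of another, the way "momof3g" + "8kids" was recovered.
big_dict = ["momof3g", "iloveyou", "gonefishing"]
small_dict = ["8kids", "1125", "2014"]

combined = [a + b for a, b in product(big_dict, small_dict)]
print("momof3g8kids" in combined)  # -> True
print(len(combined))               # -> 9
```

This is why multi-word passphrases fall despite never appearing in any single dictionary: the cracker only needs each piece, not the whole.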

Great reading, but nothing theoretically new. Ars Technica wrote about this last year, and Joe Bonneau wrote an excellent commentary.

Password cracking can be evaluated on two nearly independent axes: power (the ability to check a large number of guesses quickly and cheaply using optimized software, GPUs, FPGAs, and so on) and efficiency (the ability to generate large lists of candidate passwords accurately ranked by real-world likelihood using sophisticated models).

I wrote about this same thing back in 2007. The news in 2013, such as it is, is that this kind of thing is getting easier faster than people think. Pretty much anything that can be remembered can be cracked.

If you need to memorize a password, I still stand by the Schneier scheme from 2008:

So if you want your password to be hard to guess, you should choose something that this process will miss. My advice is to take a sentence and turn it into a password. Something like “This little piggy went to market” might become “tlpWENT2m”. That nine-character password won’t be in anyone’s dictionary. Of course, don’t use this one, because I’ve written about it. Choose your own sentence—something personal.

Until this very moment, these passwords were still secure:

  • WIw7,mstmsritt… = When I was seven, my sister threw my stuffed rabbit in the toilet.
  • Wow…doestcst::amazon.cccooommm = Wow, does that couch smell terrible.
  • Ltime@go-inag~faaa! = Long time ago in a galaxy not far away at all.
  • uTVM,TPw55:utvm,tpwstillsecure = Until this very moment, these passwords were still secure.

You get the idea. Combine a personally memorable sentence with some personally memorable tricks for turning that sentence into a lengthy password.

Better, though, is to use random unmemorable alphanumeric passwords (with symbols, if the site will allow them), and a password manager like Password Safe to store them. (If anyone wants to port it to the Mac, iPhone, iPad, or Android, please contact me.) This article does a good job of explaining the same thing. David Pogue likes Dashlane, but doesn’t know if it’s secure.

In related news, Password Safe is a candidate for July’s project-of-the-month on SourceForge. Please vote for it.

EDITED TO ADD (6/7): As a commenter noted, none of this is useful advice if the site puts artificial limits on your password.

EDITED TO ADD (6/14): Various ports of Password Safe. I know nothing about them, nor can I vouch for their security.

Analysis of the xkcd scheme.

Posted on June 7, 2013 at 6:41 AM

The Problems with Managing Privacy by Asking and Giving Consent

New paper by Daniel Solove in the Harvard Law Review: “Privacy Self-Management and the Consent Dilemma”:

Privacy self-management takes refuge in consent. It attempts to be neutral about substance—whether certain forms of collecting, using, or disclosing personal data are good or bad—and instead focuses on whether people consent to various privacy practices. Consent legitimizes nearly any form of collection, use, or disclosure of personal data. Although privacy self-management is certainly a laudable and necessary component of any regulatory regime, I contend that it is being tasked with doing work beyond its capabilities. Privacy self-management does not provide people with meaningful control over their data. First, empirical and social science research demonstrates that there are severe cognitive problems that undermine privacy self-management. These cognitive problems impair individuals’ ability to make informed, rational choices about the costs and benefits of consenting to the collection, use, and disclosure of their personal data.

Second, and more troubling, even well-informed and rational individuals cannot appropriately self-manage their privacy due to several structural problems. There are too many entities collecting and using personal data to make it feasible for people to manage their privacy separately with each entity. Moreover, many privacy harms are the result of an aggregation of pieces of data over a period of time by different entities. It is virtually impossible for people to weigh the costs and benefits of revealing information or permitting its use or transfer without an understanding of the potential downstream uses, further limiting the effectiveness of the privacy self-management framework.

Posted on June 3, 2013 at 6:15 AM