Entries Tagged "usability"

Nonsecurity Considerations in Security Decisions

Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department’s Blackberries or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision. This issue’s articles on managing organizational security make this point clear.

Below is a diagram of a security decision. At its core are assets, which a security system protects. Security can fail in two ways: either attackers can successfully bypass it, or it can mistakenly block legitimate users. There are, of course, more users than attackers, so the second kind of failure is often more important. There’s also a feedback mechanism with respect to security countermeasures: both users and attackers learn about the security and its failings. Sometimes they learn how to bypass security, and sometimes they learn not to bother with the asset at all.

Threats are complicated: attackers have certain goals, and they implement specific attacks to achieve them. Attackers can be legitimate users of assets, as well (imagine a terrorist who needs to travel by air, but eventually wants to blow up a plane). And a perfectly reasonable outcome of defense is attack diversion: the attacker goes after someone else’s asset instead.

Asset owners control the security system, but not directly. They implement security through some sort of policy—either formal or informal—that some combination of trusted people and trusted systems carries out. Owners are affected by risks … but really, only by perceived risks. They’re also affected by a host of other considerations, including those legitimate users mentioned previously, and the trusted people needed to implement the security policy.

Looking over the diagram, it’s obvious that the effectiveness of security is only a minor consideration in an asset owner’s security decision. And that’s how it should be.

Whether a security countermeasure repels or allows attacks (green and red arrows, respectively) is just a small consideration when making a security trade-off.

This essay originally appeared in IEEE Security and Privacy.

Posted on June 7, 2007 at 11:25 AM

"One Laptop per Child" Security System

It’s called “Bitfrost,” and it’s interesting:

We have set out to create a system that is both drastically more secure and provides drastically more usable security than any mainstream system currently on the market. One result of the dedication to usability is that there is only one protection provided by the Bitfrost platform that requires user response, and even then, it’s a simple ‘yes or no’ question understandable even by young children. The remainder of the security is provided behind the scenes. But pushing the envelope on both security and usability is a tall order, and it’s important to note that we have neither tried to create, nor do we believe we have created, a “perfectly secure” system. Notions of perfect security in the real world are foolish, and we distance ourselves up front from any such claims.

Read the design principles and design goals. And there’s an article on the Wired website, and there’s a Slashdot thread.

What they propose to do is radical, and different—just like the whole One Laptop Per Child program. Definitely worth paying attention to, and supporting if possible.

Posted on February 14, 2007 at 7:04 AM

Architecture and Airport Security

Good essay by Matt Blaze:

Somehow, for all the attention to minutiae in the guidelines, everything ends up just slightly wrong by the time it gets put together at an airport. Even if we accept some form of passenger screening as a necessary evil these days, today’s checkpoints seem like case studies in basic usability failure designed to inflict maximum frustration on everyone involved. The tables aren’t quite at the right height to smoothly enter the X-ray machines, bins slide off the edges of tables, there’s never enough space or seating for putting shoes back on as you leave the screening area, basic instructions have to be yelled across crowded hallways. According to the TSA’s manual, there are four models of standard approved X-ray machines, from two different manufacturers. All four have slightly different heights, and all are different from the heights of the standard approved tables. Do the people setting this stuff up ever actually fly? And if they can’t even get something as simple as the furniture right, how confident should we be in the less visible but more critical parts of the system that we don’t see every time we fly?

Yes, Matt Blaze now has a blog. See also his essay on making your own fake boarding pass.

Posted on January 12, 2007 at 7:08 AM

Real-World Passwords

How good are the passwords people are choosing to protect their computers and online accounts?

It’s a hard question to answer because data is scarce. But recently, a colleague sent me some spoils from a MySpace phishing attack: 34,000 actual user names and passwords.

The attack was pretty basic. The attackers created a fake MySpace login page, and collected login information when users thought they were accessing their own account on the site. The data was forwarded to various compromised web servers, where the attackers would harvest it later.

MySpace estimates that more than 100,000 people fell for the attack before it was shut down. The data I have is from two different collection points, and was cleaned of the small percentage of people who realized they were responding to a phishing attack. I analyzed the data, and this is what I learned.

Password Length: While 65 percent of passwords contain eight characters or less, 17 percent are made up of six characters or less. The average password is eight characters long.

Specifically, the length distribution looks like this:

1-4 characters: 0.82 percent
5 characters: 1.1 percent
6 characters: 15 percent
7 characters: 23 percent
8 characters: 25 percent
9 characters: 17 percent
10 characters: 13 percent
11 characters: 2.7 percent
12 characters: 0.93 percent
13-32 characters: 0.93 percent

Yes, there’s a 32-character password: “1ancheste23nite41ancheste23nite4.” Other long passwords are “fool2thinkfool2thinkol2think” and “dokitty17darling7g7darling7.”

Character Mix: While 81 percent of passwords are alphanumeric, 28 percent are just lowercase letters plus a single final digit—and two-thirds of those have the single digit 1. Only 3.8 percent of passwords are a single dictionary word, and another 12 percent are a single dictionary word plus a final digit—once again, two-thirds of the time that digit is 1.

numbers only: 1.3 percent
letters only: 9.6 percent
alphanumeric: 81 percent
non-alphanumeric: 8.3 percent

Only 0.34 percent of users have the user name portion of their e-mail address as their password.

Common Passwords: The top 20 passwords are (in order): password1, abc123, myspace1, password, blink182, qwerty1, fuckyou, 123abc, baseball1, football1, 123456, soccer, monkey1, liverpool1, princess1, jordan23, slipknot1, superman1, iloveyou1 and monkey. (Different analysis here.)

The most common password, “password1,” was used in 0.22 percent of all accounts. The frequency drops off pretty fast after that: “abc123” and “myspace1” were only used in 0.11 percent of all accounts, “soccer” in 0.04 percent and “monkey” in 0.02 percent.
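
Reproducing these numbers is straightforward once you have the raw list. Here’s a minimal Python sketch of this kind of analysis, assuming the leaked passwords are available as a plain list of strings (the sample input at the bottom is made up):

```python
from collections import Counter

def analyze(passwords):
    n = len(passwords)

    # Length distribution
    for length, count in sorted(Counter(len(p) for p in passwords).items()):
        print(f"{length} characters: {100 * count / n:.2f} percent")

    # Character mix: numbers only / letters only / alphanumeric / other
    def mix(p):
        if p.isdigit():
            return "numbers only"
        if p.isalpha():
            return "letters only"
        if p.isalnum():
            return "alphanumeric"   # both letters and digits, nothing else
        return "non-alphanumeric"   # contains at least one symbol

    for category, count in Counter(mix(p) for p in passwords).most_common():
        print(f"{category}: {100 * count / n:.1f} percent")

    # Most common passwords
    for pw, count in Counter(passwords).most_common(20):
        print(f"{pw}: {100 * count / n:.2f} percent")

analyze(["password1", "abc123", "password1", "blink182"])  # made-up sample
```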

For those who don’t know, Blink 182 is a band. Presumably lots of people use the band’s name because it has numbers in its name, and therefore it seems like a good password. The band Slipknot doesn’t have any numbers in its name, which explains the 1. The password “jordan23” refers to basketball player Michael Jordan and his number. And, of course, “myspace” and “myspace1” are easy-to-remember passwords for a MySpace account. I don’t know what the deal is with monkeys.

We used to quip that “password” is the most common password. Now it’s “password1.” Who said users haven’t learned anything about security?

But seriously, passwords are getting better. I’m impressed that less than 4 percent were dictionary words and that the great majority were at least alphanumeric. Writing in 1989, Daniel Klein was able to crack (.gz) 24 percent of his sample passwords with a small dictionary of just 63,000 words, and found that the average password was 6.4 characters long.

And in 1992 Gene Spafford cracked (.pdf) 20 percent of passwords with his dictionary, and found an average password length of 6.8 characters. (Both studied Unix passwords, with a maximum length at the time of 8 characters.) And they both reported a much greater percentage of all lowercase, and only upper- and lowercase, passwords than emerged in the MySpace data. The concept of choosing good passwords is getting through, at least a little.

On the other hand, the MySpace demographic is pretty young. Another password study (.pdf) in November looked at 200 corporate employee passwords: 20 percent letters only, 78 percent alphanumeric, 2.1 percent with non-alphanumeric characters, and a 7.8-character average length. Better than 15 years ago, but not as good as MySpace users. Kids really are the future.

None of this changes the reality that passwords have outlived their usefulness as a serious security device. Over the years, password crackers have been getting faster and faster. Current commercial products can test tens—even hundreds—of millions of passwords per second. At the same time, there’s a maximum complexity to the passwords average people are willing to memorize (.pdf). Those lines crossed years ago, and typical real-world passwords are now software-guessable. AccessData’s Password Recovery Toolkit—at 200,000 guesses per second—would have been able to crack 23 percent of the MySpace passwords in 30 minutes, 55 percent in 8 hours.
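
The arithmetic behind those figures is worth making explicit. A back-of-the-envelope sketch (the 200,000-guesses-per-second rate is the one quoted above; the keyspace comparison is my own illustration):

```python
RATE = 200_000  # guesses per second, per the Password Recovery Toolkit figure

print(f"30 minutes: {RATE * 30 * 60:,} guesses")      # 360,000,000
print(f"8 hours:    {RATE * 8 * 60 * 60:,} guesses")  # 5,760,000,000

# For comparison, exhausting every 8-character password drawn from
# lowercase letters and digits (36 symbols) takes much longer:
keyspace = 36 ** 8  # about 2.8 trillion
print(f"36^8 keyspace: {keyspace / RATE / 86_400:.0f} days")  # about 163 days
```

That 55 percent of passwords fall within six billion guesses says nothing about the size of the keyspace; it says that real-world passwords cluster in a small, predictable corner of it.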

Of course, this analysis assumes that the attacker can get his hands on the encrypted password file and work on it offline, at his leisure; i.e., that the same password was used to encrypt an e-mail, file or hard drive. Passwords can still work if you can prevent offline password-guessing attacks, and watch for online guessing. They’re also fine in low-value security situations, or if you choose really complicated passwords and use something like Password Safe to store them. But otherwise, security by password alone is pretty risky.

This essay originally appeared on Wired.com.

Posted on December 14, 2006 at 7:39 AM

New Voting Protocol

Interesting voting protocol from Ron Rivest:

Abstract:

We present a new paper-based voting method with attractive security properties. Not only can each voter verify that her vote is recorded as she intended, but she gets a “receipt” that she can take home that can be used later to verify that her vote is actually included in the final tally. Her receipt, however, does not allow her to prove to anyone else how she voted.

The new voting system is in some ways similar to recent cryptographic voting system proposals, but it achieves very nearly the same objectives without using any cryptography at all. Its principles are simple and easy to understand.

In this “ThreeBallot” voting system, each voter casts three paper ballots (with certain restrictions on how they may be filled out, so the tallying works). These paper ballots are of course “voter-verifiable.” All ballots cast are scanned and published on a web site, so anyone may correctly compute the election result.

A voter receives a copy of one of her ballots as her “receipt,” which she may take home. Only the voter knows which ballot she copied for her receipt. The voter is unable to use her receipt to prove how she voted or to sell her vote, as the receipt doesn’t reveal how she voted.

A voter can check that the web site contains a ballot matching her receipt. Deletion or modification of ballots is thus detectable; so the integrity of the election is verifiable.

The method can be implemented in a quite practical manner, although further refinements to improve usability would be nice.

Very clever.
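
The core trick is easy to simulate. Roughly: each voter marks her chosen candidate on exactly two of her three ballots and every other candidate on exactly one, so a candidate’s true vote count is the total number of marks minus the number of voters. Here’s a minimal Python sketch (receipts and ballot-validity checks are omitted, and the election data is made up):

```python
import random

CANDIDATES = ["Alice", "Bob"]

def cast(choice):
    """Fill out three ballots: two marks for the chosen candidate,
    one mark for every other candidate."""
    ballots = [dict.fromkeys(CANDIDATES, 0) for _ in range(3)]
    for cand in CANDIDATES:
        marks = 2 if cand == choice else 1
        for i in random.sample(range(3), marks):
            ballots[i][cand] = 1
    return ballots

def tally(published, num_voters):
    """A candidate's vote count is total marks minus one mark per voter."""
    return {c: sum(b[c] for b in published) - num_voters for c in CANDIDATES}

votes = ["Alice"] * 7 + ["Bob"] * 5
published = [b for v in votes for b in cast(v)]
random.shuffle(published)            # published ballots aren't linked to voters
print(tally(published, len(votes)))  # {'Alice': 7, 'Bob': 5}
```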

Posted on October 2, 2006 at 1:27 PM

Educating Users

I’ve met users, and they’re not fluent in security. They might be fluent in spreadsheets, eBay, or sending jokes over e-mail, but they’re not technologists, let alone security people. Of course, they’re making all sorts of security mistakes. I too have tried educating users, and I agree that it’s largely futile.

Part of the problem is generational. We’ve seen this with all sorts of technologies: electricity, telephones, microwave ovens, VCRs, video games. Older generations approach newfangled technologies with trepidation, distrust and confusion, while the children who grew up with them understand them intuitively.

But while the don’t-get-it generation will die off eventually, we won’t suddenly enter an era of unprecedented computer security. Technology moves too fast these days; there’s no time for any generation to become fluent in anything.

Earlier this year, researchers ran an experiment in London’s financial district. Someone stood on a street corner and handed out CDs, saying they were a “special Valentine’s Day promotion.” Many people, some working at sensitive bank workstations, ran the program on the CDs on their work computers. The program was benign—all it did was alert some computer on the Internet that it was running—but it could just as easily have been malicious. The researchers concluded that users don’t care about security. That’s simply not true. Users care about security—they just don’t understand it.

I don’t see a failure of education; I see a failure of technology. It shouldn’t have been possible for those users to run that CD, or for a random program stuffed into a banking computer to “phone home” across the Internet.

The real problem is that computers don’t work well. The industry has convinced everyone that people need a computer to survive, and at the same time it’s made computers so complicated that only an expert can maintain them.

If I try to repair my home heating system, I’m likely to break all sorts of safety rules. I have no experience in that sort of thing, and honestly, there’s no point in trying to educate me. But the heating system works fine without my having to learn anything about it. I know how to set my thermostat and to call a professional if anything goes wrong.

Punishment isn’t something you do instead of education; it’s a form of education—a very primal form of education best suited to children and animals (and experts aren’t so sure about children). I say we stop punishing people for failures of technology, and demand that computer companies market secure hardware and software.

This originally appeared in the April 2006 issue of Information Security Magazine, as the second part of a point/counterpoint with Marcus Ranum. You can read Marcus’s essay here, if you are a subscriber. (Subscriptions are free to “qualified” people.)

EDITED TO ADD (9/11): Here’s Marcus’s half.

Posted on August 22, 2006 at 12:35 PM

Human/Bear Security Trade-Off

Interesting example:

Back in the 1980s, Yosemite National Park was having a serious problem with bears: They would wander into campgrounds and break into the garbage bins. This put both bears and people at risk. So the Park Service started installing armored garbage cans that were tricky to open—you had to swing a latch, align two bits of handle, that sort of thing. But it turns out it’s actually quite tricky to get the design of these cans just right. Make it too complex and people can’t get them open to put away their garbage in the first place. Said one park ranger, “There is considerable overlap between the intelligence of the smartest bears and the dumbest tourists.”

It’s a tough balance to strike. People are smart, but they’re impatient and unwilling to spend a lot of time solving the problem. Bears are dumb, but they’re tenacious and are willing to spend hours solving the problem. Given those two constraints, creating a trash can that can both work for people and not work for bears is not easy.

Posted on August 18, 2006 at 7:02 AM

When "Off" Doesn't Mean Off

According to the specs of the Nintendo Wii (the company’s new game machine), “Wii can communicate with the Internet even when the power is turned off.” Nintendo accentuates the positive: “This WiiConnect24 service delivers a new surprise or game update, even if users do not play with Wii,” while ignoring the possibility that Nintendo can deactivate a game if it chooses to do so, or that someone else can deliver a different—not so wanted—surprise.

We all know that, but what’s interesting here is that Nintendo is changing the meaning of the word “off.” We are all conditioned to believe that “off” means off, and therefore safe. But in Nintendo’s case, “off” really means something like “on standby.” If users expect the Nintendo Wii to be truly off, they need to pull the power plug—assuming there isn’t a battery foiling that tactic. Maybe they need to pull both the power plug and the Ethernet cable. Unless they have a wireless network at home.

Maybe there is no way to turn the Nintendo Wii off.

There’s a serious security problem here, made worse by a bad user interface. “Off” should mean off.

Posted on May 10, 2006 at 6:45 AM

Microsoft Vista's Endless Security Warnings

Paul Thurrott has posted an excellent essay on the problems with Windows Vista. Most interesting to me is how they implement UAP (User Account Protection):

Modern operating systems like Linux and Mac OS X operate under a security model where even administrative users don’t get full access to certain features unless they provide an in-place logon before performing any task that might harm the system. This type of security model protects users from themselves, and it is something that Microsoft should have added to Windows years and years ago.

Here’s the good news. In Windows Vista, Microsoft is indeed moving to this kind of security model. The feature is called User Account Protection (UAP) and, as you might expect, it prevents even administrative users from performing potentially dangerous tasks without first providing security credentials, thus ensuring that the user understands what they’re doing before making a critical mistake. It sounds like a good system. But this is Microsoft we’re talking about here. They completely botched UAP.

The bad news, then, is that UAP is a sad, sad joke. It’s the most annoying feature that Microsoft has ever added to any software product, and yes, that includes that ridiculous Clippy character from older Office versions. The problem with UAP is that it throws up an unbelievable number of warning dialogs for even the simplest of tasks. That these dialogs pop up repeatedly for the same action would be comical if it weren’t so amazingly frustrating. It would be hilarious if it weren’t going to affect hundreds of millions of people in a few short months. It is, in fact, almost criminal in its insidiousness.

Let’s look at a typical example. One of the first things I do whenever I install a new Windows version is download and install Mozilla Firefox. If we forget, for a moment, the number of warning dialogs we get during the download and install process (including a brazen security warning from Windows Firewall for which Microsoft should be chastised), let’s just examine one crucial, often overlooked issue. Once Firefox is installed, there are two icons on my Desktop I’d like to remove: The Setup application itself and a shortcut to Firefox. So I select both icons and drag them to the Recycle Bin. Simple, right?

Wrong. Here’s what you have to go through to actually delete those files in Windows Vista. First, you get a File Access Denied dialog explaining that you don’t, in fact, have permission to delete a … shortcut?? To an application you just installed??? Seriously?

OK, fine. You can click a Continue button to “complete this operation.” But that doesn’t complete anything. It just clears the desktop for the next dialog, which is a Windows Security window. Here, you need to give your permission to continue something opaquely called a “File Operation.” Click Allow, and you’re done. Hey, that’s not too bad, right? Just two dialogs to read, understand, and then respond correctly to. What’s the big deal?

What if you’re doing something a bit more complicated? Well, lucky you, the dialogs stack right up, one after the other, in a seemingly never-ending display of stupidity. Indeed, sometimes you’ll find yourself unable to do certain things for no good reason, and you click Allow buttons until you’re blue in the face. It will never stop bothering you, unless you agree to stop your silliness and leave that file on the desktop where it belongs. Mark my words, this will happen to you. And you will hate it.

The problem with lots of warning dialog boxes is that they don’t provide security. Users stop reading them. They think of them as annoyances, as an extra click required to get a feature to work. Clicking through gets embedded into muscle memory, and when it actually matters the user won’t even realize it.

Jeff Atwood says the same thing:

The problem with the Security Through Endless Warning Dialogs school of thought is that it doesn’t work. All those earnest warning dialogs eventually blend together into a giant “click here to get work done” button that nobody bothers to read any more. The operating system cries wolf so much that when a real wolf—in the form of a virus or malware—rolls around, you’ll mindlessly allow it access to whatever it wants, just out of habit.

So does Rick Strahl:

Then there are the security dialogs. Ah yes, now we’re making progress: Ask users on EVERY program you launch that isn’t signed whether they want to elevate permissions. Uh huh, this is going to work REAL WELL. We know how well that worked with unsigned ActiveX controls in Internet Explorer—so well that even Microsoft isn’t signing most of its own ActiveX controls. Give too many warnings that are not quite reasonable and people will never read the dialogs and just click them anyway… I know I started doing that in the short use I’ve had on Vista.

These dialog boxes are not security for the user, they’re CYA security from the user. When some piece of malware trashes your system, Microsoft can say: “You gave the program permission to do that; it’s not our fault.”

Warning dialog boxes are only effective if the user has the ability to make intelligent decisions about the warnings. If the user cannot do that, they’re just annoyances. And they’re annoyances that don’t improve security.

EDITED TO ADD (5/8): Commentary.

Posted on April 24, 2006 at 1:43 PM
