Entries Tagged "usability"


IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft’s User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM

Balancing Security and Usability in Authentication

Since January, the Conficker.B worm has been spreading like wildfire across the Internet: infecting the French Navy, hospitals in Sheffield, the court system in Houston, and millions of computers worldwide. One of the ways it spreads is by cracking administrator passwords on networks. Which leads to the important question: Why in the world are IT administrators still using easy-to-guess passwords?

Computer authentication systems have two basic requirements. They need to keep the bad guys from accessing your account, and they need to allow you to access your account. Both are important, and every authentication system is a balancing act between the two. Too little security, and the bad guys will get in too easily. But if the authentication system is too complicated, restrictive, or hard to use, you won’t be able to—or won’t bother to—use it.

Passwords are the most common authentication system, and a good place to start. They’re very easy to implement and use, which is why they’re so popular. But as computers have become faster, password guessing has become easier. Most people don’t choose passwords that are complicated enough to remain secure against modern password-guessing attacks. Conficker.B is even less clever; it just tries a list of about 200 common passwords.
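Conficker.B’s real list isn’t reproduced here, but a minimal Python sketch of that style of dictionary check, with a handful of illustrative entries standing in for the worm’s actual list, might look like this:

    # Sketch of a dictionary check against a short list of commonly guessed
    # passwords. The entries below are illustrative, not Conficker.B's real list.
    COMMON_PASSWORDS = {
        "password", "123456", "admin", "letmein", "qwerty",
        "abc123", "welcome", "password1",
    }

    def is_easily_guessed(candidate: str) -> bool:
        """Return True if the password would fall to a trivial dictionary attack."""
        return candidate.lower() in COMMON_PASSWORDS

    print(is_easily_guessed("admin"))        # True
    print(is_easily_guessed("Tr0ub4dor&3"))  # False

The worm simply runs this check in reverse, trying each entry as a login attempt; an administrator can run it forward against their own password list as a quick audit.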

To combat password guessing, many systems force users to choose harder-to-guess passwords—requiring minimum lengths, non-alphanumeric characters, etc.—and to change their passwords more frequently. The first makes guessing harder, and the second makes a guessed password less valuable. This, of course, makes the system more annoying, so users respond by writing their passwords down and taping them to their monitors, or simply forgetting them more often. Smarter users write them down and put them in their wallets, or use a secure password database like Password Safe.
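As a rough illustration of what such a policy looks like in code, here is a small checker; the specific thresholds (a 12-character minimum, at least one digit, at least one non-alphanumeric character) are arbitrary examples, not recommendations from the essay:

    import re

    # Simple complexity policy: a minimum length plus two required character
    # classes. The thresholds are arbitrary examples, not recommendations.
    def meets_policy(password: str, min_length: int = 12) -> bool:
        if len(password) < min_length:
            return False
        if not re.search(r"\d", password):            # at least one digit
            return False
        if not re.search(r"[^A-Za-z0-9]", password):  # at least one non-alphanumeric character
            return False
        return True

    print(meets_policy("letmein1"))            # False: too short
    print(meets_policy("S3cure-Pass-Phrase"))  # True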

Users forgetting their passwords can be expensive—sysadmins or customer service reps have to field phone calls and reset passwords—so some systems include a backup authentication system: a secret question. The idea is that if you forget your password, you can authenticate yourself with some personal information that only you know. Your mother’s maiden name was traditional, but these days there are all sorts of secret questions: your favourite schoolteacher, favourite colour, street you grew up on, name of your first pet, and so on. This might make the system more usable, but it also makes it much less secure: the answers are often easy to guess, and may be known by people close to you.

A common enhancement is a one-time password generator, like a SecurID token. This is a small device with a screen that displays a password that changes automatically once a minute. Adding this is called two-factor authentication, and is much more secure, because this token—“something you have”—is combined with a password—“something you know.” But it’s less usable, because the tokens have to be purchased and distributed to all users, and far too often it’s “something you lost or forgot.” And it costs money. Tokens are far more frequently used in corporate environments, but banks and some online gaming worlds have taken to using them—sometimes only as an option, because people don’t like them.
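SecurID’s algorithm is proprietary, so the sketch below uses the open TOTP scheme (RFC 6238) to illustrate the same idea: the token and the server share a secret, and both derive a short code from the current 30-second window. The shared secret here is a placeholder.

    import hashlib
    import hmac
    import struct
    import time

    def totp(secret: bytes, timestep: int = 30, digits: int = 6) -> str:
        """Time-based one-time password (RFC 6238 style, HMAC-SHA1)."""
        counter = int(time.time() // timestep)
        msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
        digest = hmac.new(secret, msg, hashlib.sha1).digest()
        offset = digest[-1] & 0x0F                            # dynamic truncation (RFC 4226)
        code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
        return str(code % (10 ** digits)).zfill(digits)

    shared_secret = b"example-shared-secret"  # placeholder; real tokens use provisioned per-user keys
    print(totp(shared_secret))                # a six-digit code that changes every 30 seconds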

In most cases, how an authentication system works when a legitimate user tries to log on is much more important than how it works when an impostor tries to log on. No security system is perfect, and there is some level of fraud associated with any of these authentication methods. But the instances of fraud are rare compared to the number of times someone tries to log on legitimately. If a given authentication system let the bad guys in one in a hundred times, a bank could decide to live with the problem—or try to solve it in some other way. But if the same authentication system prevented legitimate customers from logging on even one in a thousand times, the number of complaints would be enormous and the system wouldn’t survive one week.
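A back-of-the-envelope calculation makes the asymmetry concrete. The daily volumes below are hypothetical; the failure rates are the ones from the paragraph above.

    # Hypothetical daily volumes for a bank's login system.
    legit_logins_per_day = 1_000_000
    fraud_attempts_per_day = 200

    fraud_success_rate = 1 / 100   # lets the bad guys in one time in a hundred
    false_reject_rate = 1 / 1000   # locks out legitimate customers one time in a thousand

    print("fraudulent logins per day:   ", fraud_attempts_per_day * fraud_success_rate)  # 2.0
    print("locked-out customers per day:", legit_logins_per_day * false_reject_rate)     # 1000.0

Even with a security failure rate ten times worse, the fraud losses are a rounding error next to the support calls and complaints generated by locked-out customers.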

Balancing security and usability is hard, and many organizations get it wrong. But it’s also evolving; organizations needing to tighten their security continue to push more involved authentication methods, and more savvy Internet users are willing to accept them. And certainly IT administrators need to be leading that evolutionary change.

A version of this essay was originally published in The Guardian.

Posted on February 19, 2009 at 1:44 PM

New Police Computer System Impeding Arrests

In Queensland, Australia, policemen are arresting fewer people because their new data-entry system is too annoying:

He said police were growing reluctant to make arrests following the latest phased roll-out of QPRIME, or Queensland Police Records Information Management Exchange.

“They are reluctant to make arrests and they’re showing a lot more discretion in the arrests they make because QPRIME is so convoluted to navigate,” Mr Leavers said. He said minor street offences, some traffic offences and minor property matters were going unchallenged, but not serious offences.

However, Mr Leavers said there had been occasions where offenders were released rather than kept in custody because of the length of time it now took to prepare court summaries.

“There was an occasion where two people were arrested on multiple charges. It took six detectives more than six hours to enter the details into QPRIME,” he said. “It would have taken even longer to do the summary to go to court the next morning, so basically the suspects were released on bail, rather than kept in custody.”

He said jobs could now take up to seven hours to process because of the amount of data entry involved.

This is a good example of how non-security incentives affect security decisions.

Posted on January 22, 2009 at 1:51 PM

Impersonation

Impersonation isn’t new. In 1556, a Frenchman was executed for impersonating Martin Guerre, and this week hackers impersonated Barack Obama on Twitter. It’s not even unique to humans: mockingbirds, Viceroy butterflies, and the mimic octopus all use impersonation as a survival strategy. For people, detecting impersonation is a hard problem for three reasons: we need to verify the identity of people we don’t know, we interact with people through “narrow” communications channels like the telephone and Internet, and we want computerized systems to do the verification for us.

Traditional impersonation involves people fooling people. It’s still done today: impersonating garbage men to collect tips, impersonating parking lot attendants to collect fees, or impersonating the French president to fool Sarah Palin. Impersonating people like policemen, security guards, and meter readers is a common criminal tactic.

These tricks work because we all regularly interact with people we don’t know. No one could successfully impersonate your brother, your best friend, or your boss, because you know them intimately. But a policeman or a parking lot attendant? That’s just someone with a badge or a uniform. But badges and ID cards only help if you know how to verify one. Do you know what a valid police ID looks like? Or how to tell a real telephone repairman’s badge from a forged one?

Still, it’s human nature to trust these credentials. We naturally trust uniforms, even though we know that anyone can wear one. When we visit a Web site, we use the professionalism of the page to judge whether or not it’s really legitimate—never mind that anyone can cut and paste graphics. Watch the next time someone other than law enforcement verifies your ID; most people barely look at it.

Impersonation is even easier over limited communications channels. On the telephone, how can you distinguish someone working at your credit card company from someone trying to steal your account details and login information? On e-mail, how can you distinguish someone from your company’s tech support from a hacker trying to break into your network—or the mayor of Paris from an impersonator? Once in a while someone frees himself from jail by faxing a forged release order to his warden. This is social engineering: impersonating someone convincingly enough to fool the victim.

These days, a lot of identity verification happens with computers. Computers are fast at computation but not very good at judgment, and can be tricked. So people can fool speed cameras by taping a fake license plate over the real one, fingerprint readers with a piece of tape, or automatic face scanners with—and I’m not making this up—a photograph of a face held in front of their own. Even the most bored policeman wouldn’t fall for any of those tricks.

This is why identity theft is such a big problem today. So much authentication happens online, with only a small amount of information: user ID, password, birth date, Social Security number, and so on. Anyone who gets that information can impersonate you to a computer, which doesn’t know any better.

Despite all of these problems, most authentication systems work most of the time. Even something as ridiculous as a faxed signature works, and can be legally binding. But no authentication system is perfect, and impersonation is always possible.

This lack of perfection is okay, though. Security is a trade-off, and any well-designed authentication system balances security with ease of use, customer acceptance, cost, and so on. More authentication isn’t always better. Banks make this trade-off when they don’t bother authenticating signatures on checks under amounts like $25,000; it’s cheaper to deal with fraud after the fact. Web sites make this trade-off when they use simple passwords instead of something more secure, and merchants make this trade-off when they don’t bother verifying your signature against your credit card. We make this trade-off when we accept police badges, Best Buy uniforms, and faxed signatures with only a cursory amount of verification.

Good authentication systems also balance false positives against false negatives. Impersonation is just one way these systems can fail; they can also fail to authenticate the real person. An ATM is better off allowing occasional fraud than preventing legitimate account holders from accessing their money. On the other hand, a false positive in a nuclear launch system is much more dangerous; better not to launch the missiles.

Decentralized authentication systems work better than centralized ones. Open your wallet, and you’ll see a variety of physical tokens used to identify you to different people and organizations: your bank, your credit card company, the library, your health club, and your employer, as well as a catch-all driver’s license used to identify you in a variety of circumstances. That assortment is actually more secure than a single centralized identity card: each system must be broken individually, and breaking one doesn’t give the attacker access to everything. This is one of the reasons that centralized systems like REAL-ID make us less secure.

Finally, any good authentication system uses defense in depth. Since no authentication system is perfect, there need to be other security measures in place if authentication fails. That’s why all of a corporation’s assets and information aren’t available to anyone who can bluff his way into the corporate offices. That’s why credit card companies have expert systems analyzing suspicious spending patterns. And it’s why identity theft won’t be solved by making personal information harder to steal.

We can reduce the risk of impersonation, but it will always be with us; technology cannot “solve” it in any absolute sense. Like any security, the trick is to balance the trade-offs. Too little security, and criminals withdraw money from all our bank accounts. Too much security, and when Barack Obama calls to congratulate you on your reelection, you won’t believe it’s him.

This essay originally appeared in The Wall Street Journal.

Posted on January 9, 2009 at 2:04 PM

Nonviolent Activists Are Now Terrorists

Heard about this:

The Maryland State Police classified 53 nonviolent activists as terrorists and entered their names and personal information into state and federal databases that track terrorism suspects, the state police chief acknowledged yesterday.

Why did they do that?

Both Hutchins and Sheridan said the activists’ names were entered into the state police database as terrorists partly because the software offered limited options for classifying entries.

I know we once had this “either you’re with us or with the terrorists” mentality, but don’t you think that—just maybe—the software should allow for a little more nuance?

Posted on October 9, 2008 at 1:07 PM

Security and Human Behavior

I’m writing from the First Interdisciplinary Workshop on Security and Human Behavior (SHB 08).

Security is both a feeling and a reality, and they’re different. There are several different research communities: technologists who study security systems, and psychologists who study people, not to mention economists, anthropologists and others. Increasingly these worlds are colliding.

  • Security design is by nature psychological, yet many systems ignore this, and cognitive biases lead people to misjudge risk. For example, a key in the corner of a web browser makes people feel more secure than they actually are, while people feel far less secure flying than they actually are. These biases are exploited by various attackers.

  • Security problems relate to risk and uncertainty, and the way we react to them. Cognitive and perception biases affect the way we deal with risk, and therefore the way we understand security—whether that is the security of a nation, of an information system, or of one’s personal information.

  • Many real attacks on information systems exploit psychology more than technology. Phishing attacks trick people into logging on to websites that appear genuine but actually steal passwords. Technical measures can stop some phishing tactics, but stopping users from making bad decisions is much harder. Deception-based attacks are now the greatest threat to online security.

  • In order to be effective, security must be usable—not just by geeks, but by ordinary people. Research into usable security invariably has a psychological component.

  • Terrorism is perceived to be a major threat to society. Yet the actual damage done by terrorist attacks is dwarfed by the secondary effects as target societies overreact. There are many topics here, from the manipulation of risk perception to the anthropology of religion.

  • There are basic research questions; for example, about the extent to which the use and detection of deception in social contexts may have helped drive human evolution.

The dialogue between researchers in security and in psychology is rapidly widening, bringing in more and more disciplines—from security usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other.

About a year ago Ross Anderson and I conceived this conference as a way to bring together computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security. I’ve read a lot—and written some—on psychology and security over the past few years, and have been continually amazed by some of the research that people outside my field have been doing on topics very relevant to my field. Ross and I both thought that bringing these diverse communities together would be fascinating to everyone. So we convinced behavioral economists Alessandro Acquisti and George Loewenstein to help us organize the workshop, invited the people we all have been reading, and also asked them who else to invite. The response was overwhelming. Almost everyone we wanted was able to attend, and the result was a 42-person conference with 35 speakers.

We’re most of the way through the morning, and it’s been even more fascinating than I expected. (Here’s the agenda.) We’ve talked about detecting deception in people, organizational biases in making security decisions, building security “intuition” into Internet browsers, different techniques to prevent crime, complexity and failure, and the modeling of security feeling.

I had high hopes of liveblogging this event, but it’s far too fascinating to spend time writing posts. If you want to read some of the more interesting papers written by the participants, this is a good page to start with.

I’ll write more about the conference later.

EDITED TO ADD (6/30): Ross Anderson has a blog post, where he liveblogs the individual sessions in the comments. And I should add that this was an invitational event—which is why you haven’t heard about it before—and that the room here at MIT is completely full.

EDITED TO ADD (7/1): Matt Blaze has posted audio. And Ross Anderson—link above—is posting paragraph-long summaries for each speaker.

EDITED TO ADD (7/6): Photos of the speakers.

EDITED TO ADD (7/7): MSNBC article on the workshop. And L. Jean Camp’s notes.

Posted on June 30, 2008 at 11:17 AM

Interesting Microsoft Patent Application

Guardian Angel:

An intelligent personalized agent monitors, regulates, and advises a user in decision-making processes for efficiency or safety concerns. The agent monitors an environment and present characteristics of a user and analyzes such information in view of stored preferences specific to one of multiple profiles of the user. Based on the analysis, the agent can suggest or automatically implement a solution to a given issue or problem. In addition, the agent can identify another potential issue that requires attention and suggests or implements action accordingly. Furthermore, the agent can communicate with other users or devices by providing and acquiring information to assist in future decisions. All aspects of environment observation, decision assistance, and external communication can be flexibly limited or allowed as desired by the user.

Note that Bill Gates and Ray Ozzie are co-inventors.

Posted on May 13, 2008 at 7:05 AM

People and Security Rules

In this article analyzing a security failure resulting in live nuclear warheads being flown over the U.S., there’s an interesting commentary on people and security rules:

Indeed, the gaff [sic] that allowed six nukes out over three major American cities (Omaha, Neb., Kansas City, Mo., and Little Rock, Ark.) could have been avoided if the Air Force personnel had followed procedure.

“Let’s not forget that the existing rules were pretty tight,” says Hans Kristensen, director of the Nuclear Information Project for the Federation of American Scientists. “Much of what went wrong occurred because people didn’t follow these tight rules. You can have all sorts of rules and regulations, but they still won’t do any good if the people don’t follow them.”

Procedures are a tough balancing act. If they’re too lax, there will be security problems. If they’re too tight, people will get around them and there will be security problems.

Posted on April 14, 2008 at 6:47 AM

Israel Implementing IFF System for Commercial Aircraft

Israel is implementing an IFF (identification, friend or foe) system for commercial aircraft, designed to differentiate legitimate planes from terrorist-controlled planes.

The news article implies that it’s a basic challenge-and-response system. Ground control issues some kind of alphanumeric challenge to the plane. The pilot types the challenge into some hand-held computer device, and reads back the reply. Authentication is achieved by 1) physical possession of the device, and 2) typing a legitimate PIN into the device to activate it.

The article talks about a distress mode, where the pilot signals that a terrorist is holding a gun to his head. Likely, that’s done by typing a special distress PIN into the device, and reading back whatever the screen displays.
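The article doesn’t say how the device actually computes its reply, but one plausible construction is an HMAC-based challenge-response in which the distress PIN selects a second key, so ground control can tell a duress reply from a normal one while a hijacker cannot. Everything below (keys, PINs, code length) is a hypothetical sketch, not the fielded system.

    import hashlib
    import hmac

    # Hypothetical per-aircraft secrets and PINs; placeholders only.
    NORMAL_KEY = b"per-aircraft-secret-normal"
    DURESS_KEY = b"per-aircraft-secret-duress"
    NORMAL_PIN, DURESS_PIN = "2468", "1357"

    def pilot_response(challenge: str, pin: str) -> str:
        """The device turns the typed challenge into a short code; the PIN picks the key."""
        if pin == NORMAL_PIN:
            key = NORMAL_KEY
        elif pin == DURESS_PIN:
            key = DURESS_KEY
        else:
            raise ValueError("invalid PIN")
        return hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()[:8]

    def ground_verify(challenge: str, response: str) -> str:
        """Ground control checks the reply against both keys and learns which one was used."""
        for label, key in (("ok", NORMAL_KEY), ("duress", DURESS_KEY)):
            expected = hmac.new(key, challenge.encode(), hashlib.sha256).hexdigest()[:8]
            if hmac.compare_digest(response, expected):
                return label
        return "invalid"

    challenge = "K7QX93"
    print(ground_verify(challenge, pilot_response(challenge, NORMAL_PIN)))  # ok
    print(ground_verify(challenge, pilot_response(challenge, DURESS_PIN)))  # duress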

The military has had this sort of system—first paper-based, and eventually computer-based—for decades. The critical issue with using this on commercial aircraft is how to deal with user error. The system has to be easy to use and the device hard to lose, or there will be a lot of false alarms.

Posted on March 10, 2008 at 12:24 PM
