Entries Tagged "usability"

Security vs. Usability

Good essay: “When Security Gets in the Way.”

The numerous incidents of defeating security measures prompt my cynical slogan: The more secure you make something, the less secure it becomes. Why? Because when security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security. Hence the prevalence of doors propped open by bricks and wastebaskets, of passwords pasted on the fronts of monitors or hidden under the keyboard or in the drawer, of home keys hidden under the mat or above the doorframe or under fake rocks that can be purchased for this purpose.

We are being sent a mixed message: on the one hand, we are continually forced to use arbitrary security procedures. On the other hand, even the professionals ignore many of them. How is the ordinary person to know which ones matter and which don’t? The confusion has unexpected negative side-effects. I once discovered a computer system that was missing essential security patches. When I queried the computer’s user, I discovered that the continual warning against clicking on links or agreeing to requests from pop-up windows had been too effective. This user was so frightened of unwittingly agreeing to install all those nasty things from “out there” that all requests were denied, even the ones for essential security patches. On reflection, this is sensible behavior: It is very difficult to distinguish the legitimate from the illegitimate. Even experts slip up, as the confessions occasionally reported in various computer digests attest.

Posted on August 5, 2009 at 6:10 AM

Too Many Security Warnings Results in Complacency

Research that proves what we already knew:

Crying Wolf: An Empirical Study of SSL Warning Effectiveness

Abstract. Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100-participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.
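
The last recommendation in that abstract, block unsafe connections instead of warning about them, is easy to express in code. Here is a minimal Python sketch of a fail-closed TLS client using only the standard ssl module; an invalid certificate raises an exception and no connection is made, rather than asking the user to decide. The host name is just a placeholder.

```python
import socket
import ssl

def connect_strict(host: str, port: int = 443) -> None:
    """Connect over TLS, failing closed on any certificate problem."""
    context = ssl.create_default_context()  # verifies certificates and hostnames
    context.check_hostname = True
    context.verify_mode = ssl.CERT_REQUIRED

    # A bad certificate raises ssl.SSLCertVerificationError here; there is
    # no warning dialog for the user to click through.
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("Connected with", tls.version())

if __name__ == "__main__":
    connect_strict("www.example.com")  # placeholder host
```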

Posted on August 4, 2009 at 10:01 AM

Gaze Tracking Software Protecting Privacy

Interesting use of gaze tracking software to protect privacy:

Chameleon uses gaze-tracking software and camera equipment to track an authorized reader’s eyes to show only that one person the correct text. After a 15-second calibration period in which the software essentially “learns” the viewer’s gaze patterns, anyone looking over that user’s shoulder sees dummy text that randomly and constantly changes.

To tap the broader consumer market, Anderson built a more consumer-friendly version called PrivateEye, which can work with a simple Webcam. The software blurs a user’s monitor when he or she turns away. It also detects other faces in the background, and a small video screen pops up to alert the user that someone is looking at the screen.
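
The detection side of this is not exotic. As a rough illustration (my own toy sketch, not PrivateEye’s code), here is a Python loop that uses OpenCV’s stock face detector on a webcam feed and flags the moments when anything other than exactly one face is visible; a real product would blur the screen instead of printing a message.

```python
import time
import cv2  # third-party package: opencv-python

# Haar cascade face detector that ships with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)  # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

        # Zero faces: the user looked away. Two or more: possible shoulder surfer.
        if len(faces) != 1:
            print(f"Privacy event: {len(faces)} face(s) in view")
        time.sleep(0.5)
finally:
    cap.release()
```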

How effective this is will mostly come down to usability, but I like the idea of a system that detects whether anyone else is looking at my screen.

Slashdot story.

EDITED TO ADD (7/14): A demo.

Posted on July 14, 2009 at 6:20 AM

The Pros and Cons of Password Masking

Usability guru Jakob Nielsen opened up a can of worms when he made the case for unmasking passwords in his blog. I chimed in that I agreed. Almost 165 comments on my blog (and several articles, essays, and many other blog posts) later, the consensus is that we were wrong.

I was certainly too glib. Like any security countermeasure, password masking has value. But like any countermeasure, password masking is not a panacea. And the costs of password masking need to be balanced with the benefits.

The cost is accuracy. When users don’t get visual feedback from what they’re typing, they’re more prone to make mistakes. This is especially true with character strings that have non-standard characters and capitalization. This has several ancillary costs:

  • Users get pissed off.
  • Users are more likely to choose easy-to-type passwords, reducing both mistakes and security. Removing password masking will make people more comfortable with complicated passwords: they’ll become easier to memorize and easier to use.

The benefits of password masking are more obvious:

  • Security from shoulder surfing. If people can’t look over your shoulder and see what you’re typing, they’re much less likely to be able to steal your password. Yes, they can look at your fingers instead, but that’s much harder than looking at the screen. Surveillance cameras are also an issue: it’s easier to watch someone’s fingers on recorded video, but reading a cleartext password off a screen is trivial.

    In some situations, there is a trust dynamic involved. Do you type your password while your boss is standing over your shoulder watching? How about your spouse or partner? Your parent or child? Your teacher or students? At ATMs, there’s a social convention of standing away from someone using the machine, but that convention doesn’t apply to computers. You might not trust the person standing next to you enough to let him see your password, but may not feel comfortable telling him to look away. Password masking solves that social awkwardness.

  • Security from screen scraping malware. This is less of an issue; keyboard loggers are more common and unaffected by password masking. And if you have that kind of malware on your computer, you’ve got all sorts of problems.
  • A security “signal.” Password masking alerts users, and I’m thinking users who aren’t particularly security savvy, that passwords are a secret.

I believe that shoulder surfing isn’t nearly the problem it’s made out to be. One, lots of people use their computers in private, with no one looking over their shoulders. Two, personal handheld devices are used very close to the body, making shoulder surfing all that much harder. Three, it’s hard to quickly and accurately memorize a random non-alphanumeric string that flashes on the screen for a second or so.

This is not to say that shoulder surfing isn’t a threat. It is. And, as many readers pointed out, password masking is one of the reasons it isn’t more of a threat. And the threat is greater for those who are not fluent computer users: slow typists and people who are likely to choose bad passwords. But I believe that the risks are overstated.

Password masking is definitely important on public terminals with short PINs. (I’m thinking of ATMs.) The value of the PIN is large, shoulder surfing is more common, and a four-digit PIN is easy to remember in any case.

And lastly, this problem largely disappears on the Internet on your personal computer. Most browsers include the ability to save and then automatically populate password fields, making the usability problem go away at the expense of another security problem (the security of the password becomes the security of the computer). There’s a Firefox plugin that gets rid of password masking. And programs like my own Password Safe allow passwords to be cut and pasted into applications, also eliminating the usability problem.
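
For what it’s worth, the cut-and-paste workflow is trivial to script. Here’s a minimal sketch using the third-party pyperclip package (my choice for illustration; this is not how Password Safe itself works): the password goes onto the clipboard for a few seconds so it can be pasted instead of typed, and is then overwritten.

```python
import time
import pyperclip  # third-party: pip install pyperclip

def copy_password(password: str, clear_after: float = 10.0) -> None:
    """Put a password on the clipboard so it can be pasted instead of typed,
    then overwrite the clipboard so it doesn't linger."""
    pyperclip.copy(password)
    print(f"Password on clipboard for {clear_after:.0f} seconds...")
    time.sleep(clear_after)
    pyperclip.copy("")  # best effort; clipboard-history tools may still keep a copy

if __name__ == "__main__":
    copy_password("correct horse battery staple")  # placeholder secret
```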

One approach is to make it a configurable option. High-risk banking applications could turn password masking on by default; other applications could turn it off by default. Browsers in public locations could turn it on by default. I like this, but it complicates the user interface.
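
As a rough sketch of the configurable option (masked by default, with a user-controlled toggle), here’s what it looks like in Python and Tkinter; any real implementation would live in the browser or application itself.

```python
import tkinter as tk

def toggle_mask() -> None:
    # Show cleartext while the box is checked, mask otherwise.
    entry.config(show="" if show_var.get() else "\u2022")

root = tk.Tk()
root.title("Password entry with optional masking")

show_var = tk.BooleanVar(value=False)  # masked by default, the high-risk setting

entry = tk.Entry(root, show="\u2022", width=30)
entry.pack(padx=10, pady=5)

tk.Checkbutton(root, text="Show password", variable=show_var,
               command=toggle_mask).pack(padx=10, pady=(0, 10))

root.mainloop()
```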

A reader mentioned BlackBerry’s solution, which is to display each character briefly before masking it; that seems like an excellent compromise.

I, for one, would like the option. I cannot type complicated WEP keys into Windows—twice! what’s the deal with that?—without making mistakes. I cannot type my rarely used and very complicated PGP keys without making a mistake unless I turn off password masking. That’s what I was reacting to when I said “I agree.”

So was I wrong? Maybe. Okay, probably. Password masking definitely improves security; many readers pointed out that they regularly use their computer in crowded environments, and rely on password masking to protect their passwords. On the other hand, password masking reduces accuracy and makes it less likely that users will choose secure and hard-to-remember passwords. I will concede that the password masking trade-off is more beneficial than I thought in my snap reaction, but also that the answer is not nearly as obvious as we have historically assumed.

Posted on July 3, 2009 at 1:42 PM

The Problem with Password Masking

I agree with this:

It’s time to show most passwords in clear text as users type them. Providing feedback and visualizing the system’s status have always been among the most basic usability principles. Showing undifferentiated bullets while users enter complex codes definitely fails to comply.

Most websites (and many other applications) mask passwords as users type them, and thereby theoretically prevent miscreants from looking over users’ shoulders. Of course, a truly skilled criminal can simply look at the keyboard and note which keys are being pressed. So, password masking doesn’t even protect fully against snoopers.

More importantly, there’s usually nobody looking over your shoulder when you log in to a website. It’s just you, sitting all alone in your office, suffering reduced usability to protect against a non-issue.

Shoulder surfing isn’t very common, and cleartext passwords greatly reduce errors. It has long annoyed me when I can’t see what I type: in Windows logins, in PGP, and so on.

EDITED TO ADD (6/26): To be clear, I’m not talking about PIN masking on public terminals like ATMs. I’m talking about password masking on personal computers.

EDITED TO ADD (6/30): Two articles on the subject.

Posted on June 26, 2009 at 6:17 AM

Second SHB Workshop Liveblogging (9)

The eighth, and final, session of SHB09 was optimistically titled “How Do We Fix the World?” I moderated, which meant that my liveblogging was more spotty, especially in the discussion section.

David Mandel, Defense Research and Development Canada (suggested reading: Applied Behavioral Science in Support of Intelligence Analysis; Radicalization: What does it mean?; The Role of Instigators in Radicalization to Violent Extremism), is part of the Thinking, Risk, and Intelligence Group at DRDC Toronto. His first observation: “Be wary of purported world-fixers.” His second observation: when you claim that something is broken, it is important to specify the respects in which it’s broken and what fixed looks like. His third observation: it is also important to analyze the consequences of any potential fix. An analysis of the way things are is perceptually based, but an analysis of the way things should be is value-based. He also presented data showing that predictions made by intelligence analysts (at least in one Canadian organization) were pretty good.

Ross Anderson, Cambridge University (suggested reading: Database State; book chapters on psychology and terror), asked “Where’s the equilibrium?” Both privacy and security are moving targets, but he expects that someday soon there will be a societal equilibrium. Incentives to price discriminate go up, and the cost to do so goes down. He gave several examples of database systems that reached very different equilibrium points, depending on corporate lobbying, political realities, public outrage, etc. He believes that privacy will be regulated, the only question being when and how. “Where will the privacy boundary end up, and why? How can we nudge it one way or another?”

Alma Whitten, Google (suggested reading: Why Johnny can’t encrypt: A usability evaluation of PGP 5.0), presented a set of ideals about privacy (very European like) and some of the engineering challenges they present. “Engineering challenge #1: How to support access and control to personal data that isn’t authenticated? Engineering challenge #2: How to inform users about both authenticated and unauthenticated data? Engineering challenge #3: How to balance giving users control over data collection versus detecting and stopping abuse? Engineering challenge #4: How to give users fine-grained control over their data without overwhelming them with options? Engineering challenge #5: How to link sequential actions while preventing them from being linkable to a person? Engineering challenge #6: How to make the benefits of aggregate data analysis apparent to users? Engineering challenge #7: How to avoid or detect inadvertent recording of data that can be linked to an individual?” (Note that Alma requested not to be recorded.)

John Mueller, Ohio State University (suggested reading: Reacting to Terrorism: Probabilities, Consequences, and the Persistence of Fear; Evaluating Measures to Protect the Homeland from Terrorism; Terrorphobia: Our False Sense of Insecurity), talked about terrorism and the Department of Homeland Security. Terrorism isn’t a threat; it’s a problem and a concern, certainly, but the word “threat” is still extreme. Al Qaeda isn’t a threat, even though they’re the most serious potential attacker against the U.S. and Western Europe. And terrorists are overwhelmingly stupid. Meanwhile, the terrorism issue “has become a self-licking ice cream cone.” In other words, it’s now an ever-perpetuating government bureaucracy. There are virtually an infinite number of targets; the odds of any one target being targeted are effectively zero; terrorists pick targets largely at random; if you protect one target, it makes other targets less safe; most targets are vulnerable in the physical sense, but invulnerable in the sense that they can be rebuilt relatively cheaply (even something like the Pentagon); some targets simply can’t be protected; if you’re going to protect some targets, you need to determine if they should really be protected. (I recommend his book, Overblown.)

Adam Shostack, Microsoft (his blog), pointed out that even the problem of figuring out what part of the problem to work on first is difficult. One of the issues is shame. We don’t want to talk about what’s wrong, so we can’t use that information to determine where we want to go. We make excuses—customers will flee, people will sue, stock prices will go down—even though we know that those excuses have been demonstrated to be false.

During the discussion, there was a lot of talk about the choice between informing users and bombarding them with information they can’t understand. And lots more that I couldn’t transcribe.

And that’s it. SHB09 was a fantastic workshop, filled with interesting people and interesting discussion. Next year in the other Cambridge.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 4:55 PM

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disneyworld, at international borders. The government of Canada is evaluating using iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that, to defend ourselves against this proliferation of biometrics for authentication, individuals should publish their own biometrics. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: Phishing is a mismatch problem between what’s in the user’s head and where the URL is actually going. SSL doesn’t work, but how should websites authenticate themselves to users? Her solution is protected links: a set of secure bookmarks in protected browsers.) She went on to describe a prototype and tests run with user subjects.

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
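
Reeder’s actual protocol is more involved than this, but the core trustee idea can be sketched as a simple k-of-n scheme. Everything below (random one-time codes, a 3-of-4 threshold, hashed storage) is my own toy illustration, not what he presented.

```python
import hmac
import secrets
from hashlib import sha256

# Toy illustration only: each trustee gets a one-time recovery code; the
# account unlocks when at least THRESHOLD of them vouch for the user.
THRESHOLD = 3

def issue_codes(trustees):
    """Give each trustee a random code; the server stores only hashes."""
    codes = {name: secrets.token_urlsafe(16) for name in trustees}
    stored = {name: sha256(code.encode()).hexdigest() for name, code in codes.items()}
    return codes, stored

def recover(stored, presented):
    """Count how many presented codes match; allow a reset at the threshold."""
    good = sum(
        1 for name, code in presented.items()
        if name in stored and hmac.compare_digest(
            stored[name], sha256(code.encode()).hexdigest())
    )
    return good >= THRESHOLD

codes, stored = issue_codes(["alice", "bob", "carol", "dave"])
print(recover(stored, {n: codes[n] for n in ["alice", "bob", "carol"]}))  # True
print(recover(stored, {"alice": codes["alice"], "bob": "wrong-code"}))    # False
```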

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since hazards are generally not very hazardous, most people just ignore them. “Often, software asks the user and provides little or no information to help the user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
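
The block / don’t-bother / help-decide policy she describes amounts to a pair of thresholds on some estimated danger score. A minimal sketch, with made-up scores and cutoffs:

```python
# Hypothetical thresholds; a real system would derive the danger score from
# blacklists, reputation services, heuristics, and so on.
BLOCK_ABOVE = 0.8
ALLOW_BELOW = 0.2

def decide(danger_score: float) -> str:
    if danger_score >= BLOCK_ABOVE:
        return "block"   # high probability of danger: don't even ask the user
    if danger_score <= ALLOW_BELOW:
        return "allow"   # low probability: don't bother the user
    return "warn"        # grey area: show a warning with supporting context

for score in (0.05, 0.5, 0.95):
    print(score, decide(score))
```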

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies to deal with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but made them more likely to be afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again people were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture in picture attacks”: where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, user experience, mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions, except for the couple of presenters who would rather not be taped; I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Election Fraud in Kentucky

I think this is the first documented case of election fraud in the U.S. using electronic voting machines (there have been lots of documented cases of errors and voting problems, but this one involves actual maliciousness):

Five Clay County officials, including the circuit court judge, the county clerk, and election officers, were arrested Thursday after they were indicted on federal charges accusing them of using corrupt tactics to obtain political power and personal gain.

The 10-count indictment, unsealed Thursday, accused the defendants of a conspiracy from March 2002 until November 2006 that violated the Racketeer Influenced and Corrupt Organizations Act (RICO). RICO is a federal statute that prosecutors use to combat organized crime. The defendants were also indicted for extortion, mail fraud, obstruction of justice, conspiracy to injure voters’ rights and conspiracy to commit voter fraud.

According to the indictment, these alleged criminal actions affected the outcome of federal, local, and state primary and general elections in 2002, 2004, and 2006.

From BradBlog:

Clay County uses the horrible ES&S iVotronic system for all of its votes at the polling place. The iVotronic is a touch-screen Direct Recording Electronic (DRE) device, offering no evidence, of any kind, that any vote has ever been recorded as per the voter’s intent. If the allegations are correct here, there would likely have been no way to discover, via post-election examination of machines or election results, that votes had been manipulated on these machines.

ES&S is the largest distributor of voting systems in America and its iVotronic system—which is well-documented to have lost and flipped votes on many occasions—is likely the most widely-used DRE system in the nation. It’s currently in use in some 419 jurisdictions in 18 states including Arkansas, Colorado, Florida, Indiana, Kansas, Kentucky, Missouri, Mississippi, North Carolina, New Jersey, Ohio, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, Wisconsin, and West Virginia.

ArsTechnica has more, and here’s the actual indictment; BradBlog has excerpts.

The fraud itself is very low-tech, and didn’t make use of any of the documented vulnerabilities in the ES&S iVotronic machines; it was basic social engineering. Matt Blaze explains:

The iVotronic is a popular Direct Recording Electronic (DRE) voting machine. It displays the ballot on a computer screen and records voters’ choices in internal memory. Voting officials and machine manufacturers cite the user interface as a major selling point for DRE machines—it’s already familiar to voters used to navigating touchscreen ATMs, computerized gas pumps, and so on, and thus should avoid problems like the infamous “butterfly ballot”. Voters interact with the iVotronic primarily by touching the display screen itself. But there’s an important exception: above the display is an illuminated red button labeled “VOTE” (see photo at right). Pressing the VOTE button is supposed to be the final step of a voter’s session; it adds their selections to their candidates’ totals and resets the machine for the next voter.

The Kentucky officials are accused of taking advantage of a somewhat confusing aspect of the way the iVotronic interface was implemented. In particular, the behavior (as described in the indictment) of the version of the iVotronic used in Clay County apparently differs a bit from the behavior described in ES&S’s standard instruction sheet for voters [pdf – see page 2]. A flash-based iVotronic demo available from ES&S here shows the same procedure, with the VOTE button as the last step. But evidently there’s another version of the iVotronic interface in which pressing the VOTE button is only the second to last step. In those machines, pressing VOTE invokes an extra “confirmation” screen. The vote is only actually finalized after a “confirm vote” box is touched on that screen. (A different flash demo that shows this behavior with the version of the iVotronic equipped with a printer is available from ES&S here). So the iVotronic VOTE button doesn’t necessarily work the way a voter who read the standard instructions might expect it to.

The indictment describes a conspiracy to exploit this ambiguity in the iVotronic user interface by having pollworkers systematically (and incorrectly) tell voters that pressing the VOTE button is the last step. When a misled voter would leave the machine with the extra “confirm vote” screen still displayed, a pollworker would quietly “correct” the not-yet-finalized ballot before casting it. It’s a pretty elegant attack, exploiting little more than a poorly designed, ambiguous user interface, printed instructions that conflict with actual machine behavior, and public unfamiliarity with equipment that most citizens use at most once or twice each year. And once done, it leaves behind little forensic evidence to expose the deed.

Read the rest of Blaze’s post for some good analysis on the attack and what it says about iVotronic. He led the team that analyzed the security of that very machine:

We found numerous exploitable security weaknesses in these machines, many of which would make it easy for a corrupt voter, pollworker, or election official to tamper with election results (see our report for details).

[…]

On the one hand, we might be comforted by the relatively “low tech” nature of the attack—no software modifications, altered electronic records, or buffer overflow exploits were involved, even though the machines are, in fact, quite vulnerable to such things. But a close examination of the timeline in the indictment suggests that even these “simple” user interface exploits might well portend more technically sophisticated attacks sooner, rather than later.

Count 9 of the Kentucky indictment alleges that the Clay County officials first discovered and conspired to exploit the iVotronic “confirm screen” ambiguity around June 2004. But Kentucky didn’t get iVotronics until at the earliest late 2003; according to the state’s 2003 HAVA Compliance Plan [pdf], no Kentucky county used the machines as of mid-2003. That means that the officials involved in the conspiracy managed to discover and work out the operational details of the attack soon after first getting the machines, and were able to use it to alter votes in the next election.

[…]

But that’s not the worst news in this story. Even more unsettling is the fact that none of the published security analyses of the iVotronic—including the one we did at Penn—had noticed the user interface weakness. The first people to have discovered this flaw, it seems, didn’t publish or report it. Instead, they kept it to themselves and used it to steal votes.

Me on electronic voting machines, from 2004.

Posted on March 24, 2009 at 6:41 AM

IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft’s User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.
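
The point isn’t Time Machine specifically; it’s that the backup requires no decisions from the user after setup. As a toy stand-in (placeholder paths, and nothing like how Time Machine actually works), a script like this copies a folder to a timestamped snapshot once a day with no interaction at all:

```python
import shutil
import time
from pathlib import Path

# Placeholder paths; adjust for your own machine.
SOURCE = Path.home() / "Documents"
DEST = Path("/Volumes/BackupDrive")  # hypothetical external drive mount point

def snapshot() -> None:
    """Copy SOURCE to a timestamped folder on the backup drive, if present."""
    if not DEST.exists():
        print("Backup drive not mounted; skipping this run.")
        return
    target = DEST / time.strftime("snapshot-%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, target)
    print("Backed up to", target)

if __name__ == "__main__":
    while True:
        snapshot()
        time.sleep(24 * 60 * 60)  # run once a day with no user interaction
```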

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM
