Entries Tagged "usability"


Unauthentication

In computer security, a lot of effort is spent on the authentication problem. Whether it’s passwords, secure tokens, secret questions, image mnemonics, or something else, engineers are continually coming up with more complicated—and hopefully more secure—ways for you to prove you are who you say you are over the Internet.

This is important stuff, as anyone with an online bank account or remote corporate network knows. But a lot less thought and work have gone into the other end of the problem: how do you tell the system on the other end of the line that you’re no longer there? How do you unauthenticate yourself?

My home computer requires me to log out or turn my computer off when I want to unauthenticate. This works for me because I know enough to do it, but lots of people just leave their computers on and running when they walk away. As a result, many office computers are left logged in when people go to lunch, or when they go home for the night. This, obviously, is a security vulnerability.

The most common way to combat this is by having the system time out. I could have my computer log me out automatically after a certain period of inactivity—five minutes, for example. Getting it right requires some fine tuning, though. Log the person out too quickly, and he gets annoyed; wait too long before logging him out, and the system could be vulnerable during that time. My corporate e-mail server logs me out after 10 minutes or so, and I regularly get annoyed at it.
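The timeout mechanism itself is simple enough to sketch. Here is a minimal illustration in Python; the `Session` class, the injectable clock, and the five-minute default are invented for this sketch, not any real system's API:

```python
import time

class Session:
    """Toy session that silently unauthenticates after a period of idleness."""

    def __init__(self, timeout_seconds=300, clock=time.monotonic):
        self.timeout = timeout_seconds
        self.clock = clock            # injectable, so the sketch is testable
        self.last_activity = clock()

    def touch(self):
        """Any user activity resets the idle timer."""
        self.last_activity = self.clock()

    def is_authenticated(self):
        """False once the idle period exceeds the timeout."""
        return (self.clock() - self.last_activity) < self.timeout
```

All of the fine tuning lives in that one `timeout_seconds` number, which is exactly why getting it right is hard.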

Some systems have experimented with a token: a USB authentication token that has to be plugged in for the computer to operate, or an RFID token that logs people out automatically when the token moves more than a certain distance from the computer. Of course, people will be prone to just leave the token plugged in to their computer all the time; but if you attach it to their car keys or the badge they have to wear at all times when walking around the office, the risk is minimized.

That’s expensive, though. A research project used a Bluetooth device, like a cellphone, and measured its proximity to a computer. The system could be programmed to lock the computer if the Bluetooth device moved out of range.
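The proximity idea reduces to using signal strength as a crude distance proxy. A hedged sketch, in which `read_rssi` and `lock_screen` are hypothetical stand-ins for platform-specific calls:

```python
def should_lock(rssi_dbm, threshold_dbm=-70):
    """Received signal strength grows more negative as the device moves
    away, so a reading below the threshold suggests the owner walked off.
    The -70 dBm threshold is illustrative, not calibrated."""
    return rssi_dbm < threshold_dbm

# A background daemon would poll something like this (read_rssi and
# lock_screen are hypothetical, platform-specific functions):
#
#   while True:
#       if should_lock(read_rssi()):
#           lock_screen()
#       time.sleep(5)
```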

Some systems log people out after every transaction. This wouldn’t work for computers, but it can work for ATMs. The machine spits my card out before it gives me my cash, or just requires a card swipe, and makes sure I take it out of the machine. If I want to perform another transaction, I have to reinsert my card and enter my PIN a second time.

There’s a physical analogue that everyone understands: door locks. Does your door lock behind you when you close the door, or does it remain unlocked until you lock it? The first is a system that automatically logs you out; the second requires you to log out manually. Both types of locks are sold and used, and which one you choose depends on both how you use the door and who you expect to try to break in.

Designing systems for usability is hard, especially when security is involved. Almost by definition, making something secure makes it less usable. Choosing an unauthentication method depends a lot on how the system is used as well as the threat model. You have to balance increasing security with pissing the users off, and getting that balance right takes time and testing, and is much more an art than a science.

This essay originally appeared on ThreatPost.

Posted on September 28, 2009 at 1:34 PM

Password Advice

Here’s some complicated advice on securing passwords that—I’ll bet—no one follows.

  • DO use a password manager such as those reviewed by Scott Dunn in his Sept. 18, 2008,
    Insider Tips
    column. Although Scott focused on free programs, I really like CallPod’s Keeper, a $15 utility that comes in Windows, Mac, and iPhone versions and allows you to keep all your passwords in sync. Find more information about the program and a download link for the 15-day free-trial version on the vendor’s site.

  • DO change passwords frequently. I change mine every six months or whenever I sign in to a site I haven’t visited in a long time. Don’t reuse old passwords. Password managers can assign expiration dates to your passwords and remind you when the passwords are about to expire.
  • DO keep your passwords secret. Putting them into a file on your computer, e-mailing them to others, or writing them on a piece of paper in your desk is tantamount to giving them away. If you must allow someone else access to an account, create a temporary password just for them and then change it back immediately afterward.

    No matter how much you may trust your friends or colleagues, you can’t trust their computers. If they need ongoing access, consider creating a separate account with limited privileges for them to use.

  • DON’T use passwords composed of dictionary words, birthdays, family and pet names, addresses, or any other personal information. Don’t use repeated characters such as 111 or sequences such as abc, qwerty, or 123 in any part of your password.
  • DON’T use the same password for different sites. Otherwise, someone who culls your Facebook or Twitter password in a phishing exploit could, for example, access your bank account.
  • DON’T allow your computer to automatically sign in on boot-up, or use any automatic e-mail, chat, or browser sign-ins. Avoid using the same Windows sign-in password on two different computers.

  • DON’T use the “remember me” or automatic sign-in option available on many Web sites. Keep sign-ins under the control of your password manager instead.

  • DON’T enter passwords on a computer you don’t control—such as a friend’s computer—because you don’t know what spyware or keyloggers might be on that machine.

  • DON’T access password-protected accounts over open Wi-Fi networks—or any other network you don’t trust—unless the site is secured via https. Use a VPN if you travel a lot. (See Ian “Gizmo” Richards’ Dec. 11, 2008, Best Software column, “Connect safely over open Wi-Fi networks,” for Wi-Fi security tips.)
  • DON’T enter a password or even your account name in any Web page you access via an e-mail link. These are most likely phishing scams. Instead, enter the normal URL for that site directly into your browser, and proceed to the page in question from there.
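The pattern-based DON’Ts above lend themselves to a mechanical check. Here is a sketch of just those rules; it is not a full strength meter (a real one would also need a dictionary and an entropy estimate), and the `personal_info` parameter is something the caller would have to supply:

```python
import re

def has_weak_patterns(password, personal_info=()):
    """Flag the patterns the DON'T rules warn about (illustrative only)."""
    lowered = password.lower()
    # repeated characters such as "111" or "aaa"
    if re.search(r"(.)\1\1", lowered):
        return True
    # common keyboard/alphabet/number sequences
    for seq in ("abc", "qwerty", "123"):
        if seq in lowered:
            return True
    # personal information (names, birthdays, addresses) supplied by caller
    return any(info.lower() in lowered for info in personal_info if info)
```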

I regularly break seven of those rules. How about you? (Here’s my advice on choosing secure passwords.)

Posted on August 10, 2009 at 6:57 AM

Security vs. Usability

Good essay: “When Security Gets in the Way.”

The numerous incidents of defeating security measures prompts my cynical slogan: The more secure you make something, the less secure it becomes. Why? Because when security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security. Hence the prevalence of doors propped open by bricks and wastebaskets, of passwords pasted on the fronts of monitors or hidden under the keyboard or in the drawer, of home keys hidden under the mat or above the doorframe or under fake rocks that can be purchased for this purpose.

We are being sent a mixed message: on the one hand, we are continually forced to use arbitrary security procedures. On the other hand, even the professionals ignore many of them. How is the ordinary person to know which ones matter and which don’t? The confusion has unexpected negative side-effects. I once discovered a computer system that was missing essential security patches. When I queried the computer’s user, I discovered that the continual warning against clicking on links or agreeing to requests from pop-up windows had been too effective. This user was so frightened of unwittingly agreeing to install all those nasty things from “out there” that all requests were denied, even the ones for essential security patches. On reflection, this is sensible behavior: It is very difficult to distinguish the legitimate from the illegitimate. Even experts slip up, as the confessions occasionally reported in various computer digests attest.

Posted on August 5, 2009 at 6:10 AM

Too Many Security Warnings Results in Complacency

Research that proves what we already knew:

Crying Wolf: An Empirical Study of SSL Warning Effectiveness

Abstract. Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100-participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.

Posted on August 4, 2009 at 10:01 AM

Gaze Tracking Software Protecting Privacy

Interesting use of gaze tracking software to protect privacy:

Chameleon uses gaze-tracking software and camera equipment to track an authorized reader’s eyes to show only that one person the correct text. After a 15-second calibration period in which the software essentially “learns” the viewer’s gaze patterns, anyone looking over that user’s shoulder sees dummy text that randomly and constantly changes.

To tap the broader consumer market, Anderson built a more consumer-friendly version called PrivateEye, which can work with a simple Webcam. The software blurs a user’s monitor when he or she turns away. It also detects other faces in the background, and a small video screen pops up to alert the user that someone is looking at the screen.

How effective this is will mostly be a usability problem, but I like the idea of a system detecting if anyone else is looking at my screen.

Slashdot story.

EDITED TO ADD (7/14): A demo.

Posted on July 14, 2009 at 6:20 AM

The Pros and Cons of Password Masking

Usability guru Jakob Nielsen opened up a can of worms when he made the case for unmasking passwords in his blog. I chimed in that I agreed. Almost 165 comments on my blog (and several articles, essays, and many other blog posts) later, the consensus is that we were wrong.

I was certainly too glib. Like any security countermeasure, password masking has value. But like any countermeasure, password masking is not a panacea. And the costs of password masking need to be balanced with the benefits.

The cost is accuracy. When users don’t get visual feedback from what they’re typing, they’re more prone to make mistakes. This is especially true with character strings that have non-standard characters and capitalization. This has several ancillary costs:

  • Users get pissed off.
  • Users are more likely to choose easy-to-type passwords, reducing both mistakes and security. Removing password masking will make people more comfortable with complicated passwords: they’ll become easier to type, and therefore easier to use.

The benefits of password masking are more obvious:

  • Security from shoulder surfing. If people can’t look over your shoulder and see what you’re typing, they’re much less likely to be able to steal your password. Yes, they can look at your fingers instead, but that’s much harder than looking at the screen. Surveillance cameras are also an issue: it’s easier to watch someone’s fingers on recorded video, but reading a cleartext password off a screen is trivial.

    In some situations, there is a trust dynamic involved. Do you type your password while your boss is standing over your shoulder watching? How about your spouse or partner? Your parent or child? Your teacher or students? At ATMs, there’s a social convention of standing away from someone using the machine, but that convention doesn’t apply to computers. You might not trust the person standing next to you enough to let him see your password, but might not feel comfortable telling him to look away. Password masking solves that social awkwardness.

  • Security from screen scraping malware. This is less of an issue; keyboard loggers are more common and unaffected by password masking. And if you have that kind of malware on your computer, you’ve got all sorts of problems.
  • A security “signal.” Password masking alerts users, and I’m thinking users who aren’t particularly security savvy, that passwords are a secret.

I believe that shoulder surfing isn’t nearly the problem it’s made out to be. One, lots of people use their computers in private, with no one looking over their shoulders. Two, personal handheld devices are used very close to the body, making shoulder surfing all that much harder. Three, it’s hard to quickly and accurately memorize a random non-alphanumeric string that flashes on the screen for a second or so.

This is not to say that shoulder surfing isn’t a threat. It is. And, as many readers pointed out, password masking is one of the reasons it isn’t more of a threat. And the threat is greater for those who are not fluent computer users: slow typists and people who are likely to choose bad passwords. But I believe that the risks are overstated.

Password masking is definitely important on public terminals with short PINs. (I’m thinking of ATMs.) The value of the PIN is large, shoulder surfing is more common, and a four-digit PIN is easy to remember in any case.

And lastly, this problem largely disappears on the Internet on your personal computer. Most browsers include the ability to save and then automatically populate password fields, making the usability problem go away at the expense of another security problem (the security of the password becomes the security of the computer). There’s a Firefox plugin that gets rid of password masking. And programs like my own Password Safe allow passwords to be cut and pasted into applications, also eliminating the usability problem.

One approach is to make it a configurable option. High-risk banking applications could turn password masking on by default; other applications could turn it off by default. Browsers in public locations could turn it on by default. I like this, but it complicates the user interface.

A reader mentioned BlackBerry’s solution, which is to display each character briefly before masking it; that seems like an excellent compromise.
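The rendering rule behind that compromise is trivial to sketch. In a real interface the revealed character would also be re-masked after a second or so; this sketch shows only the masking logic:

```python
def mask(password, reveal_last=True):
    """Render a password field BlackBerry-style: every character as a
    bullet except (optionally) the most recently typed one."""
    if not password:
        return ""
    if reveal_last:
        return "\u2022" * (len(password) - 1) + password[-1]
    return "\u2022" * len(password)
```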

I, for one, would like the option. I cannot type complicated WEP keys into Windows—twice! what’s the deal with that?—without making mistakes. I cannot type my rarely used and very complicated PGP keys without making a mistake unless I turn off password masking. That’s what I was reacting to when I said “I agree.”

So was I wrong? Maybe. Okay, probably. Password masking definitely improves security; many readers pointed out that they regularly use their computer in crowded environments, and rely on password masking to protect their passwords. On the other hand, password masking reduces accuracy and makes it less likely that users will choose secure and hard-to-remember passwords. I will concede that the password masking trade-off is more beneficial than I thought in my snap reaction, but also that the answer is not nearly as obvious as we have historically assumed.

Posted on July 3, 2009 at 1:42 PM

The Problem with Password Masking

I agree with this:

It’s time to show most passwords in clear text as users type them. Providing feedback and visualizing the system’s status have always been among the most basic usability principles. Showing undifferentiated bullets while users enter complex codes definitely fails to comply.

Most websites (and many other applications) mask passwords as users type them, and thereby theoretically prevent miscreants from looking over users’ shoulders. Of course, a truly skilled criminal can simply look at the keyboard and note which keys are being pressed. So, password masking doesn’t even protect fully against snoopers.

More importantly, there’s usually nobody looking over your shoulder when you log in to a website. It’s just you, sitting all alone in your office, suffering reduced usability to protect against a non-issue.

Shoulder surfing isn’t very common, and cleartext passwords greatly reduce errors. It has long annoyed me when I can’t see what I type: in Windows logins, in PGP, and so on.

EDITED TO ADD (6/26): To be clear, I’m not talking about PIN masking on public terminals like ATMs. I’m talking about password masking on personal computers.

EDITED TO ADD (6/30): Two articles on the subject.

Posted on June 26, 2009 at 6:17 AM

Second SHB Workshop Liveblogging (9)

The eighth, and final, session of the SHB09 was optimistically titled “How Do We Fix the World?” I moderated, which meant that my liveblogging was more spotty, especially in the discussion section.

David Mandel, Defense Research and Development Canada (suggested reading: Applied Behavioral Science in Support of Intelligence Analysis, Radicalization: What does it mean?; The Role of Instigators in Radicalization to Violent Extremism), is part of the Thinking, Risk, and Intelligence Group at DRDC Toronto. His first observation: “Be wary of purported world-fixers.” His second observation: when you claim that something is broken, it is important to specify the respects in which it’s broken and what fixed looks like. His third observation: it is also important to analyze the consequences of any potential fix. An analysis of the way things are is perceptually based, but an analysis of the way things should be is value-based. He also presented data showing that predictions made by intelligence analysts (at least in one Canadian organization) were pretty good.

Ross Anderson, Cambridge University (suggested reading: Database State; book chapters on psychology and terror), asked “Where’s the equilibrium?” Both privacy and security are moving targets, but he expects that someday soon there will be a societal equilibrium. Incentives to price discriminate go up, and the cost to do so goes down. He gave several examples of database systems that reached very different equilibrium points, depending on corporate lobbying, political realities, public outrage, etc. He believes that privacy will be regulated, the only question being when and how. “Where will the privacy boundary end up, and why? How can we nudge it one way or another?”

Alma Whitten, Google (suggested reading: Why Johnny can’t encrypt: A usability evaluation of PGP 5.0), presented a set of ideals about privacy (very European in flavor) and some of the engineering challenges they present. “Engineering challenge #1: How to support access and control to personal data that isn’t authenticated? Engineering challenge #2: How to inform users about both authenticated and unauthenticated data? Engineering challenge #3: How to balance giving users control over data collection versus detecting and stopping abuse? Engineering challenge #4: How to give users fine-grained control over their data without overwhelming them with options? Engineering challenge #5: How to link sequential actions while preventing them from being linkable to a person? Engineering challenge #6: How to make the benefits of aggregate data analysis apparent to users? Engineering challenge #7: How to avoid or detect inadvertent recording of data that can be linked to an individual?” (Note that Alma requested not to be recorded.)

John Mueller, Ohio State University (suggested reading: Reacting to Terrorism: Probabilities, Consequences, and the Persistence of Fear; Evaluating Measures to Protect the Homeland from Terrorism; Terrorphobia: Our False Sense of Insecurity), talked about terrorism and the Department of Homeland Security. Terrorism isn’t a threat; it’s a problem and a concern, certainly, but the word “threat” is still extreme. Al Qaeda isn’t a threat, even though they’re the most serious potential attacker against the U.S. and Western Europe. And terrorists are overwhelmingly stupid. Meanwhile, the terrorism issue “has become a self-licking ice cream cone.” In other words, it’s now an ever-perpetuating government bureaucracy. There are virtually an infinite number of targets; the odds of any one target being attacked are effectively zero; terrorists pick targets largely at random; if you protect one target, you make other targets less safe; most targets are vulnerable in the physical sense, but invulnerable in the sense that they can be rebuilt relatively cheaply (even something like the Pentagon); some targets simply can’t be protected; if you’re going to protect some targets, you need to determine whether they should really be protected. (I recommend his book, Overblown.)

Adam Shostack, Microsoft (his blog), pointed out that even the problem of figuring out what part of the problem to work on first is difficult. One of the issues is shame. We don’t want to talk about what’s wrong, so we can’t use that information to determine where we want to go. We make excuses—customers will flee, people will sue, stock prices will go down—even though we know that those excuses have been demonstrated to be false.

During the discussion, there was a lot of talk about the choice between informing users and bombarding them with information they can’t understand. And lots more that I couldn’t transcribe.

And that’s it. SHB09 was a fantastic workshop, filled with interesting people and interesting discussion. Next year in the other Cambridge.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 4:55 PM

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disneyworld, at international borders. The government of Canada is evaluating using iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that, to defend ourselves against this proliferation of biometric authentication, individuals should publish their biometrics. The rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: Phishing is a mismatch problem, between what’s in the user’s head and where the URL is actually going. SSL doesn’t work, but how should websites authenticate themselves to users? Her solution is protected links: a set of secure bookmarks in protected browsers.) She went on to describe a prototype and tests run with user subjects.

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
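The core of the idea, counting attestations from a pre-designated trustee set against a threshold, can be sketched as follows. The k-of-n shape is faithful to the social-authentication concept, but the class and names are illustrative, not Reeder's actual protocol:

```python
class TrusteeRecovery:
    """Account recovery gated on k-of-n trustee attestations (a sketch)."""

    def __init__(self, trustees, threshold):
        self.trustees = set(trustees)   # people designated in advance
        self.threshold = threshold      # attestations required to recover
        self.vouched = set()

    def vouch(self, trustee):
        """Record one trustee's attestation; non-trustees are ignored."""
        if trustee in self.trustees:
            self.vouched.add(trustee)

    def can_recover(self):
        return len(self.vouched) >= self.threshold
```

Using a set means a single trustee vouching repeatedly, or an attacker posing as a stranger, gets no closer to the threshold.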

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-Word Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since hazards are generally not very hazardous, most people just ignore them. “Often, software asks the user and provides little or no information to help user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
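That triage, blocking the clearly dangerous, staying quiet about the clearly safe, and asking the user only in the grey area, reduces to a three-way decision. A sketch with invented thresholds:

```python
def warning_action(danger_probability, block_above=0.9, allow_below=0.1):
    """Return one of "block", "allow", or "warn" for a website.
    The 0.9/0.1 thresholds are illustrative, not from the paper."""
    if danger_probability >= block_above:
        return "block"      # high probability of danger: don't ask, block
    if danger_probability <= allow_below:
        return "allow"      # low probability: don't bother the user
    return "warn"           # grey area: help the user decide
```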

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies to deal with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but made them more likely to be afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again people were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture in picture attacks”: where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, user experience, mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions (except for the couple of presenters who would rather not be taped); I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

