Entries Tagged "cooperation"

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies for dealing with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but did make them more afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again they were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.
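
Her point about parsing URLs is easy to make concrete. A minimal sketch (my illustration, not from her talk; the phishing URL is made up, and real code should consult the Public Suffix List rather than assume the registrable domain is the last two labels):

    from urllib.parse import urlparse

    # A hypothetical phishing URL: the familiar brand name is bait;
    # what matters is the right-hand end of the hostname.
    url = "http://www.paypal.com.account-verify.example.net/login?id=42"

    host = urlparse(url).hostname
    # host == "www.paypal.com.account-verify.example.net"

    # Simplification: treat the last two labels as the registrable domain.
    labels = host.split(".")
    print(".".join(labels[-2:]))  # prints "example.net" -- not paypal.com

A user who reads hostnames right to left spots the scam immediately; one who scans left to right sees “paypal.com” and stops there.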

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this; in one, business majors were videotaped trying to install PGP. No one succeeded: there was no narrative, and the mixed metaphor of a physical versus a cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture-in-picture attacks,” in which a hostile site embeds an image of a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, the user experience, and mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will act counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. The most common answers are easy to guess; friends and relatives can often predict even the unique answers; and people forget their own answers. Even worse, the more memorable the questions and answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions (except for the couple of presenters who would rather not be taped); I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Tweenbots

Tweenbots:

Tweenbots are human-dependent robots that navigate the city with the help of pedestrians they encounter. Rolling at a constant speed, in a straight line, Tweenbots have a destination displayed on a flag, and rely on people they meet to read this flag and to aim them in the right direction to reach their goal.

Given their extreme vulnerability, the vastness of city space, the dangers posed by traffic, suspicion of terrorism, and the possibility that no one would be interested in helping a lost little robot, I initially conceived the Tweenbots as disposable creatures which were more likely to struggle and die in the city than to reach their destination. Because I built them with minimal technology, I had no way of tracking the Tweenbot’s progress, and so I set out on the first test with a video camera hidden in my purse. I placed the Tweenbot down on the sidewalk, and walked far enough away that I would not be observed as the Tweenbot—a smiling 10-inch tall cardboard missionary—bumped along towards his inevitable fate.

The results were unexpected. Over the course of the following months, throughout numerous missions, the Tweenbots were successful in rolling from their start point to their far-away destination assisted only by strangers. Every time the robot got caught under a park bench, ground futilely against a curb, or became trapped in a pothole, some passerby would always rescue it and send it toward its goal. Never once was a Tweenbot lost or damaged. Often, people would ignore the instructions to aim the Tweenbot in the “right” direction, if that direction meant sending the robot into a perilous situation. One man turned the robot back in the direction from which it had just come, saying out loud to the Tweenbot, “You can’t go that way, it’s toward the road.”

It’s a measure of our restored sanity that no one called the TSA. Or maybe it’s just that no one has tried this in Boston yet. Or maybe it’s a lesson for terrorists: paint smiley faces on your bombs.

Posted on April 13, 2009 at 6:14 AM

The Kindness of Strangers

When I was growing up, children were commonly taught: “don’t talk to strangers.” Strangers might be bad, we were told, so it’s prudent to steer clear of them.

And yet most people are honest, kind, and generous, especially when someone asks them for help. If a small child is in trouble, the smartest thing he can do is find a nice-looking stranger and talk to him.

These two pieces of advice may seem to contradict each other, but they don’t. The difference is that in the second instance, the child is choosing which stranger to talk to. Given that the overwhelming majority of people will help, the child is likely to get help if he chooses a random stranger. But if a stranger comes up to a child and talks to him or her, it’s not a random choice. It’s more likely, although still unlikely, that the stranger is up to no good.

As a species, we tend to help each other, and a surprising amount of our security and safety comes from the kindness of strangers. During disasters: floods, earthquakes, hurricanes, bridge collapses. In times of personal tragedy. And even in normal times.

If you’re sitting in a café working on your laptop and need to get up for a minute, ask the person sitting next to you to watch your stuff. He’s very unlikely to steal anything. Or, if you’re nervous about that, ask the three people sitting around you. Those three people don’t know each other, and will not only watch your stuff, but they’ll also watch each other to make sure no one steals anything.

Again, this works because you’re selecting the people. If three people walk up to you in the café and offer to watch your computer while you go to the bathroom, don’t take them up on that offer. Your odds of getting three honest people are much lower.
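
To put rough, purely illustrative numbers on it: if one person in a hundred would steal an unattended laptop, the chance that three randomly chosen neighbors are all honest is 0.99 × 0.99 × 0.99, about 97 percent. But volunteers aren’t randomly chosen. A thief has every reason to offer, so people who approach you are drawn disproportionately from that dishonest one percent.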

Some computer systems rely on the kindness of strangers, too. The Internet works because nodes benevolently forward packets to each other without any recompense from either the sender or receiver of those packets. Wikipedia works because strangers are willing to write for, and edit, an encyclopedia—with no recompense.

Collaborative spam filtering is another example. Basically, once someone notices a particular e-mail is spam, he marks it, and everyone else in the network is alerted that it’s spam. Marking the e-mail is a completely altruistic task; the person doing it gets no benefit from the action. But he receives benefit from everyone else doing it for other e-mails.
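
The mechanism is simple enough to sketch. A toy version (my illustration; real systems such as Vipul’s Razor or DCC use fuzzy signatures, so that slightly mutated copies of a spam still match):

    import hashlib

    # Shared fingerprint database -- in practice a networked service,
    # here just a set.
    spam_db = set()

    def fingerprint(message):
        # Exact hash for illustration; deployed systems use fuzzy hashes.
        return hashlib.sha256(message.encode()).hexdigest()

    def report_spam(message):
        # The altruistic step: one person marks the message for everyone.
        spam_db.add(fingerprint(message))

    def is_spam(message):
        # The payoff: everyone else's reports protect you.
        return fingerprint(message) in spam_db

    report_spam("FREE pills!!! click here")
    print(is_spam("FREE pills!!! click here"))  # True -- already flagged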

Tor is a system for anonymous Web browsing. The details are complicated, but basically, a network of Tor servers passes Web traffic among each other in such a way as to anonymize where it came from. Think of it as a giant shell game. As a Web surfer, I put my Web query inside a shell and send it to a random Tor server. That server knows who I am but not what I am doing. It passes that shell to another Tor server, which passes it to a third. That third server—which knows what I am doing but not who I am—processes the Web query. When the Web page comes back to that third server, the process reverses itself and I get my Web page. Assuming enough Web surfers are sending enough shells through the system, even someone eavesdropping on the entire network can’t figure out what I’m doing.
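
In protocol terms, the shell game is layered encryption, usually called onion routing: the client wraps its query in one layer per server, and each server peels exactly one layer. A toy sketch (my illustration; it uses pre-shared symmetric keys and skips the return path, whereas real Tor negotiates keys with each relay in its three-hop circuit):

    from cryptography.fernet import Fernet  # pip install cryptography

    # One key per relay. (Assumption for this sketch: the client already
    # shares a key with each relay; Tor establishes these via key exchange.)
    relay_keys = [Fernet.generate_key() for _ in range(3)]

    def wrap(query, keys):
        # Encrypt for the last relay first, so the first relay's layer
        # ends up outermost.
        for key in reversed(keys):
            query = Fernet(key).encrypt(query)
        return query

    onion = wrap(b"GET http://example.com/", relay_keys)

    # Each relay peels one layer. Only the first relay knows who sent it;
    # only the last sees the query itself.
    for key in relay_keys:
        onion = Fernet(key).decrypt(onion)
    print(onion)  # b'GET http://example.com/'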

It’s a very clever system, and it protects a lot of people, including journalists, human rights activists, whistleblowers, and ordinary people living in repressive regimes around the world. But it only works because of the kindness of strangers. No one gets any benefit from being a Tor server; it uses up bandwidth to forward other people’s packets around. It’s more efficient to be a Tor client and use the forwarding capabilities of others. But if there are no Tor servers, then there’s no Tor. Tor works because people are willing to set themselves up as servers, at no benefit to them.

Alibi clubs work along similar lines. You can find them on the Internet, and they’re loose collections of people willing to help each other out with alibis. Sign up, and you’re in. You can ask someone to pretend to be your doctor and call your boss. Or someone to pretend to be your boss and call your spouse. Or maybe someone to pretend to be your spouse and call your boss. Whatever you want, just ask and some anonymous stranger will come to your rescue. And because your accomplice is an anonymous stranger, it’s safer than asking a friend to participate in your ruse.

There are risks in these sorts of systems. Regularly, marketers and other people with agendas try to manipulate Wikipedia entries to suit their interests. Intelligence agencies can, and almost certainly have, set themselves up as Tor servers to better eavesdrop on traffic. And a do-gooder could join an alibi club just to expose other members. But for the most part, strangers are willing to help each other, and systems that harvest this kindness work very well on the Internet.

This essay originally appeared on the Wall Street Journal website.

Posted on March 13, 2009 at 7:41 AM
