Entries Tagged "societal security"


Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball — and scheduling constraint — to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts — what e-mails they open and when they click on links — she watched people interact with their e-mail. She found that most people’s strategies for dealing with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them any better at spotting phishing attempts; it just made them more afraid of doing anything online. She found the same overreaction among people who had recently been victims of phishing attacks, and again they were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.
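
That kind of contextual understanding is teachable. Here is a minimal sketch of the URL-parsing piece (the domain names are invented, and this is my illustration, not material from Downs’s study):

```python
# A minimal sketch of the URL-parsing skill described above; domain names
# are invented. The key point: the hostname's rightmost labels identify
# the site, no matter how familiar the leftmost ones look.
from urllib.parse import urlparse

def effective_domain(url, suffix_labels=2):
    """Return the last `suffix_labels` labels of the URL's hostname.

    Real tooling should consult the Public Suffix List; taking two labels
    is a simplification that breaks on suffixes like .co.uk.
    """
    host = urlparse(url).hostname or ""
    return ".".join(host.split(".")[-suffix_labels:])

for url in [
    "https://www.mybank.example/login",
    "http://www.mybank.example.attacker.example/login",  # looks like mybank
]:
    print(url, "->", effective_domain(url))
# The second URL really belongs to attacker.example, not mybank.example.
```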

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture-in-picture attacks,” where a hostile site embeds a picture of a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, the user experience, and mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”
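
Schechter’s point about common answers is just arithmetic over the answer distribution. A toy illustration (the distribution below is invented; the actual paper measured real users’ answers):

```python
# Hypothetical illustration of why concentrated answers make secret
# questions guessable: an attacker who tries the most popular answers
# first succeeds surprisingly often. These numbers are invented.
from collections import Counter

# Imagined top answers to "What is your favorite food?" among 1,000
# users; the remaining 470 users gave less common answers.
TOTAL_USERS = 1000
top_answers = Counter({"pizza": 200, "pasta": 120, "sushi": 90,
                       "tacos": 70, "steak": 50})

def guess_success_rate(tries):
    """Chance a random user falls to the attacker's `tries` best guesses."""
    return sum(c for _, c in top_answers.most_common(tries)) / TOTAL_USERS

for k in (1, 3, 5):
    print(f"{k} guesses: {guess_success_rate(k):.0%}")
# 1 guess: 20%, 3 guesses: 41%, 5 guesses: 53% -- with these made-up numbers.
```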

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics — and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users — and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions — except for the couple of presenters who would rather not be taped. I’ll post links to the recordings as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Tweenbots

Tweenbots:

Tweenbots are human-dependent robots that navigate the city with the help of pedestrians they encounter. Rolling at a constant speed, in a straight line, Tweenbots have a destination displayed on a flag, and rely on people they meet to read this flag and to aim them in the right direction to reach their goal.

Given their extreme vulnerability, the vastness of city space, the dangers posed by traffic, suspicion of terrorism, and the possibility that no one would be interested in helping a lost little robot, I initially conceived the Tweenbots as disposable creatures which were more likely to struggle and die in the city than to reach their destination. Because I built them with minimal technology, I had no way of tracking the Tweenbot’s progress, and so I set out on the first test with a video camera hidden in my purse. I placed the Tweenbot down on the sidewalk, and walked far enough away that I would not be observed as the Tweenbot — a smiling 10-inch-tall cardboard missionary — bumped along towards his inevitable fate.

The results were unexpected. Over the course of the following months, throughout numerous missions, the Tweenbots were successful in rolling from their start point to their far-away destination assisted only by strangers. Every time the robot got caught under a park bench, ground futilely against a curb, or became trapped in a pothole, some passerby would always rescue it and send it toward its goal. Never once was a Tweenbot lost or damaged. Often, people would ignore the instructions to aim the Tweenbot in the “right” direction, if that direction meant sending the robot into a perilous situation. One man turned the robot back in the direction from which it had just come, saying out loud to the Tweenbot, “You can’t go that way, it’s toward the road.”

It’s a measure of our restored sanity that no one called the TSA. Or maybe it’s just that no one has tried this in Boston yet. Or maybe it’s a lesson for terrorists: paint smiley faces on your bombs.

Posted on April 13, 2009 at 6:14 AM

Foiling Bank Robbers with Kindness

Seems to work:

The method is a sharp contrast to the traditional training for bank employees confronted with a suspicious person, which advises against approaching the person and, at most, activating an alarm or dropping an exploding dye pack into the cash.

When a man walked into a First Mutual branch last year wearing garden gloves and sunglasses, manager Scott Taffera greeted him heartily, invited him to remove the glasses, and guided him to an equally friendly teller. The man eventually asked for a roll of quarters and left.

Carr said he suspects the man was the “Garden Glove Bandit,” who robbed area banks between March 2004 and November 2006.

What I like about this security system is that it fails really well in the event of a false alarm. There’s nothing wrong with being extra nice to a legitimate customer.

Posted on April 18, 2007 at 6:24 AM

Security Through Begging

From TechDirt:

Last summer, the surprising news came out that Japanese nuclear secrets leaked out, after a contractor was allowed to connect his personal virus-infested computer to the network at a nuclear power plant. The contractor had a file sharing app on his laptop as well, and suddenly nuclear secrets were available to plenty of kids just trying to download the latest hit single. It’s only taken about nine months for the government to come up with its suggestion on how to prevent future leaks of this nature: begging all Japanese citizens not to use file sharing systems — so that the next time this happens, there won’t be anyone on the network to download such documents.

Even if their begging works, it solves the wrong problem. Sad.

EDITED TO ADD (3/22): Another article.

Posted on March 20, 2006 at 2:01 PM

Anonymity and Accountability

Last week I blogged Kevin Kelly’s rant against anonymity. Today I wrote about it for Wired.com:

And that’s precisely where Kelly makes his mistake. The problem isn’t anonymity; it’s accountability. If someone isn’t accountable, then knowing his name doesn’t help. If you have someone who is completely anonymous, yet just as completely accountable, then — heck, just call him Fred.

History is filled with bandits and pirates who amass reputations without anyone knowing their real names.

EBay’s feedback system doesn’t work because there’s a traceable identity behind that anonymous nickname. EBay’s feedback system works because each anonymous nickname comes with a record of previous transactions attached, and if someone cheats someone else then everybody knows it.

Similarly, Wikipedia’s veracity problems are not a result of anonymous authors adding fabrications to entries. They’re an inherent property of an information system with distributed accountability. People think of Wikipedia as an encyclopedia, but it’s not. We all trust Britannica entries to be correct because we know the reputation of that company, and by extension its editors and writers. On the other hand, we all should know that Wikipedia will contain a small amount of false information because no particular person is accountable for accuracy — and that would be true even if you could mouse over each sentence and see the name of the person who wrote it.

Please read the whole thing before you comment.

Posted on January 12, 2006 at 4:36 AM

Kevin Kelly on Anonymity

He’s against it:

More anonymity is good: that’s a dangerous idea.

Fancy algorithms and cool technology make true anonymity in mediated environments more possible today than ever before. At the same time this techno-combo makes true anonymity in physical life much harder. For every step that masks us, we move two steps toward totally transparent unmasking. We have caller ID, but also caller ID Block, and then caller ID-only filters. Coming up: biometric monitoring and little place to hide. A world where everything about a person can be found and archived is a world with no privacy, and therefore many technologists are eager to maintain the option of easy anonymity as a refuge for the private.

However, in every system that I have seen where anonymity becomes common, the system fails. The recent taint in the honor of Wikipedia stems from the extreme ease with which anonymous declarations can be put into a very visible public record. Communities infected with anonymity will either collapse, or shift the anonymous to pseudo-anonymous, as in eBay, where you have a traceable identity behind an invented nickname. Or voting, where you can authenticate an identity without tagging it to a vote.

Anonymity is like a rare earth metal. These elements are a necessary ingredient in keeping a cell alive, but the amount needed is a mere hard-to-measure trace. In larger doses these heavy metals are some of the most toxic substances known to life. They kill. Anonymity is the same. As a trace element in vanishingly small doses, it’s good for the system by enabling the occasional whistleblower or persecuted fringe. But if anonymity is present in any significant quantity, it will poison the system.

There’s a dangerous idea circulating that the option of anonymity should always be at hand, and that it is a noble antidote to technologies of control. This is like pumping up the levels of heavy metals in your body to make it stronger.

Privacy can only be won by trust, and trust requires persistent identity, if only pseudo-anonymously. In the end, the more trust, the better. Like all toxins, anonymity should be kept as close to zero as possible.

I don’t even know where to begin. Anonymity is essential for free and fair elections. It’s essential for democracy and, I think, liberty. It’s essential to privacy in a large society, and so it is essential to protect the rights of the minority against the tyranny of the majority…and to protect individual self-respect.

Kelly makes the very valid point that reputation makes society work. But that doesn’t mean that 1) reputation can’t be anonymous, or 2) anonymity isn’t also essential for society to work.

I’m writing an essay on this for Wired News. Comments and arguments, pro or con, are appreciated.

Posted on January 5, 2006 at 1:20 PM

Dog Poop Girl

Here’s the basic story: A woman and her dog are riding the Seoul subway. The dog poops on the floor. The woman refuses to clean it up, despite being told to by other passengers. Someone takes a picture of her, posts it on the Internet, and she is publicly shamed — and the story will live on the Internet forever. Then the blogosphere debates the notion of the Internet as a social enforcement tool.

The Internet is changing our notions of personal privacy, and how the public enforces social norms.

Daniel Solove writes:

The dog-shit-girl case involves a norm that most people would seemingly agree to — clean up after your dog. Who could argue with that one? But what about when norm enforcement becomes too extreme? Most norm enforcement involves angry scowls or just telling a person off. But having a permanent record of one’s norm violations is upping the sanction to a whole new level. The blogosphere can be a very powerful norm-enforcing tool, allowing bloggers to act as a cyber-posse, tracking down norm violators and branding them with digital scarlet letters.

And that is why the law might be necessary — to modulate the harmful effects when the norm enforcement system gets out of whack. In the United States, privacy law is often the legal tool called in to address the situation. Suppose the dog poop incident occurred in the United States. Should the woman have legal redress under the privacy torts?

If this incident is any guide, then anyone acting outside the accepted norms of whatever segment of humanity surrounds him had better tread lightly. The question we need to answer is: is this the sort of society we want to live in? And if not, what technological or legal controls do we need to put in place to ensure that we don’t?

Solove again:

I believe that, as complicated as it might be, the law must play a role here. The stakes are too important. While entering law into the picture could indeed stifle freedom of discussion on the Internet, allowing excessive norm enforcement can be stifling to freedom as well.

All the more reason why we need to rethink old notions of privacy. Under existing notions, privacy is often thought of in a binary way — something either is private or public. According to the general rule, if something occurs in a public place, it is not private. But a more nuanced view of privacy would suggest that this case involved taking an event that occurred in one context and significantly altering its nature — by making it permanent and widespread. The dog-shit-girl would have been just a vague image in a few people’s memory if it hadn’t been for the photo entering cyberspace and spreading around faster than an epidemic. Despite the fact that the event occurred in public, there was no need for her image and identity to be spread across the Internet.

Could the law provide redress? This is a complicated question; certainly under existing doctrine, making a case would have many hurdles. And some will point to practical problems. Bloggers often don’t have deep pockets. But perhaps the possibility of lawsuits might help shape the norms of the Internet. In the end, I strongly doubt that the law alone can address this problem; but its greatest contribution might be to help along the development of blogging norms that will hopefully prevent more cases such as this one from having crappy endings.

Posted on July 29, 2005 at 4:21 PM

Hacking the Papal Election

As the College of Cardinals prepares to elect a new pope, people like me wonder about the election process. How does it work, and just how hard is it to hack the vote?

Of course I’m not advocating voter fraud in the papal election. Nor am I insinuating that a cardinal might perpetrate fraud. But people who work in security can’t look at a system without trying to figure out how to break it; it’s an occupational hazard.

The rules for papal elections are steeped in tradition, and were last codified on February 22, 1996: “Universi Dominici Gregis on the Vacancy of the Apostolic See and the Election of the Roman Pontiff.” The document is well thought out and filled with details.

The election takes place in the Sistine Chapel, directed by the Church Chamberlain. The ballot is entirely paper-based, and all ballot counting is done by hand. Votes are secret, but everything else is done in public.

First there’s the “pre-scrutiny” phase. “At least two or three” paper ballots are given to each cardinal (115 will be voting), presumably so that a cardinal has extras in case he makes a mistake. Then nine election officials are randomly selected: three “Scrutineers” who count the votes, three “Revisers,” who verify the results of the Scrutineers, and three “Infirmarii” who collect the votes from those too sick to be in the room. (These officials are chosen randomly for each ballot.)

Each cardinal writes his selection for Pope on a rectangular ballot paper “as far as possible in handwriting that cannot be identified as his.” He then folds the paper lengthwise and holds it aloft for everyone to see.

When everyone is done voting, the “scrutiny” phase of the election begins. The cardinals proceed to the altar one by one. On the altar is a large chalice with a paten (the shallow metal plate used to hold communion wafers during mass) resting on top of it. Each cardinal places his folded ballot on the paten. Then he picks up the paten and slides his ballot into the chalice.

If a cardinal cannot walk to the altar, one of the Scrutineers — in full view of everyone — does this for him. If any cardinals are too sick to be in the chapel, the Scrutineers give the Infirmarii a locked empty box with a slot, and the three Infirmarii together collect those votes. (If a cardinal is too sick to write, he asks one of the Infirmarii to do it for him.) The box is opened and the ballots are placed onto the paten and into the chalice, one at a time.

When all the ballots are in the chalice, the first Scrutineer shakes it several times in order to mix them. Then the third Scrutineer transfers the ballots, one by one, from one chalice to another, counting them in the process. If the total number of ballots is not correct, the ballots are burned and everyone votes again.

To count the votes, each ballot is opened and the vote is read by each Scrutineer in turn, the third one aloud. Each Scrutineer writes the vote on a tally sheet. This is all done in full view of the cardinals. The total number of votes cast for each person is written on a separate sheet of paper.

Then there’s the “post-scrutiny” phase. The Scrutineers tally the votes and determine if there’s a winner. Then the Revisers verify the entire process: ballots, tallies, everything. And then the ballots are burned. (That’s where the smoke comes from: white if a Pope has been elected, black if not.)

How hard is this to hack? The first observation is that the system is entirely manual, making it immune to the sorts of technological attacks that make modern voting systems so risky. The second observation is that the small group of voters — all of whom know each other — makes it impossible for an outsider to affect the voting in any way. The chapel is cleared and locked before voting. No one is going to dress up as a cardinal and sneak into the Sistine Chapel. In effect, the voter verification process is about as perfect as you’re ever going to find.

Eavesdropping on the process is certainly possible, although the rules explicitly state that the chapel is to be checked for recording and transmission devices “with the help of trustworthy individuals of proven technical ability.” I read that the Vatican is worried about laser microphones, as there are windows near the chapel’s roof.

That leaves us with insider attacks. Can a cardinal influence the election? Certainly the Scrutineers could potentially modify votes, but it’s difficult. The counting is conducted in public, and there are multiple people checking every step. It’s possible for the first Scrutineer, if he’s good at sleight of hand, to swap one ballot paper for another before recording it. Or for the third Scrutineer to swap ballots during the counting process.

A cardinal can’t stuff ballots when he votes. The complicated paten-and-chalice ritual ensures that each cardinal votes once — his ballot is visible — and also keeps his hand out of the chalice holding the other votes.

Making the ballots large would make these attacks harder. So would controlling the blank ballots better, and only distributing one to each cardinal per vote. Presumably cardinals change their mind more often during the voting process, so distributing extra blank ballots makes sense.

Ballots from previous votes are burned, which makes it harder to use one to stuff the ballot box. But there’s one wrinkle: “If however a second vote is to take place immediately, the ballots from the first vote will be burned only at the end, together with those from the second vote.” I assume that’s done so there’s only one plume of smoke for the two elections, but it would be more secure to burn each set of ballots before the next round of voting.

And lastly, the cardinals are in “choir dress” during the voting, which has translucent lace sleeves under a short red cape, making sleight-of-hand tricks much harder.

It’s possible for one Scrutineer to misrecord the votes, but with three Scrutineers, the discrepancy would be quickly detected. I presume a recount would take place, and the correct tally would be verified. Two or three Scrutineers in cahoots with each other could do more mischief, but since the Scrutineers are chosen randomly, the probability of a cabal being selected is very low. And then the Revisers check everything.
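
That redundancy is easy to model. A toy sketch (my illustration, not anything in the rules document): each Scrutineer tallies the same ballots independently, and any single miscount shows up as a disagreement.

```python
# Toy model of why three independent Scrutineers catch a single
# miscount: each tallies every ballot separately, and the check fails
# the moment any two tallies disagree. Names are placeholders.
from collections import Counter

def independent_tallies(ballots, n_scrutineers=3):
    """Each scrutineer produces their own tally of the same ballots."""
    return [Counter(ballots) for _ in range(n_scrutineers)]

def tallies_agree(tallies):
    """The Revisers' check: all tallies must agree exactly."""
    return all(t == tallies[0] for t in tallies[1:])

ballots = ["Cardinal A"] * 60 + ["Cardinal B"] * 55
tallies = independent_tallies(ballots)
assert tallies_agree(tallies)

tallies[1]["Cardinal A"] += 1   # one scrutineer misrecords a vote
assert not tallies_agree(tallies)  # the discrepancy is detected
```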

More interesting is to try to attack the system of selecting Scrutineers, which isn’t well defined in the document. Influencing the selection of Scrutineers and Revisers seems a necessary first step towards influencing the election.
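
A back-of-the-envelope calculation shows why the random draw is such a strong defense. Assuming the three Scrutineers are chosen uniformly at random from the 115 electors (the document doesn’t specify the mechanism), the odds of a pre-arranged cabal landing in those seats are tiny:

```python
# Back-of-the-envelope check on "the probability of a cabal being
# selected is very low," under the assumption of a uniform random draw
# of 3 Scrutineers from the 115 electors (my model, not the document's).
from math import comb

CARDINALS, SCRUTINEERS = 115, 3

# All three colluders chosen as the three Scrutineers:
p_trio = 1 / comb(CARDINALS, SCRUTINEERS)

# Both members of a colluding pair chosen (third seat goes to anyone else):
p_pair = comb(CARDINALS - 2, 1) / comb(CARDINALS, SCRUTINEERS)

print(f"trio: 1 in {round(1 / p_trio):,}")   # trio: 1 in 246,905
print(f"pair: 1 in {round(1 / p_pair):,}")   # pair: 1 in 2,185
```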

Ballots with more than one name (overvotes) are void, and I assume the same is true for ballots with no name written on them (undervotes). Illegible or ambiguous ballots are much more likely, and I presume they are discarded. The rules do have a provision for multiple ballots by the same cardinal: “If during the opening of the ballots the Scrutineers should discover two ballots folded in such a way that they appear to have been completed by one elector, if these ballots bear the same name they are counted as one vote; if however they bear two different names, neither vote will be valid; however, in neither of the two cases is the voting session annulled.” This surprises me, although I suppose it has happened by accident.
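
Those validity rules read naturally as a small decision procedure. Here is one way to write them down (a paraphrase of the quoted text, not an official formalization; the undervote case is an assumption, as noted above):

```python
# The validity rules described above as a small decision procedure.
# This paraphrases the post's reading of the rules; treating undervotes
# as void is the post's assumption, not something the document states.
def votes_from_ballot(names):
    """Votes contributed by one ballot bearing the given list of names."""
    if len(names) != 1:
        return []          # overvote or undervote: void
    return names

def votes_from_folded_pair(name_a, name_b):
    """Two ballots folded so they appear to come from one elector."""
    if name_a == name_b:
        return [name_a]    # same name on both: counted as a single vote
    return []              # different names: neither vote is valid
    # In neither case is the voting session annulled.

assert votes_from_ballot(["Cardinal A"]) == ["Cardinal A"]
assert votes_from_ballot([]) == []                  # undervote
assert votes_from_ballot(["Cardinal A", "Cardinal B"]) == []  # overvote
assert votes_from_folded_pair("Cardinal A", "Cardinal A") == ["Cardinal A"]
assert votes_from_folded_pair("Cardinal A", "Cardinal B") == []
```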

If there’s a weak step, it’s the counting of the ballots. There’s no real reason to do a pre-count, and it gives the Scrutineer doing the transfer a chance to swap legitimate ballots with others he previously stuffed up his sleeve. I like the idea of randomizing the ballots, but putting the ballots in a wire cage and spinning it around would accomplish the same thing more securely, albeit with less reverence.

And if I were improving the process, I would add some kind of white-glove treatment to prevent a Scrutineer from hiding a pencil lead or pen tip under his fingernails. Although the requirement to write out the candidate’s name in full gives more resistance against this sort of attack.

The recent change in the process that lets the cardinals go back and forth from the chapel into their dorm rooms — instead of being locked in the chapel the whole time as was done previously — makes the process slightly less secure. But I’m sure it makes it a lot more comfortable.

Lastly, there’s the potential for one of the Infirmarii to do what he wants when transcribing the vote of an infirm cardinal, but there’s no way to prevent that. If the cardinal is concerned, he could ask all three Infirmarii to witness the ballot.

There are also enormous social — religious, actually — disincentives to hacking the vote. The election takes place in a chapel, and at an altar. The cardinals also swear an oath as they cast their ballots — further discouragement. And they are explicitly exhorted not to form any sort of cabal or make any plans to sway the election, under pain of excommunication: “The Cardinal electors shall further abstain from any form of pact, agreement, promise or other commitment of any kind which could oblige them to give or deny their vote to a person or persons.”

I’m sure there are negotiations and deals and influencing — cardinals are mortal men, after all, and such things are part of how humans come to agreement.

What are the lessons here? First, open systems conducted within a known group make voting fraud much harder. Every step of the election process is observed by everyone, and everyone knows everyone, which makes it harder for someone to get away with anything. Second, small and simple elections are easier to secure. This kind of process works to elect a Pope or a club president, but quickly becomes unwieldy for a large-scale election. The only way manual systems work is through a pyramid-like scheme, with small groups reporting their manually obtained results up the chain to more central tabulating authorities.
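
Here is a toy sketch of that pyramid-like scheme (my illustration, not drawn from any real election system): each small group hand-counts its own ballots, and only the subtotals travel up the chain.

```python
# Toy illustration of pyramid-style manual tabulation: small groups
# count the ballots they can verify in person, and the central tabulator
# only merges subtotals, never handling ballots itself. Data is made up.
from collections import Counter

def local_tally(ballots):
    """One small group's hand count of its own ballots."""
    return Counter(ballots)

def merge_tallies(reported_subtotals):
    """Central tabulation: sum the reported subtotals."""
    total = Counter()
    for subtotal in reported_subtotals:
        total += subtotal
    return total

precincts = [
    ["Alice", "Alice", "Bob"],   # precinct 1's ballots
    ["Bob", "Alice"],            # precinct 2's ballots
]
print(merge_tallies(local_tally(p) for p in precincts))
# Counter({'Alice': 3, 'Bob': 2})
```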

And a third and final lesson: when an election process is left to develop over the course of a couple thousand years, you end up with something surprisingly good.

Rules for a papal election

There’s a picture of choir dress on this page.

EDITED TO ADD: The stack of used ballots is pierced with a needle and thread and tied together, which 1) marks them as used, and 2) makes them harder to reuse.

Posted on April 14, 2005 at 9:59 AM
