Entries Tagged "voting"


Hacking Trial Breaks D.C. Internet Voting System

Sounds like it was easy:

Last week, the D.C. Board of Elections and Ethics opened a new Internet-based voting system for a weeklong test period, inviting computer experts from all corners to prod its vulnerabilities in the spirit of “give it your best shot.” Well, the hackers gave it their best shot—and midday Friday, the trial period was suspended, with the board citing “usability issues brought to our attention.”

[…]

Stenbjorn said a Michigan professor whom the board has been working with on the project had “unleashed his students” during the test period, and one succeeded in infiltrating the system.

My primary worry about contests like this is that people will think a positive result means something. If a bunch of students can break into a system after a couple of weeks of attempts, we know it’s insecure. But just because a system withstands a test like this doesn’t mean it’s secure. We don’t know who tried. We don’t know what they tried. We don’t know how long they tried. And we don’t know if someone who tries smarter, harder, and longer could break the system.

More links.

Posted on October 8, 2010 at 6:23 AM

Second SHB Workshop Liveblogging (8)

The penultimate session of the conference was “Privacy,” moderated by Tyler Moore.

Alessandro Acquisti, Carnegie Mellon University (suggested reading: What Can Behavioral Economics Teach Us About Privacy?; Privacy in Electronic Commerce and the Economics of Immediate Gratification), presented research on how people value their privacy. He started by listing a variety of cognitive biases that affect privacy decisions: illusion of control, overconfidence, optimism bias, endowment effect, and so on. He discussed two experiments. The first demonstrated a “herding effect”: if a subject believes that others reveal sensitive behavior, the subject is more likely to also reveal sensitive behavior. The second examined the “frog effect”: do privacy intrusions alert or desensitize people to revealing personal information? What he found is that people tend to set their privacy level at the beginning of a survey, and don’t respond well to being asked easy questions at first and then sensitive questions at the end. In the discussion, Joe Bonneau asked him about the notion that people’s privacy protections tend to ratchet up over time; he didn’t have conclusive evidence, but gave several possible explanations for the phenomenon.

Adam Joinson, University of Bath (suggested reading: Privacy, Trust and Self-Disclosure Online; Privacy concerns and privacy actions), also studies how people value their privacy. He talked about expressive privacy—privacy that allows people to express themselves and form interpersonal relationships. His research showed that differences between how people use Facebook in different countries depend on how much people trust Facebook as a company, rather than how much people trust other Facebook users. Another study looked at posts from Secret Tweet and Twitter. He found 16 markers that allowed him to automatically determine which tweets contain sensitive personal information and which do not, with high probability. Then he tried to determine if people with large Twitter followings post fewer secrets than people who are only twittering to a few people. He found absolutely no difference.
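Joinson’s 16 markers aren’t listed here, but the general approach is easy to picture: score each post against a list of linguistic markers and flag it when enough of them appear. Here is a toy Python sketch with invented markers (not his actual feature set), just to show the shape of such a classifier:

    # Toy marker-based classifier. The marker list is invented for illustration;
    # Joinson's actual 16 markers are not given in this summary.
    SENSITIVE_MARKERS = [
        "i've never told", "don't tell anyone", "my boss", "i'm ashamed",
        "secretly", "cheated", "diagnosed", "in debt",
    ]

    def looks_sensitive(post: str, threshold: int = 1) -> bool:
        """Flag a post if it contains at least `threshold` markers."""
        text = post.lower()
        hits = sum(marker in text for marker in SENSITIVE_MARKERS)
        return hits >= threshold

    print(looks_sensitive("Great weather for a run today"))       # False
    print(looks_sensitive("Don't tell anyone, but I'm in debt"))  # True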

Peter Neumann, SRI (suggested reading: Holistic systems; Risks; Identity and Trust in Context), talked about lack of medical privacy (too many people have access to your data), about voting (the privacy problem makes the voting problem a lot harder, and the end-to-end voting security/privacy problem is much harder than just securing voting machines), and privacy in China (the government is requiring all computers sold in China to be sold with software allowing them to eavesdrop on the users). Any would-be solution needs to reflect the ubiquity of the threat. When we design systems, we need to anticipate what the privacy problems will be. Privacy problems are everywhere you look, and ordinary people have no idea of the depth of the problem.

Eric Johnson, Dartmouth College (suggested reading: Access Flexibility with Escalation and Audit; Security through Information Risk Management), studies the information access problem from a business perspective. He’s been doing field studies in companies like retail banks and investment banks, and found that role-based access control fails because companies can’t determine who has what role. Even worse, roles change quickly, especially in large complex organizations. For example, one business group of 3000 people experiences 1000 role changes within three months. The result is that organizations do access control badly, either over-entitling or under-entitling people. But since getting the job done is the most important thing, organizations tend to over-entitle: give people more access than they need. His current work is to find the right set of incentives and controls to set access more appropriately. The challenge is to do this without making people risk averse. In the discussion, he agreed that a perfect access control system is not possible, and that organizations should probably allow a certain amount of access control violations—similar to the idea of posting a 55 mph speed limit but not ticketing people unless they go over 70 mph.
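That speed-limit analogy maps naturally onto the “escalation and audit” idea in the suggested reading: instead of hard-denying every out-of-role request, let it through but log it for later review. A minimal sketch of that pattern (roles, users, and resources are made up for illustration):

    # Minimal "allow but audit" access-control sketch. Roles and resources are
    # invented; a real system would add approval workflows, alerting, etc.
    import datetime

    ROLE_PERMISSIONS = {
        "teller":  {"customer_accounts"},
        "analyst": {"market_data", "research_reports"},
    }

    audit_log = []

    def access(user: str, role: str, resource: str) -> bool:
        permitted = resource in ROLE_PERMISSIONS.get(role, set())
        if not permitted:
            # Out-of-role access is granted but recorded, like ticketing only
            # the drivers who are well over the posted limit.
            audit_log.append((datetime.datetime.now(), user, role, resource))
        return True  # work gets done either way; audits catch abuse later

    access("alice", "teller", "customer_accounts")  # in role, nothing logged
    access("alice", "teller", "research_reports")   # out of role, logged
    print(audit_log)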

Christine Jolls, Yale Law School (suggested reading: Rationality and Consent in Privacy Law, Employee Privacy), made the point that people regularly share their most private information with their intimates—so privacy is not about secrecy, it’s more about control. There are moments when people make pretty big privacy decisions. For example, they grant employers the rights to monitor their e-mail, or test their urine without notice. In general, courts hold that a blanket signing away of privacy rights—“you can test my urine on any day in the future”—is not valid, but an immediate signing away of privacy rights—“you can test my urine today”—is. Jolls believes that this is reasonable for several reasons, such as optimism bias and an overfocus on the present at the expense of the future. Without realizing it, the courts have implemented the system that behavioral economics would find optimal. During the discussion, she talked about how coercion figures into this; the U.S. legal system tends not to be concerned with it.

Andrew Adams, University of Reading (suggested reading: Regulating CCTV), also looks at attitudes of privacy on social networking services. His results are preliminary, and based on interviews with university students in Canada, Japan, and the UK, and are very concordant with what danah boyd and Joe Bonneau said earlier. From the UK: People join social networking sites to increase their level of interaction with people they already know in real life. Revealing personal information is okay, but revealing too much is bad. Even more interestingly, it’s not okay to reveal more about others than they reveal themselves. From Japan: People are more open to making friends online. There’s more anonymity. It’s not okay to reveal information about others, but “the fault of this lies as much with the person whose data was revealed in not choosing friends wisely.” This victim responsibility is a common theme with other privacy and security elements in Japan. Data from Canada is still being compiled.

Great phrase: the “laundry belt”—close enough for students to go home on weekends with their laundry, but far enough away so they don’t feel as if their parents are looking over their shoulder—typically two hours by public transportation (in the UK).

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 12, 2009 at 3:01 PM

Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies for dealing with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but made them more likely to be afraid of doing anything online. She found this same overreaction among people who were recently the victims of phishing attacks, but again they were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.
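To make the “how to parse a URL” point concrete, here is a toy Python sketch (my illustration, not part of Downs’s study; the hostnames are hypothetical) showing the one check a deceptive link can’t fake, which is the host it actually points to:

    # Illustrative only: look at the hostname a link really points to,
    # not the display text or the path. Hostnames below are hypothetical.
    from urllib.parse import urlparse

    def hostname_of(url: str) -> str:
        """Return the network host the link actually points to."""
        return urlparse(url).hostname or ""

    links = [
        "https://www.example-bank.com/login",                 # what the user expects
        "https://www.example-bank.com.accounts-verify.biz/",  # deceptive subdomain trick
        "https://203.0.113.7/example-bank/login",             # bare IP address
    ]

    for url in links:
        host = hostname_of(url)
        # Naive check; a real one would consult the Public Suffix List.
        trusted = host == "www.example-bank.com" or host.endswith(".example-bank.com")
        print(f"{host:45} trusted domain: {trusted}")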

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture in picture attacks”: where a hostile site embedded a green-bar browser window in its page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, user experience, mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”
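A back-of-the-envelope calculation shows why guessability from common answers is so damaging. The answer frequencies below are invented for illustration, but with any similarly skewed distribution, an attacker who simply tries the most popular answers in order compromises a large fraction of accounts within a few guesses:

    # Hypothetical answer distribution for a question like "favorite food?".
    # The numbers are invented; only the skewed shape matters.
    popular_answers = {
        "pizza": 0.20,
        "chocolate": 0.10,
        "ice cream": 0.08,
        "sushi": 0.05,
        "pasta": 0.04,
    }

    cumulative = 0.0
    ranked = sorted(popular_answers.items(), key=lambda kv: kv[1], reverse=True)
    for k, (answer, freq) in enumerate(ranked, start=1):
        cumulative += freq
        print(f"after {k} guess(es) ('{answer}'): {cumulative:.0%} of accounts fall")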

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to maintain the infrastructure necessary to carry out these scams on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions—except for the couple of presenters who would rather not be taped—I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Election Fraud in Kentucky

I think this is the first documented case of election fraud in the U.S. using electronic voting machines (there have been lots of documented cases of errors and voting problems, but this one involves actual maliciousness):

Five Clay County officials, including the circuit court judge, the county clerk, and election officers were arrested Thursday after they were indicted on federal charges accusing them of using corrupt tactics to obtain political power and personal gain.

The 10-count indictment, unsealed Thursday, accused the defendants of a conspiracy from March 2002 until November 2006 that violated the Racketeer Influenced and Corrupt Organizations Act (RICO). RICO is a federal statute that prosecutors use to combat organized crime. The defendants were also indicted for extortion, mail fraud, obstruction of justice, conspiracy to injure voters’ rights and conspiracy to commit voter fraud.

According to the indictment, these alleged criminal actions affected the outcome of federal, local, and state primary and general elections in 2002, 2004, and 2006.

From BradBlog:

Clay County uses the horrible ES&S iVotronic system for all of its votes at the polling place. The iVotronic is a touch-screen Direct Recording Electronic (DRE) device, offering no evidence, of any kind, that any vote has ever been recorded as per the voter’s intent. If the allegations are correct here, there would likely have been no way to discover, via post-election examination of machines or election results, that votes had been manipulated on these machines.

ES&S is the largest distributor of voting systems in America and its iVotronic system—which is well-documented to have lost and flipped votes on many occasions—is likely the most widely-used DRE system in the nation. It’s currently in use in some 419 jurisdictions in 18 states including Arkansas, Colorado, Florida, Indiana, Kansas, Kentucky, Missouri, Mississippi, North Carolina, New Jersey, Ohio, Pennsylvania, South Carolina, Tennessee, Texas, Virginia, Wisconsin, and West Virginia.

ArsTechnica has more, and here’s the actual indictment; BradBlog has excerpts.

The fraud itself is very low-tech, and didn’t make use of any of the documented vulnerabilities in the ES&S iVotronic machines; it was basic social engineering. Matt Blaze explains:

The iVotronic is a popular Direct Recording Electronic (DRE) voting machine. It displays the ballot on a computer screen and records voters’ choices in internal memory. Voting officials and machine manufacturers cite the user interface as a major selling point for DRE machines—it’s already familiar to voters used to navigating touchscreen ATMs, computerized gas pumps, and so on, and thus should avoid problems like the infamous “butterfly ballot”. Voters interact with the iVotronic primarily by touching the display screen itself. But there’s an important exception: above the display is an illuminated red button labeled “VOTE” (see photo at right). Pressing the VOTE button is supposed to be the final step of a voter’s session; it adds their selections to their candidates’ totals and resets the machine for the next voter.

The Kentucky officials are accused of taking advantage of a somewhat confusing aspect of the way the iVotronic interface was implemented. In particular, the behavior (as described in the indictment) of the version of the iVotronic used in Clay County apparently differs a bit from the behavior described in ES&S’s standard instruction sheet for voters [pdf – see page 2]. A flash-based iVotronic demo available from ES&S here shows the same procedure, with the VOTE button as the last step. But evidently there’s another version of the iVotronic interface in which pressing the VOTE button is only the second to last step. In those machines, pressing VOTE invokes an extra “confirmation” screen. The vote is only actually finalized after a “confirm vote” box is touched on that screen. (A different flash demo that shows this behavior with the version of the iVotronic equipped with a printer is available from ES&S here). So the iVotronic VOTE button doesn’t necessarily work the way a voter who read the standard instructions might expect it to.

The indictment describes a conspiracy to exploit this ambiguity in the iVotronic user interface by having pollworkers systematically (and incorrectly) tell voters that pressing the VOTE button is the last step. When a misled voter would leave the machine with the extra “confirm vote” screen still displayed, a pollworker would quietly “correct” the not-yet-finalized ballot before casting it. It’s a pretty elegant attack, exploiting little more than a poorly designed, ambiguous user interface, printed instructions that conflict with actual machine behavior, and public unfamiliarity with equipment that most citizens use at most once or twice each year. And once done, it leaves behind little forensic evidence to expose the deed.
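The ambiguity is easy to see as two slightly different state machines. What follows is my simplified model of the behavior described in the indictment and the ES&S instructions, not actual iVotronic code: in one variant VOTE casts the ballot, in the other it only brings up a confirmation screen, and a ballot left on that screen can still be changed by whoever steps up next.

    # Simplified, illustrative model of the two interface variants described
    # above. This is not the machine's firmware.
    class IVotronicModel:
        def __init__(self, has_confirm_screen: bool):
            self.has_confirm_screen = has_confirm_screen
            self.selections = {}
            self.awaiting_confirmation = False
            self.cast_ballots = []

        def select(self, race: str, candidate: str):
            # In this model, selections can be changed at any point before
            # the ballot is finalized, including on the confirm screen.
            self.selections[race] = candidate

        def press_vote(self):
            if self.has_confirm_screen and not self.awaiting_confirmation:
                self.awaiting_confirmation = True   # ballot NOT yet cast
            else:
                self._finalize()

        def touch_confirm(self):
            if self.awaiting_confirmation:
                self._finalize()

        def _finalize(self):
            self.cast_ballots.append(dict(self.selections))
            self.selections, self.awaiting_confirmation = {}, False

    # Voter follows the printed instructions: select, press VOTE, walk away.
    machine = IVotronicModel(has_confirm_screen=True)
    machine.select("county clerk", "Candidate A")
    machine.press_vote()                 # confirmation screen still showing
    # A corrupt pollworker "corrects" the unfinalized ballot, then confirms it.
    machine.select("county clerk", "Candidate B")
    machine.touch_confirm()
    print(machine.cast_ballots)          # [{'county clerk': 'Candidate B'}]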

Read the rest of Blaze’s post for some good analysis on the attack and what it says about iVotronic. He led the team that analyzed the security of that very machine:

We found numerous exploitable security weaknesses in these machines, many of which would make it easy for a corrupt voter, pollworker, or election official to tamper with election results (see our report for details).

[…]

On the one hand, we might be comforted by the relatively “low tech” nature of the attack—no software modifications, altered electronic records, or buffer overflow exploits were involved, even though the machines are, in fact, quite vulnerable to such things. But a close examination of the timeline in the indictment suggests that even these “simple” user interface exploits might well portend more technically sophisticated attacks sooner, rather than later.

Count 9 of the Kentucky indictment alleges that the Clay County officials first discovered and conspired to exploit the iVotronic “confirm screen” ambiguity around June 2004. But Kentucky didn’t get iVotronics until at the earliest late 2003; according to the state’s 2003 HAVA Compliance Plan [pdf], no Kentucky county used the machines as of mid-2003. That means that the officials involved in the conspiracy managed to discover and work out the operational details of the attack soon after first getting the machines, and were able to use it to alter votes in the next election.

[…]

But that’s not the worst news in this story. Even more unsettling is the fact that none of the published security analyses of the iVotronic—including the one we did at Penn—had noticed the user interface weakness. The first people to have discovered this flaw, it seems, didn’t publish or report it. Instead, they kept it to themselves and used it to steal votes.

Me on electronic voting machines, from 2004.

Posted on March 24, 2009 at 6:41 AM

Fingerprinting Paper

Interesting paper:

Fingerprinting Blank Paper Using Commodity Scanners

Will Clarkson, Tim Weyrich, Adam Finkelstein, Nadia Heninger, Alex Halderman, and Edward W. Felten

Abstract: This paper presents a novel technique for authenticating physical documents based on random, naturally occurring imperfections in paper texture. We introduce a new method for measuring the three-dimensional surface of a page using only a commodity scanner and without modifying the document in any way. From this physical feature, we generate a concise fingerprint that uniquely identifies the document. Our technique is secure against counterfeiting and robust to harsh handling; it can be used even before any content is printed on a page. It has a wide range of applications, including detecting forged currency and tickets, authenticating passports, and halting counterfeit goods. Document identification could also be applied maliciously to de-anonymize printed surveys and to compromise the secrecy of paper ballots.
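The general idea is easy to sketch, though the sketch below is only a simplified illustration and not the authors’ algorithm (see the paper for that): reduce measurements of a page’s surface texture to a compact binary fingerprint, then compare fingerprints by Hamming distance, so that rescans of the same sheet land close together and different sheets land far apart.

    # Simplified illustration of texture fingerprinting; NOT the paper's
    # method. `sheet` stands in for estimated surface values of one region.
    import numpy as np

    def fingerprint(scan: np.ndarray, size: int = 256, seed: int = 0) -> np.ndarray:
        """Project the texture onto fixed random directions; keep only the signs."""
        rng = np.random.default_rng(seed)              # shared, public projections
        projections = rng.standard_normal((size, scan.size))
        return (projections @ scan.ravel()) > 0        # compact binary fingerprint

    def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
        return int(np.count_nonzero(a != b))

    rng = np.random.default_rng(42)
    sheet = rng.standard_normal((64, 64))                  # "texture" of one page
    rescan = sheet + 0.1 * rng.standard_normal((64, 64))   # same page, new scan + noise
    other = rng.standard_normal((64, 64))                  # a different page

    fp = fingerprint(sheet)
    print(hamming_distance(fp, fingerprint(rescan)))  # small: same document
    print(hamming_distance(fp, fingerprint(other)))   # near 128 of 256: different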

Posted on March 19, 2009 at 6:07 AM

When Voting Machine Audit Logs Don't Help

Wow:

Computer audit logs showing what occurred on a vote tabulation system that lost ballots in the November election are raising more questions not only about how the votes were lost, but also about the general reliability of voting system audit logs to record what occurs during an election and to ensure the integrity of results.

The logs, which Threat Level obtained through a public records request from Humboldt County, California, are produced by the Global Election Management System, the tabulation software, also known as GEMS, that counts the votes cast on all voting machines—touch-screen and optical-scan machines—made by Premier Election Solutions (formerly called Diebold Election Systems).

The article gets pretty technical, but is worth reading.
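For contrast, a tamper-evident log is not hard to build: chain each entry to a hash of the one before it, so any later deletion or edit breaks the chain. A minimal sketch of that standard technique follows; it is not what GEMS does, which is rather the point of the article.

    # Minimal hash-chained (tamper-evident) log, shown only for contrast
    # with the behavior described in the article. Event fields are made up.
    import hashlib, json

    def append_entry(log: list, event: dict) -> None:
        prev_hash = log[-1]["hash"] if log else "0" * 64
        body = json.dumps(event, sort_keys=True)
        log.append({"event": event, "prev": prev_hash,
                    "hash": hashlib.sha256((prev_hash + body).encode()).hexdigest()})

    def verify(log: list) -> bool:
        prev_hash = "0" * 64
        for entry in log:
            body = json.dumps(entry["event"], sort_keys=True)
            expected = hashlib.sha256((prev_hash + body).encode()).hexdigest()
            if entry["prev"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True

    log = []
    append_entry(log, {"action": "deck uploaded", "machine": 17})
    append_entry(log, {"action": "results tabulated", "ballots": 1024})
    print(verify(log))                   # True
    log[0]["event"]["machine"] = 99      # tamper with an earlier entry
    print(verify(log))                   # False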

Posted on January 23, 2009 at 7:43 AM

