Entries Tagged "academic papers"


Hacking Mifare Transport Cards

London’s Oyster card has been cracked, and the final details will become public in October. NXP Semiconductors, the Philips spin-off that makes the system, lost a court battle to prevent the researchers from publishing. People might be able to use this information to ride for free, but the sky won’t be falling. And the publication of this serious vulnerability actually makes us all safer in the long run.

Here’s the story. Every Oyster card has a radio-frequency identification chip that communicates with readers mounted on the ticket barrier. That chip, the “Mifare Classic” chip, is used in hundreds of other transport systems as well—Boston, Los Angeles, Brisbane, Amsterdam, Taipei, Shanghai, Rio de Janeiro—and as an access pass in thousands of companies, schools, hospitals, and government buildings around Britain and the rest of the world.

The security of Mifare Classic is terrible. This is not an exaggeration; it’s kindergarten cryptography. Anyone with any security experience would be embarrassed to put his name to the design. NXP attempted to deal with this embarrassment by keeping the design secret.

The group that broke Mifare Classic is from Radboud University Nijmegen in the Netherlands. They demonstrated the attack by riding the Underground for free, and by breaking into a building. Their two papers (one is already online) will be published at two conferences this autumn.

The second paper is the one that NXP sued over. They called disclosure of the attack “irresponsible,” warned that it will cause “immense damages,” and claimed that it “will jeopardize the security of assets protected with systems incorporating the Mifare IC.” The Dutch court would have none of it: “Damage to NXP is not the result of the publication of the article but of the production and sale of a chip that appears to have shortcomings.”

Exactly right. More generally, the notion that secrecy supports security is inherently flawed. Whenever you see an organization claiming that design secrecy is necessary for security—in ID cards, in voting machines, in airport security—it invariably means that its security is lousy and it has no choice but to hide it. Any competent cryptographer would have designed Mifare’s security with an open and public design.

Secrecy is fragile. Mifare’s security was based on the belief that no one would discover how it worked; that’s why NXP had to muzzle the Dutch researchers. But that’s just wrong. Reverse-engineering isn’t hard. Other researchers had already exposed Mifare’s lousy security. A Chinese company even sells a compatible chip. Is there any doubt that the bad guys already know about this, or will soon enough?

Publication of this attack might be expensive for NXP and its customers, but it’s good for security overall. Companies will only design security as good as their customers know to ask for. NXP’s security was so bad because customers didn’t know how to evaluate security: either they didn’t know what questions to ask, or didn’t know enough to distrust the marketing answers they were given. This court ruling encourages companies to build security properly rather than relying on shoddy design and secrecy, and discourages them from promising security based on their ability to threaten researchers.

It’s unclear how this break will affect Transport for London. Cloning takes only a few seconds, and the thief only has to brush up against someone carrying a legitimate Oyster card. But it requires an RFID reader and a small piece of software—feasible for a techie, but too complicated for the average fare dodger. The police are likely to quickly arrest anyone who tries to sell cloned cards on any scale. TfL promises to turn off any cloned cards within 24 hours, but that will hurt the innocent victim who had his card cloned more than the thief.

The vulnerability is far more serious to the companies that use Mifare Classic as an access pass. It would be very interesting to know how NXP presented the system’s security to them.

And while these attacks pertain only to the Mifare Classic chip, they make me suspicious of the entire product line. NXP sells a more secure chip and has another on the way, but given the number of basic cryptography mistakes NXP made with Mifare Classic, one has to wonder whether the “more secure” versions will be sufficiently so.

This essay originally appeared in the Guardian.
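To put the “kindergarten cryptography” charge in perspective: Mifare Classic’s Crypto-1 cipher uses a 48-bit key, a keyspace small enough to search exhaustively. A quick sketch—the keys-per-second rate is an assumed figure, and the published attacks are in fact far faster than brute force, exploiting structural weaknesses in the cipher:

```python
# Rough sketch of why a 48-bit key (the size used by Mifare Classic's
# Crypto-1 cipher) is weak by modern standards. The keys-per-second
# figure is an illustrative assumption, not a measured rate.

keyspace = 2 ** 48                    # total Crypto-1 keys
keys_per_second = 1_000_000_000       # assumed rate for dedicated hardware

seconds = keyspace / keys_per_second
print(f"exhaustive search: about {seconds / 86400:.1f} days")  # about 3.3 days

# Compare a modern 128-bit key: the same search would take on the
# order of 10**22 years.
print(2 ** 128 / keys_per_second / (86400 * 365))
```

Even without the clever cryptanalysis the researchers published, a keyspace that falls to a few days of hardware search offers no meaningful security margin.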

Posted on August 7, 2008 at 6:07 AM

TrueCrypt's Deniable File System

Together with Tadayoshi Kohno, Steve Gribble, and three of their students at the University of Washington, I have a new paper that breaks the deniable encryption feature of TrueCrypt version 5.1a. Basically, modern operating systems leak information like mad, making deniability a very difficult requirement to satisfy.

ABSTRACT: We examine the security requirements for creating a Deniable File System (DFS), and the efficacy with which the TrueCrypt disk-encryption software meets those requirements. We find that the Windows Vista operating system itself, Microsoft Word, and Google Desktop all compromise the deniability of a TrueCrypt DFS. While staged in the context of TrueCrypt, our research highlights several fundamental challenges to the creation and use of any DFS: even when the file system may be deniable in the pure, mathematical sense, we find that the environment surrounding that file system can undermine its deniability, as well as its contents. Finally, we suggest approaches for overcoming these challenges on modern operating systems like Windows.

The students did most of the actual work. I helped with the basic ideas, and contributed the threat model. Deniability is a very hard feature to achieve.

There are several threat models against which a DFS could potentially be secure:

  • One-Time Access. The attacker has a single snapshot of the disk image. An example would be when the secret police seize Alice’s computer.
  • Intermittent Access. The attacker has several snapshots of the disk image, taken at different times. An example would be border guards who make a copy of Alice’s hard drive every time she enters or leaves the country.
  • Regular Access. The attacker has many snapshots of the disk image, taken in short intervals. An example would be if the secret police break into Alice’s apartment every day when she is away, and make a copy of the disk each time.
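A toy sketch of why the second and third models are so hard to defend against: two snapshots suffice, because the hidden volume’s ciphertext changes between them while the rest of the “random” free space does not. This illustrates the principle only—it is not TrueCrypt’s actual on-disk format, and the sizes and offsets are made up:

```python
# Toy illustration of why "Intermittent Access" defeats naive deniability:
# two snapshots of the same disk region differ exactly where the hidden
# volume lives, even though each snapshot alone looks like random filler.

import os

disk_size = 1024
hidden_region = slice(256, 512)   # hypothetical location of hidden ciphertext

# Snapshot 1: free space filled with random bytes, so a single image is
# indistinguishable from "no hidden volume here".
snapshot1 = bytearray(os.urandom(disk_size))

# Between border crossings, Alice writes to her hidden volume:
snapshot2 = bytearray(snapshot1)
snapshot2[hidden_region] = os.urandom(256)

# The border guard diffs the two images: the changes cluster in one
# supposedly "unused" region, betraying the hidden volume's existence.
changed = [i for i in range(disk_size) if snapshot1[i] != snapshot2[i]]
print(min(changed), max(changed))
```

A single snapshot reveals nothing; the correlation between snapshots is what breaks deniability.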

Since we wrote our paper, TrueCrypt released version 6.0 of its software, which claims to have addressed many of the issues we’ve uncovered. In the paper, we said:

We analyzed the most current version of TrueCrypt available at the writing of the paper, version 5.1a. We shared a draft of our paper with the TrueCrypt development team in May 2008. TrueCrypt version 6.0 was released in July 2008. We have not analyzed version 6.0, but observe that TrueCrypt v6.0 does take new steps to improve TrueCrypt’s deniability properties (e.g., via the creation of deniable operating systems, which we also recommend in Section 5). We suggest that the breadth of our results for TrueCrypt v5.1a highlights the challenges to creating deniable file systems. Given these potential challenges, we encourage users not to blindly trust the deniability of such systems. Rather, we encourage further research evaluating the deniability of such systems, as well as research on new yet light-weight methods for improving deniability.

So we cannot break the deniability feature in TrueCrypt 6.0. But, honestly, I wouldn’t trust it.

There have been two news articles (and a Slashdot thread) about the paper.

One talks about a generalization to encrypted partitions. If you don’t encrypt the entire drive, there is the possibility—and it seems very probable—that information about the encrypted partition will leak onto the unencrypted rest of the drive. Whole disk encryption is the smartest option.

Our paper will be presented at the 3rd USENIX Workshop on Hot Topics in Security (HotSec ’08). I’ve written about deniability before.

Posted on July 18, 2008 at 6:56 AM

Homeland Security Cost-Benefit Analysis

This is an excellent paper by Ohio State political science professor John Mueller. Titled “The Quixotic Quest for Invulnerability: Assessing the Costs, Benefits, and Probabilities of Protecting the Homeland,” it lays out some common-sense premises and policy implications.

The premises:

1. The number of potential terrorist targets is essentially infinite.

2. The probability that any individual target will be attacked is essentially zero.

3. If one potential target happens to enjoy a degree of protection, the agile terrorist usually can readily move on to another one.

4. Most targets are “vulnerable” in that it is not very difficult to damage them, but invulnerable in that they can be rebuilt in fairly short order and at tolerable expense.

5. It is essentially impossible to make a very wide variety of potential terrorist targets invulnerable except by completely closing them down.

The policy implications:

1. Any protective policy should be compared to a “null case”: do nothing, and use the money saved to rebuild and to compensate any victims.

2. Abandon any effort to imagine a terrorist target list.

3. Consider negative effects of protection measures: not only direct cost, but inconvenience, enhancement of fear, negative economic impacts, reduction of liberties.

4. Consider the opportunity costs, the tradeoffs, of protection measures.

Here’s the abstract:

This paper attempts to set out some general parameters for coming to grips with a central homeland security concern: the effort to make potential targets invulnerable, or at least notably less vulnerable, to terrorist attack. It argues that protection makes sense only when protection is feasible for an entire class of potential targets and when the destruction of something in that target set would have quite large physical, economic, psychological, and/or political consequences. There are a very large number of potential targets where protection is essentially a waste of resources and a much more limited one where it may be effective.

The whole paper is worth reading.

Posted on July 17, 2008 at 6:43 AM

Security and Human Behavior

I’m writing from the First Interdisciplinary Workshop on Security and Human Behavior (SHB 08).

Security is both a feeling and a reality, and they’re different. There are several different research communities: technologists who study security systems, and psychologists who study people, not to mention economists, anthropologists and others. Increasingly these worlds are colliding.

  • Security design is by nature psychological, yet many systems ignore this, and cognitive biases lead people to misjudge risk. For example, a key icon in the corner of a web browser makes people feel more secure than they actually are, while people feel far less secure flying than they actually are. These biases are exploited by various attackers.

  • Security problems relate to risk and uncertainty, and the way we react to them. Cognitive and perception biases affect the way we deal with risk, and therefore the way we understand security—whether that is the security of a nation, of an information system, or of one’s personal information.

  • Many real attacks on information systems exploit psychology more than technology. Phishing attacks trick people into logging on to websites that appear genuine but actually steal passwords. Technical measures can stop some phishing tactics, but stopping users from making bad decisions is much harder. Deception-based attacks are now the greatest threat to online security.

  • In order to be effective, security must be usable—not just by geeks, but by ordinary people. Research into usable security invariably has a psychological component.

  • Terrorism is perceived to be a major threat to society. Yet the actual damage done by terrorist attacks is dwarfed by the secondary effects as target societies overreact. There are many topics here, from the manipulation of risk perception to the anthropology of religion.

  • There are basic research questions; for example, about the extent to which the use and detection of deception in social contexts may have helped drive human evolution.

The dialogue between researchers in security and in psychology is rapidly widening, bringing in more and more disciplines—from security usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other.

About a year ago Ross Anderson and I conceived this conference as a way to bring together computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security. I’ve read a lot—and written some—on psychology and security over the past few years, and have been continually amazed by some of the research that people outside my field have been doing on topics very relevant to my field. Ross and I both thought that bringing these diverse communities together would be fascinating to everyone. So we convinced behavioral economists Alessandro Acquisti and George Loewenstein to help us organize the workshop, invited the people we had all been reading, and also asked them who else to invite. The response was overwhelming. Almost everyone we wanted was able to attend, and the result was a 42-person conference with 35 speakers.

We’re most of the way through the morning, and it’s been even more fascinating than I expected. (Here’s the agenda.) We’ve talked about detecting deception in people, organizational biases in making security decisions, building security “intuition” into Internet browsers, different techniques to prevent crime, complexity and failure, and the modeling of security feeling.

I had high hopes of liveblogging this event, but it’s far too fascinating to spend time writing posts. If you want to read some of the more interesting papers written by the participants, this is a good page to start with.

I’ll write more about the conference later.

EDITED TO ADD (6/30): Ross Anderson has a blog post, where he liveblogs the individual sessions in the comments. And I should add that this was an invitational event—which is why you haven’t heard about it before—and that the room here at MIT is completely full.

EDITED TO ADD (7/1): Matt Blaze has posted audio. And Ross Anderson—link above—is posting paragraph-long summaries for each speaker.

EDITED TO ADD (7/6): Photos of the speakers.

EDITED TO ADD (7/7): MSNBC article on the workshop. And L. Jean Camp’s notes.

Posted on June 30, 2008 at 11:17 AM

Framing Computers Under the DMCA

Researchers from the University of Washington have demonstrated how lousy the MPAA/RIAA/etc. tactics are by successfully framing printers on their network. These printers, which can’t download anything, received nine takedown notices:

The researchers rigged the software agents to implicate three laserjet printers, which were then accused in takedown letters by the M.P.A.A. of downloading copies of “Iron Man” and the latest Indiana Jones film.

Research, including the paper, here.

Posted on June 9, 2008 at 6:47 AM

Risk and Culture

The Second National Risk and Culture Study, conducted by the Cultural Cognition Project at Yale Law School.

Abstract:

Cultural Cognition refers to the disposition to conform one’s beliefs about societal risks to one’s preferences for how society should be organized. Based on surveys and experiments involving some 5,000 Americans, the Second National Risk and Culture Study presents empirical evidence of the effect of this dynamic in generating conflict about global warming, school shootings, domestic terrorism, nanotechnology, and the mandatory vaccination of school-age girls against HPV, among other issues. The Study also presents evidence of risk-communication strategies that counteract cultural cognition. Because nuclear power affirms rather than threatens the identity of persons who hold individualist values, for example, proposing it as a solution to global warming makes persons who hold such values more willing to consider evidence that climate change is a serious risk. Because people tend to impute credibility to people who share their values, persons who hold hierarchical and egalitarian values are less likely to polarize when they observe people who hold their values advocating unexpected positions on the vaccination of young girls against HPV. Such techniques can help society to create a deliberative climate in which citizens converge on policies that are both instrumentally sound and expressively congenial to persons of diverse values.

And from the conclusion:

There is a culture war in America, but it is about facts, not values. There is very little evidence that most Americans care nearly as much about issues that symbolize competing cultural values as they do about the economy, national security, and the safety and health of themselves and their loved ones. There is ample evidence, however, that Americans are sharply divided along cultural lines about what sorts of conditions endanger these interests and what sorts of policies effectively counteract such risks.

Findings from the Second National Culture and Risk Study help to show why. Psychologically speaking, it’s much easier to believe that conduct one finds dishonorable or offensive is dangerous, and conduct one finds noble or admirable is socially beneficial, than vice versa. People are also much more inclined to accept information about risk and danger when it comes from someone who shares their values than when it comes from someone who holds opposing commitments.

Posted on May 21, 2008 at 5:19 AM

Spying on Computer Monitors Off Reflective Objects

Impressive research:

At Saarland University, researchers trained a $500 telescope on a teapot near a computer monitor 5 meters away. The images are tiny but amazingly clear, professor Michael Backes told IDG.

The telescope was trained on a reflective object in front of the monitor. A teapot, for example, yielded readable images of 12-point Word documents from a distance of 5 meters (16 feet). From 10 meters, the researchers could read 18-point fonts. With a $27,500 Dobson telescope, they could get the same quality of images at 30 meters.

Here’s the paper:

Abstract

We present a novel eavesdropping technique for spying at a distance on data that is displayed on an arbitrary computer screen, including the currently prevalent LCD monitors. Our technique exploits reflections of the screen’s optical emanations in various objects that one commonly finds in close proximity to the screen and uses those reflections to recover the original screen content. Such objects include eyeglasses, tea pots, spoons, plastic bottles, and even the eye of the user. We have demonstrated that this attack can be successfully mounted to spy on even small fonts using inexpensive, off-the-shelf equipment (less than 1500 dollars) from a distance of up to 10 meters. Relying on more expensive equipment allowed us to conduct this attack from over 30 meters away, demonstrating that similar attacks are feasible from the other side of the street or from a close-by building. We additionally establish theoretical limitations of the attack; these limitations may help to estimate the risk that this attack can be successfully mounted in a given environment.
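The “theoretical limitations” the abstract mentions come down to diffraction: a telescope of aperture D can resolve features no smaller than roughly 1.22·λ·d/D at distance d (the Rayleigh criterion). A back-of-envelope sketch, with illustrative aperture values not taken from the paper:

```python
# Back-of-envelope Rayleigh-criterion estimate of what a telescope can
# resolve at a given distance. Standard optics, not figures from the
# paper; the aperture sizes below are illustrative assumptions.

import math

def resolvable_feature(aperture_m, distance_m, wavelength_m=550e-9):
    """Smallest feature (meters) resolvable at distance_m by a
    diffraction-limited telescope of the given aperture, in green light."""
    return 1.22 * wavelength_m * distance_m / aperture_m

# A modest ~20 cm aperture at 10 m resolves features of a few tens of
# microns -- enough, in principle, for demagnified screen reflections.
print(resolvable_feature(0.20, 10))

# The same aperture at 30 m resolves detail three times coarser, which
# is why the longer-range attack needed a much bigger instrument.
print(resolvable_feature(0.20, 30))
```

Note that the reflection in a curved object like a teapot is a strongly demagnified image of the screen, so the attacker needs resolution much finer than the on-screen glyph size—which is why aperture, not magnification, is the limiting factor.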

Posted on May 20, 2008 at 10:44 AM

Designing Processors to Support Hacking

This won best-paper award at the First USENIX Workshop on Large-Scale Exploits and Emergent Threats: “Designing and implementing malicious hardware,” by Samuel T. King, Joseph Tucek, Anthony Cozzie, Chris Grier, Weihang Jiang, and Yuanyuan Zhou.

Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.

We present the design and implementation of Illinois Malicious Processors (IMPs). There is a substantial design space in malicious circuitry; we show that an attacker, rather than designing one specific attack, can instead design hardware to support attacks. Such flexible hardware allows powerful, general purpose attacks, while remaining surprisingly low in the amount of additional hardware. We show two such hardware designs, and implement them in a real system. Further, we show three powerful attacks using this hardware, including a login backdoor that gives an attacker complete and high-level access to the machine. This login attack requires only 1341 additional gates: gates that can be used for other attacks as well. Malicious processors are more practical, more flexible, and harder to detect than an initial analysis would suggest.

Theoretical? Sure. But combine this with stories of counterfeit computer hardware from China, and you’ve got yourself a potentially serious problem.

Posted on April 24, 2008 at 1:52 PM

Reverse-Engineering Exploits from Patches

This is interesting research: given a security patch, can you automatically reverse-engineer the security vulnerability that is being patched and create exploit code to exploit it?

Turns out you can.

What does this mean?

Attackers can simply wait for a patch to be released, use these techniques, and with reasonable chance, produce a working exploit within seconds. Coupled with a worm, all vulnerable hosts could be compromised before most are even aware a patch is available, let alone download it. Thus, Microsoft should redesign Windows Update. We propose solutions which prevent several possible schemes, some of which could be done with existing technology.

Full paper here.

Posted on April 23, 2008 at 1:35 PM

Risk Preferences in Chimpanzees and Bonobos

I’ve already written about prospect theory, which explains how people approach risk. People tend to be risk averse when it comes to gains, and risk seeking when it comes to losses:

Evolutionarily, presumably it is a better survival strategy to—all other things being equal, of course—accept small gains rather than risking them for larger ones, and risk larger losses rather than accepting smaller losses. Lions chase young or wounded wildebeest because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow.

Similarly, it is evolutionarily better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad. That is, both can result in death. If that’s true, the best option is to risk everything for the chance at no loss at all.

This behavior has been demonstrated in animals as well: “species of insects, birds and mammals range from risk neutral to risk averse when making decisions about amounts of food, but are risk seeking towards delays in receiving food.”
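This gains/losses asymmetry is easy to see in the value function Tversky and Kahneman fit to experimental data in 1992: concave for gains, convex and steeper for losses. A sketch using their published parameter estimates—the dollar amounts are arbitrary, and probability weighting is omitted for simplicity:

```python
# Illustrative sketch of prospect theory's value function, using the
# parameters Tversky and Kahneman estimated in 1992 (alpha = 0.88,
# loss-aversion lambda = 2.25). The dollar amounts are arbitrary.

def value(x, alpha=0.88, lam=2.25):
    """Subjective value of an outcome x (gain if positive, loss if negative)."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def prospect_value(outcomes):
    """Value of a gamble: probability-weighted subjective values.
    (Probability weighting is omitted for simplicity.)"""
    return sum(p * value(x) for p, x in outcomes)

# Gains: a sure $500 vs. a 50/50 chance at $1000 -- equal expected value.
sure_gain = value(500)
gamble_gain = prospect_value([(0.5, 1000), (0.5, 0)])
print(sure_gain > gamble_gain)   # True: risk averse for gains

# Losses: a sure -$500 vs. a 50/50 chance of losing $1000.
sure_loss = value(-500)
gamble_loss = prospect_value([(0.5, -1000), (0.5, 0)])
print(gamble_loss > sure_loss)   # True: risk seeking for losses
```

The concavity of the gains curve makes the sure thing worth more than the fair gamble, and the convexity of the losses curve makes the gamble hurt less than the sure loss—exactly the pattern in the animal studies above.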

A recent study examines the relative risk preferences in two closely related species: chimpanzees and bonobos.

Abstract

Human and non-human animals tend to avoid risky prospects. If such patterns of economic choice are adaptive, risk preferences should reflect the typical decision-making environments faced by organisms. However, this approach has not been widely used to examine the risk sensitivity in closely related species with different ecologies. Here, we experimentally examined risk-sensitive behaviour in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus), closely related species whose distinct ecologies are thought to be the major selective force shaping their unique behavioural repertoires. Because chimpanzees exploit riskier food sources in the wild, we predicted that they would exhibit greater tolerance for risk in choices about food. Results confirmed this prediction: chimpanzees significantly preferred the risky option, whereas bonobos preferred the fixed option. These results provide a relatively rare example of risk-prone behaviour in the context of gains and show how ecological pressures can sculpt economic decision making.

The basic argument is that in the chimpanzee’s natural environment, if you don’t take risks you don’t get any of the high-value rewards (e.g., monkey meat). Bonobos “rely more heavily than chimpanzees on terrestrial herbaceous vegetation, a more temporally and spatially consistent food source.” So chimpanzees are more willing to take risks.

Fascinating stuff, but there are at least two problems with this study. The first, the researchers acknowledge in their paper: the animals studied—five of each species—were from the Wolfgang Koehler Primate Research Center at the Leipzig Zoo, and the experimenters were unable to rule out differences in the “experiences, cultures and conditions of the two specific groups tested here.”

The second problem is more general: we know very little about the lives of bonobos in the wild. There are a lot of popular stereotypes about bonobos, but they’re sloppy at best.

Even so, I like seeing this kind of research. It’s fascinating.

EDITED TO ADD (5/13): Response to that last link.

Posted on April 17, 2008 at 6:20 AM

