Entries Tagged "academic papers"

Security and Human Behavior

I’m writing from the First Interdisciplinary Workshop on Security and Human Behavior (SHB 08).

Security is both a feeling and a reality, and they're different. Several research communities study it: technologists who study security systems, psychologists who study people, and also economists, anthropologists, and others. Increasingly, these worlds are colliding.

  • Security design is by nature psychological, yet many systems ignore this, and cognitive biases lead people to misjudge risk. For example, a key in the corner of a web browser makes people feel more secure than they actually are, while people feel far less secure flying than they actually are. These biases are exploited by various attackers.

  • Security problems relate to risk and uncertainty, and the way we react to them. Cognitive and perception biases affect the way we deal with risk, and therefore the way we understand security—whether that is the security of a nation, of an information system, or of one’s personal information.

  • Many real attacks on information systems exploit psychology more than technology. Phishing attacks trick people into logging on to websites that appear genuine but actually steal passwords. Technical measures can stop some phishing tactics, but stopping users from making bad decisions is much harder. Deception-based attacks are now the greatest threat to online security.

  • In order to be effective, security must be usable—not just by geeks, but by ordinary people. Research into usable security invariably has a psychological component.

  • Terrorism is perceived to be a major threat to society. Yet the actual damage done by terrorist attacks is dwarfed by the secondary effects as target societies overreact. There are many topics here, from the manipulation of risk perception to the anthropology of religion.

  • There are basic research questions; for example, about the extent to which the use and detection of deception in social contexts may have helped drive human evolution.

The dialogue between researchers in security and in psychology is rapidly widening, bringing in more and more disciplines—from security usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other.

About a year ago Ross Anderson and I conceived this conference as a way to bring together computer security researchers, psychologists, behavioral economists, sociologists, philosophers, and others—all of whom are studying the human side of security. I’ve read a lot—and written some—on psychology and security over the past few years, and have been continually amazed by the research that people outside my field have been doing on topics highly relevant to it. Ross and I both thought that bringing these diverse communities together would be fascinating for everyone. So we convinced behavioral economists Alessandro Acquisti and George Loewenstein to help us organize the workshop, invited the people we had all been reading, and asked them who else to invite. The response was overwhelming. Almost everyone we wanted was able to attend, and the result was a 42-person conference with 35 speakers.

We’re most of the way through the morning, and it’s been even more fascinating than I expected. (Here’s the agenda.) We’ve talked about detecting deception in people, organizational biases in making security decisions, building security “intuition” into Internet browsers, different techniques to prevent crime, complexity and failure, and the modeling of security feeling.

I had high hopes of liveblogging this event, but it’s far too fascinating to spend time writing posts. If you want to read some of the more interesting papers written by the participants, this is a good page to start with.

I’ll write more about the conference later.

EDITED TO ADD (6/30): Ross Anderson has a blog post, where he liveblogs the individual sessions in the comments. And I should add that this was an invitational event—which is why you haven’t heard about it before—and that the room here at MIT is completely full.

EDITED TO ADD (7/1): Matt Blaze has posted audio. And Ross Anderson—link above—is posting paragraph-long summaries for each speaker.

EDITED TO ADD (7/6): Photos of the speakers.

EDITED TO ADD (7/7): MSNBC article on the workshop. And L. Jean Camp’s notes.

Posted on June 30, 2008 at 11:17 AM

Framing Computers Under the DMCA

Researchers from the University of Washington have demonstrated how lousy the MPAA/RIAA/etc. tactics are by successfully framing printers on their network. These printers, which can’t download anything, received nine takedown notices:

The researchers rigged the software agents to implicate three laserjet printers, which were then accused in takedown letters by the M.P.A.A. of downloading copies of “Iron Man” and the latest Indiana Jones film.

Research, including the paper, here.
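
The mechanism deserves a sketch. The monitoring agents the researchers studied infer infringement from indirect evidence—an IP address’s mere presence in a BitTorrent tracker’s peer list—and the HTTP tracker protocol lets a client report an arbitrary IP address via the optional “ip” announce parameter, so anyone can insert any address, including a printer’s, into a swarm. A minimal Python sketch of the idea (the tracker URL and info hash are hypothetical placeholders):

    import requests

    # Announce a victim IP to a (hypothetical) BitTorrent tracker. Monitors
    # that scrape the tracker's peer list will then "observe" the victim in
    # the swarm, even though that host never transferred a byte.
    params = {
        "info_hash": bytes(20),                # 20-byte torrent ID (placeholder)
        "peer_id": b"-XX0001-000000000000",    # arbitrary 20-byte peer ID
        "ip": "192.0.2.77",                    # the framed address -- a printer, say
        "port": 6881,
        "uploaded": 0, "downloaded": 0, "left": 0,
    }
    requests.get("http://tracker.example.org/announce", params=params, timeout=10)

No data transfer ever happens; the takedown letters were generated from peer-list membership alone.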

Posted on June 9, 2008 at 6:47 AM

Risk and Culture

The Second National Risk and Culture Study, conducted by the Cultural Cognition Project at Yale Law School.

Abstract:

Cultural Cognition refers to the disposition to conform one’s beliefs about societal risks to one’s preferences for how society should be organized. Based on surveys and experiments involving some 5,000 Americans, the Second National Risk and Culture Study presents empirical evidence of the effect of this dynamic in generating conflict about global warming, school shootings, domestic terrorism, nanotechnology, and the mandatory vaccination of school-age girls against HPV, among other issues. The Study also presents evidence of risk-communication strategies that counteract cultural cognition. Because nuclear power affirms rather than threatens the identity of persons who hold individualist values, for example, proposing it as a solution to global warming makes persons who hold such values more willing to consider evidence that climate change is a serious risk. Because people tend to impute credibility to people who share their values, persons who hold hierarchical and egalitarian values are less likely to polarize when they observe people who hold their values advocating unexpected positions on the vaccination of young girls against HPV. Such techniques can help society to create a deliberative climate in which citizens converge on policies that are both instrumentally sound and expressively congenial to persons of diverse values.

And from the conclusion:

There is a culture war in America, but it is about facts, not values. There is very little evidence that most Americans care nearly as much about issues that symbolize competing cultural values as they do about the economy, national security, and the safety and health of themselves and their loved ones. There is ample evidence, however, that Americans are sharply divided along cultural lines about what sorts of conditions endanger these interests and what sorts of policies effectively counteract such risks.

Findings from the Second National Culture and Risk Study help to show why. Psychologically speaking, it’s much easier to believe that conduct one finds dishonorable or offensive is dangerous, and conduct one finds noble or admirable is socially beneficial, than vice versa. People are also much more inclined to accept information about risk and danger when it comes from someone who shares their values than when it comes from someone who holds opposing commitments.

Posted on May 21, 2008 at 5:19 AM

Spying on Computer Monitors Off Reflective Objects

Impressive research:

At Saarland University, researchers trained a $500 telescope on a teapot near a computer monitor 5 meters away. The images are tiny but amazingly clear, professor Michael Backes told IDG.

All it took was a $500 telescope trained on a reflective object in front of the monitor. For example, a teapot yielded readable images of 12 point Word documents from a distance of 5 meters (16 feet). From 10 meters, they were able to read 18 point fonts. With a $27,500 Dobson telescope, they could get the same quality of images at 30 meters.

Here’s the paper:

Abstract

We present a novel eavesdropping technique for spying at a distance on data that is displayed on an arbitrary computer screen, including the currently prevalent LCD monitors. Our technique exploits reflections of the screen’s optical emanations in various objects that one commonly finds in close proximity to the screen and uses those reflections to recover the original screen content. Such objects include eyeglasses, tea pots, spoons, plastic bottles, and even the eye of the user. We have demonstrated that this attack can be successfully mounted to spy on even small fonts using inexpensive, off-the-shelf equipment (less than 1500 dollars) from a distance of up to 10 meters. Relying on more expensive equipment allowed us to conduct this attack from over 30 meters away, demonstrating that similar attacks are feasible from the other side of the street or from a close-by building. We additionally establish theoretical limitations of the attack; these limitations may help to estimate the risk that this attack can be successfully mounted in a given environment.
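
The “theoretical limitations” in the abstract come down to diffraction. As a back-of-the-envelope version (the paper’s analysis is more careful, and the numbers below are illustrative assumptions, not the paper’s): a telescope with aperture D, observing light of wavelength λ from distance d, cannot resolve features on the reflecting object smaller than roughly the Rayleigh limit

    \[
      \Delta x \;\approx\; 1.22\,\frac{\lambda\,d}{D}
    \]

For D = 20 cm, λ = 550 nm, and d = 10 m, that is about 34 microns at the reflecting surface. Because a curved teapot reflects a strongly demagnified image of the screen, resolution of tens of microns at the teapot is what it takes to read the reflected text—which is why aperture, not magnification, is the binding constraint, and why the larger (and pricier) telescope extends the attack’s range.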

Posted on May 20, 2008 at 10:44 AM

Designing Processors to Support Hacking

This won best-paper award at the First USENIX Workshop on Large-Scale Exploits and Emergent Threats: “Designing and implementing malicious hardware,” by Samuel T. King, Joseph Tucek, Anthony Cozzie, Chris Grier, Weihang Jiang, and Yuanyuan Zhou.

Hidden malicious circuits provide an attacker with a stealthy attack vector. As they occupy a layer below the entire software stack, malicious circuits can bypass traditional defensive techniques. Yet current work on trojan circuits considers only simple attacks against the hardware itself, and straightforward defenses. More complex designs that attack the software are unexplored, as are the countermeasures an attacker may take to bypass proposed defenses.

We present the design and implementation of Illinois Malicious Processors (IMPs). There is a substantial design space in malicious circuitry; we show that an attacker, rather than designing one specific attack, can instead design hardware to support attacks. Such flexible hardware allows powerful, general purpose attacks, while remaining surprisingly low in the amount of additional hardware. We show two such hardware designs, and implement them in a real system. Further, we show three powerful attacks using this hardware, including a login backdoor that gives an attacker complete and high-level access to the machine. This login attack requires only 1341 additional gates: gates that can be used for other attacks as well. Malicious processors are more practical, more flexible, and harder to detect than an initial analysis would suggest.

Theoretical? Sure. But combine this with stories of counterfeit computer hardware from China, and you’ve got yourself a potentially serious problem.
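
The flavor of “hardware to support attacks” is easy to capture in a toy model: a few extra gates watch values crossing the bus for a magic trigger and then flip a single state bit that disables privilege checks. The sketch below is an illustrative software model only—not the authors’ circuits—and every name in it is hypothetical:

    # Toy software model of a hardware "trigger" backdoor -- illustrative
    # only, not the IMP authors' design. A small comparator watches values
    # crossing the memory bus; when a magic knock value appears, it silently
    # disables privilege checks for subsequent accesses.

    MAGIC_KNOCK = 0xDEADBEEFCAFEF00D  # hypothetical trigger value

    class BackdooredMMU:
        def __init__(self):
            self.privilege_checks_enabled = True

        def observe_bus(self, value: int) -> None:
            # The malicious circuit: one comparator and one state bit.
            if value == MAGIC_KNOCK:
                self.privilege_checks_enabled = False

        def access(self, addr: int, user_mode: bool, kernel_page: bool) -> bool:
            # Normal MMU policy: user mode may not touch kernel pages --
            # unless the backdoor has been triggered.
            if user_mode and kernel_page and self.privilege_checks_enabled:
                return False  # fault
            return True

    mmu = BackdooredMMU()
    assert not mmu.access(0xFFFF0000, user_mode=True, kernel_page=True)
    mmu.observe_bus(MAGIC_KNOCK)   # attacker sends the knock, e.g. in a packet
    assert mmu.access(0xFFFF0000, user_mode=True, kernel_page=True)

The point, as in the paper, is how cheap this is: one comparator and one bit of state buy a general-purpose privilege-escalation primitive that sits below the entire software stack.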

Posted on April 24, 2008 at 1:52 PM

Reverse-Engineering Exploits from Patches

This is interesting research: given a security patch, can you automatically reverse-engineer the vulnerability being patched and create code to exploit it?

Turns out you can.

What does this mean?

Attackers can simply wait for a patch to be released, use these techniques, and with reasonable chance, produce a working exploit within seconds. Coupled with a worm, all vulnerable hosts could be compromised before most are even aware a patch is available, let alone download it. Thus, Microsoft should redesign Windows Update. We propose solutions which prevent several possible schemes, some of which could be done with existing technology.

Full paper here.
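
The core idea is easy to sketch. A security patch usually adds an input-validation check; any input that fails the new check is, by construction, an input the unpatched code mishandles. Feed the negation of the check to a constraint solver and out comes a candidate exploit input. A toy version using the Z3 solver—an illustration of the idea, not the authors’ system, with made-up bounds:

    from z3 import BitVec, Solver, ULE, Not, sat

    # Suppose diffing the patched and unpatched binaries shows the patch
    # added the check `if (length > 512) reject();` before a fixed-size
    # buffer copy. An input the old code accepts but the new check rejects
    # is an exploit candidate, so we ask a solver for one.

    length = BitVec("length", 32)

    s = Solver()
    s.add(ULE(length, 0xFFFF))      # hypothetical protocol limit on the field
    s.add(Not(ULE(length, 512)))    # ...that nonetheless violates the new check

    if s.check() == sat:
        print("candidate overflow length:", s.model()[length])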

Posted on April 23, 2008 at 1:35 PM

Risk Preferences in Chimpanzees and Bonobos

I’ve already written about prospect theory, which explains how people approach risk. People tend to be risk averse when it comes to gains, and risk seeking when it comes to losses:

Evolutionarily, presumably it is a better survival strategy to—all other things being equal, of course—accept small gains rather than risking them for larger ones, and risk larger losses rather than accepting smaller losses. Lions chase young or wounded wildebeest because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there’s a risk of missing lunch entirely if it gets away. And a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow.

Similarly, it is evolutionarily better to risk a larger loss than to accept a smaller loss. Because animals tend to live on the razor’s edge between starvation and reproduction, any loss of food—whether small or large—can be equally bad. That is, both can result in death. If that’s true, the best option is to risk everything for the chance at no loss at all.
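
The asymmetry is easy to work through with the standard Kahneman–Tversky value function and their published parameter estimates—a textbook illustration, not something from the passages quoted here:

    \[
      v(x) = \begin{cases} x^{\alpha} & x \ge 0 \\ -\lambda\,(-x)^{\beta} & x < 0 \end{cases}
      \qquad \alpha = \beta = 0.88,\quad \lambda = 2.25
    \]

For gains, a sure $500 is worth v(500) = 500^0.88 ≈ 237, while a 50% chance at $1000 is worth 0.5 · 1000^0.88 ≈ 218: take the sure thing. For losses, a sure $500 loss is worth −2.25 · 500^0.88 ≈ −534, while a 50% chance of losing $1000 is worth 0.5 · (−2.25) · 1000^0.88 ≈ −491: take the gamble.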

This behavior has been demonstrated in animals as well: “species of insects, birds and mammals range from risk neutral to risk averse when making decisions about amounts of food, but are risk seeking towards delays in receiving food.”

A recent study examines the relative risk preferences in two closely related species: chimpanzees and bonobos.

Abstract

Human and non-human animals tend to avoid risky prospects. If such patterns of economic choice are adaptive, risk preferences should reflect the typical decision-making environments faced by organisms. However, this approach has not been widely used to examine the risk sensitivity in closely related species with different ecologies. Here, we experimentally examined risk-sensitive behaviour in chimpanzees (Pan troglodytes) and bonobos (Pan paniscus), closely related species whose distinct ecologies are thought to be the major selective force shaping their unique behavioural repertoires. Because chimpanzees exploit riskier food sources in the wild, we predicted that they would exhibit greater tolerance for risk in choices about food. Results confirmed this prediction: chimpanzees significantly preferred the risky option, whereas bonobos preferred the fixed option. These results provide a relatively rare example of risk-prone behaviour in the context of gains and show how ecological pressures can sculpt economic decision making.

The basic argument is that in the natural environment of the chimpanzee, if you don’t take risks you don’t get any of the high-value rewards (e.g., monkey meat). Bonobos “rely more heavily than chimpanzees on terrestrial herbaceous vegetation, a more temporally and spatially consistent food source.” So chimpanzees are more willing to take risks.

Fascinating stuff, but there are at least two problems with this study. The first one, the researchers explain in their paper. The animals studied—five of each species—were from the Wolfgang Koehler Primate Research Center at the Leipzig Zoo, and the experimenters were unable to rule out differences in the “experiences, cultures and conditions of the two specific groups tested here.”

The second problem is more general: we know very little about the life of bonobos in the wild. There are a lot of popular stereotypes about bonobos, but they’re sloppy at best.

Even so, I like seeing this kind of research. It’s fascinating.

EDITED TO ADD (5/13): Response to that last link.

Posted on April 17, 2008 at 6:20 AM

Seat Belt Usage and Compensating Behavior

There is a theory that people have an inherent risk thermostat that seeks out an optimal level of risk. When something becomes inherently safer—a law is passed requiring motorcycle riders to wear helmets, for example—people compensate by riding more recklessly. I first read this theory in a 1999 paper by John Adams at University College London, although it seems to have originated with Sam Peltzman.

In any case, this paper presents data that contradicts that thesis:

Abstract—This paper investigates the effects of mandatory seat belt laws on driver behavior and traffic fatalities. Using a unique panel data set on seat belt usage in all U.S. jurisdictions, we analyze how such laws, by influencing seat belt use, affect the incidence of traffic fatalities. Allowing for the endogeneity of seat belt usage, we find that such usage decreases overall traffic fatalities. The magnitude of this effect, however, is significantly smaller than the estimate used by the National Highway Traffic Safety Administration. In addition, we do not find significant support for the compensating-behavior theory, which suggests that seat belt use also has an indirect adverse effect on fatalities by encouraging careless driving. Finally, we identify factors, especially the type of enforcement used, that make seat belt laws more effective in increasing seat belt usage.
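
“Allowing for the endogeneity of seat belt usage” means the authors cannot simply regress fatalities on usage, because drivers who buckle up differ systematically from those who don’t. Schematically—my notation, not necessarily the paper’s exact specification—the standard instrumental-variables fix uses the law itself as the instrument:

    \[
      \text{first stage: } S_{it} = \pi\,L_{it} + x_{it}'\gamma + u_{it},
      \qquad
      \text{second stage: } F_{it} = \beta\,\hat{S}_{it} + x_{it}'\delta + \varepsilon_{it}
    \]

where S is belt usage, L a seat-belt-law indicator, and F traffic fatalities in jurisdiction i and year t. The law shifts usage but plausibly affects fatalities only through usage, which is what lets the authors isolate the causal effect of belts and test whether belted drivers offset the safety gain by driving more carelessly.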

Posted on April 11, 2008 at 1:44 PM

KeeLoq Still Broken

That’s the remote keyless entry system used by Chrysler, Daewoo, Fiat, General Motors, Honda, Toyota, Lexus, Volvo, Volkswagen, Jaguar, and probably others. It’s broken:

The KeeLoq encryption algorithm is widely used for security relevant applications, e.g., in the form of passive Radio Frequency Identification (RFID) transponders for car immobilizers and in various access control and Remote Keyless Entry (RKE) systems, e.g., for opening car doors and garage doors.

We present the first successful DPA (Differential Power Analysis) attacks on numerous commercially available products employing KeeLoq. These so-called side-channel attacks are based on measuring and evaluating the power consumption of a KeeLoq device during its operation. Using our techniques, an attacker can reveal not only the secret key of remote controls in less than one hour, but also the manufacturer key of the corresponding receivers in less than one day. Knowing the manufacturer key allows for creating an arbitrary number of valid new keys and generating new remote controls.

We further propose a new eavesdropping attack for which monitoring of two ciphertexts, sent from a remote control employing KeeLoq code hopping (car key, garage door opener, etc.), is sufficient to recover the device key of the remote control. Hence, using the methods described by us, an attacker can clone a remote control from a distance and gain access to a target that is protected by the claimed to be “highly secure” KeeLoq algorithm.

We consider our attacks to be of serious practical interest, as commercial KeeLoq access control systems can be overcome with modest effort.
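
For readers unfamiliar with DPA: the attack guesses part of the key, predicts how strongly a key-dependent intermediate value should leak into the power trace (typically via its Hamming weight), and keeps the guess that correlates best with the measurements. A generic correlation-power-analysis sketch—illustrative only, not the paper’s KeeLoq-specific attack:

    import numpy as np

    # `traces` holds one power trace per run; `inputs` the known byte
    # processed in that run. Leakage is modeled as the Hamming weight of a
    # key-dependent intermediate value.

    def hamming_weight(x: np.ndarray) -> np.ndarray:
        return np.unpackbits(x.astype(np.uint8)[:, None], axis=1).sum(axis=1)

    def recover_key_byte(traces: np.ndarray, inputs: np.ndarray) -> int:
        """traces: (runs, samples) floats; inputs: (runs,) uint8. Returns the
        key guess whose predicted leakage best matches the measurements."""
        centered = traces - traces.mean(axis=0)
        best_guess, best_corr = 0, 0.0
        for guess in range(256):
            # Hypothetical intermediate under this guess; a real attack
            # would model a KeeLoq round function here, XOR is a stand-in.
            hypo = hamming_weight(inputs ^ guess).astype(float)
            hypo -= hypo.mean()
            corr = np.abs(hypo @ centered) / (
                np.linalg.norm(hypo) * np.linalg.norm(centered, axis=0) + 1e-12)
            if corr.max() > best_corr:
                best_guess, best_corr = guess, float(corr.max())
        return best_guess

Repeating this per key chunk recovers the whole key; the paper’s result is that commercial KeeLoq implementations leak enough for this to work against remote controls in under an hour.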

I’ve written about this before, but the above link has much better data.

EDITED TO ADD (4/4): A good article.

Posted on April 4, 2008 at 6:03 AM

Hacking Medical Devices

Okay, so this could be big news:

But a team of computer security researchers plans to report Wednesday that it had been able to gain wireless access to a combination heart defibrillator and pacemaker.

They were able to reprogram it to shut down and to deliver jolts of electricity that would potentially be fatal—if the device had been in a person. In this case, the researchers were hacking into a device in a laboratory.

The researchers said they had also been able to glean personal patient data by eavesdropping on signals from the tiny wireless radio that Medtronic, the device’s maker, had embedded in the implant as a way to let doctors monitor and adjust it without surgery.

There’s only a little bit of hyperbole in the New York Times article. The research is being conducted by the Medical Device Security Center, with researchers from Beth Israel Deaconess Medical Center, Harvard Medical School, the University of Massachusetts Amherst, and the University of Washington. They have two published papers.

This is from the FAQ for the second paper (an ICD is an implantable cardiac defibrillator):

As part of our research we evaluated the security and privacy properties of a common ICD. We investigate whether a malicious party could create his or her own equipment capable of wirelessly communicating with this ICD.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could violate the privacy of patient information and medical telemetry. The ICD wirelessly transmits patient information and telemetry without observable encryption. The adversary’s computer could intercept wireless signals from the ICD and learn information including: the patient’s name, the patient’s medical history, the patient’s date of birth, and so on.

Using our own equipment (an antenna, radio hardware, and a PC), we found that someone could also turn off or modify therapy settings stored on the ICD. Such a person could render the ICD incapable of responding to dangerous cardiac events. A malicious person could also make the ICD deliver a shock that could induce ventricular fibrillation, a potentially lethal arrhythmia.

Of course, we all know how this happened. It’s a story we’ve seen a zillion times before: the designers didn’t think about security, so the design wasn’t secure.
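
For contrast, here is roughly what “thinking about security” might look like at the protocol layer: authenticated encryption on the radio link, so eavesdroppers learn nothing and unauthenticated commands are rejected. A minimal sketch using a generic AEAD construction—this is not Medtronic’s protocol, and real implants have brutal constraints on power, key management, and emergency access that this ignores:

    import os
    from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

    # Hypothetical symmetric key, provisioned at implant time (key
    # management and emergency-access override are the hard problems this
    # sketch skips).
    key = ChaCha20Poly1305.generate_key()
    aead = ChaCha20Poly1305(key)

    def seal(message: bytes, channel: bytes) -> bytes:
        """Encrypt-and-authenticate one packet for the given channel label."""
        nonce = os.urandom(12)   # must never repeat under the same key
        return nonce + aead.encrypt(nonce, message, channel)

    def open_packet(packet: bytes, channel: bytes) -> bytes:
        """Raises InvalidTag for forged or tampered packets -- an attacker
        without the key cannot read telemetry or issue valid commands."""
        nonce, ciphertext = packet[:12], packet[12:]
        return aead.decrypt(nonce, ciphertext, channel)

    pkt = seal(b"rhythm: normal (simulated data)", b"telemetry")
    assert open_packet(pkt, b"telemetry") == b"rhythm: normal (simulated data)"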

The researchers are making it very clear that this doesn’t mean people shouldn’t get pacemakers and ICDs. Again, from the FAQ:

We strongly believe that nothing in our report should deter patients from receiving these devices if recommended by their physician. The implantable cardiac defibrillator is a proven, life-saving technology. We believe that the risk to patients is low and that patients should not be alarmed. We do not know of a single case where an IMD patient has ever been harmed by a malicious security attack. To carry out the attacks we discuss in our paper would require: malicious intent, technical sophistication, and the ability to place electronic equipment close to the patient. Our goal in performing this study is to improve the security, privacy, safety, and effectiveness of future IMDs.

For all our experiments our antenna, radio hardware, and PC were near the ICD. Our experiments were conducted in a computer laboratory and utilized simulated patient data. We did not experiment with extending the distance between the antenna and the ICD.

I agree with this answer. The risks are there, but the benefits of these devices are much greater. The point of this research isn’t to help people hack into pacemakers and commit murder, but to enable medical device companies to design better implantable equipment in the future. I think it’s great work.

Of course, that will only happen if the medical device companies don’t react like idiots:

Medtronic, the industry leader in cardiac regulating implants, said Tuesday that it welcomed the chance to look at security issues with doctors, regulators and researchers, adding that it had never encountered illegal or unauthorized hacking of its devices that have telemetry, or wireless control, capabilities.

“To our knowledge there has not been a single reported incident of such an event in more than 30 years of device telemetry use, which includes millions of implants worldwide,” a Medtronic spokesman, Robert Clark, said. Mr. Clark added that newer implants with longer transmission ranges than Maximo also had enhanced security.

[…]

St. Jude Medical, the third major defibrillator company, said it used “proprietary techniques” to protect the security of its implants and had not heard of any unauthorized or illegal manipulation of them.

Just because you have no knowledge of something happening does not mean it’s not a risk.

Another article.

The general moral here: more and more, computer technology is becoming intimately embedded in our lives. And with each new application come new security risks. We have to take those risks seriously.

Posted on March 12, 2008 at 10:39 AM
