Entries Tagged "academic papers"


New Attack Against Chip-and-Pin Systems

Well, new to us:

You see, an EMV payment card authenticates itself with a MAC of transaction data, for which the freshly generated component is the unpredictable number (UN). If you can predict it, you can record everything you need from momentary access to a chip card to play it back and impersonate the card at a future date and location. You can as good as clone the chip. It’s called a “pre-play” attack. Just like most vulnerabilities we find these days some in industry already knew about it but covered it up; we have indications the crooks know about this too, and we believe it explains a good portion of the unsolved phantom withdrawal cases reported to us for which we had until recently no explanation.
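The core of the attack is easy to sketch: the card's MAC covers transaction data that includes the terminal's unpredictable number, so an attacker who can guess the UN a target terminal will emit later can ask the card for the corresponding MAC now and replay it then. Here is a minimal sketch of that idea, using a toy HMAC in place of the real EMV MAC and a hypothetical predict_un() standing in for a terminal with a weak UN generator:

```python
import hmac, hashlib

# Toy stand-in for the card's session key and MAC computation; the real EMV
# algorithm and data layout differ -- this only illustrates the protocol flaw.
CARD_KEY = b"secret-card-session-key"

def card_mac(amount: int, date: str, un: bytes) -> bytes:
    """What the chip returns: a MAC over transaction data, including the UN."""
    data = amount.to_bytes(6, "big") + date.encode() + un
    return hmac.new(CARD_KEY, data, hashlib.sha256).digest()

def predict_un(counter: int) -> bytes:
    """Hypothetical weak terminal: its 'unpredictable' number is just a counter."""
    return counter.to_bytes(4, "big")

# Attacker with momentary access to the card pre-computes MACs for the UNs
# the target terminal is expected to emit tomorrow...
harvested = {predict_un(c): card_mac(5000, "2012-09-12", predict_un(c))
             for c in range(100, 110)}

# ...and later, when the terminal issues one of those UNs, replays the stored MAC.
un_from_terminal = predict_un(104)
assert harvested[un_from_terminal] == card_mac(5000, "2012-09-12", un_from_terminal)
```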

Paper here. And news article.

Posted on September 11, 2012 at 12:38 PM

Hacking Brain-Computer Interfaces

In this fascinating piece of research, the question is asked: can we surreptitiously collect secret information from the brains of people using brain-computer interface devices? One article:

A team of security researchers from Oxford, UC Berkeley, and the University of Geneva say that they were able to deduce digits of PIN numbers, birth months, areas of residence and other personal information by presenting 30 headset-wearing subjects with images of ATM machines, debit cards, maps, people, and random numbers in a series of experiments. The paper, titled “On the Feasibility of Side-Channel Attacks with Brain Computer Interfaces,” represents the first major attempt to uncover potential security risks in the use of the headsets.
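The attack pattern is essentially a guessing game scored by the headset: flash candidate stimuli (digits, bank logos, maps) and look for the involuntary evoked response, the P300, which is stronger for items the subject recognizes. A minimal sketch of the scoring step, assuming you already have per-trial EEG epochs grouped by stimulus (the data below is fabricated for illustration):

```python
import numpy as np

def guess_known_item(epochs_by_stimulus: dict[str, np.ndarray],
                     p300_window: slice = slice(250, 500)) -> str:
    """Pick the stimulus whose averaged evoked response is largest in the
    P300 window (~300 ms after onset). epochs_by_stimulus maps a stimulus
    label to an array of shape (n_trials, n_samples)."""
    scores = {
        stim: epochs[:, p300_window].mean()   # average amplitude in the window
        for stim, epochs in epochs_by_stimulus.items()
    }
    return max(scores, key=scores.get)

# Fabricated example: digit "7" gets a slightly stronger evoked response.
rng = np.random.default_rng(0)
epochs = {str(d): rng.normal(0.0, 1.0, size=(40, 1000)) for d in range(10)}
epochs["7"][:, 250:500] += 0.5                # simulated recognition response
print(guess_known_item(epochs))               # -> "7"
```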

This is a new development in spyware.

EDITED TO ADD (9/6): More articles. And here’s a discussion of the pros and cons of this sort of technology.

Posted on September 5, 2012 at 6:06 AM

Measuring Cooperation and Defection using Shipwreck Data

In Liars and Outliers, I talk a lot about social norms and when people follow them. This research uses survival data from shipwrecks to measure it.

The authors argue that shipwrecks can actually tell us a fair bit about human behavior, since everyone stuck on a sinking ship has to do a bit of cost-benefit analysis. People will weigh their options—which will generally involve helping others at great risk to themselves—amidst a backdrop of social norms and, at least in case of the Titanic, direct orders from authority figures. “This cost-benefit logic is fundamental in economic models of human behavior,” the authors write, suggesting that a shipwreck could provide a real-world test of ideas derived from controlled experiments.

Eight ideas, to be precise. That’s how many hypotheses the authors lay out, ranging from “women have a survival advantage in shipwrecks” to “women are more likely to survive on British ships, given the UK’s strong sense of gentility.” They tested them using a database of ship sinkings that encompasses over 15,000 passengers and crew, and provides information on everything from age and sex to whether the passenger had a first-class ticket.

For the most part, the lessons provided by the Titanic simply don’t hold. Excluding the two disasters mentioned above, crew members had a survival rate of over 60 percent, far higher than any other group analyzed. (Although they didn’t consistently survive well—in about half the wrecks, there was no statistical difference between crew and passengers). Rather than going down with the ship, captains ended up coming in second, with just under half surviving. The authors offer a number of plausible reasons for crew survival, including better fitness, a thorough knowledge of the ship that’s sinking, and better training for how to handle emergencies. In any case, however, they’re not clearly or consistently sacrificing themselves to save their passengers.

At the other end of the spectrum, nearly half the children on the Titanic survived, but figures for the rest of the shipwrecks were down near 15 percent. About a quarter of women survived other sinkings, but roughly three times that made it through the Titanic alive. If you exclude the Titanic, female survival was 18 percent, or about half the rate at which males came through alive.

What about social factors? Having the captain order “women and children first” did boost female survival, but only by about 10 percentage points. Most of the other ideas didn’t pan out. For example, the speed of sinking, which might give the crew more time to get vulnerable passengers off first, made no difference whatsoever to female survival. Neither did the length of voyage, which might give passengers more time to get to know both the boat and each other. The fraction of passengers that were female didn’t seem to make a difference either.

One social factor that did play a role was ticket price: “there is a class gradient in survival benefitting first class passengers.” Another is being on a British ship, where (except on the Titanic) women actually had lower rates of survival.

Paper here (behind a paywall):

Abstract: Since the sinking of the Titanic, there has been a widespread belief that the social norm of “women and children first” (WCF) gives women a survival advantage over men in maritime disasters, and that captains and crew members give priority to passengers. We analyze a database of 18 maritime disasters spanning three centuries, covering the fate of over 15,000 individuals of more than 30 nationalities. Our results provide a unique picture of maritime disasters. Women have a distinct survival disadvantage compared with men. Captains and crew survive at a significantly higher rate than passengers. We also find that: the captain has the power to enforce normative behavior; there seems to be no association between duration of a disaster and the impact of social norms; women fare no better when they constitute a small share of the ship’s complement; the length of the voyage before the disaster appears to have no impact on women’s relative survival rate; the sex gap in survival rates has declined since World War I; and women have a larger disadvantage in British shipwrecks. Taken together, our findings show that human behavior in life-and-death situations is best captured by the expression “every man for himself.”

Posted on August 14, 2012 at 1:16 PM

Breaking Microsoft's PPTP Protocol

Some things never change. Thirteen years ago, Mudge and I published a paper breaking Microsoft’s PPTP protocol and the MS-CHAP authentication system. I haven’t been paying attention, but I presume it’s been fixed and improved over the years. Well, it’s been broken again.

ChapCrack can take captured network traffic that contains a MS-CHAPv2 network handshake (PPTP VPN or WPA2 Enterprise handshake) and reduce the handshake’s security to a single DES (Data Encryption Standard) key.

This DES key can then be submitted to CloudCracker.com—a commercial online password cracking service that runs on a special FPGA cracking box developed by David Hulton of Pico Computing—where it will be decrypted in under a day.

The CloudCracker output can then be used with ChapCrack to decrypt an entire session captured with WireShark or other similar network sniffing tools.
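The reason the handshake collapses to a single DES key is visible in how MS-CHAPv2 builds its response: the 16-byte MD4 hash of the password is padded to 21 bytes and split into three 7-byte DES keys, each of which encrypts the same known 8-byte challenge. Two of those keys cover the full hash and the third contains only two unknown bytes, so recovering the hash costs one exhaustive 2^56 DES search plus a trivial 2^16 search, regardless of password strength. A simplified sketch of the response construction (it assumes the pycryptodome package; the real protocol derives the 8-byte challenge from a SHA-1 hash of the peer challenge, authenticator challenge, and username):

```python
from Crypto.Cipher import DES      # pycryptodome
from Crypto.Hash import MD4

def expand_des_key(key7: bytes) -> bytes:
    """Spread 56 key bits across 8 bytes; DES ignores each byte's low (parity) bit."""
    bits = int.from_bytes(key7, "big")
    return bytes(((bits >> (49 - 7 * i)) & 0x7F) << 1 for i in range(8))

def ms_chapv2_response(password: str, challenge: bytes) -> bytes:
    """24-byte response: the same challenge DES-encrypted under three keys
    carved out of MD4(password). All the secrecy sits in one 56-bit keyspace."""
    nt_hash = MD4.new(password.encode("utf-16-le")).digest()   # 16 bytes
    padded = nt_hash + b"\x00" * 5                             # 21 bytes = 3 x 7-byte keys
    return b"".join(
        DES.new(expand_des_key(padded[i:i + 7]), DES.MODE_ECB).encrypt(challenge)
        for i in range(0, 21, 7)
    )

print(ms_chapv2_response("password123", b"\x01" * 8).hex())
```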

Posted on August 6, 2012 at 11:22 AM

Fake Irises Fool Scanners

We already know you can wear fake irises to fool a scanner into thinking you’re not you, but this is the first fake iris you can use for impersonation: to fool a scanner into thinking you’re someone else.

EDITED TO ADD (8/13): Paper and slides.

Also this:

Daugman says the vulnerability in question, which involves using an iterative process to relatively quickly reconstruct a workable iris image from an iris template, is a classic “hill-climbing” attack that is a known vulnerability for all biometrics.
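A hill-climbing attack needs nothing more than a matcher that reports a similarity score: start from a random candidate, make small random changes, keep only the changes that raise the score, and repeat until the matcher accepts. A generic sketch of that loop (the matcher and threshold below are stand-ins, not the iris system discussed in the paper):

```python
import random

def hill_climb(match_score, length=256, threshold=0.95, iterations=20000):
    """Reconstruct an input a black-box matcher accepts, using only its score.
    match_score: callable mapping a candidate bit list to a similarity in [0, 1]."""
    candidate = [random.randint(0, 1) for _ in range(length)]
    best = match_score(candidate)
    for _ in range(iterations):
        i = random.randrange(length)
        candidate[i] ^= 1                     # flip one bit
        score = match_score(candidate)
        if score > best:
            best = score                      # keep improvements
        else:
            candidate[i] ^= 1                 # revert regressions
        if best >= threshold:
            break
    return candidate, best

# Toy target: the matcher scores Hamming similarity against a hidden template.
template = [random.randint(0, 1) for _ in range(256)]
score = lambda c: sum(a == b for a, b in zip(c, template)) / len(template)
recovered, best = hill_climb(score)
print(f"similarity reached: {best:.2f}")
```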

Posted on July 31, 2012 at 11:11 AM

Implicit Passwords

This is a really interesting research paper (article here) on implicit passwords: something your unconscious mind remembers but your conscious mind doesn’t know. The Slashdot post is a nice summary:

A cross-disciplinary team of US neuroscientists and cryptographers have developed a password/passkey system that removes the weakest link in any security system: the human user. It’s ingenious: The system still requires that you enter a password, but at no point do you actually remember the password, meaning it can’t be written down and it can’t be obtained via coercion or torture—i.e. rubber-hose cryptanalysis. The system, devised by Hristo Bojinov of Stanford University and friends from Northwestern and SRI, relies on implicit learning, a process by which you absorb new information—but you’re completely unaware that you’ve actually learned anything; a bit like learning to ride a bike. The process of learning the password (or cryptographic key) involves the use of a specially crafted computer game that, funnily enough, resembles Guitar Hero. Their experimental results suggest that, after a 45 minute learning session, the 30-letter password is firmly implanted in your subconscious brain. Authentication requires that you play a round of the game—but this time, your 30-letter sequence is interspersed with other random 30-letter sequences. To pass authentication, you must reliably perform better on your sequence. Even after two weeks, it seems you are still able to recall this sequence.
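Authentication here is a statistical decision rather than a string comparison: the system interleaves the trained sequence with untrained ones and checks whether performance on the trained sequence is reliably better. A minimal sketch of that decision, assuming you already have per-sequence hit rates from a round of the game (the margin is an illustrative choice, not the paper's):

```python
def authenticate(trained_hit_rate: float,
                 untrained_hit_rates: list[float],
                 margin: float = 0.05) -> bool:
    """Accept only if performance on the implicitly learned sequence beats the
    average over untrained (random) sequences by a clear margin."""
    baseline = sum(untrained_hit_rates) / len(untrained_hit_rates)
    return trained_hit_rate - baseline > margin

# Example: 79% hits on the trained sequence vs. ~70% on random ones -> accept.
print(authenticate(0.79, [0.71, 0.69, 0.70]))   # True
```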

The system isn’t very realistic—people aren’t going to spend 45 minutes learning their passwords and a few minutes authenticating themselves—but I really like the direction this research is going.

Posted on July 24, 2012 at 6:28 AM

Remote Scanning Technology

I don’t know if this is real or fantasy:

Within the next year or two, the U.S. Department of Homeland Security will instantly know everything about your body, clothes, and luggage with a new laser-based molecular scanner fired from 164 feet (50 meters) away. From traces of drugs or gun powder on your clothes to what you had for breakfast to the adrenaline level in your body—agents will be able to get any information they want without even touching you.

The meta-point is less about this particular technology, and more about the arc of technological advancements in general. All sorts of remote surveillance technologies—facial recognition, remote fingerprint recognition, RFID/Bluetooth/cell phone tracking, license plate tracking—are becoming possible, cheaper, smaller, more reliable, etc. It’s wholesale surveillance, something I wrote about back in 2004.

We’re at a unique time in the history of surveillance: the cameras are everywhere, and we can still see them. Fifteen years ago, they weren’t everywhere. Fifteen years from now, they’ll be so small we won’t be able to see them. Similarly, all the debates we’ve had about national ID cards will become moot as soon as these surveillance technologies are able to recognize us without us even knowing it.

EDITED TO ADD (8/13): Related papers, and a video.

Posted on July 16, 2012 at 1:59 PM

All-or-Nothing Access Control for Mobile Phones

This paper looks at access control for mobile phones. Basically, it’s all or nothing: either you have a password that protects everything, or you have no password and protect nothing. The authors argue that there should be more user choice: some applications should be available immediately without a password, and the rest should require a password. This makes a lot of sense to me. Also, if only important applications required a password, people would be more likely to choose strong passwords.

Abstract: Most mobile phones and tablets support only two access control device states: locked and unlocked. We investigated how well all-or-nothing device access control meets the needs of users by interviewing 20 participants who had both a smartphone and tablet. We find all-or-nothing device access control to be a remarkably poor fit with users’ preferences. On both phones and tablets, participants wanted roughly half their applications to be available even when their device was locked and half protected by authentication. We also solicited participants’ interest in new access control mechanisms designed specifically to facilitate device sharing. Fourteen participants out of 20 preferred these controls to existing security locks alone. Finally, we gauged participants’ interest in using face and voice biometrics to authenticate to their mobile phone and tablets; participants were surprisingly receptive to biometrics, given that they were also aware of security and reliability limitations.
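A policy like the one the authors propose is straightforward to express: tag each application as usable from the lock screen or as requiring authentication, and have the launcher consult the tag. A minimal sketch of such a per-app policy check (the app names and API are illustrative, not from the paper):

```python
from enum import Enum

class Lock(Enum):
    OPEN = "available while locked"
    PROTECTED = "requires authentication"

# Roughly the split participants wanted: about half of apps reachable when locked.
POLICY = {
    "camera":  Lock.OPEN,
    "maps":    Lock.OPEN,
    "music":   Lock.OPEN,
    "email":   Lock.PROTECTED,
    "banking": Lock.PROTECTED,
    "photos":  Lock.PROTECTED,
}

def may_launch(app: str, device_unlocked: bool) -> bool:
    """Gate app launch on a per-app policy instead of a single all-or-nothing lock."""
    return device_unlocked or POLICY.get(app, Lock.PROTECTED) is Lock.OPEN

print(may_launch("camera", device_unlocked=False))    # True
print(may_launch("banking", device_unlocked=False))   # False
```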

Posted on July 12, 2012 at 12:59 PM

On Securing Potentially Dangerous Virology Research

Abstract: The problem of securing biological research data is a difficult and complicated one. Our ability to secure data on computers is not robust enough to ensure the security of existing data sets. Lessons from cryptography illustrate that neither secrecy measures, such as deleting technical details, nor national solutions, such as export controls, will work.

Science and Nature have each published papers on the H5N1 virus in humans after considerable debate about whether the research results in those papers could help terrorists create a bioweapon. This notion of “dual use” research is an important one for the community, and one that will sooner or later become critical. Perhaps these two papers are not dangerous in the wrong hands, but eventually there will be research results that are.

My background is in cryptography and computer security. I cannot comment on the potential value or harm from any particular piece of biological research, but I can discuss what works and what does not to keep research data secure. The cryptography and computer security communities have been wrestling for decades now with dual-use research: for example, whether to publish new Windows (Microsoft Corporation) vulnerabilities that can be immediately used to attack computers but whose publication helps us make the operating system more secure in the long run. From this experience, I offer five points to the virology community.

First, security based on secrecy is inherently fragile. The more secrets a system has, the less secure it is. A door lock that has a secret but unchangeable locking mechanism is less secure than a commercially purchased door lock with an easily changeable key. In cryptography, this is known as Kerckhoffs’ principle: Put all your secrecy into the key and none into the cryptographic algorithm. The key is unique and easily changeable; the algorithm is system-wide and much more likely to become public. In fact, algorithms are deliberately published so that they get analyzed broadly. The lesson for dual-use virology research is that it is risky to base your security on keeping research secret. Militaries spend an enormous amount of money trying to maintain secret research laboratories, and even they do not always get security right. Once secret data become public, there is no way to go back.
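In software terms, the principle means the entire mechanism can be public, reviewed code; the only secret is a short key that is cheap to generate and cheap to replace. A minimal sketch using a published algorithm (HMAC-SHA-256) from Python's standard library:

```python
import hmac, hashlib, secrets

# The algorithm (HMAC-SHA-256) is public and widely analyzed; all the secrecy
# lives in this key, which can be rotated without changing the system.
key = secrets.token_bytes(32)

def tag(message: bytes) -> bytes:
    return hmac.new(key, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    return hmac.compare_digest(tag(message), mac)

t = tag(b"research dataset v1")
print(verify(b"research dataset v1", t))     # True
print(verify(b"tampered dataset", t))        # False

# If the key leaks, generate a new one; the (public) algorithm never changes.
key = secrets.token_bytes(32)
```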

Second, omitting technical details from published research is a poor security measure. We tried this in computer security with regard to vulnerabilities, announcing general information but not publishing specifics. The problem is that once the general information is announced, it is much easier for another researcher to replicate the results and generate the details. This is probably even more true in virology research than in computer security research, where the very existence of a result can provide much of the road map to that result.

Third, technical difficulty as a security measure has only short-term value. Technology only gets better; it never gets worse. To believe that some research cannot be replicated by amateurs because it requires equipment only available to state-of-the-art research institutions is short-sighted at best. What is impossible today will be a Ph.D. thesis in 20 years, and what was a Ph.D. thesis 20 years ago is a high-school science fair project today.

Fourth, securing research data in computer networks is risky at best. If you read newspapers, you know the current state of the art in computer security: Everything gets hacked. Cyber criminals steal money from banks. Cyber spies steal data from military computers. Although people talk about H5N1 research in terms of securing the research papers, that is largely a red herring; even if no papers existed, the research data would still be on a network-connected computer somewhere.

Not all computers are hacked and not all data gets stolen, but the risks are there. There are two basic types of threats in cyberspace. There are the opportunists: for example, criminals who want to break into a retail merchant’s system and steal a thousand credit card numbers. Against these attackers, relative security is what matters. Because the criminals do not care whom they attack, you are safe if you are more secure than other networks. The other type of threat is a targeted attack. These are attackers who, for whatever reason, want to attack a particular network. The buzzword in Internet security for this is “advanced persistent threat.” It is almost impossible to secure a network against a sufficiently skilled and tenacious adversary. All we can do is make the attacker’s job harder.

This does not mean that all virology data will be stolen via computer networks, but it does mean that, once the existence of that data becomes public knowledge, you should assume that the bad guys will be able to get their hands on it.

Lastly, national measures that prohibit publication will not work in an international community, especially in the Internet age. If either Science or Nature had refused to publish the H5N1 papers, they would have been published somewhere else. Even if some countries stop funding—or ban—this sort of research, it will still happen in another country.

The U.S. cryptography community saw this in the 1970s and early 1980s. At that time, the National Security Agency (NSA) controlled cryptography research, which included denying funding for research, classifying results after the fact, and using export-control laws to limit what ended up in products. This was the pre-Internet world, and it worked for a while. In the 1980s they gave up on classifying research because an international community arose. The limited ability for U.S. researchers to get funding for block-cipher cryptanalysis merely moved that research to Europe and Asia. The NSA continued to limit the spread of cryptography via export-control laws; the U.S.-centric nature of the computer industry meant that this was effective. In the 1990s they gave up on controlling software because the international online community became mainstream; this period was called “the Crypto Wars.” Export-control laws did prevent Microsoft from embedding cryptography into Windows for over a decade, but they did nothing to prevent products made in other countries from filling the market gaps.

Today, there are no restrictions on cryptography, and many U.S. government standards are the result of public international competitions. Right now the National Institute of Standards and Technology is working on a new Secure Hash Algorithm standard. When it is announced next year, it will be the product of a public call for algorithms that resulted in 64 submissions from over a dozen countries and then years of international analysis. The practical effects of unrestricted research are seen in the computer security you use today: on your computer, as you browse the Internet and engage in commerce, and on your cell phone and other smart devices. Sure, the bad guys make use of this research, too, but the beneficial uses far outweigh the malicious ones.

The computer security community has also had to wrestle with these dual-use issues. In the early days of public computing, researchers who discovered vulnerabilities would quietly tell the product vendors so as not to also alert hackers. But all too often, the vendors would ignore the researchers. Because the vulnerability was not public, there was no urgency to fix it. Fixes might go into the next product release. Researchers, tired of this, started publishing the existence of vulnerabilities but not the details. Vendors, in response, tried to muzzle the researchers. They threatened them with lawsuits and belittled them in the press, calling the vulnerabilities only theoretical and not practical. The response from the researchers was predictable: They started publishing full details, and sometimes even code, demonstrating the vulnerabilities they found. This was called “full disclosure” and is the primary reason vendors now patch vulnerabilities quickly. Faced with published vulnerabilities that they could not pretend did not exist and that the hackers could use, they started building internal procedures to quickly issue patches. If you use Microsoft Windows, you know about “patch Tuesday,” the once-a-month automatic download and installation of security patches.

Once vendors started taking security patches seriously, the research community (university researchers, security consultants, and informal hackers) moved to something called "responsible disclosure." Now it is common for researchers to alert vendors before publication, giving them a month or two head start to release a security patch. But without the threat of full disclosure, responsible disclosure would not work, and vendors would go back to ignoring security vulnerabilities.

Could a similar process work for viruses? That is, could the researchers work in concert with the people who develop vaccines, so that vaccines become available at the same time as the original results are released? Certainly this is not easy in practice, but perhaps it is a goal to work toward.

Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once this sort of research slows down, the resulting ignorance hurts us all.

On the other hand, the current vibrant fields of cryptography and computer security are a direct result of our willingness to publish methods of attack. Making and breaking systems are one and the same; you cannot learn one without the other. (Some universities even offer classes in computer virus writing.) Cryptography is better, and computers and networks are more secure, because our communities openly publish details on how to attack systems.

Virology is not computer science. A biological virus is not the same as a computer virus. A vulnerability that affects every individual copy of Windows is not as bad as a vulnerability that affects every individual person. Still, the lessons from computer security are valuable to anyone considering policies intended to encourage life-saving research in virology while at the same time preventing that research from being used to cause harm. This debate will not go away; it will only get more urgent.

This essay was originally published in Science.

EDITED TO ADD (7/14): Related article: “What Biology Can Learn from Infosec.”

Posted on June 29, 2012 at 6:35 AM

