Entries Tagged "academic papers"


The Effects of Data Breach Litigation

“Empirical Analysis of Data Breach Litigation,” by Sasha Romanosky, David Hoffman, and Alessandro Acquisti:

Abstract: In recent years, a large number of data breaches have resulted in lawsuits in which individuals seek redress for alleged harm resulting from an organization losing or compromising their personal information. Currently, however, very little is known about those lawsuits. Which types of breaches are litigated, which are not? Which lawsuits settle, or are dismissed? Using a unique database of manually-collected lawsuits from PACER, we analyze the court dockets of over 230 federal data breach lawsuits from 2000 to 2010. We use binary outcome regressions to investigate two research questions: Which data breaches are being litigated in federal court? Which data breach lawsuits are settling? Our results suggest that the odds of a firm being sued in federal court are 3.5 times greater when individuals suffer financial harm, but over 6 times lower when the firm provides free credit monitoring following the breach. We also find that defendants settle 30% more often when plaintiffs allege financial loss from a data breach, or when faced with a certified class action suit. While the compromise of financial information appears to lead to more federal litigation, it does not seem to increase a plaintiff’s chance of a settlement. Instead, compromise of medical information is more strongly correlated with settlement.
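
The reported effects are odds ratios from those binary outcome (logistic) regressions: exp(coefficient) gives the multiplicative change in the odds of litigation or settlement per predictor. A minimal sketch with simulated toy data (ours, with effect sizes chosen to mirror the paper's reported ratios, not the authors' docket dataset) shows where such numbers come from:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated dockets: two binary predictors, effect sizes set to the
# paper's reported odds ratios (3.5x for financial harm, 1/6 for
# free credit monitoring). Toy data only.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(5000, 2))       # [financial_harm, credit_monitoring]
logit = -1.0 + np.log(3.5) * X[:, 0] - np.log(6.0) * X[:, 1]
y = rng.random(5000) < 1 / (1 + np.exp(-logit))

model = LogisticRegression(C=1e6).fit(X, y)  # weak regularization
print(np.exp(model.coef_))                   # ~[3.5, 0.17]: the odds ratios
```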

The full paper is available via the one-click download button.

Posted on March 27, 2012 at 6:46 AM

On Cyberwar Hype

Good article by Thomas Rid on the hype surrounding cyberwar. It’s well worth reading.

And in a more academic paper, published in the RUSI Journal, Thomas Rid and Peter McBurney argue that cyber-weapons aren’t all that destructive and that we’ve been misled by some bad metaphors.

Some fundamental questions on the use of force in cyberspace are still unanswered. Worse, they are still unexplored: What are cyber ‘weapons’ in the first place? How is weaponised code different from physical weaponry? What are the differences between various cyber-attack tools? And do the same dynamics and norms that govern the use of weapons on the conventional battlefield apply in cyberspace?

Cyber-weapons span a wide spectrum. That spectrum, we argue, reaches from generic but low-potential tools to specific but high-potential weaponry. To illustrate this polarity, we use a didactically helpful comparison. Low-potential ‘cyber-weapons’ resemble paintball guns: they may be mistaken for real weapons, are easily and commercially available, used by many to ‘play,’ and getting hit is highly visible—but at closer inspection these ‘weapons’ will lose some of their threatening character. High-potential cyber-weapons could be compared with sophisticated fire-and-forget weapon systems such as modern anti-radiation missiles: they require specific target intelligence that is programmed into the weapon system itself, major investments for R&D, significant lead-time, and they open up entirely new tactics but also novel limitations. This distinction brings into relief a two-pronged hypothesis that stands in stark contrast to some of the debate’s received wisdoms. Maximising the destructive potential of a cyber-weapon is likely to come with a double effect: it will significantly increase the resources, intelligence and time required to build and to deploy such weapons—and more destructive potential will significantly decrease the number of targets, the risk of collateral damage and the coercive utility of cyber-weapons.

And from the conclusion:

Two findings contravene the debate’s received wisdom. One insight concerns the dominance of the offence. Most weapons may be used defensively and offensively. But the information age, the argument goes since at least 1996, has ‘offence-dominant attributes.’ A 2011 Pentagon report on cyberspace again stressed ‘the advantage currently enjoyed by the offense in cyberwarfare.’ But when it comes to cyber-weapons, the offence has higher costs, a shorter shelf-life than the defence, and a very limited target set. All this drastically reduces the coercive utility of cyber-attacks. Any threat relies on the offender’s credibility to attack, or to repeat a successful attack. Even if a potent cyber-weapon could be launched successfully once, it would be highly questionable if an attack, or even a salvo, could be repeated in order to achieve a political goal. At closer inspection cyber-weapons do not seem to favour the offence.

A second insight concerns the risk of electronic arms markets. One concern is that sophisticated malicious actors could resort to asymmetric methods, such as employing the services of criminal groups, rousing patriotic hackers, and potentially redeploying generic elements of known attack tools. Worse, more complex malware is likely to be structured in a modular fashion. Modular design could open up new business models for malware developers. In the car industry, for instance, modularity translates into a possibility of a more sophisticated division of labour. Competitors can work simultaneously on different parts of a more complex system. Modules could be sold on underground markets. But if our analysis is correct, potential arms markets pose a more limited risk: the highly specific target information and programming design needed for potent weapons is unlikely to be traded generically. To go back to our imperfect analogy: paintball pistols will continue to be commercially available, but probably not pre-programmed warheads of smart missiles.

The use of this weapon analogy points to a larger and dangerous problem: the militarisation of cyber-security. William J Lynn, the Pentagon’s number two, responded to critics by pointing out that the Department of Defense would not ‘militarise’ cyberspace. ‘Indeed,’ Lynn wrote, ‘establishing robust cyberdefenses no more militarizes cyberspace than having a navy militarizes the ocean.’ Lynn may be right that the Pentagon is not militarising cyberspace—but his agency is unwittingly militarising the ideas and concepts to analyse security in cyberspace. We hope that this article, by focusing not on war but on weapons, will help bring into relief the narrow limits and the distractive quality of most martial analogies.

Here’s an article on the paper.

One final paper by Rid: “Cyber-War Will Not Take Place” (2012), Journal of Strategic Studies. I have not read it yet.

Posted on March 14, 2012 at 6:22 AM

The Security of Multi-Word Passphrases

Interesting research on the security of passphrases. From a blog post on the work:

We found about 8,000 phrases using a 20,000 phrase dictionary. Using a very rough estimate for the total number of phrases and some probability calculations, this produced an estimate that passphrase distribution provides only about 20 bits of security against an attacker trying to compromise 1% of available accounts. This is far better than passwords, which are usually under 10 bits by this same metric, but not high enough to make online guessing impractical without proper rate-limiting. Curiously, it’s close to estimates made using Kuo et al.’s published numbers on mnemonic phrases. It also shows that significant numbers of people will blatantly ignore security advice about choosing nonsense phrases and choose things like “Manchester United” or “Harry Potter.”

[…]

This led us to ask, if in the worst case users chose multi-word passphrases with a distribution identical to English speech, how secure would this be? Using the large Google n-gram corpus we can answer this question for phrases of up to 5 words. The results are discouraging: by our metrics, even 5-word phrases would be highly insecure against offline attacks, with fewer than 30 bits of work compromising over half of users. The returns appear to rapidly diminish as more words are required. This has potentially serious implications for applications like PGP private keys, which are often encrypted using a passphrase. Users are clearly more random in “passphrase English” than in actual English, but unless it’s dramatically more random the underlying natural language simply isn’t random enough.
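
To make the metric concrete, here is a minimal sketch of the underlying idea: guess phrases in decreasing order of popularity until a target fraction α of accounts falls, then convert that work into bits. The log2(guesses/α) conversion is our simplification in the spirit of partial-guessing metrics, not the authors' exact formula.

```python
import math

def bits_against_fraction(counts, alpha=0.01):
    """Guess candidates from most to least popular until the covered
    probability mass reaches alpha; report log2(guesses / alpha) as a
    rough 'bits of security' against an attacker satisfied with
    breaking a fraction alpha of accounts."""
    total = sum(counts)
    covered, guesses = 0.0, 0
    for c in sorted(counts, reverse=True):
        covered += c / total
        guesses += 1
        if covered >= alpha:
            return math.log2(guesses / alpha)
    return math.log2(len(counts) / alpha)    # distribution exhausted

# Toy Zipf-like distribution over a 20,000-phrase dictionary.
counts = [1_000_000 // rank for rank in range(1, 20_001)]
print(f"{bits_against_fraction(counts, 0.01):.1f} bits vs. a 1% attacker")
```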

Posted on March 13, 2012 at 6:22 AM

"1234" and Birthdays Are the Most Common PINs

Research paper: “A birthday present every eleven wallets? The security of customer-chosen banking PINs,” by Joseph Bonneau, Sören Preibusch, and Ross Anderson:

Abstract: We provide the first published estimates of the difficulty of guessing a human-chosen 4-digit PIN. We begin with two large sets of 4-digit sequences chosen outside banking for online passwords and smartphone unlock-codes. We use a regression model to identify a small number of dominant factors influencing user choice. Using this model and a survey of over 1,100 banking customers, we estimate the distribution of banking PINs as well as the frequency of security-relevant behaviour such as sharing and reusing PINs. We find that guessing PINs based on the victims’ birthday, which nearly all users carry documentation of, will enable a competent thief to gain use of an ATM card once for every 11-18 stolen wallets, depending on whether banks prohibit weak PINs such as 1234. The lesson for cardholders is to never use one’s date of birth as a PIN. The lesson for card-issuing banks is to implement a denied PIN list, which several large banks still fail to do. However, blacklists cannot effectively mitigate guessing given a known birth date, suggesting banks should move away from customer-chosen banking PINs in the long term.
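
The "once for every 11-18 stolen wallets" figure is easy to sanity-check. Under our simplifying assumption that a thief who finds a birth date tries its obvious encodings within the guesses allowed before the card locks, a success rate of one per N wallets just means a fraction 1/N of cardholders use such a PIN:

```python
# Back-of-the-envelope reading of "once per 11-18 wallets": if a
# fraction p of cardholders use a birthday-derived PIN the thief can
# cover within the allowed guesses, expected wallets per success is 1/p.
for wallets in (11, 18):
    print(f"1 success per {wallets} wallets  <=>  p = {1 / wallets:.1%}")
# 1 success per 11 wallets  <=>  p = 9.1%
# 1 success per 18 wallets  <=>  p = 5.6%
```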

Blog post.

EDITED TO ADD (2/22): News article.

Posted on February 21, 2012 at 7:36 AM

Cryptanalysis of Satellite Phone Encryption Algorithms

From the abstract of the paper:

In this paper, we analyze the encryption systems used in the two existing (and competing) satphone standards, GMR-1 and GMR-2. The first main contribution is that we were able to completely reverse engineer the encryption algorithms employed. Both ciphers had not been publicly known previously. We describe the details of the recovery of the two algorithms from freely available DSP-firmware updates for satphones, which included the development of a custom disassembler and tools to analyze the code, and extending prior work on binary analysis to efficiently identify cryptographic code. We note that these steps had to be repeated for both systems, because the available binaries were from two entirely different DSP processors. Perhaps somewhat surprisingly, we found that the GMR-1 cipher can be considered a proprietary variant of the GSM A5/2 algorithm, whereas the GMR-2 cipher is an entirely new design. The second main contribution lies in the cryptanalysis of the two proprietary stream ciphers. We were able to adopt known A5/2 ciphertext-only attacks to the GMR-1 algorithm with an average case complexity of 2^32 steps. With respect to the GMR-2 cipher, we developed a new attack which is powerful in a known-plaintext setting. In this situation, the encryption key for one session, i.e., one phone call, can be recovered with approximately 50-65 bytes of key stream and a moderate computational complexity. A major finding of our work is that the stream ciphers of the two existing satellite phone systems are considerably weaker than what is state-of-the-art in symmetric cryptography.
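
For a sense of scale, 2^32 steps is well within reach of commodity hardware. A quick back-of-the-envelope, using our assumed evaluation rates rather than the paper's benchmarks:

```python
# 2^32 is about 4.3 billion steps. Assumed per-core rates; real
# throughput depends on how expensive one "step" of the attack is.
for rate in (1e7, 1e9):
    print(f"{2**32 / rate:,.0f} seconds at {rate:.0e} steps/sec")
# 429 seconds at 1e+07 steps/sec  (about seven minutes)
# 4 seconds at 1e+09 steps/sec
```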

Press release. And news stories.

Posted on February 16, 2012 at 12:22 PM

Lousy Random Numbers Cause Insecure Public Keys

There’s some excellent research (paper, news articles) surveying public keys in the wild. Basically, the researchers found that a small fraction of them (27,000 out of 7.1 million, or 0.38%) share a common factor and are inherently weak. The researchers can break those public keys, and anyone who duplicates their research can as well.
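
The break itself is elementary once stated: any two RSA moduli that share a prime factor surrender that factor to a single GCD computation, which factors both. Here is a minimal sketch of the idea; the researchers ran a quasi-linear batch-GCD over millions of keys, and the quadratic loop below is only illustrative:

```python
from math import gcd

def factor_shared_moduli(moduli):
    """Return {modulus: (p, q)} for every modulus that shares a
    prime factor with another modulus in the list."""
    broken = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:            # nontrivial shared factor
                broken[moduli[i]] = (g, moduli[i] // g)
                broken[moduli[j]] = (g, moduli[j] // g)
    return broken

# Toy demo: small primes standing in for 512-bit ones.
p, q1, q2 = 104729, 1299709, 15485863       # all prime
print(factor_shared_moduli([p * q1, p * q2]))
```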

The cause of this is almost certainly a lousy random number generator used to create those public keys in the first place. This shouldn’t come as a surprise. One of the hardest parts of cryptography is random number generation. It’s really easy to write a lousy random number generator, and it’s not at all obvious that it is lousy. Randomness is a non-functional requirement, and unless you specifically test for it—and know how to test for it—you’re going to think your cryptosystem is working just fine. (One of the reporters who called me about this story said that the researchers told him about a real-world random number generator that produced just seven different random numbers.) So it’s likely these weak keys are accidental.
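
As an illustration of how easily this happens (our toy construction, not drawn from the paper): a generator seeded from coarse wall-clock time looks random in casual use, yet two devices generating keys in the same second draw identical "random" primes, and their moduli then share a factor.

```python
import random

def weak_key_material(boot_second: int) -> int:
    """Plausible-looking but lousy: the seed is a guessable,
    low-entropy timestamp, so outputs collide across devices."""
    rng = random.Random(boot_second)
    return rng.getrandbits(512)              # candidate prime material

a = weak_key_material(1_329_000_000)         # device A
b = weak_key_material(1_329_000_000)         # device B, same second
print(a == b)                                # True: shared factors downstream
```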

It’s certainly possible, though, that some random number generators have been deliberately weakened. The obvious culprits are national intelligence services like the NSA. I have no evidence that this happened, but if I were in charge of weakening cryptosystems in the real world, the first thing I would target is random number generators. They’re easy to weaken, and it’s hard to detect that you’ve done anything. Much safer than tweaking the algorithms, which can be tested against known test vectors and alternate implementations. But again, I’m just speculating here.

What is the security risk? There’s some, but it’s hard to know how much. We can assume that the bad guys can replicate this experiment and find the weak keys. But they’re random, so it’s hard to know how to monetize this attack. Maybe the bad guys will get lucky and one of the weak keys will lead to some obvious way to steal money, or trade secrets, or national intelligence. Maybe.

And what happens now? My hope is that the researchers know which implementations of public-key systems are susceptible to these bad random numbers—they didn’t name names in the paper—and alerted them, and that those companies will fix their systems. (I recommend my own Fortuna, from Cryptography Engineering.) I hope that everyone who implements a home-grown random number generator will rip it out and put in something better. But I don’t hold out much hope. Bad random numbers have broken a lot of cryptosystems in the past, and will continue to do so in the future.

From the introduction to the paper:

In this paper we complement previous studies by concentrating on computational and randomness properties of actual public keys, issues that are usually taken for granted. Compared to the collection of certificates considered in [12], where shared RSA moduli are “not very frequent”, we found a much higher fraction of duplicates. More worrisome is that among the 4.7 million distinct 1024-bit RSA moduli that we had originally collected, more than 12500 have a single prime factor in common. That this happens may be crypto-folklore, but it was new to us, and it does not seem to be a disappearing trend: in our current collection of 7.1 million 1024-bit RSA moduli, almost 27000 are vulnerable and 2048-bit RSA moduli are affected as well. When exploited, it could affect the expectation of security that the public key infrastructure is intended to achieve.

And the conclusion:

We checked the computational properties of millions of public keys that we collected on the web. The majority does not seem to suffer from obvious weaknesses and can be expected to provide the expected level of security. We found that on the order of 0.003% of public keys is incorrect, which does not seem to be unacceptable. We were surprised, however, by the extent to which public keys are shared among unrelated parties. For ElGamal and DSA sharing is rare, but for RSA the frequency of sharing may be a cause for concern. What surprised us most is that many thousands of 1024-bit RSA moduli, including thousands that are contained in still valid X.509 certificates, offer no security at all. This may indicate that proper seeding of random number generators is still a problematic issue….

EDITED TO ADD (3/14): The title of the paper, “Ron was wrong, Whit is right” refers to the fact that RSA is inherently less secure because it needs two large random primes. Discrete log based algorithms, like DSA and ElGamal, are less susceptible to this vulnerability because they only need one random prime.

Posted on February 16, 2012 at 6:51 AM

Evidence on the Effectiveness of Terrorism

Readers of this blog will know that I like the works of Max Abrahms, and regularly blog about them. He has a new paper (full paper behind paywall) in Defence and Peace Economics, 22:6 (2011), 583–94, “Does Terrorism Really Work? Evolution in the Conventional Wisdom since 9/11”:

The basic narrative of bargaining theory predicts that, all else equal, anarchy favors concessions to challengers who demonstrate the will and ability to escalate against defenders. For this reason, post-9/11 political science research explained terrorism as rational strategic behavior for non-state challengers to induce government compliance given their constraints. Over the past decade, however, empirical research has consistently found that neither escalating to terrorism nor with terrorism helps non-state actors to achieve their demands. In fact, escalating to terrorism or with terrorism increases the odds that target countries will dig in their political heels, depriving the nonstate challengers of their given preferences. These empirical findings across disciplines, methodologies, as well as salient global events raise important research questions, with implications for counterterrorism strategy.

EDITED TO ADD (2/14): The paper.

Posted on January 26, 2012 at 10:36 AM

Studying Airport Security

Alan A. Kirschenbaum, Michele Mariani, Coen Van Gulijk, Sharon Lubasz, Carmit Rapaport, and Hinke Andriessen, “Airport Security: An Ethnographic Study,” Journal of Air Transport Management, 18 (January 2012): 68-73 (full article is behind a paywall).

Abstract: This paper employs a behavioral science perspective of airport security to examine security-related decision behaviors using exploratory ethnographic observations. Sampling employees from a broad spectrum of departments and occupations in several major airports across Europe, over 700 descriptive items are transcribed into story scripts that are analyzed. The results demonstrate that both formal and informal behavioral factors are present when security decisions are made. The repetitive patterns of behavior allowed us to develop a generic model applicable to a wide range of security-related situations. What the descriptions suggest is that even within the formal regulatory administrative framework of airports, actual real-time security behaviors may deviate from rules and regulations to adapt to local situations.

Posted on December 30, 2011 at 6:11 AM

Multiple Protocol Attacks

In 1997, I wrote about something called a chosen-protocol attack, where an attacker can use one protocol to break another. Here’s an example of the same thing in the real world: two different parking garages that mask different digits of credit cards on their receipts. Find two from the same car, and you can reconstruct the entire number.
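
A toy sketch of the reconstruction, with hypothetical receipt formats (one garage shows only the last four digits, the other masks only the last four):

```python
def merge_masked(a: str, b: str, mask: str = "*") -> str:
    """Combine two differently masked copies of the same card number,
    taking each digit from whichever receipt reveals it."""
    return "".join(y if x == mask else x for x, y in zip(a, b))

print(merge_masked("************1234", "441712345678****"))
# -> 4417123456781234
```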

I have to admit this puzzles me, because I thought there was a standard for masking credit card numbers. I only ever see all digits except the final four masked.

Posted on December 20, 2011 at 6:24 AM

