Entries Tagged "academic papers"

Page 72 of 86

A Useful Side-Effect of Misplaced Fear

A study in the British Journal of Criminology makes the point that drug-facilitated date rape via drink spiking is largely an urban legend:

Abstract. There is a stark contrast between heightened perceptions of risk associated with drug-facilitated sexual assault (DFSA) and a lack of evidence that this is a widespread threat. Through surveys and interviews with university students in the United Kingdom and United States, we explore knowledge and beliefs about drink-spiking and the linked threat of sexual assault. University students in both locations are not only widely sensitized to the issue, but substantial segments claim first- or second-hand experience of particular incidents. We explore students’ understanding of the DFSA threat in relationship to their attitudes concerning alcohol, binge-drinking, and responsibility for personal safety. We suggest that the drink-spiking narrative has a functional appeal in relation to the contemporary experience of young women’s public drinking.

In an article on the study in The Telegraph, the authors said:

Among young people, drink spiking stories have attractive features that could “help explain” their disproportionate loss of control after drinking alcohol, the study found.

Dr Burgess said: “Our findings suggest guarding against drink spiking has also become a way for women to negotiate how to watch out for each other in an environment where they might well lose control from alcohol consumption.”

[…]

“As Dr Burgess observes, it is not scientific evidence which keeps the drug rape myth alive but the fact that it serves so many useful functions.”

Basically, the hypothesis is that perpetuating the fear of drug-rape allows parents and friends to warn young women off excessive drinking without criticizing their personal choices. The fake bogeyman lets people avoid talking about the real issues.

Posted on November 17, 2009 at 5:58 AM

Protecting OSs from RootKits

Interesting research: “Countering Kernel Rootkits with Lightweight Hook Protection,” by Zhi Wang, Xuxian Jiang, Weidong Cui, and Peng Ning.

Abstract: Kernel rootkits have posed serious security threats due to their stealthy manner. To hide their presence and activities, many rootkits hijack control flows by modifying control data or hooks in the kernel space. A critical step towards eliminating rootkits is to protect such hooks from being hijacked. However, it remains a challenge because there exist a large number of widely-scattered kernel hooks and many of them could be dynamically allocated from kernel heap and co-located together with other kernel data. In addition, there is a lack of flexible commodity hardware support, leading to the so-called protection granularity gap: kernel hook protection requires byte-level granularity but commodity hardware only provides page-level protection.

To address the above challenges, in this paper, we present HookSafe, a hypervisor-based lightweight system that can protect thousands of kernel hooks in a guest OS from being hijacked. One key observation behind our approach is that a kernel hook, once initialized, may be frequently “read”-accessed, but rarely “write”-accessed. As such, we can relocate those kernel hooks to a dedicated page-aligned memory space and then regulate accesses to them with hardware-based page-level protection. We have developed a prototype of HookSafe and used it to protect more than 5,900 kernel hooks in a Linux guest. Our experiments with nine real-world rootkits show that HookSafe can effectively defeat their attempts to hijack kernel hooks. We also show that HookSafe achieves such a large-scale protection with a small overhead (e.g., around 6% slowdown in performance benchmarks).
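HookSafe's core idea, relocating hooks into a dedicated region and mediating the rare writes, can be modeled in a few lines. This is a conceptual sketch in Python, not kernel code; the class and hook names are invented for illustration, with a validator standing in for the hypervisor's handling of faults on write-protected pages.

```python
class ProtectedHookTable:
    """Toy model of HookSafe's indirection layer: hooks live in a
    dedicated table (analogous to the page-aligned shadow region),
    reads are unrestricted, and every write must pass a mediator
    that only accepts values from a known-good set."""

    def __init__(self, hooks, trusted_targets):
        self._table = dict(hooks)             # relocated hook values
        self._trusted = set(trusted_targets)  # legitimate kernel functions

    def read(self, name):
        # Reads are frequent and cheap: no mediation needed.
        return self._table[name]

    def write(self, name, target):
        # Writes are rare; mediate them like a write-protected page fault.
        if target not in self._trusted:
            raise PermissionError(f"blocked hijack of {name} -> {target}")
        self._table[name] = target

hooks = ProtectedHookTable({"sys_open": "kernel_open"},
                           trusted_targets={"kernel_open", "kernel_open_v2"})
hooks.write("sys_open", "kernel_open_v2")    # legitimate update succeeds
try:
    hooks.write("sys_open", "rootkit_stub")  # hijack attempt is blocked
except PermissionError as e:
    print(e)
```

The asymmetry the paper observes (many reads, few writes) is what makes the mediation affordable: only the write path pays the cost of the check.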

The research will be presented at the 16th ACM Conference on Computer and Communications Security this week. Here’s an article on the research.

Posted on November 10, 2009 at 1:26 PM

Laissez-Faire Access Control

Recently I wrote about the difficulty of making role-based access control work, and how research at Dartmouth showed that it was better to let people take the access control they need to do their jobs, and audit the results. This interesting paper, “Laissez-Faire File Sharing,” tries to formalize this sort of access control.

Abstract: When organizations deploy file systems with access control mechanisms that prevent users from reliably sharing files with others, these users will inevitably find alternative means to share. Alas, these alternatives rarely provide the same level of confidentiality, integrity, or auditability provided by the prescribed file systems. Thus, the imposition of restrictive mechanisms and policies by system designers and administrators may actually reduce the system’s security.

We observe that the failure modes of file systems that enforce centrally-imposed access control policies are similar to the failure modes of centrally-planned economies: individuals either learn to circumvent these restrictions as matters of necessity or desert the system entirely, subverting the goals behind the central policy.

We formalize requirements for laissez-faire sharing, which parallel the requirements of free market economies, to better address the file sharing needs of information workers. Because individuals are less likely to feel compelled to circumvent systems that meet these laissez-faire requirements, such systems have the potential to increase both productivity and security.

Think of Wikipedia as the ultimate example of this. Everybody has access to everything, but there are audit mechanisms in place to prevent abuse.
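The Wikipedia model, grant by default and catch abuse afterwards, is simple enough to sketch. The function names and log format below are invented for illustration; a real system would need tamper-resistant logging.

```python
import datetime

audit_log = []

def access(user, resource, action):
    """Laissez-faire access: grant by default, record everything.
    Abuse is caught by auditing the log, not by up-front denial."""
    audit_log.append((datetime.datetime.utcnow(), user, resource, action))
    return True  # access is always granted

def audit(sensitive_resource):
    """After the fact, list everyone who touched a sensitive resource."""
    return [(u, a) for (_, u, r, a) in audit_log if r == sensitive_resource]

access("alice", "payroll.xls", "read")
access("bob", "payroll.xls", "write")
access("alice", "notes.txt", "read")
print(audit("payroll.xls"))
```

The design trade-off is exactly the one the paper describes: restrictive up-front policy pushes users to channels with no log at all, while permissive access with auditing keeps the activity visible.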

Posted on November 9, 2009 at 6:59 AM

The Problems with Unscientific Security

From the Open Access Journal of Forensic Psychology, by a whole list of authors: “A Call for Evidence-Based Security Tools”:

Abstract: Since the 2001 attacks on the twin towers, policies on security have changed drastically, bringing about an increased need for tools that allow for the detection of deception. Many of the solutions offered today, however, lack scientific underpinning.

We recommend two important changes to improve the (cost) effectiveness of security policy. To begin with, the emphasis of deception research should shift from technological to behavioural sciences. Secondly, the burden of proof should lie with the manufacturers of the security tools. Governments should not rely on security tools that have not passed scientific scrutiny, and should only employ those methods that have been proven effective. After all, the use of tools that do not work will only get us further from the truth.

One excerpt:

In absence of systematic research, users will base their evaluation on data generated by field use. Because people tend to follow heuristics rather than the rules of probability theory, perceived effectiveness can substantially differ from true effectiveness (Tversky & Kahneman, 1973). For example, one well-known problem associated with field studies is that of selective feedback. Investigative authorities are unlikely to receive feedback from liars who are erroneously considered truthful. They will occasionally receive feedback when correctly detecting deception, for example through confessions (Patrick & Iacono, 1991; Vrij, 2008). The perceived effectiveness that follows from this can be further reinforced through confirmation bias: Evidence confirming one’s preconception is weighted more heavily than evidence contradicting it (Lord, Ross, & Lepper, 1979). As a result, even techniques that perform at chance level may be perceived as highly effective (Iacono, 1991). This unwarranted confidence can have profound effects on citizens’ safety and civil liberty: Criminals may escape detection while innocents may be falsely accused. The Innocence Project (Unvalidated or improper science, no date) demonstrates that unvalidated or improper forensic science can indeed lead to wrongful convictions (see also Saks & Koehler, 2005).
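The selective-feedback problem in the excerpt is easy to demonstrate numerically. The simulation below is my own illustration, not from the paper: a detector that performs exactly at chance, where the only feedback the examiner ever receives is a confession from a correctly flagged liar. The 30% confession rate is an assumed parameter.

```python
import random

random.seed(0)

# A deception "detector" that performs at chance (50% accuracy),
# evaluated only through the feedback the examiner actually sees.
trials = 100_000
confession_rate = 0.3  # assumption: fraction of flagged liars who confess
feedback = []          # outcomes the examiner observes

for _ in range(trials):
    is_liar = random.random() < 0.5
    flagged = random.random() < 0.5  # chance-level judgment
    if flagged and is_liar and random.random() < confession_rate:
        feedback.append("hit")       # confirmed by confession
    # Liars judged truthful generate no feedback at all.

# Every observed outcome is a confirmation, so perceived accuracy
# is 100% even though true accuracy is 50%.
perceived_accuracy = feedback.count("hit") / len(feedback)
print(perceived_accuracy)
```

Because misses are invisible, field experience alone can never falsify the tool; that is why the authors insist the burden of proof lie with the manufacturers.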

Article on the paper.

Posted on November 5, 2009 at 6:11 AM

Detecting Forged Signatures Using Pen Pressure and Angle

Interesting:

Songhua Xu presented an interesting idea: using measurements of pen angle and pressure to render beautiful, flower-like visual versions of a handwritten signature. You could argue that signatures are already a visual form, nicely identifiable and universal. However, with the added data about pen pressure and angle, the authors were able to create visual signatures that offer potentially greater security, assuming you can learn to read them.

A better image. The paper (abstract is free; paper is behind a paywall).

Posted on October 8, 2009 at 6:43 AM

Reproducing Keys from Photographs

Reproducing keys from distant and angled photographs:

Abstract:
The access control provided by a physical lock is based on the assumption that the information content of the corresponding key is private—that duplication should require either possession of the key or a priori knowledge of how it was cut. However, the ever-increasing capabilities and prevalence of digital imaging technologies present a fundamental challenge to this privacy assumption. Using modest imaging equipment and standard computer vision algorithms, we demonstrate the effectiveness of physical key teleduplication—extracting a key’s complete and precise bitting code at a distance via optical decoding and then cutting precise duplicates. We describe our prototype system, Sneakey, and evaluate its effectiveness, in both laboratory and real-world settings, using the most popular residential key types in the U.S.

Those of you who carry your keys on a ring dangling from a belt loop, take note.

Posted on October 1, 2009 at 2:09 PM

Inferring Friendship from Location Data

Interesting:

For nine months, Eagle’s team recorded data from the phones of 94 students and staff at MIT. By using Bluetooth technology and phone masts, they could monitor the movements of the participants, as well as their phone calls. Their main goal with this preliminary study was to compare data collected from the phones with subjective self-report data collected through traditional survey methodology.

The participants were asked to estimate their average spatial proximity to the other participants, whether they were close friends, and to indicate how satisfied they were at work.

Some intriguing findings emerged. For example, the researchers could predict with around 95 per cent accuracy who was friends with whom by looking at how much time participants spent with each other during key periods, such as Saturday nights.

According to the abstract:

Data collected from mobile phones have the potential to provide insight into the relational dynamics of individuals. This paper compares observational data from mobile phones with standard self-report survey data. We find that the information from these two data sources is overlapping but distinct. For example, self-reports of physical proximity deviate from mobile phone records depending on the recency and salience of the interactions. We also demonstrate that it is possible to accurately infer 95% of friendships based on the observational data alone, where friend dyads demonstrate distinctive temporal and spatial patterns in their physical proximity and calling patterns. These behavioral patterns, in turn, allow the prediction of individual-level outcomes such as job satisfaction.
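The inference can be caricatured in a few lines: dyads that spend substantial off-hours time together are predicted to be friends. The names, minutes, and 60-minute threshold below are invented for illustration; the actual study fit richer temporal and spatial features.

```python
# Toy version of the friendship inference: minutes of co-presence
# during a key period (e.g. Saturday nights) per pair of people.
saturday_minutes = {
    ("ann", "bo"):   240,
    ("ann", "carl"):  15,
    ("bo", "carl"):    5,
    ("carl", "dee"): 180,
}

def predict_friends(proximity, threshold=60):
    """Predict friendship for any dyad whose off-hours co-presence
    exceeds the threshold."""
    return {pair for pair, minutes in proximity.items() if minutes >= threshold}

print(predict_friends(saturday_minutes))
```

The privacy point is that none of this requires message content: timing and location of co-presence alone carry most of the signal.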

We all leave data shadows everywhere we go, and maintaining privacy is very hard. Here’s the EFF writing about locational privacy.

EDITED TO ADD (10/12): More information.

Posted on September 21, 2009 at 1:41 PM

Skein News

Skein is one of the 14 SHA-3 candidates chosen by NIST to advance to the second round. As part of the process, NIST allowed the algorithm designers to implement small “tweaks” to their algorithms. We’ve tweaked the rotation constants of Skein. This change does not affect Skein’s performance in any way.

The revised Skein paper contains the new rotation constants, as well as information about how we chose them and why we changed them, the results of some new cryptanalysis, plus new IVs and test vectors. Revised source code is here.

The latest information on Skein is always here.

Tweaks were due today, September 15. Now the SHA-3 process moves into the second round. According to NIST’s timeline, they’ll choose a set of final round candidate algorithms in 2010, and then a single hash algorithm in 2012. Between now and then, it’s up to all of us to evaluate the algorithms and let NIST know what we want. Cryptanalysis is important, of course, but so is performance.

Here’s my 2008 essay on SHA-3. The second-round algorithms are: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein. You can find details on all of them, as well as the current state of their cryptanalysis, here.

In other news, we’re making Skein shirts available to the public. Those of you who attended the First Hash Function Candidate Conference in Leuven, Belgium, earlier this year might have noticed the stylish black Skein polo shirts worn by the Skein team. Anyone who wants one is welcome to buy it, at cost. Details (with photos) are here. All orders must be received before 1 October, and then we’ll have all the shirts made in one batch.

Posted on September 15, 2009 at 6:10 AM

File Deletion

File deletion is all about control. This used not to be an issue. Your data was on your computer, and you decided when and how to delete a file. You could use the delete function if you didn’t care whether the file could be recovered, or a file erase program—I use BCWipe for Windows—if you wanted to ensure no one could ever recover the file.
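What a file erase program does is conceptually simple: overwrite the file's contents before unlinking it. A minimal sketch, with the caveat that journaling filesystems, SSD wear leveling, and backups can all retain copies that no overwrite-in-place can reach:

```python
import os

def wipe(path, passes=3):
    """Overwrite a file with random bytes before deleting it.
    A sketch of the idea behind tools like BCWipe; on journaling
    filesystems and SSDs, stale copies may survive regardless."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(size))
            f.flush()
            os.fsync(f.fileno())  # force the overwrite to disk
    os.remove(path)

# Example: create a throwaway file and wipe it.
with open("secret.txt", "wb") as f:
    f.write(b"sensitive data")
wipe("secret.txt")
print(os.path.exists("secret.txt"))
```

The contrast with the cloud is exactly the essay's point: this only works because the disk is yours to overwrite.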

As we move more of our data onto cloud computing platforms such as Gmail and Facebook, and closed proprietary platforms such as the Kindle and the iPhone, deleting data is much harder.

You have to trust that these companies will delete your data when you ask them to, but they’re generally not interested in doing so. Sites like these are more likely to make your data inaccessible than they are to physically delete it. Facebook is a known culprit: actually deleting your data from its servers requires a complicated procedure that may or may not work. And even if you do manage to delete your data, copies are certain to remain in the companies’ backup systems. Gmail explicitly says this in its privacy notice.

Online backups, SMS messages, photos on photo sharing sites, smartphone applications that store your data in the network: you have no idea what really happens when you delete pieces of data or your entire account, because you’re not in control of the computers that are storing the data.

This notion of control also explains how Amazon was able to delete a book that people had previously purchased on their Kindle e-book readers. The legalities are debatable, but Amazon had the technical ability to delete the file because it controls all Kindles. It has designed the Kindle so that it determines when to update the software, whether people are allowed to buy Kindle books, and when to turn off people’s Kindles entirely.

Vanish is a research project by Roxana Geambasu and colleagues at the University of Washington. They designed a prototype system that automatically deletes data after a set time interval. So you can send an email, create a Google Doc, post an update to Facebook, or upload a photo to Flickr, all designed to disappear after a set period of time. And after it disappears, no one—not anyone who downloaded the data, not the site that hosted the data, not anyone who intercepted the data in transit, not even you—will be able to read it. If the police arrive at Facebook or Google or Flickr with a warrant, they won’t be able to read it.

The details are complicated, but Vanish breaks the data’s decryption key into a bunch of pieces and scatters them around the web using a peer-to-peer network. Then it uses the natural turnover in these networks—machines constantly join and leave—to make the data disappear. Unlike previous programs that supported file deletion, this one doesn’t require you to trust any company, organisation, or website. It just happens.
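The key-splitting idea can be sketched concisely. Vanish itself uses Shamir threshold sharing, which tolerates the loss of some shares before the key becomes unrecoverable; the sketch below uses a simpler all-or-nothing XOR split just to show the principle that scattering shares, and letting churn destroy them, destroys the key for everyone at once.

```python
import os

def split_key(key: bytes, n: int):
    """Split `key` into n XOR shares; all n are needed to rebuild it.
    (A simplification: Vanish uses Shamir threshold sharing, which
    survives the loss of some shares for a while.)"""
    shares = [os.urandom(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:
        last = bytes(a ^ b for a, b in zip(last, s))
    return shares + [last]

def combine(shares):
    """XOR all shares together to recover the key."""
    key = shares[0]
    for s in shares[1:]:
        key = bytes(a ^ b for a, b in zip(key, s))
    return key

key = os.urandom(16)
shares = split_key(key, 8)       # scatter these across a P2P network
assert combine(shares) == key    # all shares present: key recovers
# Once churn erases even one share, the remaining ones reveal nothing.
assert combine(shares[1:]) != key
```

Note the elegance of the design: no one has to take any action to delete the data, because deletion is the network's default behavior.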

Of course, Vanish doesn’t prevent the recipient of an email or the reader of a Facebook page from copying the data and pasting it into another file, just as Kindle’s deletion feature doesn’t prevent people from copying a book’s files and saving them on their computers. Vanish is just a prototype at this point, and it only works if all the people who read your Facebook entries or view your Flickr pictures have it installed on their computers as well; but it’s a good demonstration of how control affects file deletion. And while it’s a step in the right direction, it’s also new and therefore deserves further security analysis before being adopted on a wide scale.

We’ve lost the control of data on some of the computers we own, and we’ve lost control of our data in the cloud. We’re not going to stop using Facebook and Twitter just because they’re not going to delete our data when we ask them to, and we’re not going to stop using Kindles and iPhones because they may delete our data when we don’t want them to. But we need to take back control of data in the cloud, and projects like Vanish show us how we can.

Now we need something that will protect our data when a large corporation decides to delete it.

This essay originally appeared in The Guardian.

EDITED TO ADD (9/30): Vanish has been broken, paper here.

Posted on September 10, 2009 at 6:08 AM

Non-Randomness in Coin Flipping

It turns out that flipping a coin has all sorts of non-randomness:

Here are the broad strokes of their research:

  1. If the coin is tossed and caught, it has about a 51% chance of landing on the same face it was launched. (If it starts out as heads, there’s a 51% chance it will end as heads).
  2. If the coin is spun, rather than tossed, it can have a much-larger-than-50% chance of ending with the heavier side down. Spun coins can exhibit “huge bias” (some spun coins will fall tails-up 80% of the time).
  3. If the coin is tossed and allowed to clatter to the floor, this probably adds randomness.
  4. If the coin is tossed and allowed to clatter to the floor where it spins, as will sometimes happen, the above spinning bias probably comes into play.
  5. A coin will land on its edge around 1 in 6000 throws, creating a flipistic singularity.
  6. The same initial coin-flipping conditions produce the same coin flip result. That is, there’s a certain amount of determinism to the coin flip.
  7. A more robust coin toss (more revolutions) decreases the bias.

The paper.
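Finding 1 in the list above is easy to simulate: model a caught coin as landing on its starting face with probability 0.51. The simulation structure is my own illustration; only the 0.51 figure comes from the research.

```python
import random

random.seed(42)

def biased_flip(start, p_same=0.51):
    """Model a tossed-and-caught coin that lands on its starting
    face with probability p_same (0.51, per finding 1 above)."""
    if random.random() < p_same:
        return start
    return "T" if start == "H" else "H"

n = 100_000
same = sum(biased_flip("H") == "H" for _ in range(n))
print(same / n)  # hovers near 0.51, not 0.50
```

A 1% edge sounds small, but over many flips it is exploitable, which is why the randomness of a coin flip should not be taken for granted in anything adversarial.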

Posted on August 24, 2009 at 7:12 AM

