Entries Tagged "academic papers"

Page 67 of 86

Analyzing CAPTCHAs

New research: “Attacks and Design of Image Recognition CAPTCHAs.”

Abstract. We systematically study the design of image recognition CAPTCHAs (IRCs) in this paper. We first review and examine all IRC schemes known to us and evaluate each scheme against the practical requirements in CAPTCHA applications, particularly in large-scale real-life applications such as Gmail and Hotmail. Then we present a security analysis of the representative schemes we have identified. For the schemes that remain unbroken, we present our novel attacks. For the schemes for which known attacks are available, we propose a theoretical explanation of why those schemes have failed. Next, we provide a simple but novel framework for guiding the design of robust IRCs. Then we propose an innovative IRC called Cortcha that is scalable to meet the requirements of large-scale applications. Cortcha relies on recognizing an object by exploiting its surrounding context, a task that humans can perform well but computers cannot. An infinite number of types of objects can be used to generate challenges, which can effectively disable the learning process in machine learning attacks. Cortcha does not require the images in its image database to be labeled. Image collection and CAPTCHA generation can be fully automated. Our usability studies indicate that, compared with Google’s text CAPTCHA, Cortcha yields a slightly higher human accuracy rate but on average takes more time to solve a challenge.

The paper attacks IMAGINATION (designed at Penn State around 2005) and ARTiFACIAL (designed at MSR Redmond around 2004).

Posted on October 5, 2010 at 7:22 AM

Cultural Cognition of Risk

This is no surprise:

The people behind the new study start by asking a pretty obvious question: “Why do members of the public disagree—sharply and persistently—about facts on which expert scientists largely agree?” (Elsewhere, they refer to the “intense political contestation over empirical issues on which technical experts largely agree.”) In this regard, the numbers from the Pew survey are pretty informative. Ninety-seven percent of the members of the American Association for the Advancement of Science accept the evidence for evolution, but at least 40 percent of the public thinks that major differences remain in scientific opinion on this topic. Clearly, the scientific community isn’t succeeding in making the public aware of its opinion.

According to the new study, this isn’t necessarily the fault of the scientists, though. The authors favor a model, called the cultural cognition of risk, which “refers to the tendency of individuals to form risk perceptions that are congenial to their values.” This wouldn’t apply directly to evolution, but would to climate change: if your cultural values make you less likely to accept the policy implications of our current scientific understanding, then you’ll be less likely to accept the science.

But, as the authors note, opponents of a scientific consensus often claim to be opposing it on scientific rather than cultural grounds. “Public debates rarely feature open resistance to science,” they note; “the parties to such disputes are much more likely to advance diametrically opposed claims about what the scientific evidence really shows.” To get there, those doing the arguing must ultimately be selective about what evidence and experts they accept—they listen to, and remember, those who tell them what they want to hear. “The cultural cognition thesis predicts that individuals will more readily recall instances of experts taking the position that is consistent with their cultural predisposition than ones taking positions inconsistent with it,” the paper suggests.

[…]

So, it’s not just a matter of the public not understanding the expert opinions of places like the National Academies of Science; they simply discount the expertise associated with any opinion they’d rather not hear.

Here’s the paper.

Posted on September 28, 2010 at 6:33 AM

Successful Attack Against a Quantum Cryptography System

Clever:

Quantum cryptography is often touted as being perfectly secure. It is based on the principle that you cannot make measurements of a quantum system without disturbing it. So, in theory, it is impossible for an eavesdropper to intercept a quantum encryption key without disrupting it in a noticeable way, triggering alarm bells.

Vadim Makarov at the Norwegian University of Science and Technology in Trondheim and his colleagues have now cracked it. “Our hack gave 100% knowledge of the key, with zero disturbance to the system,” he says.

[…]

The cunning part is that while blinded, Bob’s detector cannot function as a ‘quantum detector’ that distinguishes between different quantum states of incoming light. However, it does still work as a ‘classical detector’, recording a bit value of 1 if it is hit by an additional bright light pulse, regardless of the quantum properties of that pulse.

That means that every time Eve intercepts a bit value of 1 from Alice, she can send a bright pulse to Bob, so that he also receives the correct signal, and is entirely unaware that his detector has been sabotaged. There is no mismatch between Eve and Bob’s readings because Eve sends Bob a classical signal, not a quantum one. As quantum cryptographic rules no longer apply, no alarm bells are triggered, says Makarov.
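
The logic of the attack can be captured in a toy Monte-Carlo sketch. This is my own simplification of the protocol-level consequences, not the paper's optics; the function and variable names are mine:

```python
import random

def blinding_attack_sim(n_pulses, rng=random.Random(0)):
    """Toy model of a detector-blinding intercept-resend attack.
    Eve measures each of Alice's pulses in a random basis, then drives
    Bob's blinded detectors with a bright classical pulse. A blinded
    detector only clicks when Bob's basis matches Eve's, so every bit
    Bob does record is exactly Eve's bit; mismatches show up as channel
    loss, never as errors."""
    alice, bob, eve = [], [], []
    for _ in range(n_pulses):
        a_bit, a_basis = rng.randint(0, 1), rng.randint(0, 1)
        e_basis = rng.randint(0, 1)
        # Eve's quantum measurement: correct if her basis matches Alice's,
        # a coin flip otherwise
        e_bit = a_bit if e_basis == a_basis else rng.randint(0, 1)
        b_basis = rng.randint(0, 1)
        if b_basis != e_basis:
            continue  # blinded detector stays silent; Bob logs a loss
        if b_basis != a_basis:
            continue  # this slot is discarded during basis sifting anyway
        alice.append(a_bit)
        bob.append(e_bit)  # the bright pulse forces Eve's value on Bob
        eve.append(e_bit)
    return alice, bob, eve
```

In this model the sifted keys of Alice, Bob, and Eve are identical: zero observed error rate, so no alarm, while Eve holds the complete key, matching the “100% knowledge, zero disturbance” claim.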

“We have exploited a purely technological loophole that turns a quantum cryptographic system into a classical system, without anyone noticing,” says Makarov.

Makarov and his team have demonstrated that the hack works on two commercially available systems: one sold by ID Quantique (IDQ), based in Geneva, Switzerland, and one by MagiQ Technologies, based in Boston, Massachusetts. “Once I had the systems in the lab, it took only about two months to develop a working hack,” says Makarov.

Just because something is secure in theory doesn’t mean it’s secure in practice. Or, to put it more cleverly: in theory, theory and practice are the same; but in practice, they’re very different.

The paper is here.

Posted on September 2, 2010 at 1:46 PM

Wanted: Skein Hardware Help

As part of NIST’s SHA-3 selection process, people have been implementing the candidate hash functions on a variety of hardware and software platforms. Our team has implemented Skein in Intel’s 32 nm ASIC process, and got some impressive performance results (presentation and paper). Several other groups have implemented Skein in FPGA and ASIC, and have seen significantly poorer performance. We need help understanding why.

For example, a group led by Brian Baldwin at the Claude Shannon Institute for Discrete Mathematics, Coding and Cryptography implemented all the second-round candidates in FPGA (presentation and paper). Skein performance was terrible, but when they checked their code, they found an error. Their corrected performance comparison (presentation and paper) has Skein performing much better and in the top ten.

We suspect that the adders in all the designs may not be properly optimized, although there may be other performance issues. If we can at least identify (or possibly even fix) the slowdowns in the design, it would be very helpful, both for our understanding and for Skein’s hardware profile. Even if we find that the designs are properly optimized, that would also be good to know.
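
For background on why the adders matter so much: Threefish, the block cipher inside Skein, is an ARX design, and each round's critical path runs through a 64-bit modular addition in the MIX step. A minimal Python model of MIX follows; note that the rotation constant is passed in as a parameter here, whereas real Threefish uses fixed constants that depend on the round and word position:

```python
MASK64 = (1 << 64) - 1

def rotl64(x, r):
    """Rotate a 64-bit word left by r bits (1 <= r <= 63)."""
    return ((x << r) | (x >> (64 - r))) & MASK64

def mix(x0, x1, r):
    """Threefish MIX: one 64-bit modular addition, a rotation, an XOR.
    The addition is the long-carry-chain operation whose hardware
    realization (ripple-carry vs. carry-lookahead vs. carry-save)
    dominates the timing of an FPGA or ASIC implementation."""
    y0 = (x0 + x1) & MASK64
    y1 = rotl64(x1, r) ^ y0
    return y0, y1
```

The rotation and XOR are essentially free in hardware (pure wiring and a gate layer); the 64-bit adder is where the critical-path delay lives, which is why unoptimized adder synthesis can sink the whole design's throughput.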

A group at George Mason University led by Kris Gaj implemented all the second-round candidates in FPGA (presentation, paper, and much longer paper). Skein had the worst performance of any of the implementations. We’re looking for someone who can help us understand the design, and determine if it can be improved.

Another group, led by Stefan Tillich at University of Bristol, implemented all the candidates in 180 nm custom ASIC (presentation and paper). Here, Skein is one of the worst performers. We’re looking for someone who can help us understand what this group did.

Three other groups—one led by Patrick Schaumont of Virginia Tech (presentation and paper), another led by Shin’ichiro Matsuo at National Institute of Information and Communications Technology in Japan (presentation and paper), and a third led by Luca Henzen at ETH Zurich (paper with appendix, and conference version)—implemented the SHA-3 candidates. Again, we need help understanding how their Skein performance numbers are so different from ours.

We’re looking for people with FPGA and ASIC skills to work with the Skein team. We don’t have money to pay anyone; co-authorship on a paper (and a Skein polo shirt) is our primary reward. Please send me e-mail if you’re interested.

Posted on September 1, 2010 at 1:17 PM

More Skein News

Skein is my new hash function. Well, “my” is an overstatement; I’m one of the eight designers. It was submitted to NIST for their SHA-3 competition, and is one of the 14 algorithms selected to advance to the second round. Here’s the Skein paper; source code is here. The Skein website is here.

Last week was the Second SHA-3 Candidate Conference. Lots of people presented papers on the candidates: cryptanalysis papers, implementation papers, performance comparisons, etc. There were two cryptanalysis papers on Skein. The first was by Kerry McKay and Poorvi L. Vora (presentation and paper). They tried to extend linear cryptanalysis to groups of bits to attack Threefish (the block cipher inside Skein). It was a nice analysis, but it didn’t get very far at all.

The second was a fantastic piece of cryptanalysis by Dmitry Khovratovich, Ivica Nikolić, and Christian Rechberger. They used a rotational rebound attack (presentation and paper) to mount a “known-key distinguisher attack” on 57 out of 72 Threefish rounds faster than brute force. It’s a new type of attack—some go so far as to call it an “observation”—and the community is still trying to figure out what it means. It only works if the attacker can manipulate both the plaintexts and the keys in a structured way. Against 57-round Threefish, it requires 2^503 work—barely better than brute force. And it only distinguishes reduced-round Threefish from a random permutation; it doesn’t actually recover any key bits.
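
The “rotational” part rests on a simple property of modular addition: rotation commutes with addition only part of the time, and an attacker tracks pairs of rotated inputs through the cipher at that probability per addition. A quick empirical sketch of the property (this illustrates the rotational-pair probability only, not the full rebound attack; names and parameters are mine):

```python
import random

def rotl(x, r, n=64):
    """Rotate an n-bit word left by r bits."""
    mask = (1 << n) - 1
    return ((x << r) | (x >> (n - r))) & mask

def rotational_commute_rate(trials=20000, r=1, n=64, rng=random.Random(1)):
    """Estimate Pr[ rotl(x + y) == rotl(x) + rotl(y) ] for random n-bit
    x, y. For r = 1 and large n this tends to 3/8 (about 2^-1.415), the
    per-addition survival probability of rotational pairs that gives
    rotational attacks on ARX ciphers their leverage."""
    mask = (1 << n) - 1
    hits = 0
    for _ in range(trials):
        x, y = rng.getrandbits(n), rng.getrandbits(n)
        lhs = rotl((x + y) & mask, r, n)
        rhs = (rotl(x, r, n) + rotl(y, r, n)) & mask
        if lhs == rhs:
            hits += 1
    return hits / trials
```

Each of Threefish's many additions multiplies in another such factor, which is why the attack's cost climbs so steeply with the number of rounds and ends up barely below brute force at 57 rounds.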

Even with the attack, Threefish has a good security margin. Also, the attack doesn’t affect Skein. But changing one constant in the algorithm’s key schedule makes the attack impossible. NIST has said they’re allowing second-round tweaks, so we’re going to make the change. It won’t affect any performance numbers or obviate any other cryptanalytic results—and with the new constant, the best known attack reaches only 33 of 72 rounds.

Our update on Skein, which we presented at the conference, is here. All the other papers and presentations are here. (My 2008 essay on SHA-3 is here, and my 2009 update is here.) The second-round algorithms are: BLAKE, Blue Midnight Wish, CubeHash, ECHO, Fugue, Grøstl, Hamsi, JH, Keccak, Luffa, Shabal, SHAvite-3, SIMD, and Skein. You can find details on all of them, as well as the current state of their cryptanalysis, here. NIST will select approximately five algorithms to go on to the third round by the end of the year.

In other news, we’re once again making Skein polo shirts available to the public. Those of you who attended either of the two SHA-3 conferences might have noticed the stylish black Skein polo shirts worn by the Skein team. Anyone who wants one is welcome to buy it, at cost. Details (with photos) are here. All orders must be received before October 1, and we’ll have all the shirts made in one batch.

Posted on September 1, 2010 at 6:01 AM

Eavesdropping on Smart Homes with Distributed Wireless Sensors

“Protecting your daily in-home activity information from a wireless snooping attack,” by Vijay Srinivasan, John Stankovic, and Kamin Whitehouse:

Abstract: In this paper, we first present a new privacy leak in residential wireless ubiquitous computing systems, and then we propose guidelines for designing future systems to prevent this problem. We show that we can observe private activities in the home such as cooking, showering, toileting, and sleeping by eavesdropping on the wireless transmissions of sensors in a home, even when all of the transmissions are encrypted. We call this the Fingerprint and Timing-based Snooping (FATS) attack. This attack can already be carried out on millions of homes today, and may become more important as ubiquitous computing environments such as smart homes and assisted living facilities become more prevalent. In this paper, we demonstrate and evaluate the FATS attack on eight different homes containing wireless sensors. We also propose and evaluate a set of privacy-preserving design guidelines for future wireless ubiquitous systems and show how these guidelines can be used in a hybrid fashion to protect against the FATS attack with low implementation costs.

The group was able to infer surprisingly detailed activity information about the residents, including when they were home or away, when they were awake or sleeping, and when they were performing activities such as showering or cooking. They were able to infer all this without any knowledge of the location, semantics, or source identifier of the wireless sensors, while assuming perfect encryption of the data and source identifiers.
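
The first tier of such an analysis can be sketched in a few lines: group unlabeled, encrypted sensors purely by timing co-occurrence, on the theory that sensors firing in the same window are likely in the same room. The window size, similarity threshold, and all names below are my illustrative choices, not the paper's actual pipeline:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_clusters(events, window=60.0, threshold=0.5):
    """Group sensors whose transmissions repeatedly fire in the same
    time window, using only opaque device IDs and timestamps.
    events: list of (device_id, timestamp_seconds) pairs."""
    # bucket each device's activity into coarse time windows
    windows = defaultdict(set)
    for dev, t in events:
        windows[dev].add(int(t // window))
    # merge devices whose active-window sets are similar (Jaccard index)
    devices = sorted(windows)
    cluster_of = {d: {d} for d in devices}
    for a, b in combinations(devices, 2):
        inter = len(windows[a] & windows[b])
        union = len(windows[a] | windows[b])
        sim = inter / union if union else 0.0
        if sim >= threshold and cluster_of[a] is not cluster_of[b]:
            merged = cluster_of[a] | cluster_of[b]
            for d in merged:
                cluster_of[d] = merged
    return {frozenset(c) for c in cluster_of.values()}
```

Once sensors are grouped by room, the timing pattern of each group (early-morning bursts in one cluster, evening bursts in another) is what lets an eavesdropper label rooms and activities without ever decrypting a packet.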

Posted on August 31, 2010 at 12:39 PM

Detecting Deception in Conference Calls

Research paper: Detecting Deceptive Discussions in Conference Calls, by David F. Larcker and Anastasia A. Zakolyukina.

Abstract: We estimate classification models of deceptive discussions during quarterly earnings conference calls. Using data on subsequent financial restatements (and a set of criteria to identify especially serious accounting problems), we label the Question and Answer section of each call as “truthful” or “deceptive”. Our models are developed with the word categories that have been shown by previous psychological and linguistic research to be related to deception. Using conservative statistical tests, we find that the out-of-sample performance of the models that are based on CEO or CFO narratives is significantly better than random by 4%–6% (with 50%–65% accuracy) and provides a significant improvement to a model based on discretionary accruals and traditional controls. We find that answers of deceptive executives have more references to general knowledge, fewer non-extreme positive emotions, and fewer references to shareholder value and value creation. In addition, deceptive CEOs use significantly fewer self-references, more third person plural and impersonal pronouns, more extreme positive emotions, fewer extreme negative emotions, and fewer certainty and hesitation words.
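
The feature extraction behind models like these can be sketched as per-category word rates over a transcript. The category word lists below are illustrative stand-ins of my own, not the psycholinguistic lexicons the paper actually uses:

```python
import re

# Illustrative word-category lexicons (placeholders, not the paper's lists)
CATEGORIES = {
    "self_refs": {"i", "me", "my", "mine", "myself"},
    "third_plural": {"they", "them", "their", "theirs"},
    "hesitation": {"um", "uh", "maybe", "perhaps"},
}

def category_rates(text):
    """Compute word-category hits per 100 words, the kind of feature
    vector a deception classifier would be trained on."""
    words = re.findall(r"[a-z']+", text.lower())
    total = len(words) or 1  # avoid division by zero on empty input
    return {cat: 100.0 * sum(w in vocab for w in words) / total
            for cat, vocab in CATEGORIES.items()}
```

A classifier is then fit on these rates for calls later labeled “truthful” or “deceptive” via restatements, which is what yields the modest-but-significant edge over chance the abstract reports.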

Posted on August 26, 2010 at 6:15 AM

