Entries Tagged "academic papers"

Page 70 of 86

Leaders Make Better Liars

According to new research:

The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars—the leaders—resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars—the subordinates—showed the usual signs of stress and slower reaction times. “Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal,” Carney explains.

[…]

Carney emphasizes that these results don’t mean that all people in high positions find lying easier: people need only feel powerful, regardless of the real power they have or their position in a hierarchy. “There are plenty of CEOs who act like low-power people and there are plenty of people at every level in organizations who feel very high power,” Carney says. “It can cross rank, every strata of society, any job.”

Posted on March 30, 2010 at 1:59 PM

Identifying People by their Bacteria

A potential new forensic:

To determine how similar a person’s fingertip bacteria are to bacteria left on computer keys, the team took swabs from three computer keyboards and compared bacterial gene sequences with those from the fingertips of the keyboard owners. Today in the Proceedings of the National Academy of Sciences, they conclude that enough bacteria can be collected from even small surfaces such as computer keys to link them with the hand that laid them down.

The researchers then tested how well such a technique could distinguish the person who left the bacteria from the general population. They sampled bacteria from nine computer mice and from the nine mouse owners. They also collected information on bacterial communities from 270 hands that had never touched any of the mice. In all nine cases, the bacteria on the mice were far more similar to the mouse-owners’ hands than to any of the 270 strange hands. The researchers also found that bacteria will persist on a computer key or mouse for up to 2 weeks after it has been handled.
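The matching step is easy to sketch. Here is a toy version using plain Jaccard similarity on sets of bacterial taxa; the actual study used phylogenetic community-distance metrics on 16S rRNA gene sequences, and all the names and communities below are invented:

```python
# Toy sketch of the matching step: compare the set of bacterial taxa found on
# an object against candidate hands and pick the closest. The real analysis
# used phylogenetic distance metrics on gene-sequence data; plain Jaccard
# similarity on taxon sets stands in for that here.

def jaccard(a: set, b: set) -> float:
    """Fraction of taxa shared between two bacterial communities."""
    return len(a & b) / len(a | b)

def best_match(object_taxa: set, candidates: dict) -> str:
    """Return the candidate whose hand community most resembles the object's."""
    return max(candidates, key=lambda name: jaccard(object_taxa, candidates[name]))

# Hypothetical taxon sets: the mouse owner's hand shares most taxa with the mouse.
mouse = {"Propionibacterium", "Streptococcus", "Corynebacterium", "Lactobacillus"}
hands = {
    "owner":    {"Propionibacterium", "Streptococcus", "Corynebacterium", "Staphylococcus"},
    "stranger": {"Bacteroides", "Streptococcus", "Prevotella", "Neisseria"},
}
print(best_match(mouse, hands))  # owner
```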

Here’s a link to the abstract; the full paper is behind a paywall.

Posted on March 29, 2010 at 7:15 AM

Side-Channel Attacks on Encrypted Web Traffic

Nice paper: “Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow,” by Shuo Chen, Rui Wang, XiaoFeng Wang, and Kehuan Zhang.

Abstract. With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees’ web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.

We already know that eavesdropping on an SSL-encrypted web session can leak a lot of information about the person’s browsing habits. Since page requests and page downloads differ in size, an eavesdropper can sometimes infer which links the person clicked on and which pages he’s viewing.

This paper extends that work. Ed Felten explains:

The new paper shows that this inference-from-size problem gets much, much worse when pages are using the now-standard AJAX programming methods, in which a web “page” is really a computer program that makes frequent requests to the server for information. With more requests to the server, there are many more opportunities for an eavesdropper to make inferences about what you’re doing—to the point that common applications leak a great deal of private information.

Consider a search engine that autocompletes search queries: when you start to type a query, the search engine gives you a list of suggested queries that start with whatever characters you have typed so far. When you type the first letter of your search query, the search engine page will send that character to the server, and the server will send back a list of suggested completions. Unfortunately, the size of that suggested completion list will depend on which character you typed, so an eavesdropper can use the size of the encrypted response to deduce which letter you typed. When you type the second letter of your query, another request will go to the server, and another encrypted reply will come back, which will again have a distinctive size, allowing the eavesdropper (who already knows the first character you typed) to deduce the second character; and so on. In the end the eavesdropper will know exactly which search query you typed. This attack worked against the Google, Yahoo, and Microsoft Bing search engines.
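Here is roughly what that inference looks like in code, assuming the attacker can query the autocomplete service himself to build a table of response sizes, and that the sizes are distinctive enough to identify each character. The service and the suggestion data below are made up for illustration:

```python
# Sketch of the autocomplete inference: the attacker first queries the search
# engine himself to learn the response size for every possible next character,
# then matches the sizes observed on the wire against that table.

def build_size_table(suggest, alphabet, prefix=""):
    """Map each next character to the size of its suggestion response."""
    return {c: len(suggest(prefix + c)) for c in alphabet}

def recover_query(observed_sizes, suggest, alphabet):
    """Recover the typed query from the sequence of encrypted response sizes."""
    prefix = ""
    for size in observed_sizes:
        table = build_size_table(suggest, alphabet, prefix)
        # Assumes sizes are distinctive enough to identify the character uniquely.
        matches = [c for c, s in table.items() if s == size]
        prefix += matches[0]
    return prefix

# Hypothetical autocomplete service with deterministic, size-distinct replies.
SUGGESTIONS = {"f": "flu flights flowers", "fl": "flu flights", "flu": "flu symptoms flu shot"}
def suggest(prefix):
    return SUGGESTIONS.get(prefix, "")

sizes = [len(suggest("f")), len(suggest("fl")), len(suggest("flu"))]
print(recover_query(sizes, suggest, "aflu"))  # flu
```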

Many web apps that handle sensitive information seem to be susceptible to similar attacks. The researchers studied a major online tax preparation site (which they don’t name) and found that it leaks a fairly accurate estimate of your Adjusted Gross Income (AGI). This happens because the exact set of questions you have to answer, and the exact data tables used in tax preparation, will vary based on your AGI. To give one example, there is a particular interaction relating to a possible student loan interest calculation that only happens if your AGI is between $115,000 and $145,000—so the presence or absence of the distinctively-sized message exchange relating to that calculation tells an eavesdropper whether your AGI is between $115,000 and $145,000. By assembling a set of clues like this, an eavesdropper can get a good fix on your AGI, plus information about your family status, and so on.

For similar reasons, a major online health site leaks information about which medications you are taking, and a major investment site leaks information about your investments.
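Assembling AGI clues like that amounts to intersecting income intervals. A minimal sketch, with invented brackets:

```python
# Sketch of assembling AGI clues: each distinctively-sized exchange the
# eavesdropper observes (or fails to observe) corresponds to a condition on
# Adjusted Gross Income, and intersecting those conditions narrows the range.
# The brackets below are illustrative, not the tax site's real logic.

def narrow_agi(clues):
    """Intersect (low, high) AGI intervals implied by observed exchanges."""
    low, high = 0, float("inf")
    for lo, hi in clues:
        low, high = max(low, lo), min(high, hi)
    return low, high

# Observed: the student-loan-interest exchange occurred (AGI in 115k-145k),
# and a hypothetical credit-phaseout exchange did not (AGI above 130k).
clues = [(115_000, 145_000), (130_000, float("inf"))]
print(narrow_agi(clues))  # (130000, 145000)
```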

The paper goes on to talk about mitigations—padding page requests and downloads to a constant size is the obvious one—but they’re difficult and potentially expensive.
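The padding idea itself is simple. A sketch, with an arbitrary bucket size; real deployments have to weigh the bucket size against the bandwidth cost:

```python
# Sketch of the padding mitigation: round every message up to the next
# multiple of a fixed bucket size, so that messages differing by less than
# one bucket become indistinguishable by length on the wire.

BUCKET = 512  # bytes; larger buckets leak less but waste more bandwidth

def padded_length(n: int, bucket: int = BUCKET) -> int:
    """Length after rounding n up to the next bucket boundary."""
    return ((n + bucket - 1) // bucket) * bucket

def pad(payload: bytes, bucket: int = BUCKET) -> bytes:
    """Append filler bytes so the ciphertext length reveals only the bucket."""
    return payload + b"\x00" * (padded_length(len(payload), bucket) - len(payload))

# Two suggestion lists of different sizes become the same length on the wire.
assert len(pad(b"a" * 100)) == len(pad(b"a" * 400)) == 512
```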

More articles.

Posted on March 26, 2010 at 6:04 AM

Natural Language Shellcode

Nice:

In this paper we revisit the assumption that shellcode need be fundamentally different in structure than non-executable data. Specifically, we elucidate how one can use natural language generation techniques to produce shellcode that is superficially similar to English prose. We argue that this new development poses significant challenges for inline payload-based inspection (and emulation) as a defensive measure, and also highlights the need for designing more efficient techniques for preventing shellcode injection attacks altogether.

Posted on March 25, 2010 at 7:16 AM

Secret Questions

Interesting research:

Analysing our data for security, though, shows that essentially all human-generated names provide poor resistance to guessing. For an attacker looking to make three guesses per personal knowledge question (for example, because this triggers an account lock-down), none of the name distributions we looked at gave more than 8 bits of effective security except for full names. That is, at least 1 in 256 guesses would be successful, and 1 in 84 accounts compromised. For an attacker who can make more than 3 guesses and wants to break into 50% of available accounts, no distributions gave more than about 12 bits of effective security. The actual values vary in some interesting ways: South Korean names are much easier to guess than American ones, female first names are harder than male ones, pet names are slightly harder than human names, and names are getting harder to guess over time.
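The arithmetic behind "effective security" is straightforward. A sketch using a made-up name distribution; the paper's actual guessing metrics are more careful than this:

```python
# Sketch of the "effective security" arithmetic: with a skewed name
# distribution, an attacker's best strategy is to guess the most common
# answers first. The frequencies below are invented; the paper measured
# real name distributions.

import math

def top_k_success(freqs, k):
    """Fraction of accounts broken by guessing the k most common answers."""
    return sum(sorted(freqs, reverse=True)[:k])

def effective_bits(freqs, k):
    """Bits of security: log2 of guesses made over expected success rate."""
    return math.log2(k / top_k_success(freqs, k))

# Hypothetical pet-name distribution: a few names dominate, the rest are rare.
freqs = [0.04, 0.03, 0.03, 0.02] + [0.88 / 880] * 880
print(round(top_k_success(freqs, 3), 2))   # 0.1  -> roughly 1 in 10 accounts
print(round(effective_bits(freqs, 3), 1))  # ~4.9 bits
```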

I’ve written about this problem.

EDITED TO ADD (4/13): xkcd on the secret question.

Posted on March 16, 2010 at 6:44 AM

Typosquatting

“Measuring the Perpetrators and Funders of Typosquatting,” by Tyler Moore and Benjamin Edelman:

Abstract. We describe a method for identifying “typosquatting”, the intentional registration of misspellings of popular website addresses. We estimate that at least 938,000 typosquatting domains target the top 3,264 .com sites, and we crawl more than 285,000 of these domains to analyze their revenue sources. We find that 80% are supported by pay-per-click ads, often advertising the correctly spelled domain and its competitors. Another 20% include static redirection to other sites. We present an automated technique that uncovered 75 otherwise legitimate websites which benefited from direct links from thousands of misspellings of competing websites. Using regression analysis, we find that websites in categories with higher pay-per-click ad prices face more typosquatting registrations, indicating that ad platforms such as Google AdWords exacerbate typosquatting. However, our investigations also confirm the feasibility of significantly reducing typosquatting. We find that typosquatting is highly concentrated: Of typo domains showing Google ads, 63% use one of five advertising IDs, and some large name servers host typosquatting domains as much as four times as often as the web as a whole.

The paper appeared at the Financial Cryptography conference this year.

Posted on March 15, 2010 at 6:13 AM

The Limits of Identity Cards

Good legal paper on the limits of identity cards: Stephen Mason and Nick Bohm, “Identity and its Verification,” in Computer Law & Security Review, Volume 26, Number 1, Jan 2010.

Those faced with the problem of how to verify a person’s identity would be well advised to ask themselves the question, ‘Identity with what?’ An enquirer equipped with the answer to this question is in a position to tackle, on a rational basis, the task of deciding what evidence will be useful for the purpose. Without the answer to the question, the verification of identity becomes a sadly familiar exercise in blind compliance with arbitrary rules.

Posted on March 10, 2010 at 7:09 AM

De-Anonymizing Social Network Users

Interesting paper: “A Practical Attack to De-Anonymize Social Network Users.”

Abstract. Social networking sites such as Facebook, LinkedIn, and Xing have been reporting exponential growth rates. These sites have millions of registered users, and they are interesting from a security and privacy point of view because they store large amounts of sensitive personal user data.

In this paper, we introduce a novel de-anonymization attack that exploits group membership information that is available on social networking sites. More precisely, we show that information about the group memberships of a user (i.e., the groups of a social network to which a user belongs) is often sufficient to uniquely identify this user, or, at least, to significantly reduce the set of possible candidates. To determine the group membership of a user, we leverage well-known web browser history stealing attacks. Thus, whenever a social network user visits a malicious website, this website can launch our de-anonymization attack and learn the identity of its visitors.

The implications of our attack are manifold, since it requires a low effort and has the potential to affect millions of social networking users. We perform both a theoretical analysis and empirical measurements to demonstrate the feasibility of our attack against Xing, a medium-sized social network with more than eight million members that is mainly used for business relationships. Our analysis suggests that about 42% of the users that use groups can be uniquely identified, while for 90%, we can reduce the candidate set to less than 2,912 persons. Furthermore, we explored other, larger social networks and performed experiments that suggest that users of Facebook and LinkedIn are equally vulnerable (although attacks would require more resources on the side of the attacker). An analysis of an additional five social networks indicates that they are also prone to our attack.
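The intersection step at the core of the attack is simple to sketch, assuming the attacker has crawled the groups' member lists beforehand. The group rosters below are invented:

```python
# Sketch of the de-anonymization step: once history stealing reveals which
# groups the visitor belongs to, intersecting those groups' (crawlable)
# member lists shrinks the candidate set -- often down to a single user.

def candidates(group_members, visited_groups):
    """Users consistent with belonging to every group seen in the history."""
    sets = [group_members[g] for g in visited_groups]
    out = sets[0].copy()
    for s in sets[1:]:
        out &= s
    return out

# Hypothetical group rosters crawled from a social network.
groups = {
    "ski-club":     {"alice", "bob", "carol", "dave"},
    "infosec-jobs": {"alice", "carol", "erin"},
    "alumni-2003":  {"alice", "dave", "frank"},
}
print(candidates(groups, ["ski-club", "infosec-jobs", "alumni-2003"]))  # {'alice'}
```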

News article. Moral: anonymity is really, really hard—but we knew that already.

Posted on March 8, 2010 at 6:13 AM

Man-in-the-Middle Attack Against Chip and PIN

Nice attack against EMV—Europay, Mastercard, Visa—the “chip and PIN” credit card payment system. The attack allows a criminal to use a stolen card without knowing the PIN.

The flaw is that when you put a card into a terminal, a negotiation takes place about how the cardholder should be authenticated: using a PIN, using a signature or not at all. This particular subprotocol is not authenticated, so you can trick the card into thinking it’s doing a chip-and-signature transaction while the terminal thinks it’s chip-and-PIN. The upshot is that you can buy stuff using a stolen card and a PIN of 0000 (or anything you want). We did so, on camera, using various journalists’ cards. The transactions went through fine and the receipts say “Verified by PIN”.

[…]

So what went wrong? In essence, there is a gaping hole in the specifications which together create the “Chip and PIN” system. These specs consist of the EMV protocol framework, the card scheme individual rules (Visa, MasterCard standards), the national payment association rules (UK Payments Association aka APACS, in the UK), and documents produced by each individual issuer describing their own customisations of the scheme. Each spec defines security criteria, tweaks options and sets rules—but none take responsibility for listing what back-end checks are needed. As a result, hundreds of issuers independently get it wrong, and gain false assurance that all bases are covered from the common specifications. The EMV specification stack is broken, and needs fixing.
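The unauthenticated negotiation lends itself to a toy model. This is a cartoon of the flaw, not the real EMV message flow:

```python
# Toy model of the wedge: the cardholder-verification negotiation is not
# authenticated, so a man-in-the-middle can let the terminal believe the PIN
# was verified while telling the card it is a signature transaction -- so the
# card never checks the PIN at all. Classes and messages here are invented.

class Card:
    def verify(self, method, pin=None):
        if method == "signature":
            return "ok"            # card does no PIN check on this path
        return "ok" if pin == "1234" else "fail"

def terminal(card, entered_pin, mitm=False):
    """Terminal requests PIN verification; the wedge rewrites the method."""
    method = "pin"
    if mitm:
        method = "signature"       # wedge downgrades the card's view
    result = card.verify(method, entered_pin)
    # The terminal believes the PIN path succeeded and prints the receipt line.
    return "Verified by PIN" if result == "ok" else "declined"

print(terminal(Card(), "0000"))             # declined (wrong PIN, no wedge)
print(terminal(Card(), "0000", mitm=True))  # Verified by PIN
```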

Read Ross Anderson’s entire blog post for both details and context. Here’s the paper, the press release, and a FAQ. And one news article.

This is big. There are about a gazillion of these in circulation.

EDITED TO ADD (2/12): BBC video of the attack in action.

Posted on February 11, 2010 at 4:18 PM

The Limits of Visual Inspection

Interesting research:

Target prevalence powerfully influences visual search behavior. In most visual search experiments, targets appear on at least 50% of trials. However, when targets are rare (as in medical or airport screening), observers shift response criteria, leading to elevated miss error rates. Observers also speed target-absent responses and may make more motor errors. This could be a speed/accuracy tradeoff with fast, frequent absent responses producing more miss errors. Disproving this hypothesis, our experiment one shows that very high target prevalence (98%) shifts response criteria in the opposite direction, leading to elevated false alarms in a simulated baggage search. However, the very frequent target-present responses are not speeded. Rather, rare target-absent responses are greatly slowed. In experiment two, prevalence was varied sinusoidally over 1000 trials as observers’ accuracy and reaction times (RTs) were measured. Observers’ criterion and target-absent RTs tracked prevalence. Sensitivity (d’) and target-present RTs did not vary with prevalence. These results support a model in which prevalence influences two parameters: a decision criterion governing the series of perceptual decisions about each attended item, and a quitting threshold that governs the timing of target-absent responses. Models in which target prevalence only influences an overall decision criterion are not supported.
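The two-parameter model's prediction for miss rates can be sketched with a standard signal-detection calculation; the parameter values below are illustrative, not fit to the experiment's data:

```python
# Sketch of the signal-detection account: sensitivity (d') stays fixed, but
# the response criterion shifts with target prevalence, so rare targets are
# missed more often even though the observer's eyes are no worse.

from statistics import NormalDist

Z = NormalDist()  # standard normal

def miss_rate(d_prime: float, criterion: float) -> float:
    """P(respond 'absent' | target present): target evidence ~ N(d', 1)."""
    return Z.cdf(criterion - d_prime)

d = 2.0
neutral = miss_rate(d, criterion=d / 2)  # balanced criterion at 50% prevalence
shifted = miss_rate(d, criterion=1.6)    # conservative criterion, rare targets
print(f"{neutral:.2f} -> {shifted:.2f}")  # 0.16 -> 0.34
```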

This has implications for searching for contraband at airports.

Posted on February 8, 2010 at 1:54 PM

