Entries Tagged "academic papers"


Status Report on the War on Photography

Worth reading: Morgan Leigh Manning, “Less than Picture Perfect: The Legal Relationship between Photographers’ Rights and Law Enforcement,” Tennessee Law Review, Vol. 78, p. 105, 2010.

Abstract: Threats to national security and public safety, whether real or perceived, result in an atmosphere conducive to the abuse of civil liberties. History is littered with examples: The Alien and Sedition Acts of 1798, the suspension of habeas corpus during the Civil War, the Palmer Raids during World War I, and McCarthyism in the aftermath of World War II. Unfortunately, the post-9/11 world represents no departure from this age-old trend. Evidence of post-9/11 tension between national security and civil liberties is seen in the heightened regulation of photography; scholars have labeled it the “War on Photography” – a conflict between law enforcement officials and photographers over the right to take pictures in public places. A simple Google search reveals countless incidents of overzealous law enforcement officials detaining or arresting photographers and, in many cases, confiscating their cameras and memory cards, despite the fact that these individuals were in lawful places, at lawful times, partaking in lawful activities.

This article examines the so-called War on Photography and the remedies available to those who have been unlawfully detained, arrested, or have had their property seized for taking pictures in public places or private places open to the public. It discusses recent incidents that highlight the growing infringement of photography rights and the magnitude of the harm that law enforcement officials have inflicted, paying particular attention to the themes these events have in common. It explores the existing legal framework surrounding photography rights and the federal and state remedies available to those whose rights have been violated. It examines the adequacy of each remedy including: (1) declaratory and injunctive relief, (2) Section 1983 and Bivens actions, and (3) state tort remedies. It discusses the obstacles associated with each remedy and the reasons why these obstacles are particularly hard to overcome in the context of photography. It then argues that most, if not all, of the remedies discussed are either inadequate or altogether impractical considering the costs of litigation. Lastly, this article will discuss the reasons why people should be concerned about the War on Photography and possible ways to reverse the erosion of photography rights.

Posted on June 14, 2011 at 1:45 PM

Spam as a Business

Interesting research: Kirill Levchenko, et al., “Click Trajectories—End-to-End Analysis of the Spam Value Chain,” IEEE Symposium on Security and Privacy 2011, Oakland, California, 24 May 2011.

Abstract: Spam-based advertising is a business. While it has engendered both widespread antipathy and a multi-billion dollar anti-spam industry, it continues to exist because it fuels a profitable enterprise. We lack, however, a solid understanding of this enterprise’s full structure, and thus most anti-spam interventions focus on only one facet of the overall spam value chain (e.g., spam filtering, URL blacklisting, site takedown). In this paper we present a holistic analysis that quantifies the full set of resources employed to monetize spam email—including naming, hosting, payment and fulfillment—using extensive measurements of three months of diverse spam data, broad crawling of naming and hosting infrastructures, and over 100 purchases from spam-advertised sites. We relate these resources to the organizations who administer them and then use this data to characterize the relative prospects for defensive interventions at each link in the spam value chain. In particular, we provide the first strong evidence of payment bottlenecks in the spam value chain; 95% of spam-advertised pharmaceutical, replica and software products are monetized using merchant services from just a handful of banks.

It’s a surprisingly small handful of banks:

All told, they saw 13 banks handling 95% of the 76 orders for which they received transaction information. (Only one U.S. bank was seen settling spam transactions: Wells Fargo.) But just three banks handled the majority of transactions: Azerigazbank in Azerbaijan, DnB NOR in Latvia (although the bank is headquartered in Norway), and St. Kitts-Nevis-Anguilla National Bank in the Caribbean. In addition, “most herbal and replica purchases cleared through the same bank in St. Kitts, … while most pharmaceutical affiliate programs used two banks (in Azerbaijan and Latvia), and software was handled entirely by two banks (in Latvia and Russia),” they said.

This points to a fruitful avenue to reduce spam: go after the banks.
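The leverage of the payment bottleneck is easy to see with a toy calculation. The sketch below uses entirely made-up order counts (the real study analyzed roughly 76 purchase transactions); it only illustrates how a heavily skewed bank distribution makes a handful of acquiring banks a chokepoint:

```python
from collections import Counter

# Hypothetical order-to-bank mapping, standing in for the paper's
# transaction data. The bank names come from the post; the counts
# are invented for illustration.
orders = (
    ["Azerigazbank"] * 30
    + ["DnB NOR"] * 25
    + ["St. Kitts-Nevis-Anguilla National Bank"] * 12
    + ["Wells Fargo"] * 2
    + ["Other Bank A", "Other Bank B", "Other Bank C"]
)

counts = Counter(orders)
total = sum(counts.values())

# Share of transactions settled by the top three acquiring banks.
top3 = counts.most_common(3)
top3_share = sum(n for _, n in top3) / total
print(f"{len(counts)} banks; top 3 settle {top3_share:.0%} of {total} orders")
```

With a distribution this skewed, pressuring three banks disrupts most of the monetization pipeline, which is exactly the intervention point the paper identifies.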

Here’s an older paper on the economics of spam.

Posted on June 9, 2011 at 1:53 PM

The Cyberwar Arms Race

Good paper: “Loving the Cyber Bomb? The Dangers of Threat Inflation in Cybersecurity Policy,” by Jerry Brito and Tate Watkins.

Over the past two years there has been a steady drumbeat of alarmist rhetoric coming out of Washington about potential catastrophic cyber threats. For example, at a Senate Armed Services Committee hearing last year, Chairman Carl Levin said that “cyberweapons and cyberattacks potentially can be devastating, approaching weapons of mass destruction in their effects.” Proposed responses include increased federal spending on cybersecurity and the regulation of private network security practices.

The rhetoric of “cyber doom” employed by proponents of increased federal intervention, however, lacks clear evidence of a serious threat that can be verified by the public. As a result, the United States may be witnessing a bout of threat inflation similar to that seen in the run-up to the Iraq War. Additionally, a cyber-industrial complex is emerging, much like the military-industrial complex of the Cold War. This complex may serve to not only supply cybersecurity solutions to the federal government, but to drum up demand for them as well.

Part I of this article draws a parallel between today’s cybersecurity debate and the run-up to the Iraq War and looks at how an inflated public conception of the threat we face may lead to unnecessary regulation of the Internet. Part II draws a parallel between the emerging cybersecurity establishment and the military-industrial complex of the Cold War and looks at how unwarranted external influence can lead to unnecessary federal spending. Finally, Part III surveys several federal cybersecurity proposals and presents a framework for analyzing the cybersecurity threat.

Also worth reading is an earlier paper by Sean Lawson: “Beyond Cyber Doom.”

EDITED TO ADD (5/3): Good article on the paper.

Posted on April 28, 2011 at 6:56 AM

Social Solidarity as an Effect of the 9/11 Terrorist Attacks

It’s standard sociological theory that a group experiences social solidarity in response to external conflict. This paper studies the phenomenon in the United States after the 9/11 terrorist attacks.

Conflict produces group solidarity in four phases: (1) an initial few days of shock and idiosyncratic individual reactions to attack; (2) one to two weeks of establishing standardized displays of solidarity symbols; (3) two to three months of high solidarity plateau; and (4) gradual decline toward normalcy in six to nine months. Solidarity is not uniform but is clustered in local groups supporting each other’s symbolic behavior. Actual solidarity behaviors are performed by minorities of the population, while vague verbal claims to performance are made by large majorities. Commemorative rituals intermittently revive high emotional peaks; participants become ranked according to their closeness to a center of ritual attention. Events, places, and organizations claim importance by associating themselves with national solidarity rituals and especially by surrounding themselves with pragmatically ineffective security ritual. Conflicts arise over access to centers of ritual attention; clashes occur between pragmatists deritualizing security and security zealots attempting to keep up the level of emotional intensity. The solidarity plateau is also a hysteria zone; as a center of emotional attention, it attracts ancillary attacks unrelated to the original terrorists as well as alarms and hoaxes. In particular historical circumstances, it becomes a period of atrocities.

This certainly makes sense as a group survival mechanism: self-interest giving way to group interest in the face of a threat to the group. It’s the kind of thing I am talking about in my new book.

Paper also available here.

Posted on April 27, 2011 at 9:10 AM

"Schneier's Law"

Back in 1998, I wrote:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break.

In 2004, Cory Doctorow called this Schneier’s law:

…what I think of as Schneier’s Law: “any person can invent a security system so clever that she or he can’t think of how to break it.”

The general idea is older than my writing. Wikipedia points out that in The Codebreakers, David Kahn writes:

Few false ideas have more firmly gripped the minds of so many intelligent men than the one that, if they just tried, they could invent a cipher that no one could break.

The idea is even older. Back in 1864, Charles Babbage wrote:

One of the most singular characteristics of the art of deciphering is the strong conviction possessed by every person, even moderately acquainted with it, that he is able to construct a cipher which nobody else can decipher.

My phrasing is different, though. Here’s my original quote in context:

Anyone, from the most clueless amateur to the best cryptographer, can create an algorithm that he himself can’t break. It’s not even hard. What is hard is creating an algorithm that no one else can break, even after years of analysis. And the only way to prove that is to subject the algorithm to years of analysis by the best cryptographers around.

And here’s me in 2006:

Anyone can invent a security system that he himself cannot break. I’ve said this so often that Cory Doctorow has named it “Schneier’s Law”: When someone hands you a security system and says, “I believe this is secure,” the first thing you have to ask is, “Who the hell are you?” Show me what you’ve broken to demonstrate that your assertion of the system’s security means something.

And that’s the point I want to make. It’s not that people believe they can create an unbreakable cipher; it’s that people create a cipher that they themselves can’t break, and then use that as evidence they’ve created an unbreakable cipher.
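A toy example makes the asymmetry concrete. The "cipher" below (a single-byte XOR, chosen purely for illustration) is exactly the kind of scheme a designer can't see how to break: without the key, the ciphertext looks opaque to its author. But an attack the author didn't think of — exhaustive key search plus frequency scoring — breaks it instantly. All function names here are mine, not from any real library:

```python
# A toy "unbreakable" cipher: XOR every byte with one secret key byte.
def toy_encrypt(plaintext: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in plaintext)

# The break the designer didn't think of: only 256 possible keys exist,
# so try them all and score each candidate by English letter frequency.
def score_english(text: bytes) -> int:
    common = b"etaoin shrdlu"  # most frequent English characters
    return sum(text.lower().count(c) for c in common)

def brute_force(ciphertext: bytes) -> tuple[int, bytes]:
    # XOR is its own inverse, so decryption reuses toy_encrypt.
    best = max(range(256), key=lambda k: score_english(toy_encrypt(ciphertext, k)))
    return best, toy_encrypt(ciphertext, best)

ct = toy_encrypt(b"the magic words are squeamish ossifrage", key=0x5A)
key, pt = brute_force(ct)
print(hex(key), pt)  # recovers key and plaintext without being told either
```

The designer's inability to break it proved nothing; a standard technique from outside the designer's experience did all the work.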

EDITED TO ADD (4/16): This is an example of the Dunning-Kruger effect, named after the authors of this paper: “Unskilled and Unaware of It: How Difficulties in Recognizing One’s Own Incompetence Lead to Inflated Self-Assessments.”

Abstract: People tend to hold overly favorable views of their abilities in many social and intellectual domains. The authors suggest that this overestimation occurs, in part, because people who are unskilled in these domains suffer a dual burden: Not only do these people reach erroneous conclusions and make unfortunate choices, but their incompetence robs them of the metacognitive ability to realize it. Across 4 studies, the authors found that participants scoring in the bottom quartile on tests of humor, grammar, and logic grossly overestimated their test performance and ability. Although their test scores put them in the 12th percentile, they estimated themselves to be in the 62nd. Several analyses linked this miscalibration to deficits in metacognitive skill, or the capacity to distinguish accuracy from error. Paradoxically, improving the skills of participants, and thus increasing their metacognitive competence, helped them recognize the limitations of their abilities.

EDITED TO ADD (4/18): If I have any contribution to this, it’s to generalize it to security systems and not just to cryptographic algorithms. Because anyone can design a security system that he cannot break, evaluating the security credentials of the designer is an essential aspect of evaluating the system’s security.

Posted on April 15, 2011 at 1:45 PM

Federated Authentication

New paper by Ross Anderson: “Can We Fix the Security Economics of Federated Authentication?”:

There has been much academic discussion of federated authentication, and quite some political manoeuvring about ‘e-ID’. The grand vision, which has been around for years in various forms but was recently articulated in the US National Strategy for Trustworthy Identities in Cyberspace (NSTIC), is that a single logon should work everywhere [1]. You should be able to use your identity provider of choice to log on anywhere; so you might use your driver’s license to log on to Gmail, or use your Facebook logon to file your tax return. More restricted versions include the vision of governments of places like Estonia and Germany (and until May 2010 the UK) that a government-issued identity card should serve as a universal logon. Yet few systems have been fielded at any scale.

In this paper I will briefly discuss the four existing examples we have of federated authentication, and then go on to discuss a much larger, looming problem. If the world embraces the Apple vision of your mobile phone becoming your universal authentication device – so that your phone contains half a dozen credit cards, a couple of gift cards, a dozen coupons and vouchers, your AA card, your student card and your driving license, how will we manage all this? A useful topic for initial discussion, I argue, is revocation. Such a phone will become a target for bad guys, both old and new. What happens when someone takes your phone off you at knifepoint, or when it gets infested with malware? Who do you call, and what will they do to make the world right once more?
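The revocation problem Anderson raises has a fan-out structure worth making explicit: one stolen device holds credentials from many independent issuers, each of which must be notified separately. The sketch below is purely illustrative (the device, issuers, and credential names are all hypothetical), showing why a single theft report multiplies into many revocation actions with no obvious coordinator:

```python
from collections import defaultdict

# Hypothetical wallet: one device holding credentials from four
# independent issuers, mirroring the phone Anderson describes.
wallet = {
    "alice-phone": [
        ("BankA", "credit-card-1"),
        ("BankB", "credit-card-2"),
        ("DMV", "driving-license"),
        ("University", "student-card"),
    ],
}

def report_stolen(device: str) -> dict[str, list[str]]:
    """Group the revocations one theft report requires, per issuer."""
    notifications = defaultdict(list)
    for issuer, credential in wallet.get(device, []):
        notifications[issuer].append(credential)
    return dict(notifications)

# One call from the victim fans out to four separate issuers; the open
# question in the paper is who coordinates this, and how quickly.
print(report_stolen("alice-phone"))
```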

Blog post.

Posted on March 29, 2011 at 6:43 AM

Identifying Tor Users Through Insecure Applications

Interesting research: “One Bad Apple Spoils the Bunch: Exploiting P2P Applications to Trace and Profile Tor Users”:

Abstract: Tor is a popular low-latency anonymity network. However, Tor does not protect against the exploitation of an insecure application to reveal the IP address of, or trace, a TCP stream. In addition, because of the linkability of Tor streams sent together over a single circuit, tracing one stream sent over a circuit traces them all. Surprisingly, it is unknown whether this linkability allows in practice to trace a significant number of streams originating from secure (i.e., proxied) applications. In this paper, we show that linkability allows us to trace 193% of additional streams, including 27% of HTTP streams possibly originating from “secure” browsers. In particular, we traced 9% of Tor streams carried by our instrumented exit nodes. Using BitTorrent as the insecure application, we design two attacks tracing BitTorrent users on Tor. We run these attacks in the wild for 23 days and reveal 10,000 IP addresses of Tor users. Using these IP addresses, we then profile not only the BitTorrent downloads but also the websites visited per country of origin of Tor users. We show that BitTorrent users on Tor are over-represented in some countries as compared to BitTorrent users outside of Tor. By analyzing the type of content downloaded, we then explain the observed behaviors by the higher concentration of pornographic content downloaded at the scale of a country. Finally, we present results suggesting the existence of an underground BitTorrent ecosystem on Tor.
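The core linkability issue is simple enough to sketch: streams multiplexed over one Tor circuit share fate, so if any one stream leaks the source IP (here, via an insecure BitTorrent application), every stream on that circuit can be attributed to the same user. This is an illustrative model of the attack's logic, not the researchers' code; the observations and IP are invented:

```python
# Hypothetical exit-node observations: (circuit_id, stream metadata).
# One BitTorrent stream leaks its real source IP in application payload.
observed = [
    ("circuit-1", {"app": "http", "dest": "example.com"}),
    ("circuit-1", {"app": "bittorrent", "dest": "tracker.example",
                   "leaked_ip": "203.0.113.7"}),  # the insecure application
    ("circuit-1", {"app": "http", "dest": "mail.example"}),
    ("circuit-2", {"app": "http", "dest": "example.org"}),
]

# Step 1: collect every circuit on which some stream leaked an IP.
leaks = {cid: s["leaked_ip"] for cid, s in observed if "leaked_ip" in s}

# Step 2: attribute ALL streams on a leaking circuit to that IP --
# including the "secure" HTTP streams that leaked nothing themselves.
traced = [(cid, s["dest"], leaks[cid]) for cid, s in observed if cid in leaks]

for cid, dest, ip in traced:
    print(f"{cid}: {dest} attributed to {ip}")
```

Note that circuit-2 stays anonymous: the attack only reaches streams that happen to share a circuit with an insecure application.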

Posted on March 25, 2011 at 6:38 AM

Detecting Words and Phrases in Encrypted VoIP Calls

Interesting:

Abstract: Although Voice over IP (VoIP) is rapidly being adopted, its security implications are not yet fully understood. Since VoIP calls may traverse untrusted networks, packets should be encrypted to ensure confidentiality. However, we show that it is possible to identify the phrases spoken within encrypted VoIP calls when the audio is encoded using variable bit rate codecs. To do so, we train a hidden Markov model using only knowledge of the phonetic pronunciations of words, such as those provided by a dictionary, and search packet sequences for instances of specified phrases. Our approach does not require examples of the speaker’s voice, or even example recordings of the words that make up the target phrase. We evaluate our techniques on a standard speech recognition corpus containing over 2,000 phonetically rich phrases spoken by 630 distinct speakers from across the continental United States. Our results indicate that we can identify phrases within encrypted calls with an average accuracy of 50%, and with accuracy greater than 90% for some phrases. Clearly, such an attack calls into question the efficacy of current VoIP encryption standards. In addition, we examine the impact of various features of the underlying audio on our performance and discuss methods for mitigation.
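The leaked signal is packet length: a variable-bit-rate codec emits different packet sizes for different sounds, and encryption hides payload but not size, so a phrase leaves a characteristic length sequence on the wire. The paper trains hidden Markov models over these sequences; the sketch below strips that down to a crude nearest-profile match, with all packet sizes invented for illustration:

```python
def profile_distance(a: list[int], b: list[int]) -> float:
    """Mean absolute difference between equal-length packet-size traces."""
    assert len(a) == len(b)
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a)

# Hypothetical reference packet-size traces for two target phrases,
# standing in for the HMMs the paper builds from phonetic dictionaries.
references = {
    "open the door":  [54, 61, 61, 40, 54, 61, 48, 54],
    "close the door": [61, 54, 40, 40, 61, 61, 48, 54],
}

# An intercepted encrypted call yields only a sequence of packet sizes.
intercepted = [54, 60, 61, 41, 54, 62, 48, 53]

best = min(references, key=lambda p: profile_distance(references[p], intercepted))
print("best match:", best)
```

Even this naive matcher picks the right phrase when the size profiles differ enough; the paper's HMM approach handles timing variation and unknown speakers, which this sketch does not.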

EDITED TO ADD (4/13): Full paper. I wrote about this in 2008.

Posted on March 24, 2011 at 12:46 PM

Folk Models in Home Computer Security

This is a really interesting paper: “Folk Models of Home Computer Security,” by Rick Wash. It was presented at SOUPS, the Symposium on Usable Privacy and Security, last year.

Abstract:

Home computer systems are frequently insecure because they are administered by untrained, unskilled users. The rise of botnets has amplified this problem; attackers can compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I investigate how home computer users make security-relevant decisions about their computers. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which security advice to follow: four different conceptualizations of ‘viruses’ and other malware, and four different conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring some security advice. Finally, I describe one reason why botnets are so difficult to eliminate: they have been cleverly designed to take advantage of gaps in these models so that many home computer users do not take steps to protect against them.

I’d list the models, but it’s more complicated than that. Read the paper.

Posted on March 22, 2011 at 7:12 AM

