Entries Tagged "academic papers"

John Walker and the Fleet Broadcasting System

Ph.D. thesis from 2001:

An Analysis of the Systemic Security Weaknesses of the U.S. Navy Fleet Broadcasting System, 1967-1974, as exploited by CWO John Walker, by MAJ Laura J. Heath

Abstract: CWO John Walker led one of the most devastating spy rings ever unmasked in the US. Along with his brother, son, and friend, he compromised US Navy cryptographic systems and classified information from 1967 to 1985. This research focuses on just one of the systems compromised by John Walker himself: the Fleet Broadcasting System (FBS) during the period 1967-1975, which was used to transmit all US Navy operational orders to ships at sea. Why was the communications security (COMSEC) system so completely defenseless against one rogue sailor, acting alone? The evidence shows that FBS was designed in such a way that it was effectively impossible to detect or prevent rogue insiders from compromising the system. Personnel investigations were cursory, frequently delayed, and based more on hunches than hard scientific criteria. Far too many people had access to the keys and sensitive materials, and the auditing methods were incapable, even in theory, of detecting illicit copying of classified materials. Responsibility for the security of the system was distributed between many different organizations, allowing numerous security gaps to develop. This has immediate implications for the design of future classified communications systems.

EDITED TO ADD (9/23): I blogged about this in 2005. Apologies; I forgot.

Posted on June 23, 2009 at 1:30 PM

Eavesdropping on Dot-Matrix Printers by Listening to Them

Interesting research.

First, we develop a novel feature design that borrows from commonly used techniques for feature extraction in speech recognition and music processing. These techniques are geared towards the human ear, which is limited to approx. 20 kHz and whose sensitivity is logarithmic in the frequency; for printers, our experiments show that most interesting features occur above 20 kHz, and a logarithmic scale cannot be assumed. Our feature design reflects these observations by employing a sub-band decomposition that places emphasis on the high frequencies, and spreading filter frequencies linearly over the frequency range. We further add suitable smoothing to make the recognition robust against measurement variations and environmental noise.
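The filterbank idea is simple enough to sketch. Here is a minimal illustration in Python of linearly spaced sub-bands over the full spectrum, so that ultrasonic content above 20 kHz gets equal weight (the function name, frame size, band count, and sample rate are my own choices, not the paper's):

```python
import numpy as np

def linear_filterbank_features(frame, sr=192_000, n_fft=1024, n_bands=32):
    """Toy sub-band features: bands spaced linearly from 0 Hz to Nyquist,
    unlike the logarithmic mel scale used for speech, so high-frequency
    printer noise is not crushed into a few coarse bins."""
    # Magnitude spectrum of one frame (a real system would window and
    # hop over the whole recording).
    spectrum = np.abs(np.fft.rfft(frame[:n_fft]))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sr)
    # Linearly spaced band edges over the full frequency range.
    edges = np.linspace(0, sr / 2, n_bands + 1)
    feats = np.empty(n_bands)
    for i in range(n_bands):
        band = (freqs >= edges[i]) & (freqs < edges[i + 1])
        feats[i] = spectrum[band].sum()
    # Log compression makes the features robust to gain differences;
    # smoothing against noise would be added on top of this.
    return np.log1p(feats)

frame = np.random.randn(1024)  # stand-in for one recorded audio frame
print(linear_filterbank_features(frame).shape)  # (32,)
```

A mel filterbank would instead space those edges logarithmically and stop near 20 kHz; the only change here is the `np.linspace` call and the upper limit.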

Second, we deal with the decay time and the induced blurring by resorting to a word-based approach instead of decoding individual letters. A word-based approach requires additional upfront effort such as an extended training phase as the dictionary grows larger, and it does not permit us to increase recognition rates by using, e.g., spell-checking. Recognition of words based on training the sound of individual letters (or pairs/triples of letters), however, is infeasible because the sound emitted by printers blurs so strongly over adjacent letters.

Third, we employ speech recognition techniques to increase the recognition rate: we use Hidden Markov Models (HMMs) that rely on the statistical frequency of sequences of words in text in order to rule out incorrect word combinations. The presence of strong blurring, however, requires using at least 3-grams on the words of the dictionary to be effective, causing existing implementations for this task to fail because of memory exhaustion. To tame memory consumption, we implemented a delayed computation of the transition matrix that underlies HMMs, and in each step of the search procedure, we adaptively removed the words with only weakly matching features from the search space.
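The lazy-transition trick is easy to illustrate. Here is a toy word-level beam decoder, with made-up acoustic and language-model scoring functions; none of this is the authors' actual implementation, just a sketch of the idea of computing transition scores on demand and pruning weak matches:

```python
import math

def beam_decode(frames, score_word, lm_prob, vocab, beam=5):
    """Word-level beam search: language-model (transition) scores are
    looked up lazily per hypothesis instead of materializing a full
    transition matrix, and weakly matching words are pruned each step."""
    hyps = [([], 0.0)]  # (word sequence, cumulative log score)
    for frame in frames:
        expanded = []
        for seq, score in hyps:
            # Prune: expand only the words whose features best match
            # this frame, mimicking adaptive removal of weak matches.
            cands = sorted(vocab, key=lambda w: -score_word(frame, w))[:beam]
            for w in cands:
                lp = math.log(score_word(frame, w)) + math.log(lm_prob(seq, w))
                expanded.append((seq + [w], score + lp))
        hyps = sorted(expanded, key=lambda h: -h[1])[:beam]
    return hyps[0][0]

# Toy acoustic model: a frame "sounds like" its own word most strongly.
vocab = ["the", "cat", "sat"]

def acoustic(frame, w):
    return 0.9 if w == frame else 0.05

def uniform_lm(seq, w):
    return 1.0 / len(vocab)

print(beam_decode(["the", "cat", "sat"], acoustic, uniform_lm, vocab))
# ['the', 'cat', 'sat']
```

In the real system the language model would be a 3-gram distribution over dictionary words, which is exactly what makes a precomputed transition matrix too large to hold in memory.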

We built a prototypical implementation that can bootstrap the recognition routine from a database of featured words that have been trained using supervised learning. Afterwards, the prototype automatically recognizes text with recognition rates of up to 72%.

Researchers have done lots of work on eavesdropping on remote devices. (One example.) And we know the various intelligence organizations of the world have been doing this sort of thing for decades.

Posted on June 23, 2009 at 6:16 AM

Ever Better Cryptanalytic Results Against SHA-1

The SHA family (which, I suppose, should really be called the MD4 family) of cryptographic hash functions has been under attack for a long time. In 2005, we saw the first cryptanalysis of SHA-1 that was faster than brute force: collisions in 2^69 hash operations, later improved to 2^63 operations. A great result, but not devastating. But remember the great truism of cryptanalysis: attacks always get better, they never get worse. Last week, devastating got a whole lot closer. A new attack can, at least in theory, find collisions in 2^52 hash operations—well within the realm of computational possibility. Assuming the cryptanalysis is correct, we should expect to see an actual SHA-1 collision within the year.

Note that this is a collision attack, not a pre-image attack. Most uses of hash functions don’t care about collision attacks. But if yours does, switch to SHA-2 immediately. (This has more information, written for the 2^69 attack.)
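Switching is painless in most libraries. In Python's hashlib, for example, it is a one-word change:

```python
import hashlib

msg = b"transfer $100 to alice"

# SHA-1: collision attacks now cost on the order of 2^52 operations,
# so avoid it wherever an attacker benefits from producing two inputs
# with the same digest (signatures, certificates, file fingerprints).
print(hashlib.sha1(msg).hexdigest())

# SHA-2 drop-in replacements: identical API, no known collision attacks.
print(hashlib.sha256(msg).hexdigest())
print(hashlib.sha512(msg).hexdigest())
```

The hard part in practice is not the code change but finding every place a protocol or file format has SHA-1 baked in.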

This is why NIST is administering a SHA-3 competition for a new hash standard. And whatever algorithm is chosen, it will look nothing like anything in the SHA family (which is why I think it should be called the Advanced Hash Standard, or AHS).

Posted on June 16, 2009 at 12:21 PM

Industry Differences in Types of Security Breaches

Interhack has been working on a taxonomy of security breaches, and has an interesting conclusion:

The Health Care and Social Assistance sector reported a larger than average proportion of lost and stolen computing hardware, but reported an unusually low proportion of compromised hosts. Educational Services reported a disproportionately large number of compromised hosts, while insider conduct and lost and stolen hardware were well below the proportion common to the set as a whole. Public Administration’s proportion of compromised host reports was below average, but their proportion of processing errors was well above the norm. The Finance and Insurance sector experienced the smallest overall proportion of processing errors, but the highest proportion of insider misconduct. Other sectors showed no statistically significant difference from the average, either due to a true lack of variance, or due to an insignificant number of samples for the statistical tests being used.

Full study is here.

Posted on June 10, 2009 at 6:18 AM

Research on Movie-Plot Threats

This could be interesting:

Emerging Threats and Security Planning: How Should We Decide What Hypothetical Threats to Worry About?

Brian A. Jackson, David R. Frelinger

Concerns about how terrorists might attack in the future are central to the design of security efforts to protect both individual targets and the nation overall. In thinking about emerging threats, security planners are confronted by a panoply of possible future scenarios coming from sources ranging from the terrorists themselves to red-team brainstorming efforts to explore ways adversaries might attack in the future. This paper explores an approach to assessing emerging and/or novel threats and deciding whether—or how much—they should concern security planners by asking two questions: (1) Are some of the novel threats “niche threats” that should be addressed within existing security efforts? (2) Which of the remaining threats are attackers most likely to execute successfully and should therefore be of greater concern for security planners? If threats can reasonably be considered niche threats, they can be prudently addressed in the context of existing security activities. If threats are unusual enough, suggest significant new vulnerabilities, or their probability or consequences means they cannot be considered lesser included cases within other threats, prioritizing them based on their ease of execution provides a guide for which threats merit the greatest concern and most security attention. This preserves the opportunity to learn from new threats yet prevents security planners from being pulled in many directions simultaneously by attempting to respond to every threat at once.

Full paper available here.

Posted on June 1, 2009 at 3:29 PM

Steganography Using TCP Retransmission

Research:

Hiding Information in Retransmissions

Wojciech Mazurczyk, Milosz Smolarczyk, Krzysztof Szczypiorski

The paper presents a new steganographic method called RSTEG (Retransmission Steganography), which is intended for a broad class of protocols that utilise retransmission mechanisms. The main innovation of RSTEG is to not acknowledge a successfully received packet in order to intentionally invoke retransmission. The retransmitted packet carries a steganogram instead of user data in the payload field. RSTEG is presented in the broad context of network steganography, and the utilisation of RSTEG for TCP (Transmission Control Protocol) retransmission mechanisms is described in detail. Simulation results are also presented, with the main aim of measuring and comparing the steganographic bandwidth of the proposed method for different TCP retransmission mechanisms, as well as determining the influence of RSTEG on the network retransmission level.
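The mechanism is simple to sketch. Here is a toy simulation of the covert channel—not real TCP and not the authors' code, just the core idea that a colluding receiver withholds ACKs so that the "retransmissions" can carry hidden data:

```python
def rsteg_exchange(segments, steg_payloads, drop_every=3):
    """Toy RSTEG simulation. The covert receiver withholds every
    `drop_every`-th ACK; the covert sender answers the resulting
    retransmission request with steganogram bytes instead of the
    original payload. To an outside observer, the traffic just looks
    like a connection with a slightly elevated retransmission rate."""
    covert = []
    steg = iter(steg_payloads)
    for i, payload in enumerate(segments, 1):
        if i % drop_every == 0:  # receiver pretends the segment was lost
            hidden = next(steg, None)
            if hidden is not None:
                covert.append(hidden)  # "retransmitted" packet carries it
    return covert

carrier = [b"data%d" % i for i in range(9)]
print(rsteg_exchange(carrier, [b"s3", b"cr", b"et"]))
# [b's3', b'cr', b'et']
```

The `drop_every` knob is the bandwidth/stealth trade-off the paper measures: withhold more ACKs and you move more steganogram data, but the connection's retransmission rate drifts further from normal.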

I don’t think these sorts of things have any large-scale applications, but they are clever.

Posted on May 28, 2009 at 6:40 AM

Secret Questions

In 2004, I wrote about the prevalence of secret questions as backup passwords. The problem is that the answers to these “secret questions” are often much easier to guess than random passwords. Mother’s maiden name isn’t very secret. Name of first pet, name of favorite teacher: there are some common names. Favorite color: I could probably guess that in no more than five attempts.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

Here’s some actual research on the issue:

It’s no secret: Measuring the security and reliability of authentication via ‘secret’ questions

Abstract:

All four of the most popular webmail providers—AOL, Google, Microsoft, and Yahoo!—rely on personal questions as the secondary authentication secrets used to reset account passwords. The security of these questions has received limited formal scrutiny, almost all of which predates webmail. We ran a user study to measure the reliability and security of the questions used by all four webmail providers. We asked participants to answer these questions and then asked their acquaintances to guess their answers. Acquaintances with whom participants reported being unwilling to share their webmail passwords were able to guess 17% of their answers. Participants forgot 20% of their own answers within six months. What’s more, 13% of answers could be guessed within five attempts by guessing the most popular answers of other participants, though this weakness is partially attributable to the geographic homogeneity of our participant pool.
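The statistical-guessing attack behind that 13% figure is easy to demonstrate. A sketch with made-up data (the answer distribution below is invented for illustration, not taken from the study):

```python
from collections import Counter

def top_k_guess_rate(answers, k=5):
    """Fraction of users whose 'secret' answer falls among the k most
    popular answers -- the success rate of an attacker who simply
    guesses the population's favorites in order."""
    counts = Counter(answers)
    top = sum(c for _, c in counts.most_common(k))
    return top / len(answers)

# Invented toy data for "favorite color": a handful of answers
# dominate, which is exactly what makes the question weak.
colors = (["blue"] * 30 + ["red"] * 20 + ["green"] * 15 + ["black"] * 10 +
          ["purple"] * 8 + ["teal"] * 7 + ["orange"] * 6 + ["maroon"] * 4)
print(round(top_k_guess_rate(colors, k=5), 2))  # 0.83
```

The skew matters more than the number of distinct answers: a question with a hundred possible answers is still weak if five of them cover most of the population.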

Posted on May 25, 2009 at 9:56 AM

On the Anonymity of Home/Work Location Pairs

Interesting:

Philippe Golle and Kurt Partridge of PARC have a cute paper on the anonymity of geo-location data. They analyze data from the U.S. Census and show that for the average person, knowing their approximate home and work locations—to a block level—identifies them uniquely.

Even if we look at the much coarser granularity of a census tract—tracts correspond roughly to ZIP codes; there are on average 1,500 people per census tract—for the average person, there are only around 20 other people who share the same home and work location. There’s more: 5% of people are uniquely identified by their home and work locations even if it is known only at the census tract level. One reason for this is that people who live and work in very different areas (say, different counties) are much more easily identifiable, as one might expect.

“On the Anonymity of Home/Work Location Pairs,” by Philippe Golle and Kurt Partridge:

Abstract:

Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual’s home and workplace can both be deduced from a location trace, then the median size of the individual’s anonymity set in the U.S. working population is 1, 21, and 34,980, for locations known at the granularity of a census block, census tract, and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual’s home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.
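The anonymity-set calculation itself is trivial, which is part of what makes the result troubling. A sketch on a toy population (the data is invented for illustration):

```python
from collections import Counter

def anonymity_sets(pairs):
    """Each person's anonymity set size = the number of people who
    share the same (home, work) pair, at whatever granularity the
    pair is expressed in (block, tract, county, ...)."""
    counts = Counter(pairs)
    return [counts[p] for p in pairs]

# Toy population: home and work locations at "tract" granularity.
population = [("t1", "t9"), ("t1", "t9"), ("t2", "t9"),
              ("t3", "t4"), ("t3", "t4"), ("t3", "t4"), ("t5", "t6")]
sizes = sorted(anonymity_sets(population))
print(sizes)                      # [1, 1, 2, 2, 3, 3, 3]
print(sizes[len(sizes) // 2])     # median: 2
```

Coarsening the granularity merges pairs and grows the sets, which is why the paper's median jumps from 1 at the block level to 34,980 at the county level.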

This is all very troubling, given the number of location-based services springing up and the number of databases that are collecting location data.

Posted on May 21, 2009 at 6:15 AM

Researchers Hijack a Botnet

A bunch of researchers at the University of California Santa Barbara took control of a botnet for ten days, and learned a lot about how botnets work:

The botnet in question is controlled by Torpig (also known as Sinowal), a malware program that aims to gather personal and financial information from Windows users. The researchers gained control of the Torpig botnet by exploiting a weakness in the way the bots try to locate their command-and-control servers: the bots generate a list of domains that they plan to contact next, but not all of those domains were registered yet. The researchers registered the domains that the bots would resolve and set up servers where the bots could connect to receive their commands. This method lasted for a full ten days before the botnet’s controllers updated the system and cut the observation short.
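The weakness is easy to sketch. Here is a toy domain-generation algorithm—the real Torpig DGA was different—showing why anyone who knows the algorithm can compute, and pre-register, future rendezvous domains before the botmaster does:

```python
import hashlib
from datetime import date, timedelta

def candidate_domains(day, n=3):
    """Toy DGA: bots derive the day's rendezvous domains
    deterministically from the date, so defenders who reverse the
    algorithm can enumerate future domains ahead of time."""
    seed = day.isoformat().encode()
    out = []
    for i in range(n):
        h = hashlib.md5(seed + bytes([i])).hexdigest()[:10]
        out.append(h + ".com")
    return out

# A defender (or researcher) enumerates next week's domains in advance
# and registers whichever ones the botmaster hasn't claimed yet.
today = date(2009, 5, 1)
upcoming = [d for k in range(7)
            for d in candidate_domains(today + timedelta(days=k))]
print(len(upcoming))  # 21
```

This "sinkholing" race is symmetric: the researchers held the botnet only until the controllers pushed an update that changed the algorithm, exactly as the quoted article describes.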

During that time, however, UCSB’s researchers were able to gather massive amounts of information on how the botnet functions as well as what kind of information it’s gathering. Almost 300,000 unique login credentials were gathered over the time the researchers controlled the botnet, including 56,000 passwords gathered in a single hour using “simple replacement rules” and a password cracker. They found that 28 percent of victims reused their credentials for accessing 368,501 websites, making it an easy task for scammers to gather further personal information. The researchers noted that they were able to read through hundreds of e-mail, forum, and chat messages gathered by Torpig that “often contain detailed (and private) descriptions of the lives of their authors.”

Here’s the paper:

Abstract:

Botnets, networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security threats on the Internet. A particularly sophisticated and insidious type of bot is Torpig, a malware program that is designed to harvest sensitive information (such as bank account and credit card data) from its victims. In this paper, we report on our efforts to take control of the Torpig botnet for ten days. Over this period, we observed more than 180 thousand infections and recorded more than 70 GB of data that the bots collected. While botnets have been “hijacked” before, the Torpig botnet exhibits certain properties that make the analysis of the data particularly interesting. First, it is possible (with reasonable accuracy) to identify unique bot infections and relate that number to the more than 1.2 million IP addresses that contacted our command and control server. This shows that botnet estimates that are based on IP addresses are likely to report inflated numbers. Second, the Torpig botnet is large, targets a variety of applications, and gathers a rich and diverse set of information from the infected victims. This opens the possibility to perform interesting data analysis that goes well beyond simply counting the number of stolen credit cards.
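The point about inflated IP-based estimates deserves a tiny illustration. With invented log data (real bot IDs would come from something like Torpig's per-infection identifiers):

```python
# Toy C&C log of (bot_id, source_ip) sightings. DHCP churn and NAT
# mean one infected machine shows up from many addresses over time.
log = [("botA", "1.1.1.1"), ("botA", "1.1.1.2"), ("botA", "1.1.1.3"),
       ("botB", "2.2.2.2"), ("botB", "2.2.2.9"),
       ("botC", "3.3.3.3")]

unique_ips = len({ip for _, ip in log})
unique_bots = len({bot for bot, _ in log})
print(unique_ips, unique_bots)  # 6 3
```

Counting addresses doubles the apparent size of this toy botnet; the paper's real numbers show the same effect at scale (1.2 million IPs versus roughly 180,000 infections).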

Another article.

Posted on May 11, 2009 at 6:56 AM
