Blog: August 2011 Archives

Job Opening: TSA Public Affairs Specialist

This job can’t be fun:

This Public Affairs Specialist position is located in the Office of Strategic Communications and Public Affairs (SCPA), Transportation Security Administration (TSA), Department of Homeland Security (DHS). If selected for this position, you will serve as the Press Secretary and senior representative/liaison working with Federal and stakeholder partners. You will utilize your expert knowledge and mastery of advanced public affairs principles, concepts, regulations, practices, analytical methods, and techniques (internet, print, TV, and radio) on a variety of transportation security and TSA related issues.

Typical assignments include:

  • Conducting on-camera and/or on the record interviews about sensitive, complex and potentially crisis situations, sometimes with no advance notice.
  • Serving as a senior representative and liaison from the Office of Strategic Communications and Public Affairs working with Federal and stakeholder partners.
  • Providing guidance on information to be released to the public, and approaches necessary to gain public understanding and acceptance of TSA policies and programs.
  • Planning and conducting events to demonstrate agency initiatives to the news media.
  • Responding to breaking news situations with an in-depth understanding of agency operations; willing to be available beyond normal business hours to respond to quickly evolving transportation security incidents and issues.

The posting expires today, so you don’t have much time. If you apply for and get the job, please continue to post here under a pseudonym. And if there’s a file on how to deal with me, I’d be really interested in seeing a copy.

Posted on August 31, 2011 at 12:30 PM • 35 Comments

The Effects of Social Media on Undercover Policing

Social networking sites make it very difficult, if not impossible, to have undercover police officers:

“The results found that 90 per cent of female officers were using social media compared with 81 per cent of males.”

The most popular site was Facebook, followed by Twitter. Forty-seven per cent of those surveyed used social networking sites daily while another 24 per cent used them weekly. All respondents aged 26 years or younger had uploaded photos of themselves onto the internet.

“The thinking we had with this result means that the 16-year-olds of today who might become officers in the future have already been exposed.

“It’s too late [for them to take it down] because once it’s uploaded, it’s there forever.”

There’s another side to this issue as well. Social networking sites can help undercover officers with their backstory, by building a fictional history. Some of this might require help from the company that owns the social networking site, but that seems like a reasonable request by the police.

I am in the middle of reading Diego Gambetta’s book Codes of the Underworld: How Criminals Communicate. He talks about the lengthy vetting process organized crime uses to vet new members—often relying on people who knew the person since birth, or people who served time with him in jail—to protect against police informants. I agree that social networking sites can make undercover work even harder, but it’s gotten pretty hard even without that.

Posted on August 31, 2011 at 6:21 AM • 40 Comments

Details of the RSA Hack

We finally have some, even though the company isn’t talking:

So just how well crafted was the e-mail that got RSA hacked? Not very, judging by what F-Secure found.

The attackers spoofed the e-mail to make it appear to come from a “web master” at Beyond.com, a job-seeking and recruiting site. Inside the e-mail, there was just one line of text: “I forward this file to you for review. Please open and view it.” This was apparently enough to get the intruders the keys to RSA’s kingdom.

F-Secure produced a brief video showing what happened if the recipient clicked on the attachment. An Excel spreadsheet opened, which was completely blank except for an “X” that appeared in the first box of the spreadsheet. The “X” was the only visible sign that there was an embedded Flash exploit in the spreadsheet. When the spreadsheet opened, Excel triggered the Flash exploit to activate, which then dropped the backdoor—in this case a backdoor known as Poison Ivy—onto the system.

Poison Ivy would then reach out to a command-and-control server that the attackers controlled at good.mincesur.com, a domain that F-Secure says has been used in other espionage attacks, giving the attackers remote access to the infected computer at EMC. From there, they were able to reach the systems and data they were ultimately after.

F-Secure notes that neither the phishing e-mail nor the backdoor it dropped onto systems were advanced, although the zero-day Flash exploit it used to drop the backdoor was advanced.
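The delivery mechanism—a Flash object embedded in an Excel file—is something defenders can scan for. Below is a minimal heuristic in Python that looks for SWF header signatures inside a file. It is an illustration of the general idea under my own assumptions, not the tooling F-Secure or RSA used; real scanners also parse the OLE/OOXML container structure rather than just grepping for magic bytes:

```python
# Heuristic scan for an embedded Flash (SWF) object inside an Office file.
# Simplified illustration only: real detectors parse the container format.

SWF_MAGICS = (b"FWS", b"CWS")  # uncompressed and zlib-compressed SWF headers

def find_embedded_swf(blob: bytes) -> list[int]:
    """Return byte offsets where a plausible SWF header signature appears."""
    hits = []
    for magic in SWF_MAGICS:
        start = 0
        while (pos := blob.find(magic, start)) != -1:
            # SWF layout: 3-byte magic, 1-byte version, 4-byte file length.
            # Require a plausible version byte to cut down false positives.
            if pos + 8 <= len(blob) and 1 <= blob[pos + 3] <= 50:
                hits.append(pos)
            start = pos + 1
    return sorted(hits)
```

Running `find_embedded_swf()` over a suspect spreadsheet would flag the offset of any embedded Flash object for closer inspection.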

Posted on August 30, 2011 at 6:25 AM • 42 Comments

Screenshots of Chinese Hacking Tool

It’s hard to know how serious this really is:

The screenshots appear as B-roll footage in the documentary for six seconds, between 11:04 and 11:10, showing custom-built Chinese software apparently launching a cyber-attack against the main website of the Falun Gong spiritual practice, by using a compromised IP address belonging to a United States university. As of Aug. 22 at 1:30pm EDT, in addition to YouTube, the whole documentary is available on the CCTV website.

The screenshots show the name of the software and the Chinese university that built it, the Electrical Engineering University of China’s People’s Liberation Army: direct evidence that the PLA is involved in coding cyber-attack software directed against a Chinese dissident group.

The software window says “Choose Attack Target.” The computer operator selects an IP address from a list (it happens to be 138.26.72.17) and then selects a target. Encoded in the software are the words “Falun Gong website list,” showing that attacking Falun Gong websites was built into the software.

A drop-down list of dozens of Falun Gong websites appears. The computer operator chooses Minghui.org, the main website of the Falun Gong spiritual practice.

The IP address 138.26.72.17 belongs to the University of Alabama in Birmingham (UAB), according to an online trace.

The shots then show a big “Attack” button on the bottom left being pushed, before the camera cuts away.

Posted on August 29, 2011 at 6:20 AM • 22 Comments

Friday Squid Blogging: Squid Fishing in Ulleungdo, Korea

The industry is in decline:

A generation ago, most of the island’s 10,000 residents worked in the squid industry, either as sellers like Kim or as farmer-fishermen who toiled in the fields each winter and went to sea during summer.

Ulleungdo developed a reputation for large, tasty squid that were once exported to the mainland and Japan. The volcanic island, which can be circumnavigated in three hours by car, is also known for its seaside cliffs and picturesque views, which have begun to attract more tourists.

The number of mainlanders who visit here has risen from 160,000 a decade ago to 250,000 last year. Meanwhile, the total squid catch has decreased by more than a third. Nowadays only 20% of islanders work in the squid industry, with many having shifted to the tourism trade, said Park Su-dong, a manager in the island’s marine and fisheries office.

As before, use the comments to this post to write about and discuss security stories that don’t have their own post.

Posted on August 26, 2011 at 3:40 PM • 42 Comments

The Problem with Using the Cold War Metaphor to Describe Cyberspace Risks

Nice essay on the problems with talking about cyberspace risks using “Cold War” metaphors:

The problem with threat inflation and misapplied history is that there are extremely serious risks, but also manageable responses, from which they steer us away. Massive, simultaneous, all-encompassing cyberattacks on the power grid, the banking system, transportation networks, etc. along the lines of a Cold War first strike or what Defense Secretary Leon Panetta has called the “next Pearl Harbor” (another overused and ill-suited analogy) would certainly have major consequences, but they also remain completely theoretical, and the nation would recover. In the meantime, a real national security danger is being ignored: the combination of online crime and espionage that’s gradually undermining our finances, our know-how and our entrepreneurial edge. While would-be cyber Cold Warriors stare at the sky and wait for it to fall, they’re getting their wallets stolen and their offices robbed.

[….]

If the most apt parallel is not the Cold War, then what are some alternatives we could turn to for guidance, especially when it comes to the problem of building up international cooperation in this space? Cybersecurity’s parallels, and some of its solutions, lie more in the 1840s and ’50s than they do in the 1940s and ’50s.

Much like the Internet is becoming today, in centuries past the sea was a primary domain of commerce and communication upon which no one single actor could claim complete control. What is notable is that the actors that related to maritime security and war at sea back then parallel many of the situations on our networks today. They scaled from individual pirates to state fleets with a global presence like the British Navy. In between were state-sanctioned pirates, or privateers. Much like today’s “patriotic hackers” (or NSA contractors), these forces were used both to augment traditional military forces and to add challenges of attribution to those trying to defend far-flung maritime assets. In the Golden Age of privateering, an attacker could quickly shift identity and locale, often taking advantage of third-party harbors with loose local laws. The actions that attacker might take ranged from trade blockades (akin to a denial of service) to theft and hijacking to actual assaults on military assets or underlying economic infrastructure to great effect.

Ross Anderson is the first person I heard comparing today’s cybercrime threats to global piracy in the 19th century.

Posted on August 26, 2011 at 1:58 PM • 16 Comments

Terrorism in the U.S. Since 9/11

John Mueller and his students analyze the 33 cases of attempted [EDITED TO ADD: Islamic extremist] terrorism in the U.S. since 9/11. So few of them are actually real, and so many of them were created or otherwise facilitated by law enforcement.

The death toll of all these is fourteen: thirteen at Ft. Hood and one in Little Rock. I think it’s fair to add to this the 2002 incident at Los Angeles Airport where a lone gunman killed two people at the El Al ticket counter, so that’s sixteen deaths in the U.S. to terrorism in the past ten years.

Given the credible estimate that we’ve spent $1 trillion on anti-terrorism security (this does not include our many foreign wars), that’s $62.5 billion per life [EDITED: lost]. Is there any other risk that we are even remotely as crazy about?

Note that everyone who died was shot with a gun. No Islamic extremist has been able to successfully detonate a bomb in the U.S. in the past ten years, not even a Molotov cocktail. (In the U.K. there has been only one successful terrorist bombing in the last ten years: the 2005 London Underground attacks.) And almost all of the 33 incidents (34 if you add LAX) have been lone actors, with no ties to al Qaeda.

I remember the government fear mongering after 9/11. How there were hundreds of sleeper cells in the U.S. How terrorism would become the new normal unless we implemented all sorts of Draconian security measures. You’d think that—if this were even remotely true—we would have seen more attempted terrorism in the U.S. over the past decade.

And I think arguments like “the government has secretly stopped lots of plots” don’t hold any water. Just look at the list, and remember how the Bush administration would hype even the most tenuous terrorist incident. Stoking fear was the policy. If the government stopped any other plots, they would have made as much of a big deal of them as they did of these 33 incidents.

EDITED TO ADD (8/26): According to the State Department’s recent report, fifteen American private citizens died in terrorist attacks in 2010: thirteen in Afghanistan and one each in Iraq and Uganda. Worldwide, 13,186 people died from terrorism in 2010. These numbers pale even in comparison to things that aren’t very risky.

Here’s data on incidents from 1970 to 2004. And here’s Nate Silver with data showing that the 1970s and 1980s were more dangerous with respect to airplane terrorism than the 2000s.

Also, look at Table 3 on page 16. The risk of dying in the U.S. from terrorism is substantially less than the risk of drowning in your bathtub, the risk of a home appliance killing you, or the risk of dying in an accident caused by a deer. Remember that more people die every month in automobile crashes than died in 9/11.

EDITED TO ADD (8/26): Looking over the incidents again, some of them would make pretty good movie plots. The point of my “movie-plot threat” phrase is not that terrorist attacks are never like that, but that concentrating defensive resources against them is pointless because 1) there are too many of them and 2) it is too easy for the terrorists to change tactics or targets.

EDITED TO ADD (9/1): As was pointed out here, I accidentally typed “lives saved” when I meant to type “lives lost.” I corrected that, above. We generally have a regulatory safety goal of $1M–$10M per life saved. In order for the $100B we have spent per year on counterterrorism to be worth it, it would need to have saved 10,000 lives per year.
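The arithmetic in the post is easy to verify directly:

```python
# Back-of-envelope check of the cost-per-life figures in the post.

total_spent = 1e12          # ~$1 trillion on anti-terrorism security over a decade
deaths = 16                 # U.S. terrorism deaths cited for the period
per_life = total_spent / deaths
assert per_life == 62.5e9   # $62.5 billion per life lost

# Regulatory benchmark: roughly $1M-$10M spent per life *saved*.
annual_spend = 100e9        # ~$100B per year
benchmark = 10e6            # $10M per life saved (upper end of the range)
lives_needed = annual_spend / benchmark
assert lives_needed == 10_000  # lives that would need saving per year to break even
```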

Posted on August 26, 2011 at 6:26 AM • 57 Comments

Funniest Joke at the Edinburgh Fringe Festival

Nick Helm won an award for the funniest joke at the Edinburgh Fringe Festival:

Nick Helm: “I needed a password with eight characters so I picked Snow White and the Seven Dwarves.”

Note that two other jokes were about security:

Tim Vine: “Crime in multi-storey car parks. That is wrong on so many different levels.”

Andrew Lawrence: “I admire these phone hackers. I think they have a lot of patience. I can’t even be bothered to check my OWN voicemails.”

Posted on August 25, 2011 at 4:08 PM • 29 Comments

Moving 211 Tons of Gold

The security problems associated with moving $12B in gold from London to Venezuela.

It seems to me that Chávez has four main choices here. He can go the FT’s route, and just fly the gold to Caracas while insuring each shipment for its market value. He can go the Spanish route, and try to transport the gold himself, perhaps making use of the Venezuelan navy. He could attempt the mother of all repo transactions. Or he could get clever.

[…]

Which leaves one final alternative. Gold is fungible, and people are actually willing to pay a premium to buy gold which is sitting in the Bank of England’s ultra-secure vaults. So why bother transporting that gold at all? Venezuela could enter into an intercontinental repo transaction, where it sells its gold in the Bank of England to some counterparty, and then promises to buy it all back at a modest discount, on condition that it’s physically delivered to the Venezuelan central bank in Caracas. It would then be up to the counterparty to work out how to get 211 tons of gold to Caracas by a certain date. That gold could be sourced anywhere in the world, and transported in any conceivable manner—being much less predictable and transparent, those shipments would also be much harder to hijack.

[…]

But here’s one last idea: why doesn’t Chávez crowdsource the problem? He could simply open a gold window at the Banco Central de Venezuela, where anybody at all could deliver standard gold bars. In return, the central bank would transfer to that person an equal number of gold bars in the custody of the Bank of England, plus a modest bounty of say 2%—that’s over $15,000 per 400-ounce bar, at current rates.

It would take a little while, but eventually the gold would start trickling in: if you’re willing to pay a constant premium of 2% over the market price for a good, you can be sure that the good in question will ultimately find its way to your door.
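The bounty arithmetic checks out, assuming a spot price of around $1,900 per ounce (roughly the August 2011 peak; the exact price here is my assumption, not a figure from the post):

```python
# Rough check of the 2% bounty figure on a standard 400-ounce bar.

price_per_oz = 1900.0        # USD per troy ounce, assumed (Aug 2011 spot was near here)
bar_ounces = 400             # a standard "good delivery" bar
bounty_rate = 0.02

bar_value = price_per_oz * bar_ounces
bounty = bar_value * bounty_rate
print(f"bar value ~ ${bar_value:,.0f}, 2% bounty ~ ${bounty:,.0f}")
```

At that price a bar is worth about $760,000, so the 2% bounty is a bit over $15,000, consistent with the quoted figure.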

Any other ideas?

Posted on August 25, 2011 at 12:43 PM • 87 Comments

Stealing ATM PINs with a Thermal Camera

It’s easy:

Researchers from UCSD pointed thermal cameras towards plastic ATM PIN pads and metal ATM PIN pads to test how effective they were at stealing PIN numbers. The thermal cams didn’t work against metal pads but on plastic pads the success rate of detecting all the digits was 80% after 10 seconds and 60% after 45 seconds. If you think about your average ATM trip, that’s a pretty wide window and an embarrassingly high success rate for thieves to take advantage of.
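The core of the attack is that more recently pressed keys are warmer. Here is a toy sketch of the reconstruction step, assuming keys cool monotonically; real thermal imagery is far noisier, which is why the paper reports success rates well below 100%:

```python
# Toy reconstruction of a PIN's press order from per-key thermal readings.
# Assumption: keys cool monotonically, so a more recently pressed key reads
# hotter than one pressed earlier. Repeated digits and noise break this.

def recover_press_order(readings: dict[str, float]) -> str:
    """Given residual temperatures (in any consistent unit) for the touched
    keys, return the keys ordered from earliest press (coolest) to latest
    press (hottest)."""
    return "".join(sorted(readings, key=readings.get))

# Hypothetical readings a few seconds after someone entered "1790":
readings = {"1": 30.1, "7": 30.4, "9": 30.8, "0": 31.3}
assert recover_press_order(readings) == "1790"
```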

Paper here. More articles.

Posted on August 24, 2011 at 7:13 AM • 45 Comments

Smartphone Keystroke Logging Using the Motion Sensor

Clever:

“When the user types on the soft keyboard on her smartphone (especially when she holds her phone by hand rather than placing it on a fixed surface), the phone vibrates. We discover that keystroke vibration on touch screens are highly correlated to the keys being typed.”

Applications like TouchLogger could be significant because they bypass protections built into both Android and Apple’s competing iOS that prevent a program from reading keystrokes unless it’s active and receives focus from the screen. It was designed to work on an HTC Evo 4G smartphone. It had an accuracy rate of more than 70 percent of the input typed into the number-only soft keyboard of the device. The app worked by using the phone’s accelerometer to gauge the motion of the device each time a soft key was pressed.
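The underlying idea is that each soft key, when tapped, produces a characteristic vibration signature. A nearest-centroid classifier over accelerometer features is enough to illustrate it; note this classifier is my stand-in assumption, not TouchLogger's actual model:

```python
# Sketch of accelerometer-based keystroke inference: learn a mean vibration
# signature per key, then label new taps by nearest centroid.

import math

def train_centroids(samples):
    """samples: {key: [(x, y, z), ...]} -> {key: mean feature vector}"""
    centroids = {}
    for key, vecs in samples.items():
        n = len(vecs)
        centroids[key] = tuple(sum(v[i] for v in vecs) / n for i in range(3))
    return centroids

def classify(centroids, vec):
    """Return the key whose centroid is nearest to the observed vibration."""
    return min(centroids, key=lambda k: math.dist(centroids[k], vec))

# Hypothetical training taps for two soft keys:
training = {"1": [(0.9, 0.1, 0.2), (1.1, 0.2, 0.1)],
            "2": [(0.1, 1.0, 0.2), (0.2, 0.9, 0.3)]}
model = train_centroids(training)
assert classify(model, (1.0, 0.15, 0.1)) == "1"
assert classify(model, (0.1, 1.1, 0.2)) == "2"
```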

Paper here. More articles.

Posted on August 23, 2011 at 2:09 PM • 27 Comments

Cheating at Casinos with Hidden Cameras

Sleeve cameras aren’t new, but they’re now smaller than ever and the cheaters are getting more sophisticated:

In January, at the newly opened $4-billion Cosmopolitan casino in Las Vegas, a gang called the Cutters cheated at baccarat. Before play began, the dealer offered one member of the group a stack of eight decks of cards for a pre-game cut. The player probably rubbed the stack for good luck, at the same instant riffling some of the corners of the cards underneath with his index finger. A small camera, hidden under his forearm, recorded the order.

After a few hands, the cutter left the floor and entered a bathroom stall, where he most likely passed the camera to a confederate in an adjoining stall. The runner carried the camera to a gaming analyst in a nearby hotel room, where the analyst transferred the video to a computer, watching it in slow motion to determine the order of the cards. Not quite half an hour had passed since the cut. Baccarat play averages less than six cards a minute, so there were still at least 160 cards left to play through. Back at the table, other members of the gang were delaying the action, glancing at their cellphones and waiting for the analyst to send them the card order.

Posted on August 23, 2011 at 5:44 AM • 28 Comments

Pseudonymity

Long essay on the value of pseudonymity. From the conclusions:

Here lies the huge irony in this discussion. Persistent pseudonyms aren’t ways to hide who you are. They provide a way to be who you are. You can finally talk about what you really believe; your real politics, your real problems, your real sexuality, your real family, your real self. Much of the support for “real names” comes from people who don’t want to hear about controversy, but controversy is only a small part of the need for pseudonyms. For most of us, it’s simply the desire to be able to talk openly about the things that matter to every one of us who uses the Internet. The desire to be judged—not by our birth, not by our sex, and not by who we work for—but by what we say.

[…]

I leave you with this question. What if I had posted this under my pseudonym? Why should that have made a difference? I would have written the same words, but ironically, I could have added some more personal and perhaps persuasive arguments which I dare not make under this account. Because I was forced to post this under my real name, I had to weaken my arguments; I had to share less of myself. Have you ever met “Kee Hinckley”? Have you met me under my other name? Does it matter? There is nothing real on the Internet; all you know about me is my words. You can look me up on Google, and still all you will know is my words. One real person wrote this post. It could have been submitted under either name. But one of them is not allowed to. Does that really make sense?

Behind every pseudonym is a real person. Deny the pseudonym and you deny the person.

This is, of course, a response to the Google+ names policy.

Posted on August 22, 2011 at 6:01 AM • 66 Comments

Looking Backward at Terrorism

Nice essay on the danger of too much security:

The great lie of the war on terror is not that we can sacrifice a little liberty for greater security. It is that fear can be eliminated, and that all we need to do to improve our society is defeat terrorism, rather than look at the other causes of our social, economic, and political anxiety. That is the great seduction of fear: It allows us to do nothing. It is easier to find new threats than new possibilities.

A decade after 9/11, we look backward and find ourselves in all-too-familiar surroundings. We have, in fact, accomplished very little. We have yet to do any of the serious thinking that might carry us beyond the banal, stifling quest for security. That kind of thinking would require us to have a different relationship to fear: a willingness to accept it, even cause it.

Posted on August 19, 2011 at 1:57 PM • 12 Comments

The Dilemma of Counterterrorism Policy

Any institution delegated with the task of preventing terrorism has a dilemma: they can either do their best to prevent terrorism, or they can do their best to make sure they’re not blamed for any terrorist attacks. I’ve talked about this dilemma for a while now, and it’s nice to see some research results that demonstrate its effects.

A. Peter McGraw, Alexander Todorov, and Howard Kunreuther, “A Policy Maker’s Dilemma: Preventing Terrorism or Preventing Blame,” Organizational Behavior and Human Decision Processes, 115 (May 2011): 25-34.

Abstract: Although anti-terrorism policy should be based on a normative treatment of risk that incorporates likelihoods of attack, policy makers’ anti-terror decisions may be influenced by the blame they expect from failing to prevent attacks. We show that people’s anti-terror budget priorities before a perceived attack and blame judgments after a perceived attack are associated with the attack’s severity and how upsetting it is but largely independent of its likelihood. We also show that anti-terror budget priorities are influenced by directly highlighting the likelihood of the attack, but because of outcome biases, highlighting the attack’s prior likelihood has no influence on judgments of blame, severity, or emotion after an attack is perceived to have occurred. Thus, because of accountability effects, we propose policy makers face a dilemma: prevent terrorism using normative methods that incorporate the likelihood of attack or prevent blame by preventing terrorist attacks the public find most blameworthy.

Think about this with respect to the TSA. Are they doing their best to mitigate terrorism, or are they doing their best to ensure that if there’s a terrorist attack the public doesn’t blame the TSA for missing it?

Posted on August 19, 2011 at 8:55 AM • 22 Comments

Steven Pinker on Terrorism

It’s almost time for a deluge of “Ten Years After 9/11” essays. Here’s Steven Pinker:

The discrepancy between the panic generated by terrorism and the deaths generated by terrorism is no accident. Panic is the whole point of terrorism, as the root of the word makes clear: “Terror” refers to a psychological state, not an enemy or an event. The effects of terrorism depend completely on the psychology of the audience.

[…]

Cognitive psychologists such as Amos Tversky, Daniel Kahneman, Gerd Gigerenzer, and Paul Slovic have shown that the perceived danger of a risk depends on two factors: fathomability and dread. People are terrified of risks that are novel, undetectable, delayed in their effects, and poorly understood. And they are terrified about worst-case scenarios, the ones that are uncontrollable, catastrophic, involuntary, and inequitable (that is, the people exposed to the risk are not the ones who benefit from it).

These psychologists suggest that cognitive illusions are a legacy of ancient brain circuitry that evolved to protect us against natural risks such as predators, poisons, storms, and especially enemies. Large-scale terrorist plots are novel, undetectable, catastrophic, and inequitable, and thus maximize both unfathomability and dread. They give the terrorists a large psychological payoff for a small investment in damage.

[…]

Audrey Cronin nicely captures the conflicting moral psychology that defines the arc of terrorist movements: “Violence has an international language, but so does decency.”

Posted on August 18, 2011 at 1:32 PM • 21 Comments

New Attack on AES

“Biclique Cryptanalysis of the Full AES,” by Andrey Bogdanov, Dmitry Khovratovich, and Christian Rechberger.

Abstract. Since Rijndael was chosen as the Advanced Encryption Standard, improving upon 7-round attacks on the 128-bit key variant or upon 8-round attacks on the 192/256-bit key variants has been one of the most difficult challenges in the cryptanalysis of block ciphers for more than a decade. In this paper we present a novel technique of block cipher cryptanalysis with bicliques, which leads to the following results:

  • The first key recovery attack on the full AES-128 with computational complexity 2^126.1.
  • The first key recovery attack on the full AES-192 with computational complexity 2^189.7.
  • The first key recovery attack on the full AES-256 with computational complexity 2^254.4.
  • Attacks with lower complexity on the reduced-round versions of AES not considered before, including an attack on 8-round AES-128 with complexity 2^124.9.
  • Preimage attacks on compression functions based on the full AES versions.

In contrast to most shortcut attacks on AES variants, we do not need to assume related-keys. Most of our attacks only need a very small part of the codebook and have small memory requirements, and are practically verified to a large extent. As our attacks are of high computational complexity, they do not threaten the practical use of AES in any way.
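To see why the authors say these attacks don't threaten practical use, it helps to compute how small the speedup over brute force actually is, using the complexities from the abstract:

```python
# Speedup of the biclique attacks over exhaustive key search.
# Complexity figures are taken from the paper's abstract.

brute_force = {"AES-128": 128, "AES-192": 192, "AES-256": 256}
biclique = {"AES-128": 126.1, "AES-192": 189.7, "AES-256": 254.4}

for cipher, bits in brute_force.items():
    speedup = 2 ** (bits - biclique[cipher])
    print(f"{cipher}: 2^{bits} -> 2^{biclique[cipher]}, ~{speedup:.1f}x faster")
```

The speedups come out in the range of roughly three to five times faster than brute force, a rounding error against a 2^126 work factor.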

This is what I wrote about AES in 2009. I still agree with my advice:

Cryptography is all about safety margins. If you can break n rounds of a cipher, you design it with 2n or 3n rounds. What we’re learning is that the safety margin of AES is much less than previously believed. And while there is no reason to scrap AES in favor of another algorithm, NIST should increase the number of rounds of all three AES variants. At this point, I suggest AES-128 at 16 rounds, AES-192 at 20 rounds, and AES-256 at 28 rounds. Or maybe even more; we don’t want to be revising the standard again and again.

And for new applications I suggest that people don’t use AES-256. AES-128 provides more than enough security margin for the foreseeable future. But if you’re already using AES-256, there’s no reason to change.

The advice about AES-256 was because of a 2009 attack, not this result.

Again, I repeat the saying I’ve heard came from inside the NSA: “Attacks always get better; they never get worse.”

Posted on August 18, 2011 at 6:12 AM • 74 Comments

Search Redirection and the Illicit Online Prescription Drug Trade

Really interesting research.

Search-redirection attacks combine several well-worn tactics from black-hat SEO and web security. First, an attacker identifies high-visibility websites (e.g., at universities) that are vulnerable to code-injection attacks. The attacker injects code onto the server that intercepts all incoming HTTP requests to the compromised page and responds differently based on the type of request:

  • Requests from search-engine crawlers return a mix of the original content, along with links to websites promoted by the attacker and text that makes the website appealing to drug-related queries.
  • Requests from users arriving from search engines are checked for drug terms in the original search query. If a drug name is found in the search term, then the compromised server redirects the user to a pharmacy or another intermediary, which then redirects the user to a pharmacy.
  • All other requests, including typing the link directly into a browser, return the infected website’s original content.

The net effect is that web users are seamlessly delivered to illicit pharmacies via infected web servers, and the compromise is kept hidden from view of the affected host’s webmaster in nearly all circumstances.
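The three-way dispatch described above can be sketched in a few lines. This is an illustration of the logic for defenders, not actual attack code; the redirect URL and term lists are hypothetical placeholders:

```python
# Sketch of the request dispatch an injected search-redirection script performs:
# poison the crawler's view, redirect drug-related search traffic, and show
# everyone else (including the webmaster) the untouched page.

import re

PHARMACY_URL = "http://pharmacy.example/"          # hypothetical redirect target
DRUG_TERMS = re.compile(r"viagra|cialis|pharmacy", re.I)
CRAWLER_UA = re.compile(r"Googlebot|bingbot|Slurp", re.I)

def dispatch(user_agent: str, referer: str) -> str:
    """Decide how the compromised server answers a request."""
    if CRAWLER_UA.search(user_agent):
        return "original content + attacker links"  # poison the search index
    if "google." in referer and DRUG_TERMS.search(referer):
        return f"redirect to {PHARMACY_URL}"        # monetize the click
    return "original content"                       # hide from the webmaster

assert dispatch("Googlebot/2.1", "") == "original content + attacker links"
assert dispatch("Mozilla/5.0", "http://google.com/search?q=cheap+viagra").startswith("redirect")
assert dispatch("Mozilla/5.0", "") == "original content"
```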

Upon inspecting search results, we identified 7,000 websites that had been compromised in this manner between April 2010 and February 2011. One quarter of the top ten search results were observed to actively redirect to pharmacies, and another 15% of the top results were for sites that no longer redirected but had previously been compromised. We also found that legitimate health resources, including authorized pharmacies, were largely crowded out of the top results by search-redirection attacks and blog and forum spam promoting fake pharmacies.

And the paper.

Posted on August 16, 2011 at 10:47 AM • 26 Comments

New, Undeletable, Web Cookie

A couple of weeks ago Wired reported the discovery of a new, undeletable web cookie:

Researchers at U.C. Berkeley have discovered that some of the net’s most popular sites are using a tracking service that can’t be evaded—even when users block cookies, turn off storage in Flash, or use browsers’ “incognito” functions.

The Wired article was very short on specifics, so I waited until one of the researchers—Ashkan Soltani—wrote up more details. He finally did, in a quite technical essay:

What differentiates KISSmetrics apart from Hulu with regards to respawning is, in addition to Flash and HTML5 LocalStorage, KISSmetrics was exploiting the browser cache to store persistent identifiers via stored Javascript and ETags. ETags are tokens presented by a user’s browser to a remote webserver in order to determine whether a given resource (such as an image) has changed since the last time it was fetched. Rather than simply using it for version control, we found KISSmetrics returning ETag values that reliably matched the unique values in their ‘km_ai’ user cookies.
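The ETag-respawning trick can be illustrated with a toy server handler: the server mints a unique ETag per new visitor, and the browser then presents it back via If-None-Match on every revisit, even after cookies are cleared. This is a minimal sketch of the mechanism, not KISSmetrics’ actual implementation:

```python
# Toy illustration of ETag-based "respawning". The cache-validation token
# itself becomes a persistent user identifier.

import uuid

def serve_tracking_pixel(if_none_match=None):
    """Return (status, etag) for a request to a tracking resource.
    The ETag doubles as the visitor's persistent ID."""
    if if_none_match:                    # returning visitor: ID comes back to us
        return 304, if_none_match        # "Not Modified": browser keeps the ID cached
    return 200, uuid.uuid4().hex         # first visit: mint a fresh identifier

# First visit assigns an ID; later visits echo it back automatically.
status, user_id = serve_tracking_pixel(None)
assert status == 200
assert serve_tracking_pixel(user_id) == (304, user_id)
```

Because ordinary cache validation carries the token, clearing cookies (or Flash storage) does nothing; only clearing the browser cache breaks the link.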

Posted on August 15, 2011 at 4:48 AM • 75 Comments

Liars and Outliers Cover

My new book, Liars and Outliers, has a cover.

[Image: proposed cover]

Publication is still scheduled for the end of February—in time for the RSA Conference—assuming I finish the manuscript in time.

EDITED TO ADD (8/12): The cover was inspired by a design by Luke Fretwell. He sent me an unsolicited cover design, which I liked and sent to my publisher. They liked the general idea, but refined it into the cover you see. Luke has a blog post on the exchange, which includes a picture of his cover.

Posted on August 12, 2011 at 2:09 PM • 64 Comments

Rat that Applies Poison to its Fur

The African crested rat applies tree poison to its fur to make itself more deadly.

The researchers made their discovery after presenting a wild-caught crested rat with branches and roots of the Acokanthera tree, whose bark includes the toxin ouabain.

The animal gnawed and chewed the tree’s bark but avoided the nontoxic leaves and fruit. The rat then applied the pasty, deadly drool to spiky flank hairs. Microscopes later revealed that the hairs are actually hollow quills that rapidly absorb the ouabain-saliva mixture, offering an unpleasant surprise to predators that attempt to taste the rat.

Posted on August 12, 2011 at 11:13 AM12 Comments

Counterfeit Pilot IDs and Uniforms Will Now Be Sufficient to Bypass Airport Security

This seems like a really bad idea:

…the Transportation Security Administration began a program Tuesday allowing pilots to skirt the security-screening process. The TSA has deployed approximately 500 body scanners to airports nationwide in a bid to prevent terrorists from boarding domestic flights, but pilots don’t have to go through the controversial nude body scanners or other forms of screening. They don’t have to be patted down or go through metal detectors. Their carry-on bags are not searched.

I agree that it doesn’t make sense to screen pilots: they’re at the controls of the plane and can crash it if they want to. But the TSA isn’t in a position to screen pilots; all it can do is decide not to screen people who are in pilot uniforms with pilot IDs. And it’s far safer to just screen everybody than to trust that TSA agents will be able to figure out who is a real pilot and who is just pretending to be one.

I wrote about this in 2006.

Posted on August 12, 2011 at 6:59 AM69 Comments

Security Flaws in Encrypted Police Radios

“Why (Special Agent) Johnny (Still) Can’t Encrypt: A Security Analysis of the APCO Project 25 Two-Way Radio System,” by Sandy Clark, Travis Goodspeed, Perry Metzger, Zachary Wasserman, Kevin Xu, and Matt Blaze.

Abstract: APCO Project 25 (“P25”) is a suite of wireless communications protocols used in the US and elsewhere for public safety two-way (voice) radio systems. The protocols include security options in which voice and data traffic can be cryptographically protected from eavesdropping. This paper analyzes the security of P25 systems against both passive and active adversaries. We found a number of protocol, implementation, and user interface weaknesses that routinely leak information to a passive eavesdropper or that permit highly efficient and difficult to detect active attacks. We introduce new selective subframe jamming attacks against P25, in which an active attacker with very modest resources can prevent specific kinds of traffic (such as encrypted messages) from being received, while emitting only a small fraction of the aggregate power of the legitimate transmitter. We also found that even the passive attacks represent a serious practical threat. In a study we conducted over a two year period in several US metropolitan areas, we found that a significant fraction of the “encrypted” P25 tactical radio traffic sent by federal law enforcement surveillance operatives is actually sent in the clear, in spite of their users’ belief that they are encrypted, and often reveals such sensitive data as the names of informants in criminal investigations.

I’ve heard Matt talk about this project several times. It’s great work, and a fascinating insight into the usability problems of encryption in the real world.

News article.

Posted on August 11, 2011 at 6:19 AM25 Comments

GPRS Hacked

Just announced:

Nohl’s group found a number of problems with GPRS. First, he says, lax authentication rules could allow an attacker to set up a fake cellular base station and eavesdrop on information transmitted by users passing by. In some countries, they found that GPRS communications weren’t encrypted at all. When they were encrypted, Nohl adds, the ciphers were often weak and could be either broken or decoded with relatively short keys that were easy to guess.

The group generated an optimized set of codes that an attacker could quickly use to find the key protecting a given communication. The attack the researchers designed against GPRS costs about 10 euros for radio equipment, Nohl says.
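To illustrate why short, guessable keys matter (a toy model only; the real GPRS ciphers are the GEA family, not shown here): given a known plaintext/ciphertext pair and a small enough key space, exhaustive search finds the key almost immediately.

```python
from itertools import product

def toy_encrypt(plaintext, key):
    # Stand-in stream cipher: repeating-key XOR. Far weaker than GEA/1,
    # but the brute-force structure of the attack is the same.
    return bytes(p ^ key[i % len(key)] for i, p in enumerate(plaintext))

def brute_force(known_plain, ciphertext, key_len=2):
    """Try every key of key_len bytes; 2 bytes = only 65,536 candidates."""
    for candidate in product(range(256), repeat=key_len):
        key = bytes(candidate)
        if toy_encrypt(known_plain, key) == ciphertext:
            return key
    return None

secret_key = b"\x4a\x7f"
ct = toy_encrypt(b"GPRS attach request", secret_key)
recovered = brute_force(b"GPRS attach request", ct)
```

A precomputed table of the kind Nohl’s group built trades this online search for a one-time offline computation, which is why even moderately longer keys fall quickly.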

More articles.

Posted on August 10, 2011 at 4:11 PM10 Comments

"Taxonomy of Operational Cyber Security Risks"

I’m a big fan of taxonomies, and this—from Carnegie Mellon—seems like a useful one:

The taxonomy of operational cyber security risks, summarized in Table 1 and detailed in this section, is structured around a hierarchy of classes, subclasses, and elements. The taxonomy has four main classes:

  • actions of people—action, or lack of action, taken by people either deliberately or accidentally that impact cyber security
  • systems and technology failures—failure of hardware, software, and information systems
  • failed internal processes—problems in the internal business processes that impact the ability to implement, manage, and sustain cyber security, such as process design, execution, and control
  • external events—issues often outside the control of the organization, such as disasters, legal issues, business issues, and service provider dependencies

Each of these four classes is further decomposed into subclasses, and each subclass is described by its elements.
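For illustration, the hierarchy maps naturally onto a nested data structure. The subclass names below are drawn from the class descriptions above where possible; the subclasses under “actions of people” are my assumptions, not the report’s actual list.

```python
# Top-level classes from the CERT taxonomy; subclass lists are illustrative.
TAXONOMY = {
    "actions of people": [
        "deliberate", "accidental", "inaction"],
    "systems and technology failures": [
        "hardware", "software", "information systems"],
    "failed internal processes": [
        "process design", "process execution", "process control"],
    "external events": [
        "disasters", "legal issues", "business issues",
        "service provider dependencies"],
}

def classify(incident_class, incident_subclass):
    """Check that an incident label fits the class/subclass hierarchy."""
    return incident_subclass in TAXONOMY.get(incident_class, [])
```

The value of a structure like this is less the code than the discipline: every operational risk gets filed somewhere, and gaps in coverage become visible.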

Posted on August 10, 2011 at 6:39 AM14 Comments

New Bank-Fraud Trojan

Nasty:

The German Federal Criminal Police (the “Bundeskriminalamt” or BKA for short) recently warned consumers about a new Windows malware strain that waits until the victim logs in to his bank account. The malware then presents the customer with a message stating that a credit has been made to his account by mistake, and that the account has been frozen until the errant payment is transferred back.

When the unwitting user views his account balance, the malware modifies the amounts displayed in his browser; it appears that he has recently received a large transfer into his account. The victim is told to immediately make a transfer to return the funds and unlock his account. The malicious software presents an already filled-in online transfer form—with the account and routing numbers for a bank account the attacker controls.

Posted on August 8, 2011 at 12:47 PM59 Comments

Zodiac Cipher Cracked

I admit I don’t pay much attention to pencil-and-paper ciphers, so I knew nothing about the Zodiac cipher. Seems it has finally been broken:

The Zodiac Killer was a serial killer who preyed on couples in Northern California in the years between 1968 and 1970. Of his seven confirmed victims, five died. More victims and attacks are suspected.

The killer sent four messages to newspapers in California’s Bay Area, only one of which has ever been decrypted. This first message—split into three parts—claimed Zodiac wanted to kill victims so that they would become his slaves in the afterlife.

The 408-symbol cryptogram was cracked by Donald and Bettye Harden of Salinas, California.

Code and solution—with photos—here.

EDITED TO ADD (8/5): Solution seems to be a hoax.

Posted on August 5, 2011 at 12:25 PM27 Comments

German Police Call Airport Full-Body Scanners Useless

I’m not surprised:

The weekly Welt am Sonntag, quoting a police report, said 35 percent of the 730,000 passengers checked by the scanners set off the alarm more than once despite being innocent.

The report said the machines were confused by several layers of clothing, boots, zip fasteners and even pleats, while in 10 percent of cases the passenger’s posture set them off.

The police called for the scanners to be made less sensitive to movements and certain types of clothing and the software to be improved. They also said the US manufacturer L3 Communications should make them work faster.

In the wake of the 10-month trial which began on September 27 last year, German federal police see no interest in carrying out any more tests with the scanners until new more effective models become available, Welt am Sonntag said.

However, this surprised me:

The European parliament backed on July 6 the deployment of body scanners at airports, but on condition that travellers have the right to refuse to walk through the controversial machines.

I was told in Amsterdam that there was no option. I either had to walk through the machines, or not fly.

Here’s a story about full-body scanners that are overly sensitive to sweaty armpits.

Posted on August 5, 2011 at 6:22 AM30 Comments

Hacking Lotteries

Two items on hacking lotteries. The first is about someone who figured out how to spot winners in a scratch-off tic-tac-toe-style game, and about a daily-draw game where the expected payout can exceed the ticket price. The second is about someone who has won the lottery four times, with speculation that she had advance knowledge of where and when certain jackpot-winning scratch-off tickets would be sold.

EDITED TO ADD (8/13): The Boston Globe has a story on how to make money on Massachusetts’ Cash WinFall.

Posted on August 4, 2011 at 7:36 AM31 Comments

New Information on the Inventor of the One-Time Pad

Seems that the one-time pad was not first invented by Vernam:

He could plainly see that the document described a technique called the one-time pad fully 35 years before its supposed invention during World War I by Gilbert Vernam, an AT&T engineer, and Joseph Mauborgne, later chief of the Army Signal Corps.

[…]

The 1882 monograph that Dr. Bellovin stumbled across in the Library of Congress was “Telegraphic Code to Insure Privacy and Secrecy in the Transmission of Telegrams,” by Frank Miller, a successful banker in Sacramento who later became a trustee of Stanford University. In Miller’s preface, the key points jumped off the page:

“A banker in the West should prepare a list of irregular numbers to be called ‘shift numbers,'” he wrote. “The difference between such numbers must not be regular. When a shift-number has been applied, or used, it must be erased from the list and not be used again.”

It seems that Vernam was not aware of Miller’s work, and independently invented the one-time pad.

Another article. And the paper.

Posted on August 3, 2011 at 12:57 PM22 Comments

Identifying People by their Writing Style

The article is in the context of the big Facebook lawsuit, but the part about identifying people by their writing style is interesting:

Recently, a team of computer scientists at Concordia University in Montreal took advantage of an unusual set of data to test another method of determining e-mail authorship. In 2003, the Federal Energy Regulatory Commission, as part of its investigation into Enron, released into the public domain hundreds of thousands of employee e-mails, which have become an important resource for forensic research. (Unlike novels, newspapers or blogs, e-mails are a private form of communication and aren’t usually available as a sizable corpus for analysis.)

Using this data, Benjamin C. M. Fung, who specializes in data mining, and Mourad Debbabi, a cyber-forensics expert, collaborated on a program that can look at an anonymous e-mail message and predict who wrote it out of a pool of known authors, with an accuracy of 80 to 90 percent. (Ms. Chaski claims 95 percent accuracy with her syntactic method.) The team identifies bundles of linguistic features, hundreds in all. They catalog everything from the position of greetings and farewells in e-mails to the preference of a writer for using symbols (say, “$” or “%”) or words (“dollars” or “percent”). Combining all of those features, they contend, allows them to determine what they call a person’s “write-print.”

It seems reasonable that we have a linguistic fingerprint, although 1) there are far fewer of them than finger fingerprints, and 2) they’re easier to fake. It’s probably not much of a stretch to take software that “identifies bundles of linguistic features, hundreds in all” and use the data to automatically modify my writing to look like someone else’s.
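To make the “write-print” idea concrete, here is a toy sketch (invented features, nothing like the Concordia team’s actual several-hundred-feature set): build a small feature vector per author, then attribute an anonymous message to the most similar known profile.

```python
import math
from collections import Counter

FUNCTION_WORDS = ["the", "of", "and", "to", "in", "that", "is", "it"]

def features(text):
    """Toy write-print: function-word frequencies plus symbol preference."""
    words = text.lower().split()
    n = max(len(words), 1)
    counts = Counter(words)
    vec = [counts[w] / n for w in FUNCTION_WORDS]
    vec.append(text.count("$") / max(len(text), 1))  # "$" vs. "dollars"
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def attribute(anonymous_text, known_samples):
    """known_samples: {author: sample text}. Return the closest author."""
    anon = features(anonymous_text)
    return max(known_samples,
               key=lambda a: cosine(anon, features(known_samples[a])))
```

With hundreds of features instead of nine, and large writing samples per author, this nearest-profile approach is essentially what pushes accuracy into the 80-to-90-percent range the article reports; it is also exactly the machinery an evasion tool would invert.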

EDITED TO ADD (8/3): A good criticism of the science behind author recognition, and a paper on how to evade these systems.

Posted on August 3, 2011 at 6:08 AM48 Comments

Developments in Facial Recognition

Eventually, it will work. You’ll be able to wear a camera that will automatically recognize someone walking towards you, and an earpiece that will relay who that person is and maybe something about him. None of the technologies required to make this work are hard; it’s just a matter of getting the error rate down low enough for it to be a useful system. And there have been a number of recent research results and news stories that illustrate what this new world might look like.

The police want this sort of system. I already blogged about MORIS, an iris-scanning technology that several police forces in the U.S. are using. The next step is the face-scanning glasses that the Brazilian police claim they will be wearing at the 2014 World Cup.

A small camera fitted to the glasses can capture 400 facial images per second and send them to a central computer database storing up to 13 million faces.

The system can compare biometric data at 46,000 points on a face and will immediately signal any matches to known criminals or people wanted by police.

In the future, this sort of thing won’t be limited to the police. Facebook has recently embarked on a major photo tagging project, and already has the largest collection of identified photographs in the world outside of a government. Researchers at Carnegie Mellon University have combined the public part of that database with a camera and face-recognition software to identify students on campus. (The paper fully describing their work is under review and not online yet, but slides describing the results can be found here.)

Of course, there are false positives—as there are with any system like this. That’s not a big deal if the application is a billboard with face-recognition serving different ads depending on the gender and age—and eventually the identity—of the person looking at it, but is more problematic if the application is a legal one.

In Boston, someone erroneously had his driver’s license revoked:

It turned out Gass was flagged because he looks like another driver, not because his image was being used to create a fake identity. His driving privileges were returned but, he alleges in a lawsuit, only after 10 days of bureaucratic wrangling to prove he is who he says he is.

And apparently, he has company. Last year, the facial recognition system picked out more than 1,000 cases that resulted in State Police investigations, officials say. And some of those people are guilty of nothing more than looking like someone else. Not all go through the long process that Gass says he endured, but each must visit the Registry with proof of their identity.

[…]

At least 34 states are using such systems. They help authorities verify a person’s claimed identity and track down people who have multiple licenses under different aliases, such as underage people wanting to buy alcohol, people with previous license suspensions, and people with criminal records trying to evade the law.

The problem is less with the system, and more with the guilty-until-proven-innocent way in which the system is used.

Kaprielian said the Registry gives drivers enough time to respond to the suspension letters and that it is the individual’s “burden” to clear up any confusion. She added that protecting the public far outweighs any inconvenience Gass or anyone else might experience.

“A driver’s license is not a matter of civil rights. It’s not a right. It’s a privilege,” she said. “Yes, it is an inconvenience [to have to clear your name], but lots of people have their identities stolen, and that’s an inconvenience, too.”
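The scale of the false-positive problem is just base rates. With hypothetical numbers (say, 4.5 million license photos and a seemingly excellent 0.02% false-match rate), you get roughly the thousand-case figure quoted above:

```python
def expected_false_matches(population, false_match_rate):
    """Expected number of innocent people flagged when every photo is screened."""
    return population * false_match_rate

# Hypothetical figures for illustration only: ~4.5 million license
# photos and a 0.02% per-person false-match rate still flag about
# 900 innocent drivers.
flagged = expected_false_matches(4_500_000, 0.0002)
```

Screening an entire population guarantees a steady stream of false matches, which is why the guilty-until-proven-innocent handling matters more than the matcher’s accuracy.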

IEEE Spectrum and The Economist have published similar articles.

EDITED TO ADD (8/3): Here’s a system embedded in a pair of glasses that automatically analyzes and relays micro-facial expressions. The goal is to help autistic people who have trouble reading emotions, but you could easily imagine this sort of thing becoming common. And what happens when we start relying on these computerized systems and ignoring our own intuition?

EDITED TO ADD: CV Dazzle is camouflage from face detection.

Posted on August 2, 2011 at 1:33 PM53 Comments

Attacking PLCs Controlling Prison Doors

Embedded system vulnerabilities in prisons:

Some of the same vulnerabilities that the Stuxnet superworm used to sabotage centrifuges at a nuclear plant in Iran exist in the country’s top high-security prisons, according to security consultant and engineer John Strauchs, who plans to discuss the issue and demonstrate an exploit against the systems at the DefCon hacker conference next week in Las Vegas.

Strauchs, who says he engineered or consulted on electronic security systems in more than 100 prisons, courthouses and police stations throughout the U.S.—including eight maximum-security prisons—says the prisons use programmable logic controllers to control locks on cells and other facility doors and gates. PLCs are the same devices that Stuxnet exploited to attack centrifuges in Iran.

This seems like a minor risk today; Stuxnet was a military-grade effort, and beyond the reach of your typical criminal organization. But that can only change, as people study and learn from the reverse-engineered Stuxnet code and as hacking PLCs becomes more common.

As we move from mechanical, or even electro-mechanical, systems to digital systems, and as we network those digital systems, this sort of vulnerability is only going to become more common.
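Part of the problem is that common industrial control protocols carry no authentication at all. As an illustration (using Modbus/TCP as a stand-in; the article doesn’t say which protocols the prison PLCs actually speak), the entire “command” to flip a coil output, such as a door-lock relay, is a dozen unauthenticated bytes:

```python
import struct

def modbus_write_coil(transaction_id, unit_id, coil_address, on):
    """Build a Modbus/TCP "Write Single Coil" (function 0x05) request.

    Note what is absent: no password, no key, no signature. Anyone who
    can reach the device on TCP port 502 can send this frame.
    """
    value = 0xFF00 if on else 0x0000
    pdu = struct.pack(">BHH", 0x05, coil_address, value)
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_write_coil(transaction_id=1, unit_id=1,
                          coil_address=0x0010, on=True)
```

The hard part of an attack like the one Strauchs describes is getting network access to the control system and knowing which output does what, not crafting the traffic.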

Posted on August 2, 2011 at 6:23 AM21 Comments

Breaking the Xilinx Virtex-II FPGA Bitstream Encryption

It’s a power-analysis attack, which makes it much harder to defend against. And since the attack model is an engineer trying to reverse-engineer the chip, it’s a valid attack.

Abstract: Over the last two decades FPGAs have become central components for many advanced digital systems, e.g., video signal processing, network routers, data acquisition and military systems. In order to protect the intellectual property and to prevent fraud, e.g., by cloning an FPGA or manipulating its content, many current FPGAs employ a bitstream encryption feature. We develop a successful attack on the bitstream encryption engine integrated in the widespread Virtex-II Pro FPGAs from Xilinx, using side-channel analysis. After measuring the power consumption of a single power-up of the device and a modest amount of off-line computation, we are able to recover all three different keys used by its triple DES module. Our method allows extracting secret keys from any real-world device where the bitstream encryption feature of Virtex-II Pro is enabled. As a consequence, the target product can be cloned and manipulated at the will of the attacker. Also, more advanced attacks such as reverse engineering or the introduction of hardware Trojans become potential threats. As part of the side-channel attack, we were able to deduce certain internals of the hardware encryption engine. To our knowledge, this is the first attack against the bitstream encryption of a commercial FPGA reported in the open literature.
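The core idea behind a power-analysis attack like this can be shown with a toy model (nothing like the actual triple-DES engine in the Virtex-II): if the device’s power draw correlates with the Hamming weight of a key-dependent intermediate value, statistics alone recover the key.

```python
import random

def hamming_weight(x):
    return bin(x).count("1")

def simulate_trace(plaintext_byte, key_byte, rng):
    """One toy "power measurement": leakage plus Gaussian noise."""
    return hamming_weight(plaintext_byte ^ key_byte) + rng.gauss(0, 0.5)

def recover_key_byte(plaintexts, traces):
    """Pick the key guess whose predicted leakage best fits the traces."""
    def misfit(guess):
        preds = [hamming_weight(p ^ guess) for p in plaintexts]
        return sum((t - pr) ** 2 for t, pr in zip(traces, preds))
    return min(range(256), key=misfit)

rng = random.Random(42)
secret = 0xA7
plaintexts = [rng.randrange(256) for _ in range(200)]
traces = [simulate_trace(p, secret, rng) for p in plaintexts]
recovered = recover_key_byte(plaintexts, traces)
```

Two hundred noisy "measurements" suffice here; the researchers’ real attack is far more sophisticated, but the principle of correlating predicted leakage against measured power is the same.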

Posted on August 1, 2011 at 12:29 PM21 Comments

Using Science Fiction to Teach Computer Security

Interesting paper: “Science Fiction Prototyping and Security Education: Cultivating Contextual and Societal Thinking in Computer Security Education and Beyond,” by Tadayoshi Kohno and Brian David Johnson.

Abstract: Computer security courses typically cover a breadth of technical topics, including threat modeling, applied cryptography, software security, and Web security. The technical artifacts of computer systems—and their associated computer security risks and defenses—do not exist in isolation, however; rather, these systems interact intimately with the needs, beliefs, and values of people. This is especially true as computers become more pervasive, embedding themselves not only into laptops, desktops, and the Web, but also into our cars, medical devices, and toys. Therefore, in addition to the standard technical material, we argue that students would benefit from developing a mindset focused on the broader societal and contextual issues surrounding computer security systems and risks. We used science fiction (SF) prototyping to facilitate such societal and contextual thinking in a recent undergraduate computer security course. We report on our approach and experiences here, as well as our recommendations for future computer security and other computer science courses.

Posted on August 1, 2011 at 6:03 AM19 Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.