Blog: March 2010 Archives

Security Cameras in the New York City Subways

The New York Times has an article about cameras in the subways. The article is all about how horrible it is that the cameras don’t work:

Moreover, nearly half of the subway system’s 4,313 security cameras that have been installed—in stations and tunnels throughout the system—do not work, because of either shoddy software or construction problems, say officials with the Metropolitan Transportation Authority, which operates the city’s bus, subway and train system.

I certainly agree that taxpayers should be upset when something they’ve purchased doesn’t function as expected. But way down at the bottom of the article, we find:

Even without the cameras, officials said crime in the transit system had dropped to a record low. In 1990, the system averaged 47.8 crimes a day, compared with 5.3 so far this year. “The subway system is safer than it’s ever been,” said Kevin Ortiz, an authority spokesman.

No data on how many crimes were solved by cameras, but we know from other studies that their effect on crime is minimal.

Posted on March 31, 2010 at 1:24 PM · 28 Comments

Should the Government Stop Outsourcing Code Development?

Information technology is increasingly everywhere, and it’s the same technologies everywhere. The same operating systems are used in corporate and government computers. The same software controls critical infrastructure and home shopping. The same networking technologies are used in every country. The same digital infrastructure underpins the small and the large, the important and the trivial, the local and the global; the same vendors, the same standards, the same protocols, the same applications.

With all of this sameness, you’d think these technologies would be designed to the highest security standard, but they’re not. They’re designed to the lowest or, at best, somewhere in the middle. They’re designed sloppily, in an ad hoc manner, with efficiency in mind. Security is a requirement, more or less, but it’s a secondary priority. It’s far less important than functionality, and security is what gets compromised when schedules get tight.

Should the government—ours, someone else’s?—stop outsourcing code development? That’s the wrong question to ask. Code isn’t magically more secure when it’s written by someone who receives a government paycheck than when it’s written by someone who receives a corporate paycheck. It’s not magically less secure when it’s written by someone who speaks a foreign language, or is paid by the hour instead of by salary. Writing all your code in-house isn’t even a viable option anymore; we’re all stuck with software written by who-knows-whom in who-knows-which-country. And we need to figure out how to get security from that.

The traditional solution has been defense in depth: layering one mediocre security measure on top of another mediocre security measure. So we have the security embedded in our operating system and applications software, the security embedded in our networking protocols, and our additional security products such as antivirus and firewalls. We hope that whatever security flaws—either found and exploited, or deliberately inserted—there are in one layer are counteracted by the security in another layer, and that when they’re not, we can patch our systems quickly enough to avoid serious long-term damage. That is a lousy solution when you think about it, but we’ve been more-or-less managing with it so far.

Bringing all software—and hardware, I suppose—development in-house under some misconception that proximity equals security is not a better solution. What we need is to improve the software development process, so we can have some assurance that our software is secure—regardless of what coder, employed by what company, and living in what country, writes it. The key word here is “assurance.”

Assurance is less about developing new security techniques than about using the ones we already have. It’s all the things described in books on secure coding practices. It’s what Microsoft is trying to do with its Security Development Lifecycle. It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it fields a piece of avionics software. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems. But most of the time, we don’t care; commercial software, as insecure as it is, is good enough for most purposes.

Assurance is expensive, in terms of money and time, for both the process and the documentation. But the NSA needs assurance for critical military systems and Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be more common in government IT contracts.

The software used to run our critical infrastructure—government, corporate, everything—isn’t very secure, and there’s no hope of fixing it anytime soon. Assurance is really our only option to improve this, but it’s expensive and the market doesn’t care. Government has to step in and spend the money where its requirements demand it, and then we’ll all benefit when we buy the same software.

This essay first appeared in Information Security, as the second part of a point-counterpoint with Marcus Ranum. You can read Marcus’s essay there as well.

Posted on March 31, 2010 at 6:54 AM · 57 Comments

Leaders Make Better Liars

According to new research:

The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars—the leaders—resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars—the subordinates—showed the usual signs of stress and slower reaction times. “Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal,” Carney explains.


Carney emphasizes that these results don’t mean that all people in high positions find lying easier: people need only feel powerful, regardless of the real power they have or their position in a hierarchy. “There are plenty of CEOs who act like low-power people and there are plenty of people at every level in organizations who feel very high power,” Carney says. “It can cross rank, every strata of society, any job.”

Posted on March 30, 2010 at 1:59 PM · 31 Comments

Jeremy Clarkson on Security Guards

Nice essay:

Of course, we know why he’s really there. He’s really there so that if the bridge is destroyed by terrorists, the authorities can appear on the television news and say they had taken all possible precautions. Plus, if you employ a security guard, then I should imagine that your insurance premiums are going to be significantly lower.

This is probably why so many companies use security guards these days. It must be, because when it comes to preventing a crime, they are pretty much useless. No, really. If you are planning a heist, job one on the list of things to do is “take out the guard”. He is therefore not an impenetrable wall of steel; he’s just a nuisance.

And he’s not just a nuisance to the people planning to hit him on the head. He’s also a nuisance to the thousands of people who legitimately wish to enter or leave the building he’s supposed to be guarding.

At the office where I work, everyone is issued with laminated photo-ID cards that open all the barriers and doors. It is quite impossible to make any sort of progress unless you have such a thing about your person. But even so, every barrier and door is also guarded by a chap who, in a fight, would struggle to beat Christopher Robin. One looks like his heart would give out if you said “boo.” Another has a face that’s so grey that, in some lights, he appears to be slightly lilac. I cannot for the life of me work out what these people are supposed to achieve, apart from making the lives of normal people a little bit more difficult.

EDITED TO ADD (4/13): Another Clarkson essay, this one on security theater.

Posted on March 30, 2010 at 6:06 AM

Master Thief

The amazing story of Gerald Blanchard.

Thorough as ever, Blanchard had spent many previous nights infiltrating the bank to do recon or to tamper with the locks while James acted as lookout, scanning the vicinity with binoculars and providing updates via a scrambled-band walkie-talkie. He had put a transmitter behind an electrical outlet, a pinhole video camera in a thermostat, and a cheap baby monitor behind the wall. He had even mounted handles on the drywall panels so he could remove them to enter and exit the ATM room. Blanchard had also taken detailed measurements of the room and set up a dummy version in a friend’s nearby machine shop. With practice, he had gotten his ATM-cracking routine down to where he needed only 90 seconds after the alarm tripped to finish and escape with his score.

As Blanchard approached, he saw that the door to the ATM room was unlocked and wide open. Sometimes you get lucky. All he had to do was walk inside.

From here he knew the drill by heart. There were seven machines, each with four drawers. He set to work quickly, using just the right technique to spring the machines open without causing any telltale damage. Well rehearsed, Blanchard wheeled out boxes full of cash and several money counters, locked the door behind him, and headed to a van he had parked nearby.

Eight minutes after Blanchard broke into the first ATM, the Winnipeg Police Service arrived in response to the alarm. However, the officers found the doors locked and assumed the alarm had been an error. As the police pronounced the bank secure, Blanchard was zipping away with more than half a million dollars.

Posted on March 29, 2010 at 1:48 PM · 32 Comments

Identifying People by their Bacteria

A potential new forensic:

To determine how similar a person’s fingertip bacteria are to bacteria left on computer keys, the team took swabs from three computer keyboards and compared bacterial gene sequences with those from the fingertips of the keyboard owners. Today in the Proceedings of the National Academy of Sciences, they conclude that enough bacteria can be collected from even small surfaces such as computer keys to link them with the hand that laid them down.

The researchers then tested how well such a technique could distinguish the person who left the bacteria from the general population. They sampled bacteria from nine computer mice and from the nine mouse owners. They also collected information on bacterial communities from 270 hands that had never touched any of the mice. In all nine cases, the bacteria on the mice were far more similar to the mouse-owners’ hands than to any of the 270 strange hands. The researchers also found that bacteria will persist on a computer key or mouse for up to 2 weeks after it has been handled.
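The matching step described above can be sketched in a few lines: represent each sample as relative abundances of bacterial taxa, and link the object to the candidate hand whose community is most similar. This is only a toy illustration—cosine similarity stands in for the phylogenetic distance metrics the actual study used, and the abundance vectors are invented:

```python
import math

def cosine(a, b):
    """Cosine similarity between two abundance vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical taxa-abundance profiles for three candidate hands
# and one swab taken from a computer mouse.
hands = {
    "owner":     [0.50, 0.30, 0.15, 0.05],
    "stranger1": [0.10, 0.20, 0.30, 0.40],
    "stranger2": [0.25, 0.25, 0.25, 0.25],
}
mouse_swab = [0.48, 0.32, 0.12, 0.08]

# Link the swab to the most similar hand community.
best_match = max(hands, key=lambda h: cosine(hands[h], mouse_swab))
print(best_match)  # -> owner
```

In the study, the same nearest-neighbor idea was run against 279 candidate hands per mouse, and the owner still came out on top in all nine cases.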

Here’s a link to the abstract; the full paper is behind a paywall.

Posted on March 29, 2010 at 7:15 AM · 32 Comments

Schneier Blogging Template

Eerily accurate:

Catchy one-liner (“interesting,” with link):

In this part of the blog post, Bruce quotes something from the article he links to in the catchy phrase. It might be the abstract to an academic article, or the key points in a subject he’s trying to get across. To get the post looking right, you have to include at least a decent sized paragraph from the quoted source or otherwise it just looks like crap. So I will continue typing another sentence or two, until I have enough text to make this look like a legitimately quoted paragraph. See, now that wasn’t so hard after all.

He might offer a short comment about the article here.

Finally, he will let you know that he wrote about the exact same subject link to previous Schneier article on the exact same topic and link to another previous Schneier article on the exact same topic.

I don’t always do this, but it’s pretty common.

You can see the template in these two posts.

Posted on March 26, 2010 at 1:16 PM · 45 Comments

Side-Channel Attacks on Encrypted Web Traffic

Nice paper: “Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow,” by Shuo Chen, Rui Wang, XiaoFeng Wang, and Kehuan Zhang.

Abstract. With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application’s internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees’ web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.

We already know that eavesdropping on an SSL-encrypted web session can leak a lot of information about the person’s browsing habits. Since the size of both the page requests and the page downloads are different, an eavesdropper can sometimes infer which links the person clicked on and what pages he’s viewing.

This paper extends that work. Ed Felten explains:

The new paper shows that this inference-from-size problem gets much, much worse when pages are using the now-standard AJAX programming methods, in which a web “page” is really a computer program that makes frequent requests to the server for information. With more requests to the server, there are many more opportunities for an eavesdropper to make inferences about what you’re doing—to the point that common applications leak a great deal of private information.

Consider a search engine that autocompletes search queries: when you start to type a query, the search engine gives you a list of suggested queries that start with whatever characters you have typed so far. When you type the first letter of your search query, the search engine page will send that character to the server, and the server will send back a list of suggested completions. Unfortunately, the size of that suggested completion list will depend on which character you typed, so an eavesdropper can use the size of the encrypted response to deduce which letter you typed. When you type the second letter of your query, another request will go to the server, and another encrypted reply will come back, which will again have a distinctive size, allowing the eavesdropper (who already knows the first character you typed) to deduce the second character; and so on. In the end the eavesdropper will know exactly which search query you typed. This attack worked against the Google, Yahoo, and Microsoft Bing search engines.
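The autocomplete attack above can be sketched as a toy model: the eavesdropper sees only the size of each encrypted response, but if each keystroke produces a suggestion list with a distinctive size, size alone identifies the keystroke. The sizes below are made up; a real attacker would measure them, and ties between sizes would need additional traffic analysis:

```python
# Server side: suggestion-list response size for each typed prefix
# (hypothetical values, globally unique for simplicity).
RESPONSE_SIZE = {
    "a": 509, "b": 387, "c": 432,    # after the first keystroke
    "ab": 288, "ac": 251, "ba": 340, # after the second keystroke
}

def observed_sizes(query):
    """Sizes an eavesdropper records as the victim types `query`."""
    return [RESPONSE_SIZE[query[:i + 1]] for i in range(len(query))]

def recover_query(sizes):
    """Invert the observed size sequence back to the typed query."""
    size_to_prefix = {v: k for k, v in RESPONSE_SIZE.items()}
    prefix = ""
    for s in sizes:
        prefix = size_to_prefix[s]  # each size pins down one more character
    return prefix

print(recover_query(observed_sizes("ab")))  # -> ab
```

The encrypted payload never needs to be decrypted: the size sequence by itself replays the victim's keystrokes.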

Many web apps that handle sensitive information seem to be susceptible to similar attacks. The researchers studied a major online tax preparation site (which they don’t name) and found that it leaks a fairly accurate estimate of your Adjusted Gross Income (AGI). This happens because the exact set of questions you have to answer, and the exact data tables used in tax preparation, will vary based on your AGI. To give one example, there is a particular interaction relating to a possible student loan interest calculation, that only happens if your AGI is between $115,000 and $145,000—so that the presence or absence of the distinctively-sized message exchange relating to that calculation tells an eavesdropper whether your AGI is between $115,000 and $145,000. By assembling a set of clues like this, an eavesdropper can get a good fix on your AGI, plus information about your family status, and so on.

For similar reasons, a major online health site leaks information about which medications you are taking, and a major investment site leaks information about your investments.

The paper goes on to talk about mitigation—padding page requests and downloads to a constant size is the obvious one—but they’re difficult and potentially expensive.
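The padding mitigation can be sketched in one function: round every response up to a fixed bucket size so that many different payloads look identical on the wire. The bucket size here (512 bytes) is arbitrary, and the overhead it introduces is exactly why the paper calls mitigation difficult and potentially expensive:

```python
BUCKET = 512  # illustrative bucket size, in bytes

def padded_len(n):
    """Size on the wire after padding an n-byte payload up to a bucket."""
    return ((n + BUCKET - 1) // BUCKET) * BUCKET if n else BUCKET

# Three distinct payload sizes collapse to one observable size:
print(padded_len(251), padded_len(387), padded_len(509))  # -> 512 512 512

# ...at a bandwidth cost; for the 251-byte response:
print(padded_len(251) - 251)  # -> 261
```

Choosing the bucket size is the trade-off: bigger buckets leak less but waste more bandwidth, and any payload that still crosses a bucket boundary remains distinguishable.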

More articles.

Posted on March 26, 2010 at 6:04 AM · 22 Comments

Natural Language Shellcode


In this paper we revisit the assumption that shellcode need be fundamentally different in structure than non-executable data. Specifically, we elucidate how one can use natural language generation techniques to produce shellcode that is superficially similar to English prose. We argue that this new development poses significant challenges for inline payload-based inspection (and emulation) as a defensive measure, and also highlights the need for designing more efficient techniques for preventing shellcode injection attacks altogether.

Posted on March 25, 2010 at 7:16 AM · 27 Comments

Acrobatic Thieves

Some movie-plot attacks actually happen:

They never touched the floor—that would have set off an alarm.

They didn’t appear on store security cameras. They cut a hole in the roof and came in at a spot where the cameras were obscured by advertising banners.

And they left with some $26,000 in laptop computers, departing the same way they came in—down a 3-inch gas pipe that runs from the roof to the ground outside the store.

EDITED TO ADD (4/13): Similar heists.

Posted on March 24, 2010 at 1:51 PM · 47 Comments

Dead on the No-Fly List

Such “logic”:

If a person on the no-fly list dies, his name could stay on the list so that the government can catch anyone trying to assume his identity.

But since a terrorist might assume anyone’s identity, by the same logic we should put everyone on the no-fly list.

Otherwise, it’s an interesting article on how the no-fly list works.

Posted on March 24, 2010 at 6:38 AM · 57 Comments

New Book: Cryptography Engineering

I have a new book, sort of. Cryptography Engineering is really the second edition of Practical Cryptography. Niels Ferguson and I wrote Practical Cryptography in 2003. Tadayoshi Kohno did most of the update work—and added exercises to make it more suitable as a textbook—and is the third author on Cryptography Engineering. (I didn’t like it that Wiley changed the title; I think it’s too close to Ross Anderson’s excellent Security Engineering.)

Cryptography Engineering is a techie book; it’s for practitioners who are implementing cryptography or for people who want to learn more about the nitty-gritty of how cryptography works and what the implementation pitfalls are. If you’ve already bought Practical Cryptography, there’s no need to upgrade unless you’re actually using it.

EDITED TO ADD (3/23): Signed copies are available. See the bottom of this page for details.

EDITED TO ADD (3/29): In comments, someone asked what’s new in this book.

We revised the introductory materials in Chapter 1 to help readers better understand the broader context for computer security, with some explicit exercises to help readers develop a security mindset. We updated the discussion of AES in Chapter 3; rather than speculating on algebraic attacks, we now talk about the recent successful (theoretical, not practical) attacks against AES. Chapter 4 used to recommend using nonce-based encryption schemes. We now find these schemes problematic, and instead recommend randomized encryption schemes, like CBC mode. We updated the discussion of hash functions in Chapter 5; we discuss new results against MD5 and SHA1, and allude to the new SHA3 candidates (but say it’s too early to start using the SHA3 candidates). In Chapter 6, we no longer talk about UMAC, and instead talk about CMAC and GMAC. We revised Chapters 8 and 15 to talk about some recent implementation issues to be aware of. For example, we now talk about the cold boot attacks and challenges for generating randomness in VMs. In Chapter 19, we discuss online certificate verification.

Posted on March 23, 2010 at 2:42 PM · 23 Comments

Electronic Health Record Security Analysis

In British Columbia:

When Auditor-General John Doyle and his staff investigated the security of electronic record-keeping at the Vancouver Coastal Health Authority, they found trouble everywhere they looked.

“In every key area we examined, we found serious weaknesses,” wrote Doyle. “Security controls throughout the network and over the database were so inadequate that there was a high risk of external and internal attackers being able to access or extract information without the authority even being aware of it.”


“No intrusion prevention and detection systems exist to prevent or detect certain types of [online] attacks. Open network connections in common business areas. Dial-in remote access servers that bypass security. Open accounts existing, allowing health care data to be copied even outside the Vancouver Coastal Health Care authority at any time.”

More than 4,000 users were found to have access to the records in the database, many of them at a far higher level than necessary.


“Former client records and irrelevant records for current clients are still accessible to system users. Hundreds of former users, both employees and contractors, still have access to resources through active accounts, network accounts, and virtual private network accounts.”

While this report is from Canada, the same issues apply to any electronic patient record system in the U.S. What I find really interesting is that the Canadian government actually conducted a security analysis of the system, rather than just maintaining that everything would be fine. I wish the U.S. would do something similar.

The report, “The PARIS System for Community Care Services: Access and Security,” is here.

Posted on March 23, 2010 at 12:23 PM · 53 Comments

Back Door in Battery Charger


The United States Computer Emergency Response Team (US-CERT) has warned that the software included in the Energizer DUO USB battery charger contains a backdoor that allows unauthorized remote system access.

That’s actually misleading. Even though the charger is a USB device, it does not contain the harmful installer described in the article—it has no storage capacity. The software has to be downloaded from the Energizer website, and the software is only used to monitor the progress of the charge. The software is not needed for the device to function properly.

Here are details.

Energizer has announced it will pull the software from its website, and also will stop selling the device.

EDITED TO ADD (3/23): Additional news here.

Posted on March 23, 2010 at 6:13 AM · 27 Comments

Even More on the al-Mabhouh Assassination

This, from a former CIA chief of station:

The point is that in this day and time, with ubiquitous surveillance cameras, the ability to comprehensively analyse patterns of cell phone and credit card use, computerised records of travel documents which can be shared in the blink of an eye, the growing use of biometrics and machine-readable passports, and the ability of governments to share vast amounts of travel and security-related information almost instantaneously, it is virtually impossible for clandestine operatives not to leave behind a vast electronic trail which, if and when there is reason to examine it in detail, will amount to a huge body of evidence.

A not-terribly flattering article about Mossad:

It would be surprising if a key part of this extraordinary story did not turn out to be the role played by Palestinians. It is still Mossad practice to recruit double agents, just as it was with the PLO back in the 1970s. News of the arrest in Damascus of another senior Hamas operative—though denied by Mash’al—seems to point in this direction. Two other Palestinians extradited from Jordan to Dubai are members of the Hamas armed wing, the Izzedine al-Qassam brigades, suggesting treachery may indeed have been involved. Previous assassinations have involved a Palestinian agent identifying the target.

There’s no proof, of course, that Mossad was behind this operation. But the author is certainly right that the Palestinians believe that Mossad was behind it.

The Cold Spy lists what he sees as the mistakes made:

1. Using passport names of real people not connected with the operation.

2. Airport arrival without disguises in play thus showing your real faces.

3. Not anticipating the wide use of surveillance cameras in Dubai.

4. Checking into several hotels prior to checking in at the target hotel thus bringing suspicion on your entire operation.

5. Checking into the same hotel that the last person on the team checked into in order to change disguises.

6. Not anticipating the reaction that the local police had upon discovery of the crime, and their subsequent use of surveillance cameras in showing your entire operation to the world in order to send you a message that such actions or activities will not be tolerated on their soil.

7. Not anticipating the use of surveillance camera footage being posted on YouTube, thus showing everything about your operation right down to your faces and use of disguises to the masses around the world.

8. Using 11 people for a job that one person could have done without all the negative attention to the operation. For example, it could have been as simple as a robbery on the street with a subsequent shooting to cover it all up for what it really was.

9. Using too much sophistication in the operation showing it to be a high level intelligence/hit operation, as opposed to a simple matter using one person to carry out the assignment who was either used as a cutout or an expendable person which was then eliminated after the job was completed, thus covering all your tracks without one shred of evidence leading back to the original order for the hit.

10. Arriving too close to the date or time of the hit. Had the team arrived a few weeks earlier they could have established a presence in the city—thus seeing all the problems associated with carrying out said assignment—thus calling it off or having a counter plan whereby something else could have been tried elsewhere or in another country.

11. And to take everything to 11 points, not even noticing (which many on your team did in fact notice) all the surveillance you were under, and not calling the entire thing off because of it, and because you failed to see all of your mistakes made so far and then not calling it off because of them.

I disagree with a bunch of those.

My previous two blog posts on the topic.

EDITED TO ADD (3/22): The Israeli public believes Mossad was behind the assassination, too.

EDITED TO ADD (4/13): The Cold Spy responds in comments. Actually, there’s lots of interesting discussion in the comments.

Posted on March 22, 2010 at 9:10 AM · 60 Comments

Bringing Lots of Liquids on a Plane at Schiphol

This would worry me, if the liquid ban weren’t already useless.

The reporter found the security flaw in the airport’s duty-free shopping system. At Schiphol airport, passengers flying to countries outside the Schengen Agreement Area can buy bottles of alcohol at duty-free shops before going through security. They are then permitted to take these bottles onto flights, provided that they have the bottles sealed at the shop.

Mr Stegeman bought a bottle, emptied it and refilled it with another liquid. After that he returned to the same shop and ‘bought’ the refilled bottle again. The shop sealed the bottle in a bag, allowing him to take it with him through security and onto a London-bound flight. In London, he transferred planes and carried the bottle onto a flight to Washington DC.

The flaw, of course, is the assumption that bottles bought at a duty-free shop actually come from the duty-free shop.

But note that 1) it’s the same airport the underwear bomber flew out of, 2) the reporter is known for trying to defeat airport security, and 3) body scanners would have made no difference.

Watch the TV program here.

Posted on March 19, 2010 at 12:58 PM · 69 Comments

Security Trade-Offs and Sacred Values

Interesting research:

Psychologist Jeremy Ginges and his colleagues identified this backfire effect in studies of the Israeli-Palestinian conflict in 2007. They interviewed both Israelis and Palestinians who possessed sacred values toward key issues such as ownership over disputed territories like the West Bank or the right of Palestinian refugees to return to villages they were forced to leave—these people viewed compromise on these issues as completely unacceptable. Ginges and colleagues found that individuals offered a monetary payout to compromise their values expressed more moral outrage and were more supportive of violent opposition toward the other side. Opposition decreased, however, when the other side offered to compromise on a sacred value of its own, such as Israelis formally renouncing their right to the West Bank or Palestinians formally recognizing Israel as a state. Ginges and Scott Atran found similar evidence of this backfire effect with Indonesian madrassah students, who expressed less willingness to compromise their belief in sharia, strict Islamic law, when offered a material incentive.


After giving their opinions on Iran’s nuclear program, all participants were asked to consider one of two deals for Iranian disarmament. Half of the participants read about a deal in which the United States would reduce military aid to Israel in exchange for Iran giving up its military program. The other half of the participants read about a deal in which the United States would reduce aid to Israel and would pay Iran $40 billion. After considering the deal, all participants predicted how much the Iranian people would support the deal and how much anger they would feel toward the deal. In line with the Palestinian-Israeli and Indonesian studies, those who considered the nuclear program a sacred value expressed less support, and more anger, when the deal included money.

Posted on March 19, 2010 at 6:58 AM · 38 Comments

Disabling Cars by Remote Control

Who didn’t see this coming?

More than 100 drivers in Austin, Texas found their cars disabled or the horns honking out of control, after an intruder ran amok in a web-based vehicle-immobilization system normally used to get the attention of consumers delinquent in their auto payments.


Ramos-Lopez’s account had been closed when he was terminated from Texas Auto Center in a workforce reduction last month, but he allegedly got in through another employee’s account, Garcia says. At first, the intruder targeted vehicles by searching on the names of specific customers. Then he discovered he could pull up a database of all 1,100 Auto Center customers whose cars were equipped with the device. He started going down the list in alphabetical order, vandalizing the records, disabling the cars and setting off the horns.

Posted on March 18, 2010 at 7:41 AM · 62 Comments

Casino Hack

Nice hack:

Using insider knowledge the two hacked into software that controlled remote betting machines on live roulette wheels, the report said.

The machines would print out winning betting slips regardless of the results on the wheel, Peterborough Today said.

I’d like to know how they got caught.

EDITED TO ADD (4/17): They got their math wrong:

However, the scheme came unstuck after an alert cashier noticed a winning slip for £600 for a £10 bet at odds of 35-1. The casino launched an investigation that unearthed a string of other suspicious bets, traced back to Ashley and Bhagat, IT contractors working at the casino at the time of the scam.
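The arithmetic the cashier caught is simple: at odds of 35-1, a winning £10 bet pays £350 in winnings plus the returned £10 stake, £360 at most, so a £600 slip is flatly impossible. A quick sketch:

```python
def roulette_payout(stake, odds_to_one, include_stake=True):
    """Total payout for a winning bet at odds of N-1."""
    winnings = stake * odds_to_one
    return winnings + stake if include_stake else winnings

legit = roulette_payout(10, 35)
print(legit)  # 360 -- any slip above this for a 10-pound bet is fraudulent on its face
```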

Posted on March 17, 2010 at 6:33 AM58 Comments

Secret Questions

Interesting research:

Analysing our data for security, though, shows that essentially all human-generated names provide poor resistance to guessing. For an attacker looking to make three guesses per personal knowledge question (for example, because this triggers an account lock-down), none of the name distributions we looked at gave more than 8 bits of effective security except for full names. That is, at least about 1 in 256 guesses would be successful, and 1 in 84 accounts compromised. For an attacker who can make more than 3 guesses and wants to break into 50% of available accounts, no distributions gave more than about 12 bits of effective security. The actual values vary in some interesting ways: South Korean names are much easier to guess than American ones, female first names are harder than male ones, pet names are slightly harder than human names, and names are getting harder to guess over time.
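To unpack the quoted numbers: "b bits of effective security" means a single guess succeeds with probability roughly 2^-b, so an attacker allotted three guesses per account breaks about 3/2^b of them. A minimal sketch of that arithmetic:

```python
def accounts_compromised(effective_bits, guesses_per_account):
    """Fraction of accounts broken if each guess succeeds with probability 2^-bits."""
    return min(1.0, guesses_per_account * 2 ** -effective_bits)

# 8 bits of effective security, 3 guesses before lock-down:
frac = accounts_compromised(8, 3)
print(f"about 1 in {1 / frac:.0f} accounts")  # about 1 in 85, near the quoted 1 in 84
```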

I’ve written about this problem.

EDITED TO ADD (4/13): xkcd on the secret question.

Posted on March 16, 2010 at 6:44 AM63 Comments

USB Combination Lock

Here’s a promotional security product designed by someone who knows nothing about security. The USB drive is “protected” by a combination lock. There are only two dials, so there are only 100 possible combinations. And when the drive is “locked” and the connector is retracted, the contacts are still accessible.

Maybe it should be given away by companies that sell security theater.
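For scale: two ten-position dials give a keyspace you can enumerate in full, by hand, in a couple of minutes:

```python
from itertools import product

# Every possible setting of two ten-position dials.
keyspace = list(product(range(10), repeat=2))
print(len(keyspace))  # 100; at one try per second, exhausted in well under two minutes
```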

Posted on March 15, 2010 at 1:59 PM57 Comments


Typosquatting

“Measuring the Perpetrators and Funders of Typosquatting,” by Tyler Moore and Benjamin Edelman:

Abstract. We describe a method for identifying “typosquatting”, the intentional registration of misspellings of popular website addresses. We estimate that at least 938,000 typosquatting domains target the top 3,264 .com sites, and we crawl more than 285,000 of these domains to analyze their revenue sources. We find that 80% are supported by pay-per-click ads often advertising the correctly spelled domain and its competitors. Another 20% include static redirection to other sites. We present an automated technique that uncovered 75 otherwise legitimate websites which benefited from direct links from thousands of misspellings of competing websites. Using regression analysis, we find that websites in categories with higher pay-per-click ad prices face more typosquatting registrations, indicating that ad platforms such as Google AdWords exacerbate typosquatting. However, our investigations also confirm the feasibility of significantly reducing typosquatting. We find that typosquatting is highly concentrated: Of typo domains showing Google ads, 63% use one of five advertising IDs, and some large name servers host typosquatting domains as much as four times as often as the web as a whole.
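The misspellings being counted can be generated mechanically. Here is a hedged sketch of a few standard typo models (character omission, adjacent transposition, duplication, and the missing-dot variant); this is illustrative only, not the authors' actual generator:

```python
def typo_domains(domain):
    """Generate simple typo variants of a domain's second-level label."""
    name, _, tld = domain.rpartition(".")
    variants = set()
    # Character omission: example -> exmple
    for i in range(len(name)):
        variants.add(name[:i] + name[i + 1:] + "." + tld)
    # Adjacent transposition: example -> exmaple
    for i in range(len(name) - 1):
        variants.add(name[:i] + name[i + 1] + name[i] + name[i + 2:] + "." + tld)
    # Character duplication: example -> exxample
    for i in range(len(name)):
        variants.add(name[:i + 1] + name[i] + name[i + 1:] + "." + tld)
    # Missing-dot: www.example.com typed as wwwexample.com
    variants.add("www" + name + "." + tld)
    variants.discard(domain)
    return variants

typos = typo_domains("example.com")
print(len(typos))
```

Multiplying a handful of models like these across thousands of popular domains is how the estimate reaches hundreds of thousands of candidate registrations.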

The paper appeared at the Financial Cryptography conference this year.

Posted on March 15, 2010 at 6:13 AM49 Comments

Wanted: Trust Detector

It’s good to dream:

IARPA’s five-year plan aims to design experiments that can measure trust with high certainty—a tricky proposition for a psychological study. Developing such experimental protocols could prove very useful for assessing levels of trust within one-on-one talks, or even during group interactions.

A second part of the IARPA proposal might involve using new types of sensors and software to gauge human facial, language or body signals that might help predict trustworthiness. Perhaps facial recognition technology that could deduce emotions or facial tics might help, not to mention better lie detectors.

IARPA is the Intelligence Advanced Research Projects Activity, the U.S. intelligence community’s answer to DARPA.

Posted on March 11, 2010 at 6:17 AM44 Comments

Nose Biometrics


Since they are hard to conceal, the study says, noses would work well for identification in covert surveillance.

The researchers say noses have been overlooked in the growing field of biometrics, studies into ways of identifying distinguishing traits in people.

“Noses are prominent facial features and yet their use as a biometric has been largely unexplored,” said the University of Bath’s Dr Adrian Evans.

“Ears have been looked at in detail, eyes have been looked at in terms of iris recognition but the nose has been neglected.”

The researchers used a system called PhotoFace, developed by researchers at the University of the West of England, Bristol and Imperial College, London, for the 3D scans.

Posted on March 10, 2010 at 1:47 PM43 Comments

The Limits of Identity Cards

Good legal paper on the limits of identity cards: Stephen Mason and Nick Bohm, “Identity and its Verification,” in Computer Law & Security Review, Volume 26, Number 1, Jan 2010.

Those faced with the problem of how to verify a person’s identity would be well advised to ask themselves the question, ‘Identity with what?’ An enquirer equipped with the answer to this question is in a position to tackle, on a rational basis, the task of deciding what evidence will be useful for the purpose. Without the answer to the question, the verification of identity becomes a sadly familiar exercise in blind compliance with arbitrary rules.

Posted on March 10, 2010 at 7:09 AM51 Comments

Marc Rotenberg on Google's Italian Privacy Case

Interesting commentary:

I don’t think this is really a case about ISP liability at all. It is a case about the use of a person’s image, without their consent, that generates commercial value for someone else. That is the essence of the Italian law at issue in this case. It is also how the right of privacy was first established in the United States.

The video at the center of this case was very popular in Italy and drove lots of users to the Google Video site. This boosted advertising and support for other Google services. As a consequence, Google actually had an incentive not to respond to the many requests it received before it actually took down the video.

Back in the U.S., here is the relevant history: after Brandeis and Warren published their famous article on the right to privacy in 1890, state courts struggled with its application. In a New York state case in 1902, a court rejected the newly proposed right. In a second case, a Georgia state court in 1905 endorsed it.

What is striking is that both cases involved the use of a person’s image without their consent. In New York, it was a young girl, whose image was drawn and placed on an oatmeal box for advertising purposes. In Georgia, a man’s image was placed in a newspaper, without his consent, to sell insurance.

Also important is the fact that the New York judge who rejected the privacy claim suggested that the state assembly could simply pass a law to create the right. The New York legislature did exactly that, and in 1903 New York enacted the first privacy law in the United States, protecting a person’s “name or likeness” from unauthorized commercial use.

The whole thing is worth reading.

EDITED TO ADD (3/18): A rebuttal.

Posted on March 9, 2010 at 12:36 PM23 Comments

Guide to Microsoft Police Forensic Services

The “Microsoft Online Services Global Criminal Compliance Handbook (U.S. Domestic Version)” (also can be found here, here, and here) outlines exactly what Microsoft will do upon police request. Here’s a good summary of what’s in it:

The Global Criminal Compliance Handbook is a quasi-comprehensive explanatory document meant for law enforcement officials seeking access to Microsoft’s stored user information. It also provides sample language for subpoenas and diagrams on how to understand server logs.

I call it “quasi-comprehensive” because, at a mere 22 pages, it doesn’t explore the nitty-gritty of Microsoft’s systems; it’s more like a data-hunting guide for dummies.

When it was first leaked, Microsoft tried to scrub it from the Internet. But they quickly realized that it was futile and relented.

Lots more information.

Posted on March 9, 2010 at 6:59 AM11 Comments

Google in The Onion


MOUNTAIN VIEW, CA—Responding to recent public outcries over its handling of private data, search giant Google offered a wide-ranging and eerily well-informed apology to its millions of users Monday.

“We would like to extend our deepest apologies to each and every one of you,” announced CEO Eric Schmidt, speaking from the company’s Googleplex headquarters. “Clearly there have been some privacy concerns as of late, and judging by some of the search terms we’ve seen, along with the tens of thousands of personal e-mail exchanges and Google Chat conversations we’ve carefully examined, it looks as though it might be a while before we regain your trust.”

Google expressed regret to some of its third-generation Irish-American users on Smithwood between Barlow and Lake.

Added Schmidt, “Whether you’re Michael Paulson who lives at 3425 Longview Terrace and makes $86,400 a year, or Jessica Goldblatt from Lynnwood, WA, who already has well-established trust issues, we at Google would just like to say how very, truly sorry we are.”

Posted on March 8, 2010 at 2:24 PM18 Comments

Eating a Flash Drive

How not to destroy evidence:

In a bold and bizarre attempt to destroy evidence seized during a federal raid, a New York City man grabbed a flash drive and swallowed the data storage device while in the custody of Secret Service agents, records show.

The article wasn’t explicit about this—odd, as it’s the main question any reader would have—but it seems that the man’s digestive tract did not destroy the evidence.

Posted on March 8, 2010 at 11:00 AM55 Comments

De-Anonymizing Social Network Users

Interesting paper: “A Practical Attack to De-Anonymize Social Network Users.”

Abstract. Social networking sites such as Facebook, LinkedIn, and Xing have been reporting exponential growth rates. These sites have millions of registered users, and they are interesting from a security and privacy point of view because they store large amounts of sensitive personal user data.

In this paper, we introduce a novel de-anonymization attack that exploits group membership information that is available on social networking sites. More precisely, we show that information about the group memberships of a user (i.e., the groups of a social network to which a user belongs) is often sufficient to uniquely identify this user, or, at least, to significantly reduce the set of possible candidates. To determine the group membership of a user, we leverage well-known web browser history stealing attacks. Thus, whenever a social network user visits a malicious website, this website can launch our de-anonymization attack and learn the identity of its visitors.

The implications of our attack are manifold, since it requires a low effort and has the potential to affect millions of social networking users. We perform both a theoretical analysis and empirical measurements to demonstrate the feasibility of our attack against Xing, a medium-sized social network with more than eight million members that is mainly used for business relationships. Our analysis suggests that about 42% of the users that use groups can be uniquely identified, while for 90%, we can reduce the candidate set to less than 2,912 persons. Furthermore, we explored other, larger social networks and performed experiments that suggest that users of Facebook and LinkedIn are equally vulnerable (although attacks would require more resources on the side of the attacker). An analysis of an additional five social networks indicates that they are also prone to our attack.
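The attack's core is just set intersection: once history sniffing reveals which groups a visitor belongs to, intersecting those groups' member lists collapses the candidate set. A toy illustration with invented data (nothing here comes from the paper's Xing crawl):

```python
# Hypothetical group membership lists, as a site operator could crawl them.
groups = {
    "vintage_synths": {"alice", "bob", "carol", "dave"},
    "alpine_climbing": {"bob", "carol", "erin"},
    "rust_meetup": {"carol", "dave", "frank"},
}

def candidates(visited_groups):
    """Users consistent with the visitor's (history-sniffed) group memberships."""
    members = [groups[g] for g in visited_groups]
    return set.intersection(*members) if members else set()

print(candidates(["vintage_synths", "alpine_climbing", "rust_meetup"]))  # {'carol'}
```

With real networks the group sets are vastly larger, but the principle scales: each additional group roughly divides the candidate pool.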

News article. Moral: anonymity is really, really hard—but we knew that already.

Posted on March 8, 2010 at 6:13 AM30 Comments

Comprehensive National Cybersecurity Initiative

On Tuesday, the White House published an unclassified summary of its Comprehensive National Cybersecurity Initiative (CNCI). Howard Schmidt made the announcement at the RSA Conference. These are the 12 initiatives in the plan:

  • Initiative #1. Manage the Federal Enterprise Network as a single network enterprise with Trusted Internet Connections.
  • Initiative #2. Deploy an intrusion detection system of sensors across the Federal enterprise.
  • Initiative #3. Pursue deployment of intrusion prevention systems across the Federal enterprise.
  • Initiative #4. Coordinate and redirect research and development (R&D) efforts.
  • Initiative #5. Connect current cyber ops centers to enhance situational awareness.
  • Initiative #6. Develop and implement a government-wide cyber counterintelligence (CI) plan.
  • Initiative #7. Increase the security of our classified networks.
  • Initiative #8. Expand cyber education.
  • Initiative #9. Define and develop enduring “leap-ahead” technology, strategies, and programs.
  • Initiative #10. Define and develop enduring deterrence strategies and programs.
  • Initiative #11. Develop a multi-pronged approach for global supply chain risk management.
  • Initiative #12. Define the Federal role for extending cybersecurity into critical infrastructure domains.

While this transparency is good, in this sort of thing the devil is in the details—and we don’t have any details. We also don’t have any information about the legal authority for cybersecurity, and how much the NSA is, and should be, involved. Good commentary on that here. EPIC is suing the NSA to learn more about its involvement.

Posted on March 4, 2010 at 12:55 PM17 Comments

Crypto Implementation Failure

Look at this new AES-encrypted USB memory stick. You enter the key directly into the stick via the keypad, thereby bypassing any eavesdropping software on the computer.

The problem is that in order to get full 256-bit entropy in the key, you need to enter 77 decimal digits using the keypad. I can’t imagine anyone doing that; they’ll enter an eight- or ten-digit key and call it done. (Likely, the password encrypts a random key that encrypts the actual data: not that it matters.) And even if you wanted to, is it reasonable to expect someone to enter 77 digits without making an error?
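The 77-digit figure is straightforward entropy arithmetic: each decimal digit contributes log2(10) ≈ 3.32 bits, so a full 256-bit key needs about 77 digits, while a ten-digit key delivers only about 33 bits:

```python
import math

bits_per_digit = math.log2(10)          # ~3.32 bits of entropy per decimal digit
digits_for_256 = 256 / bits_per_digit   # ~77 digits for a full 256-bit key
print(digits_for_256)
print(10 * bits_per_digit)              # a ten-digit key: only ~33 bits
```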

Nice idea, complete implementation failure.

EDITED TO ADD (3/4): According to the manual, the drive locks for two minutes after five unsuccessful attempts. This delay is enough to make brute-force attacks infeasible, even with only ten-digit keys.

So, not nearly as bad as I thought it was. Better would be a much longer delay after 100 or so unsuccessful attempts. Yes, there’s a denial-of-service attack against the thing, but stealing it is an even more effective denial-of-service attack.
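A back-of-the-envelope estimate of why the lockout matters, assuming an attacker who can automate keypad entry and never trips any harsher penalty:

```python
keyspace = 10 ** 10          # ten decimal digits
attempts_per_cycle = 5       # attempts allowed before each lockout
cycle_minutes = 2.0          # two-minute lockout per cycle

cycles = keyspace / attempts_per_cycle      # worst case: try every key
minutes = cycles * cycle_minutes
years = minutes / (60 * 24 * 365.25)
print(f"worst case ~ {years:,.0f} years")   # on the order of 7,600 years
```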

Posted on March 4, 2010 at 6:05 AM100 Comments

More on the Al-Mabhouh Assassination

Interesting essay by a former CIA field officer on the al-Mabhouh assassination:

The truth is that Mr. Mabhouh’s assassination was conducted according to the book—a military operation in which the environment is completely controlled by the assassins. At least 25 people are needed to carry off something like this. You need “eyes on” the target 24 hours a day to ensure that when the time comes he is alone. You need coverage of the police—assassinations go very wrong when the police stumble into the middle of one. You need coverage of the hotel security staff, the maids, the outside of the hotel. You even need people in back-up accommodations in the event the team needs a place to hide.

I found this conclusion incredible:

I can only speculate about where exactly the hit went wrong. But I would guess the assassins failed to account for the marked advance in technology.


Not completely understanding advances in technology may be one explanation for the assassins’ nonchalantly exposing their faces to the closed-circuit TV cameras, one female assassin even smiling at one…. The other explanation—the assassins didn’t care whether their faces were identified—doesn’t seem plausible at all.

Does he really think that this professional a team simply didn’t realize that there were security cameras in airports and hotels? I think that the “other explanation” is not only plausible, it’s obvious.

The number of suspects is now at 27, by the way. And:

Also Monday, the sources said the UAE central bank is working with other nations to track funding and 14 credit cards—issued mostly by a United States bank—used by the suspects in different places, including the United States.

We’ll see how well these people covered their tracks.

EDITED TO ADD (3/3): Speculation that it’s Egypt or Jordan. I don’t believe it.

EDITED TO ADD (3/5): More commentary on the tactics. Speculation that it was Mossad.

Posted on March 2, 2010 at 5:55 AM83 Comments

Sidebar photo of Bruce Schneier by Joe MacInnis.