Schneier on Security
A blog covering security and security technology.
March 2010 Archives
The New York Times has an article about cameras in the subways. The article is all about how horrible it is that the cameras don't work:
Moreover, nearly half of the subway system’s 4,313 security cameras that have been installed — in stations and tunnels throughout the system — do not work, because of either shoddy software or construction problems, say officials with the Metropolitan Transportation Authority, which operates the city’s bus, subway and train system.
I certainly agree that taxpayers should be upset when something they've purchased doesn't function as expected. But way down at the bottom of the article, we find:
Even without the cameras, officials said crime in the transit system had dropped to a record low. In 1990, the system averaged 47.8 crimes a day, compared with 5.3 so far this year. “The subway system is safer than it’s ever been,” said Kevin Ortiz, an authority spokesman.
No data on how many crimes were solved by cameras, but we know from other studies that their effect on crime is minimal.
Information technology is increasingly everywhere, and it's the same technologies everywhere. The same operating systems are used in corporate and government computers. The same software controls critical infrastructure and home shopping. The same networking technologies are used in every country. The same digital infrastructure underpins the small and the large, the important and the trivial, the local and the global; the same vendors, the same standards, the same protocols, the same applications.
With all of this sameness, you'd think these technologies would be designed to the highest security standard, but they're not. They're designed to the lowest or, at best, somewhere in the middle. They're designed sloppily, in an ad hoc manner, with efficiency in mind. Security is a requirement, more or less, but it's a secondary priority. It's far less important than functionality, and security is what gets compromised when schedules get tight.
Should the government -- ours, someone else's? -- stop outsourcing code development? That's the wrong question to ask. Code isn't magically more secure when it's written by someone who receives a government paycheck than when it's written by someone who receives a corporate paycheck. It's not magically less secure when it's written by someone who speaks a foreign language, or is paid by the hour instead of by salary. Writing all your code in-house isn't even a viable option anymore; we're all stuck with software written by who-knows-whom in who-knows-which-country. And we need to figure out how to get security from that.
The traditional solution has been defense in depth: layering one mediocre security measure on top of another mediocre security measure. So we have the security embedded in our operating system and applications software, the security embedded in our networking protocols, and our additional security products such as antivirus and firewalls. We hope that whatever security flaws -- either found and exploited, or deliberately inserted -- there are in one layer are counteracted by the security in another layer, and that when they're not, we can patch our systems quickly enough to avoid serious long-term damage. That is a lousy solution when you think about it, but we've been more-or-less managing with it so far.
Bringing all software -- and hardware, I suppose -- development in-house under some misconception that proximity equals security is not a better solution. What we need is to improve the software development process, so we can have some assurance that our software is secure -- regardless of what coder, employed by what company, and living in what country, writes it. The key word here is "assurance."
Assurance is less about developing new security techniques than about using the ones we already have. It's all the things described in books on secure coding practices. It's what Microsoft is trying to do with its Security Development Lifecycle. It's the Department of Homeland Security's Build Security In program. It's what every aircraft manufacturer goes through before it fields a piece of avionics software. It's what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems. But most of the time, we don't care; commercial software, as insecure as it is, is good enough for most purposes.
Assurance is expensive, in terms of money and time, for both the process and the documentation. But the NSA needs assurance for critical military systems and Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be more common in government IT contracts.
The software used to run our critical infrastructure -- government, corporate, everything -- isn't very secure, and there's no hope of fixing it anytime soon. Assurance is really our only option to improve this, but it's expensive and the market doesn't care. Government has to step in and spend the money where its requirements demand it, and then we'll all benefit when we buy the same software.
This essay first appeared in Information Security, as the second part of a point-counterpoint with Marcus Ranum. You can read Marcus's essay there as well.
According to new research:
The researchers found that subjects assigned leadership roles were buffered from the negative effects of lying. Across all measures, the high-power liars -- the leaders -- resembled truthtellers, showing no evidence of cortisol reactivity (which signals stress), cognitive impairment or feeling bad. In contrast, low-power liars -- the subordinates -- showed the usual signs of stress and slower reaction times. "Having power essentially buffered the powerful liars from feeling the bad effects of lying, from responding in any negative way or giving nonverbal cues that low-power liars tended to reveal," Carney explains.
Of course, we know why he's really there. He's really there so that if the bridge is destroyed by terrorists, the authorities can appear on the television news and say they had taken all possible precautions. Plus, if you employ a security guard, then I should imagine that your insurance premiums are going to be significantly lower.
EDITED TO ADD (4/13): Another Clarkson essay, this one on security theater.
The amazing story of Gerald Blanchard.
Thorough as ever, Blanchard had spent many previous nights infiltrating the bank to do recon or to tamper with the locks while James acted as lookout, scanning the vicinity with binoculars and providing updates via a scrambled-band walkie-talkie. He had put a transmitter behind an electrical outlet, a pinhole video camera in a thermostat, and a cheap baby monitor behind the wall. He had even mounted handles on the drywall panels so he could remove them to enter and exit the ATM room. Blanchard had also taken detailed measurements of the room and set up a dummy version in a friend's nearby machine shop. With practice, he had gotten his ATM-cracking routine down to where he needed only 90 seconds after the alarm tripped to finish and escape with his score.
A potential new forensic:
To determine how similar a person's fingertip bacteria are to bacteria left on computer keys, the team took swabs from three computer keyboards and compared bacterial gene sequences with those from the fingertips of the keyboard owners. Today in the Proceedings of the National Academy of Sciences, they conclude that enough bacteria can be collected from even small surfaces such as computer keys to link them with the hand that laid them down.
Here's a link to the abstract; the full paper is behind a paywall.
Catchy one-liner ("interesting," with link): In this part of the blog post, Bruce quotes something from the article he links to in the catchy phrase. It might be the abstract to an academic article, or the key points in a subject he's trying to get across. To get the post looking right, you have to include at least a decent-sized paragraph from the quoted source or otherwise it just looks like crap. So I will continue typing another sentence or two, until I have enough text to make this look like a legitimately quoted paragraph. See, now that wasn't so hard after all.
I don't always do this, but it's pretty common.
Modern photocopy machines contain hard drives that often have scans of old documents.
This matters when an office disposes of an old copier. It also matters if you make your copies at a commercial copy center like Kinko's.
Nice paper: "Side-Channel Leaks in Web Applications: a Reality Today, a Challenge Tomorrow," by Shuo Chen, Rui Wang, XiaoFeng Wang, and Kehuan Zhang.
Abstract. With software-as-a-service becoming mainstream, more and more applications are delivered to the client through the Web. Unlike a desktop application, a web application is split into browser-side and server-side components. A subset of the application's internal information flows are inevitably exposed on the network. We show that despite encryption, such a side-channel information leak is a realistic and serious threat to user privacy. Specifically, we found that surprisingly detailed sensitive information is being leaked out from a number of high-profile, top-of-the-line web applications in healthcare, taxation, investment and web search: an eavesdropper can infer the illnesses/medications/surgeries of the user, her family income and investment secrets, despite HTTPS protection; a stranger on the street can glean enterprise employees' web search queries, despite WPA/WPA2 Wi-Fi encryption. More importantly, the root causes of the problem are some fundamental characteristics of web applications: stateful communication, low entropy input for better interaction, and significant traffic distinctions. As a result, the scope of the problem seems industry-wide. We further present a concrete analysis to demonstrate the challenges of mitigating such a threat, which points to the necessity of a disciplined engineering practice for side-channel mitigations in future web application developments.
We already know that eavesdropping on an SSL-encrypted web session can leak a lot of information about the person's browsing habits. Since the size of both the page requests and the page downloads are different, an eavesdropper can sometimes infer which links the person clicked on and what pages he's viewing.
This paper extends that work. Ed Felten explains:
The new paper shows that this inference-from-size problem gets much, much worse when pages are using the now-standard AJAX programming methods, in which a web "page" is really a computer program that makes frequent requests to the server for information. With more requests to the server, there are many more opportunities for an eavesdropper to make inferences about what you're doing -- to the point that common applications leak a great deal of private information.
The paper goes on to talk about mitigation -- padding page requests and downloads to a constant size is the obvious one -- but they're difficult and potentially expensive.
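The size-matching inference at the heart of this attack is simple enough to sketch. Here's a toy illustration (the page names and byte counts are made up, not from the paper): an eavesdropper who has catalogued the transfer sizes of candidate pages can match an observed encrypted response against that catalogue.

```python
# Hypothetical sketch: encryption hides content but not length, so an
# eavesdropper can match observed transfer sizes against known pages.
# Page names and sizes below are invented for illustration.

candidate_pages = {
    "/home": 48210,
    "/condition/diabetes": 61350,
    "/condition/depression": 59874,
    "/search?q=refinance": 23991,
}

def best_guess(observed_size, pages, tolerance=200):
    """Return the candidate page whose catalogued size is closest to the
    observed transfer, or None if nothing is within the tolerance."""
    name, size = min(pages.items(), key=lambda kv: abs(kv[1] - observed_size))
    return name if abs(size - observed_size) <= tolerance else None

print(best_guess(59900, candidate_pages))  # "/condition/depression"
```

Padding every response to a constant size defeats this particular matching step, which is why the paper's mitigation discussion centers on it -- and why the cost is so high.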
James Fallows and I are being interviewed in Second Life tonight, 9:00 PM Eastern Time.
Sarcastic, yet a bit too close to the truth.
In this paper we revisit the assumption that shellcode need be fundamentally different in structure than non-executable data. Specifically, we elucidate how one can use natural language generation techniques to produce shellcode that is superficially similar to English prose. We argue that this new development poses significant challenges for inline payload-based inspection (and emulation) as a defensive measure, and also highlights the need for designing more efficient techniques for preventing shellcode injection attacks altogether.
Some movie-plot attacks actually happen:
They never touched the floor—that would have set off an alarm.
If a person on the no-fly list dies, his name could stay on the list so that the government can catch anyone trying to assume his identity.
But since a terrorist might assume anyone's identity, by the same logic we should put everyone on the no-fly list.
Otherwise, it's an interesting article on how the no-fly list works.
I have a new book, sort of. Cryptography Engineering is really the second edition of Practical Cryptography. Niels Ferguson and I wrote Practical Cryptography in 2003. Tadayoshi Kohno did most of the update work—and added exercises to make it more suitable as a textbook—and is the third author on Cryptography Engineering. (I didn't like it that Wiley changed the title; I think it's too close to Ross Anderson's excellent Security Engineering.)
Cryptography Engineering is a techie book; it's for practitioners who are implementing cryptography or for people who want to learn more about the nitty-gritty of how cryptography works and what the implementation pitfalls are. If you've already bought Practical Cryptography, there's no need to upgrade unless you're actually using it.
EDITED TO ADD (3/23): Signed copies are available. See the bottom of this page for details.
EDITED TO ADD (3/29): In comments, someone asked what's new in this book.
We revised the introductory materials in Chapter 1 to help readers better understand the broader context for computer security, with some explicit exercises to help readers develop a security mindset. We updated the discussion of AES in Chapter 3; rather than speculating on algebraic attacks, we now talk about the recent successful (theoretical, not practical) attacks against AES. Chapter 4 used to recommend using nonce-based encryption schemes. We now find these schemes problematic, and instead recommend randomized encryption schemes, like CBC mode. We updated the discussion of hash functions in Chapter 5; we discuss new results against MD5 and SHA1, and allude to the new SHA3 candidates (but say it's too early to start using the SHA3 candidates). In Chapter 6, we no longer talk about UMAC, and instead talk about CMAC and GMAC. We revised Chapters 8 and 15 to talk about some recent implementation issues to be aware of. For example, we now talk about the cold boot attacks and challenges for generating randomness in VMs. In Chapter 19, we discuss online certificate verification.
In British Columbia:
When Auditor-General John Doyle and his staff investigated the security of electronic record-keeping at the Vancouver Coastal Health Authority, they found trouble everywhere they looked.
While this report is from Canada, the same issues apply to any electronic patient record system in the U.S. What I find really interesting is that the Canadian government actually conducted a security analysis of the system, rather than just maintaining that everything would be fine. I wish the U.S. would do something similar.
The report, "The PARIS System for Community Care Services: Access and Security," is here.
The United States Computer Emergency Readiness Team (US-CERT) has warned that the software included in the Energizer DUO USB battery charger contains a backdoor that allows unauthorized remote system access.
That's actually misleading. Even though the charger is a USB device, it does not contain the harmful installer described in the article—it has no storage capacity. The software has to be downloaded from the Energizer website, and the software is only used to monitor the progress of the charge. The software is not needed for the device to function properly.
Here are details.
Energizer has announced it will pull the software from its website, and also will stop selling the device.
MS Word has been dethroned:
Files based on Reader were exploited in almost 49 per cent of the targeted attacks of 2009, compared with about 39 per cent that took aim at Microsoft Word. By comparison, in 2008, Acrobat was targeted in almost 29 per cent of attacks and Word was exploited by almost 35 per cent.
This, from a former CIA chief of station:
The point is that in this day and time, with ubiquitous surveillance cameras, the ability to comprehensively analyse patterns of cell phone and credit card use, computerised records of travel documents which can be shared in the blink of an eye, the growing use of biometrics and machine-readable passports, and the ability of governments to share vast amounts of travel and security-related information almost instantaneously, it is virtually impossible for clandestine operatives not to leave behind a vast electronic trail which, if and when there is reason to examine it in detail, will amount to a huge body of evidence.
A not-terribly flattering article about Mossad:
It would be surprising if a key part of this extraordinary story did not turn out to be the role played by Palestinians. It is still Mossad practice to recruit double agents, just as it was with the PLO back in the 1970s. News of the arrest in Damascus of another senior Hamas operative -- though denied by Mash'al -- seems to point in this direction. Two other Palestinians extradited from Jordan to Dubai are members of the Hamas armed wing, the Izzedine al-Qassam brigades, suggesting treachery may indeed have been involved. Previous assassinations have involved a Palestinian agent identifying the target.
There's no proof, of course, that Mossad was behind this operation. But the author is certainly right that the Palestinians believe that Mossad was behind it.
The Cold Spy lists what he sees as the mistakes made:
1. Using passport names of real people not connected with the operation.
I disagree with a bunch of those.
EDITED TO ADD (4/13): The Cold Spy responds in comments. Actually, there's lots of interesting discussion in the comments.
For several years von Hagens and his team experimented using smaller squid, and found that the fragility of the skin needed a slower replacement process than other animal specimens.
This would worry me, if the liquid ban weren't already useless.
The reporter found the security flaw in the airport's duty-free shopping system. At Schiphol airport, passengers flying to countries outside the Schengen Agreement Area can buy bottles of alcohol at duty-free shops before going through security. They are then permitted to take these bottles onto flights, provided that they have the bottles sealed at the shop.
The flaw, of course, is the assumption that bottles bought at a duty-free shop actually come from the duty-free shop.
But note that 1) it's the same airport the underwear bomber flew from, 2) the reporter is known for trying to defeat airport security, and 3) body scanners would have made no difference.
Watch the TV program here.
Psychologist Jeremy Ginges and his colleagues identified this backfire effect in studies of the Israeli-Palestinian conflict in 2007. They interviewed both Israelis and Palestinians who possessed sacred values toward key issues such as ownership over disputed territories like the West Bank or the right of Palestinian refugees to return to villages they were forced to leave—these people viewed compromise on these issues as completely unacceptable. Ginges and colleagues found that individuals offered a monetary payout to compromise their values expressed more moral outrage and were more supportive of violent opposition toward the other side. Opposition decreased, however, when the other side offered to compromise on a sacred value of its own, such as Israelis formally renouncing their right to the West Bank or Palestinians formally recognizing Israel as a state. Ginges and Scott Atran found similar evidence of this backfire effect with Indonesian madrassah students, who expressed less willingness to compromise their belief in sharia, strict Islamic law, when offered a material incentive.
Who didn't see this coming?
More than 100 drivers in Austin, Texas found their cars disabled or the horns honking out of control, after an intruder ran amok in a web-based vehicle-immobilization system normally used to get the attention of consumers delinquent in their auto payments.
Using insider knowledge the two hacked into software that controlled remote betting machines on live roulette wheels, the report said.
I'd like to know how they got caught.
EDITED TO ADD (4/17): They got their math wrong:
However, the scheme came unstuck after an alert cashier noticed a winning slip for £600 for a £10 bet at odds of 35-1. The casino launched an investigation that unearthed a string of other suspicious bets, traced back to Ashley and Bhagat, IT contractors working at the casino at the time of the scam.
Analysing our data for security, though, shows that essentially all human-generated names provide poor resistance to guessing. For an attacker looking to make three guesses per personal knowledge question (for example, because this triggers an account lock-down), none of the name distributions we looked at gave more than 8 bits of effective security except for full names. That is, about at least 1 in 256 guesses would be successful, and 1 in 84 accounts compromised. For an attacker who can make more than 3 guesses and wants to break into 50% of available accounts, no distributions gave more than about 12 bits of effective security. The actual values vary in some interesting ways -- South Korean names are much easier to guess than American ones, female first names are harder than male ones, pet names are slightly harder than human names, and names are getting harder to guess over time.
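Those numbers are easy to sanity-check. Assuming "n bits of effective security" means each of the attacker's best guesses succeeds with probability about 2^-n (my reading of the quote, not the paper's exact definition), the compromise rate for a k-guess attacker falls out directly:

```python
# Rough check of the quoted figures: with n bits of effective security,
# assume each of the attacker's k best guesses succeeds with probability
# about 2**-n. This is a back-of-envelope model, not the paper's math.

def success_rate(bits, guesses):
    return min(1.0, guesses * 2 ** -bits)

p = success_rate(8, 3)       # 8 bits of security, 3 guesses before lockdown
print(f"{p:.4f}")            # 0.0117 -- a bit over 1%
print(round(1 / p))          # 85 -- roughly the quoted "1 in 84" accounts
```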
I've written about this problem.
EDITED TO ADD (4/13): xkcd on the secret question.
Here's a promotional security product designed by someone who knows nothing about security. The USB drive is "protected" by a combination lock. There are only two dials, so there are only 100 possible combinations. And when the drive is "locked" and the connector is retracted, the contacts are still accessible.
Maybe it should be given away by companies that sell security theater.
"Measuring the Perpetrators and Funders of Typosquatting," by Tyler Moore and Benjamin Edelman:
Abstract. We describe a method for identifying "typosquatting", the intentional registration of misspellings of popular website addresses. We estimate that at least 938 000 typosquatting domains target the top 3 264 .com sites, and we crawl more than 285 000 of these domains to analyze their revenue sources. We find that 80% are supported by pay-per-click ads often advertising the correctly spelled domain and its competitors. Another 20% include static redirection to other sites. We present an automated technique that uncovered 75 otherwise legitimate websites which benefited from direct links from thousands of misspellings of competing websites. Using regression analysis, we find that websites in categories with higher pay-per-click ad prices face more typosquatting registrations, indicating that ad platforms such as Google AdWords exacerbate typosquatting. However, our investigations also confirm the feasibility of significantly reducing typosquatting. We find that typosquatting is highly concentrated: Of typo domains showing Google ads, 63% use one of five advertising IDs, and some large name servers host typosquatting domains as much as four times as often as the web as a whole.
The paper appeared at the Financial Cryptography conference this year.
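To get a feel for how many candidate typo domains a single popular name generates, here's a quick sketch of the standard one-edit enumeration (deletions, substitutions, adjacent swaps, insertions). This is my own illustration of the general idea, not the crawling methodology from the paper:

```python
import string

def typo_domains(name, tld=".com"):
    """Generate all one-edit typos of a domain name: deletions,
    substitutions, adjacent transpositions, and insertions."""
    letters = string.ascii_lowercase
    variants = set()
    for i in range(len(name)):
        variants.add(name[:i] + name[i+1:])                        # delete char i
        for c in letters:
            variants.add(name[:i] + c + name[i+1:])                # substitute char i
    for i in range(len(name) - 1):
        variants.add(name[:i] + name[i+1] + name[i] + name[i+2:])  # swap adjacent chars
    for i in range(len(name) + 1):
        for c in letters:
            variants.add(name[:i] + c + name[i:])                  # insert before position i
    variants.discard(name)                                         # drop the correct spelling
    return sorted(v + tld for v in variants if v)

typos = typo_domains("example")
print(len(typos))                    # several hundred candidates from one short name
print("exmaple.com" in typos)        # True -- an adjacent transposition
```

Even a seven-letter name yields hundreds of plausible registrations, which helps explain the 938 000 figure against only 3 264 target sites.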
This makes no sense to me, even though -- I suppose -- it's a squid cryptography joke.
This one on simple-talk.com.
Over at Wikibooks, they're trying to write an open source cryptography textbook.
It's good to dream:
IARPA's five-year plan aims to design experiments that can measure trust with high certainty -- a tricky proposition for a psychological study. Developing such experimental protocols could prove very useful for assessing levels of trust within one-on-one talks, or even during group interactions.
IARPA is the Intelligence Advanced Research Projects Activity, the U.S. intelligence community's answer to DARPA.
Since they are hard to conceal, the study says, noses would work well for identification in covert surveillance.
Good legal paper on the limits of identity cards: Stephen Mason and Nick Bohm, "Identity and its Verification," in Computer Law & Security Review, Volume 26, Number 1, Jan 2010.
Those faced with the problem of how to verify a person's identity would be well advised to ask themselves the question, 'Identity with what?' An enquirer equipped with the answer to this question is in a position to tackle, on a rational basis, the task of deciding what evidence will be useful for the purpose. Without the answer to the question, the verification of identity becomes a sadly familiar exercise in blind compliance with arbitrary rules.
I don't think this is really a case about ISP liability at all. It is a case about the use of a person's image, without their consent, that generates commercial value for someone else. That is the essence of the Italian law at issue in this case. It is also how the right of privacy was first established in the United States.
The whole thing is worth reading.
EDITED TO ADD (3/18): A rebuttal.
The "Microsoft Online Services Global Criminal Compliance Handbook (U.S. Domestic Version)" (also can be found here, here, and here) outlines exactly what Microsoft will do upon police request. Here's a good summary of what's in it:
The Global Criminal Compliance Handbook is a quasi-comprehensive explanatory document meant for law enforcement officials seeking access to Microsoft's stored user information. It also provides sample language for subpoenas and diagrams on how to understand server logs.
When it was first leaked, Microsoft tried to scrub it from the Internet. But they quickly realized that it was futile and relented.
MOUNTAIN VIEW, CA—Responding to recent public outcries over its handling of private data, search giant Google offered a wide-ranging and eerily well-informed apology to its millions of users Monday.
How not to destroy evidence:
In a bold and bizarre attempt to destroy evidence seized during a federal raid, a New York City man grabbed a flash drive and swallowed the data storage device while in the custody of Secret Service agents, records show.
The article wasn't explicit about this -- odd, as it's the main question any reader would have -- but it seems that the man's digestive tract did not destroy the evidence.
Interesting paper: "A Practical Attack to De-Anonymize Social Network Users."
Abstract. Social networking sites such as Facebook, LinkedIn, and Xing have been reporting exponential growth rates. These sites have millions of registered users, and they are interesting from a security and privacy point of view because they store large amounts of sensitive personal user data.
Squid teapot. Could be squiddier.
I gave this one two days ago, at the RSA Conference.
On Tuesday, the White House published an unclassified summary of its Comprehensive National Cybersecurity Initiative (CNCI). Howard Schmidt made the announcement at the RSA Conference. These are the 12 initiatives in the plan:
While this transparency is good, in this sort of thing the devil is in the details -- and we don't have any details. We also don't have any information about the legal authority for cybersecurity, and how much the NSA is, and should be, involved. Good commentary on that here. EPIC is suing the NSA to learn more about its involvement.
Look at this new AES-encrypted USB memory stick. You enter the key directly into the stick via the keypad, thereby bypassing any eavesdropping software on the computer.
The problem is that in order to get full 256-bit entropy in the key, you need to enter 77 decimal digits using the keypad. I can't imagine anyone doing that; they'll enter an eight- or ten-digit key and call it done. (Likely, the password encrypts a random key that encrypts the actual data: not that it matters.) And even if you wanted to, is it reasonable to expect someone to enter 77 digits without making an error?
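The 77-digit figure comes from the fact that each decimal digit carries about 3.3 bits of entropy, so 77 digits gets you just shy of 256 bits while a realistic ten-digit PIN gets you about 33:

```python
import math

# Each decimal digit contributes log2(10) bits of entropy if chosen uniformly.
bits_per_digit = math.log2(10)

print(round(77 * bits_per_digit, 1))   # 255.8 -- 77 digits is essentially a 256-bit key
print(round(10 * bits_per_digit, 1))   # 33.2  -- what a ten-digit PIN actually gives you
```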
Nice idea, complete implementation failure.
EDITED TO ADD (3/4): According to the manual, the drive locks for two minutes after five unsuccessful attempts. This delay is enough to make brute-force attacks infeasible, even with only ten-digit keys.
So, not nearly as bad as I thought it was. Better would be a much longer delay after 100 or so unsuccessful attempts. Yes, there's a denial-of-service attack against the thing, but stealing it is an even more effective denial-of-service attack.
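The arithmetic behind "infeasible" is easy to check. This sketch assumes the attacker is limited only by the lockout -- five tries, then a two-minute pause -- and has to sweep the whole ten-digit space in the worst case:

```python
# Worst-case time to exhaust a ten-digit PIN when the device locks
# for two minutes after every five wrong attempts (guess time itself
# is ignored, which only makes this an underestimate).
combos = 10 ** 10
minutes = (combos / 5) * 2
years = minutes / (60 * 24 * 365)
print(round(years))   # 7610 -- millennia, even for a "weak" ten-digit key
```

The expected time is half that, but the conclusion is the same: the lockout, not the key length, is what makes brute force impractical here.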
Similar sentiment from Newsweek.
Interesting essay by a former CIA field officer on the al-Mabhouh assassination:
The truth is that Mr. Mabhouh's assassination was conducted according to the book -- a military operation in which the environment is completely controlled by the assassins. At least 25 people are needed to carry off something like this. You need "eyes on" the target 24 hours a day to ensure that when the time comes he is alone. You need coverage of the police -- assassinations go very wrong when the police stumble into the middle of one. You need coverage of the hotel security staff, the maids, the outside of the hotel. You even need people in back-up accommodations in the event the team needs a place to hide.
I found this conclusion incredible:
I can only speculate about where exactly the hit went wrong. But I would guess the assassins failed to account for the marked advance in technology.
Does he really think that this professional a team simply didn't realize that there were security cameras in airports and hotels? I think that the "other explanation" is not only plausible, it's obvious.
The number of suspects is now at 27, by the way. And:
Also Monday, the sources said the UAE central bank is working with other nations to track funding and 14 credit cards -- issued mostly by a United States bank -- used by the suspects in different places, including the United States.
We'll see how well these people covered their tracks.
EDITED TO ADD (3/3): Speculation that it's Egypt or Jordan. I don't believe it.
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.