Schneier on Security
A blog covering security and security technology.
June 2011 Archives
This is a really weird story:
After setting up its own cyber-warfare team, China's military has now developed its first online war game aimed at improving combat skills and battle awareness, state press said Wednesday.
How is this different from any of the dozens of other first-person shooter games with realistic weapons?
And does "training" on these games really translate into the real world?
EDITED TO ADD (7/13): The original story by China Daily is more detailed and easier to follow.
I'm really getting tired of stories like this:
Computer disks and USB sticks were dropped in parking lots of government buildings and private contractors, and 60% of the people who picked them up plugged the devices into office computers. And if the drive or CD had an official logo on it, 90% were installed.
Of course people plugged in USB sticks and computer disks. It's like "75% of people who picked up a discarded newspaper on the bus read it." What else are people supposed to do with them?
And this is not the right response:
Mark Rasch, director of network security and privacy consulting for Falls Church, Virginia-based Computer Sciences Corp., told Bloomberg: "There's no device known to mankind that will prevent people from being idiots."
Maybe it would be the right response if 60% of people tried to play the USB sticks like ocarinas, or tried to make omelettes out of the computer disks. But not if they plugged them into their computers. That's what they're for.
People get USB sticks all the time. The problem isn't that people are idiots, or that they should somehow know that a USB stick found on the street is automatically bad and a USB stick given away at a trade show is automatically good. The problem is that the OS trusts random USB sticks. The problem is that the OS will automatically run a program that can install malware from a USB stick. The problem is that it isn't safe to plug a USB stick into a computer.
Quit blaming the victim. They're just trying to get by.
EDITED TO ADD (7/4): As of February of this year, Windows no longer supports AutoRun for USB drives.
There's some great data on common iPhone passwords. I'm sure the results also apply to banking PINs.
Chris Cosentino, chef at Incanto in San Francisco, wants to serve you Humboldt squid.
Here's someone who is selling positive feedback on eBay:
Hello, for sale is a picture of a tree. This tree is an original and was taken by me. I have gotten nothing but 100% feedback from people from this picture. Great Picture! Once payment is made I will send you picture via email. Once payment is made and I send picture through email 100% feedback will be given to the buyer!!!! Once you pay for the item send me a ebay message with your email and I will email you the picture!
It's a new world:
An armed Valdez, 36, held a woman hostage at a motel in a tense 16-hour, overnight standoff with SWAT teams, all while finding time to keep his family and friends updated on Facebook.
AppFence is a technology -- with a working prototype -- that protects personal information on smart phones. It does this by either substituting innocuous information in place of sensitive information or blocking attempts by the application to send the sensitive information over the network.
The significance of systems like AppFence is that they have the potential to change the balance of power in privacy between mobile application developers and users. Today, application developers get to choose what information an application will have access to, and the user faces a take-it-or-leave-it proposition: users must either grant all the permissions requested by the application developer or abandon installation. Take-it-or-leave-it offers may make it easier for applications to obtain access to information that users don't want applications to have. Many applications take advantage of this to gain access to users' device identifiers and location for behavioral tracking and advertising. Systems like AppFence could make it harder for applications to access these types of information without more explicit consent and cooperation from users.
The problem is that the mobile OS providers might not like AppFence. Google probably doesn't care, but Apple is one of the biggest consumers of iPhone personal information. Right now, the prototype only works on Android, because it requires flashing the phone. In theory, the technology can be made to work on any mobile OS, but good luck getting Apple to agree to it.
National Security Agency (NSA) SIGINT Reporter's Style and Usage Manual, 2010.
Protecting against insiders is hard.
Kluger and two accomplices -- a Wall Street trader and a mortgage broker -- allegedly stole and traded on material nonpublic information about M&A deals over a period of 17 years, according to federal authorities. The trio, facing charges from the U.S. Securities and Exchange Commission and the Department of Justice, allegedly made at least $32 million from the trades....
Many of our informal security systems involve convincing others to do what we want them to. Here's a theory that says human reasoning evolved not as a tool to better understand the world or solve problems, but to win arguments and persuade other humans. (Paper here.)
Nice article on Firesheep in action.
As my regular readers already know, I'm in the process of writing my next book. It's a book about why security exists: specifically, how a group of people protects itself from individuals within that group. My working title has been The Dishonest Minority. The idea behind the title is that "honesty" is defined by social convention, and that those who don't follow the social conventions are by definition dishonest.
In my second blog post about the book, there was a lot of commentary about the word "dishonest." The problem is that there are two kinds of dishonest people: those who are selfish, and those who are differently moral than the rest of society. So the word has to apply to both burglars and abolitionists. It has to apply to a criminal within society as a whole, and a police informant within a society of criminals. It has to apply to people who don't pay their taxes because they're selfish, and those who don't pay because they are morally opposed to what the government is doing with the money. It has to apply to both Bernie Madoff and Gandhi.
It's true that it's a bit pejorative to use the word "dishonest" to describe both Madoff and Gandhi. But I can't think of a better word. Here are some options:
I don't really like any of them.
Another option is to explicitly call out the two different types:
Alliteration is always a plus. Biblical references I'm less sure about.
I like this general concept for a title, because the potential reader will be intrigued by how the two are related. They're both "transgressors," which might be a good word for the title.
Or the word alone:
The subtitle is still one of these:
In general, I like an exciting title paired with a descriptive subtitle. But I'm willing to be convinced otherwise.
Remember, the goal of a title is to make people -- people who don't already know me and my writing -- want to read my book.
Question 1: What do you think of the title options? What other words would work, either in the "adjective noun" title style, or the "A, B, and other Cs" style? What other completely different titles or subtitles would work?
Next: cover options. I'm not sure how much a book cover matters anymore, now that my books will primarily be sold from online stores and in ebook formats. But I'd like a cover that doesn't suck. And it's hard. "Security" is a concept that's full of trite metaphors, and it's hard to come up with a picture that really captures what I am writing about. (Maybe this one.) Below are five options that my publisher has sent me.
Note that the stock photos sometimes have watermarks, or are shown in artificially reduced resolution. If we actually use one of the photos, those artifacts will disappear.
Question 2: What do you think of the cover options: the stock photos, the typefaces, the colors, the overall layout of the cover? Will any of those work, or do we have to go back to the drawing board?
I appreciate your opinions. Please first give them to me cold, without reading the other comments. Then feel free to comment on what other people think.
Good paper: "Sex, Lies and Cyber-crime Surveys," Dinei Florêncio and Cormac Herley, Microsoft Research.
Abstract: Much of the information we have on cyber-crime losses is derived from surveys. We examine some of the difficulties of forming an accurate estimate by survey. First, losses are extremely concentrated, so that representative sampling of the population does not give representative sampling of the losses. Second, losses are based on unverified self-reported numbers. Not only is it possible for a single outlier to distort the result, we find evidence that most surveys are dominated by a minority of responses in the upper tail (i.e., a majority of the estimate is coming from as few as one or two responses). Finally, the fact that losses are confined to a small segment of the population magnifies the difficulties of refusal rate and small sample sizes. Far from being broadly-based estimates of losses across the population, the cyber-crime estimates that we have appear to be largely the answers of a handful of people extrapolated to the whole population. A single individual who claims $50,000 losses, in an N=1000 person survey, is all it takes to generate a $10 billion loss over the population. One unverified claim of $7,500 in phishing losses translates into $1.5 billion.
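The abstract's arithmetic is worth making explicit. This is my own back-of-the-envelope sketch, not the authors' code; the 200-million figure is an assumed adult-population size used only for illustration:

```python
# Sketch of how survey extrapolation works (my construction): the
# per-respondent mean loss is scaled up to the whole population, so a
# single self-reported outlier can dominate the national estimate.
def extrapolate(claimed_loss, sample_size, population=200_000_000):
    """Scale one respondent's self-reported loss to the population.

    population=200_000_000 is an assumed adult-population figure,
    chosen only to reproduce the abstract's numbers.
    """
    return claimed_loss / sample_size * population

# One $50,000 claim in an N=1000 survey generates a $10 billion loss.
ten_billion = extrapolate(50_000, 1_000)
# One unverified $7,500 phishing claim translates into $1.5 billion.
one_and_a_half_billion = extrapolate(7_500, 1_000)
```

Nothing about the extrapolation is verified; the whole national estimate hinges on whether that one respondent told the truth.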
I've been complaining about our reliance on self-reported statistics for cyber-crime.
Current aviation security procedures screen all passengers uniformly. Varying the amount of screening individuals receive based on an assessment of their relative risk has the potential to reduce the security burdens on some travelers, while improving security overall. This paper examines the security costs and benefits of a trusted traveler program, in which individuals who have been identified as posing less risk than others are allowed to pass through security with reduced screening. This allows security resources to be shifted from travelers who have been identified as low risk to the remaining unknown-risk population. However, fears that terrorists may exploit trusted traveler programs have dissuaded adoption of such programs. This analysis estimates the security performance of a trusted traveler program in the presence of attacker attempts to compromise it. It finds that, although these attempts would reduce the maximum potential security benefits of a program, they would not eliminate those benefits in all circumstances.
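The trade-off the paper analyzes can be sketched with a toy model. This is my construction, not the paper's; all the probabilities below are illustrative placeholders, not estimates from the study:

```python
# Toy model of a trusted traveler program under attacker infiltration
# (illustrative numbers only): enrolled travelers get lighter screening,
# and the freed-up resources make screening heavier for everyone else.
def expected_detection(q_infiltrate, d_light, d_heavy):
    """Chance an attacker is caught, given probability q_infiltrate of
    obtaining trusted status, detection rate d_light under the lighter
    screening, and d_heavy under the intensified screening."""
    return q_infiltrate * d_light + (1 - q_infiltrate) * d_heavy

baseline = 0.50                                   # uniform screening
with_program = expected_detection(0.3, 0.20, 0.65)
# 0.3 * 0.20 + 0.7 * 0.65 = 0.515: infiltration eats into the benefit,
# but in this example the program still beats the uniform baseline.
```

The qualitative point matches the paper's conclusion: infiltration attempts reduce the maximum benefit without necessarily eliminating it.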
I'm at SHB 2011, the fourth Interdisciplinary Workshop on Security and Human Behavior, at Carnegie Mellon University. This is a two-day invitational gathering of computer security researchers, psychologists, behavioral economists, sociologists, political scientists, anthropologists, philosophers, and others -- all of whom are studying the human side of security -- organized by Alessandro Acquisti, Ross Anderson, and me. It's not just an interdisciplinary conference; most of the people here are individually interdisciplinary. For the past four years, this has been the most intellectually stimulating conference I have attended.
Ross Anderson is liveblogging this event. Matt Blaze is taping the sessions; I'll link to them if he puts them up on the Internet.
One of the pleasant side effects of being too busy to write longer blog posts is that -- if I wait long enough -- someone else writes what I would have wanted to.
The ruling in the Patco Construction vs. People's United Bank case is important, because the judge basically ruled that the bank's substandard security was good enough -- and Patco is stuck paying for the fraud that was a result of that substandard security. The details are important, and Brian Krebs has written an excellent summary.
EDITED TO ADD (7/13): Krebs also writes about a case going in the opposite direction in a Michigan court.
EDITED TO ADD (7/13): A similar article from The Economist.
This is not a good development.
It turns out that "fill-in-the-bubble" forms are not so anonymous.
Worth reading: Morgan Leigh Manning, "Less than Picture Perfect: The Legal Relationship between Photographers' Rights and Law Enforcement," Tennessee Law Review, Vol. 78, p. 105, 2010.
Abstract: Threats to national security and public safety, whether real or perceived, result in an atmosphere conducive to the abuse of civil liberties. History is littered with examples: The Alien and Sedition Acts of 1798, the suspension of habeas corpus during the Civil War, the Palmer Raids during World War I, and McCarthyism in the aftermath of World War II. Unfortunately, the post-9/11 world represents no departure from this age-old trend. Evidence of post-9/11 tension between national security and civil liberties is seen in the heightened regulation of photography; scholars have labeled it the "War on Photography" - a conflict between law enforcement officials and photographers over the right to take pictures in public places. A simple Google search reveals countless incidents of overzealous law enforcement officials detaining or arresting photographers and, in many cases, confiscating their cameras and memory cards, despite the fact that these individuals were in lawful places, at lawful times, partaking in lawful activities.
Last night, at the Third EPIC Champion of Freedom Awards Dinner, we gave an award to Susie Castillo, whose blog post and video of her treatment at the hands of the TSA have inspired thousands to complain about the agency and its treatment of travelers.
Sitting with her at dinner, I learned yet another way to evade the TSA's full body scanners: carry a small pet. She regularly travels with her small dog, and has found that she is always directed away from the full-body scanners and through the magnetometers. I suspect that the difficulty of keeping the dog still is why TSA makes that determination. (The carrier, of course, goes through the x-ray machine.)
I'm not sure what the TSA is going to do now that I've publicized this unpublished exception. Those of you who travel with small pets: please let me know what happens.
(For those of you who are appalled that I could give the terrorists ideas on how to evade the full-body scanners, there are already so many ways that one more can't hurt.)
I've been asked this question by countless reporters in the past couple of weeks. Here's a good explanation. Shorter answer: it's easy to spoof source addresses, and it's easy to hijack unsuspecting middlemen and use them as proxies.
No, mandating attribution won't solve the problem. Any Internet design will necessarily include anonymity.
EDITED TO ADD (6/12): Adam Shostack's rant about Patrick Gray's rant.
Iscon's patented, thermo-conductive technology combines infrared (IR) and heat transfer, for high-resolution imaging without using any radiation. The core of this is state of the art imaging which detects and processes a break in the established thermal balance between the clothes and a hidden object. The IR camera detects the heat radiating from even a tiny object, producing a dark/light shape. It is irrelevant how long an object is concealed under clothing as a new temperature imprint is created every time it is scanned. Using IR, the rays don't penetrate beyond the clothing so there are no privacy issues.
EDITED TO ADD (6/14): Another article.
I know no details.
Interesting research: Kirill Levchenko, et al. (2011), "Click Trajectories -- End-to-End Analysis of the Spam Value Chain," IEEE Symposium on Security and Privacy 2011, Oakland, California, 24 May 2011.
Abstract: Spam-based advertising is a business. While it has engendered both widespread antipathy and a multi-billion dollar anti-spam industry, it continues to exist because it fuels a profitable enterprise. We lack, however, a solid understanding of this enterprise's full structure, and thus most anti-spam interventions focus on only one facet of the overall spam value chain (e.g., spam filtering, URL blacklisting, site takedown). In this paper we present a holistic analysis that quantifies the full set of resources employed to monetize spam email -- including naming, hosting, payment and fulfillment -- using extensive measurements of three months of diverse spam data, broad crawling of naming and hosting infrastructures, and over 100 purchases from spam-advertised sites. We relate these resources to the organizations who administer them and then use this data to characterize the relative prospects for defensive interventions at each link in the spam value chain. In particular, we provide the first strong evidence of payment bottlenecks in the spam value chain; 95% of spam-advertised pharmaceutical, replica and software products are monetized using merchant services from just a handful of banks.
It's a surprisingly small handful of banks:
All told, they saw 13 banks handling 95% of the 76 orders for which they received transaction information. (Only one U.S. bank was seen settling spam transactions: Wells Fargo.) But just three banks handled the majority of transactions: Azerigazbank in Azerbaijan, DnB NOR in Latvia (although the bank is headquartered in Norway), and St. Kitts-Nevis-Anguilla National Bank in the Caribbean. In addition, "most herbal and replica purchases cleared through the same bank in St. Kitts, ... while most pharmaceutical affiliate programs used two banks (in Azerbaijan and Latvia), and software was handled entirely by two banks (in Latvia and Russia)," they said.
This points to a fruitful avenue to reduce spam: go after the banks.
Here's an older paper on the economics of spam.
I have no idea if this is true:
In some cases, popular illegal forums used by cyber criminals as marketplaces for stolen identities and credit card numbers have been run by hacker turncoats acting as FBI moles. In others, undercover FBI agents posing as "carders" -- hackers specialising in ID theft -- have themselves taken over the management of crime forums, using the intelligence gathered to put dozens of people behind bars.
But if I were the FBI, I would want everyone to believe that it's true.
Here's a new law that won't work:
State lawmakers in country music's capital have passed a groundbreaking measure that would make it a crime to use a friend's login -- even with permission -- to listen to songs or watch movies from services such as Netflix or Rhapsody.
MI6 hacked into an online al-Qaeda magazine and replaced bomb-making instructions with a cupcake recipe.
It's a more polite hack than subtly altering the recipe so it blows up during the making process. (I've been told, although I don't know for sure, that the 1971 Anarchist Cookbook has similarly flawed recipes.)
The rebuild team had only a few photographs, partial circuit diagrams and the fading memories of a few original Tunny operators to go on. Nonetheless a team led by John Pether and John Whetter was able to complete this restoration work.
Now they have a working Tunny to complement their working Colossus and working Bombe.
Daniel Solove on the security vs. privacy debate.
At first glance, this seems like a particularly dumb opening line of an article:
Open-source software may not sound compatible with the idea of strong cybersecurity, but....
But it's not. Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They'll figure out how it works. They'll find flaws. They'll -- in extreme cases -- sneak back-doors into the code when no one is looking.
Of course, these statements rely on the erroneous assumptions that security vulnerabilities are easy to find, and that proprietary source code makes them harder to find. And that secrecy is somehow aligned with security. I've written about this several times in the past, and there's no need to rewrite the arguments again.
Still, we have to remember that the popular wisdom is that secrecy equals security, and open-source software doesn't sound compatible with the idea of strong cybersecurity.
Reporters have been calling me pretty much constantly about this story, but I can't figure out why in the world this is news. Attacks from China -- old news; attacks from China against Google -- old news; attacks from China against Google Gmail accounts -- old news. Spear phishing attacks from China against senior government officials -- old news. There's even a WikiLeaks cable about this stuff.
When I first read the story, I decided it wasn't worth blogging about. Why is this news?
In Applied Cryptography, I wrote about the "Chess Grandmaster Problem," a man-in-the-middle attack. Basically, Alice plays chess remotely with two grandmasters. She plays Grandmaster 1 as white and Grandmaster 2 as black. After the standard opening of 1. e4, she just replays the moves from one game to the other, and convinces both of them that she's a grandmaster in the process.
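The relay is simple enough to sketch in a few lines. This is a minimal illustration with scripted stand-in players, not a chess engine; the function names and move lists are mine:

```python
# Minimal sketch of the Chess Grandmaster relay: Alice never computes a
# move herself, she just forwards each grandmaster's move to the other.
def scripted(moves):
    """A stand-in player that ignores its opponent and plays a script."""
    it = iter(moves)
    return lambda _opponent_move: next(it)

def alice_relay(gm1, gm2, rounds):
    """Alice 'plays' black against gm1 and white against gm2 by relaying."""
    transcript = []
    move = gm1(None)                 # GM1 opens as white
    for _ in range(rounds):
        reply = gm2(move)            # Alice plays GM1's move against GM2...
        transcript.append((move, reply))
        move = gm1(reply)            # ...and relays GM2's reply back to GM1
    return transcript

log = alice_relay(scripted(["e4", "Nf3", "Bb5"]),
                  scripted(["e5", "Nc6"]), rounds=2)
# log == [("e4", "e5"), ("Nf3", "Nc6")]
```

To each grandmaster, Alice looks like a strong opponent; in reality the two grandmasters are playing each other through her.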
Detecting these sorts of man-in-the-middle attacks is difficult, and involves things like synchronous clocks, complex cryptographic protocols, or -- more practically -- proctors. Proctors, of course, can be fooled. Here's a real-world attempt at this type of attack on the MCAT medical-school admissions test.
Police allege he used a pinhole camera and wireless technology to transmit images of the questions on a computer screen back to his co-conspirator, Ruben, at the University of British Columbia.
And as long as we're on the topic, we can think about all the ways to hack this system of remote exam proctoring via webcam.
CI Reader: An American Revolution Into the New Millennium, Volumes I, II, and III is published by the U.S. Office of the National Counterintelligence Executive. (No, I've never heard of them, either.)
EDITED TO ADD (6/14): There's a fourth volume, too.
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.