Schneier on Security
A blog covering security and security technology.
October 2009 Archives
They're washing ashore on Vancouver Island.
Scientists have begun attaching tracking devices to squid off the coast of Vancouver Island to find out why the marine animals have wandered so far from their traditional territory.
Article on me from a Luxembourg magazine.
We have a cognitive bias to exaggerate risks caused by other humans, and to downplay risks caused by animals (and, even more, by natural phenomena).
"Capability of the People's Republic of China to Conduct Cyber Warfare and Computer Network Exploitation," prepared for the US-China Economic and Security Review Commission, Northrop Grumman Corporation, October 9, 2009.
I have not read it yet. Post the interesting bits in comments, if there are any.
A critical essay on the TSA from a former assistant police chief:
This is where I find myself now obsessing over TSA policy, or its apparent lack. Every one of us goes to work each day harboring prejudice. This is simply human nature. What I have witnessed in law enforcement over the course of the last two decades serves to remind me how active and passive prejudice can undermine public trust in important institutions, like police agencies. And TSA.
EDITED TO ADD (11/12): Follow-on essay by the same person.
Keep tabs on your child at all times with this small but sophisticated device that combines GPS and cellular technology to provide you with real-time location updates. The small and lightweight Little Buddy transmitter fits easily into a backpack, lunchbox or other receptacle, making it easy for your child to carry so you can check his or her location at any time using a smartphone or computer. Customizable safety checks allow you to establish specific times and locations where your child is supposed to be -- for example, in school -- causing the device to alert you with a text message if your child leaves the designated area during that time. Additional real-time alerts let you know when the device's battery is running low so you can take steps to ensure your monitoring isn't interrupted.
Presumably it can also be used to track people who aren't your kids.
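The geofencing the blurb describes is simple enough to sketch. Here's a rough illustration of the core check -- the coordinates, radius, and school hours are all made-up values, and a real device would presumably do this server-side:

```python
# A minimal sketch of the geofence-plus-time-window check such a device performs.
# All coordinates, radii, and hours below are invented for illustration.
from datetime import time
from math import radians, sin, cos, asin, sqrt

def distance_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points (haversine formula)."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 6371 * 2 * asin(sqrt(a))  # Earth radius ~6371 km

SCHOOL = (44.98, -93.26)            # hypothetical geofence center
RADIUS_KM = 0.5                     # hypothetical geofence radius
WINDOW = (time(8, 0), time(15, 0))  # hypothetical "should be in school" hours

def should_alert(lat, lon, now):
    """Alert if the device is outside the fence during the window."""
    in_window = WINDOW[0] <= now <= WINDOW[1]
    outside = distance_km(lat, lon, *SCHOOL) > RADIUS_KM
    return in_window and outside

print(should_alert(44.99, -93.27, time(10, 30)))  # True: off-site at 10:30
```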
EDITED TO ADD (11/12): You can also use an iPhone as a tracking device.
Ross Anderson has put together a great resource page on security and psychology:
At a deeper level, the psychology of security touches on fundamental scientific and philosophical problems. The 'Machiavellian Brain' hypothesis states that we evolved high intelligence not to make better tools, but to use other monkeys better as tools: primates who were better at deception, or at detecting deception in others, left more descendants. Conflict is also deeply tied up with social psychology and anthropology, while evolutionary explanations for the human religious impulse involve both trust and conflict. The dialogue between researchers in security and in psychology has thus been widening, bringing in people from usability engineering, protocol design, privacy, and policy on the one hand, and from social psychology, evolutionary biology, and behavioral economics on the other. We believe that this new discipline will increasingly become one of the active contact points between computing and psychology -- an exchange that has hugely benefited both disciplines for over a generation.
Interesting story of a 2006 Wal-Mart hack from, probably, Minsk.
In-Q-Tel, the investment arm of the CIA and the wider intelligence community, is putting cash into Visible Technologies, a software firm that specializes in monitoring social media. It's part of a larger movement within the spy services to get better at using "open source intelligence" -- information that's publicly available, but often hidden in the flood of TV shows, newspaper articles, blog posts, online videos and radio reports generated every day.
Here's the Visible Technologies press release on the funding.
Draw a squid, win Jeff VanderMeer's Ambergris novels.
Earlier this month, Joanna Rutkowska implemented the "evil maid" attack against TrueCrypt. The same kind of attack should work against any whole-disk encryption, including PGP Disk and BitLocker. Basically, the attack works like this:
Step 1: Attacker gains access to your shut-down computer and boots it from a separate volume. The attacker writes a hacked bootloader onto your system, then shuts it down.
Step 2: You boot your computer using the attacker's hacked bootloader, entering your encryption key. Once the disk is unlocked, the hacked bootloader does its mischief. It might install malware to capture the key and send it over the Internet somewhere, or store it in some location on the disk to be retrieved later, or whatever.
You can see why it's called the "evil maid" attack; a likely scenario is that you leave your encrypted computer in your hotel room when you go out to dinner, and the maid sneaks in and installs the hacked bootloader. The same maid could even sneak back the next night and erase any traces of her actions.
This attack exploits the same basic vulnerability as the "Cold Boot" attack from last year, and the "Stoned Boot" attack from earlier this year, and there's no real defense to this sort of thing. As soon as you give up physical control of your computer, all bets are off.
Similar hardware-based attacks were among the main reasons why Symantec's CTO Mark Bregman was recently advised by "three-letter agencies in the US Government" to use a separate laptop and mobile device when traveling to China, citing the potential for hardware-based compromise.
PGP sums it up in their blog.
No security product on the market today can protect you if the underlying computer has been compromised by malware with root level administrative privileges. That said, there exist well-understood, common-sense defenses against "Cold Boot," "Stoned Boot," "Evil Maid," and many other attacks yet to be named and publicized.
The defenses are basically two-factor authentication: a token you don't leave in your hotel room for the maid to find and use. The maid could still corrupt the machine, but it's more work than just storing the password for later use. Putting your data on a thumb drive and taking it with you doesn't work; when you return you're plugging your thumb into a corrupted machine.
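One stopgap along these lines is to verify that the bootloader hasn't changed before typing your passphrase. Here's a minimal sketch of the idea -- it assumes a Linux machine with its boot disk at /dev/sda and a known-good digest you recorded earlier (the "known_good.txt" filename is invented), and it only makes sense if you run it from a trusted medium you carry with you. It's a partial check at best; a careful attacker can evade it:

```python
# A rough sketch of checking the boot sector for tampering before unlocking.
# Run from a trusted medium (e.g., a USB stick you carry), not the suspect
# disk itself; reading /dev/sda requires root.
import hashlib
import sys

MBR_BYTES = 512  # classic MBR: 446 bytes of bootloader code plus partition table

def mbr_digest(device="/dev/sda"):
    with open(device, "rb") as disk:
        return hashlib.sha256(disk.read(MBR_BYTES)).hexdigest()

# known_good.txt holds the digest recorded when the machine was last trusted
with open("known_good.txt") as f:
    expected = f.read().strip()

if mbr_digest() != expected:
    sys.exit("WARNING: boot sector has changed -- do not enter your passphrase")
print("Boot sector matches the recorded digest.")
```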
The real defense here is trusted boot, something Trusted Computing is supposed to enable. But Trusted Computing has its own problems, which is why we haven't seen anything out of Microsoft in the seven-plus years they have been working on it (I wrote this in 2002 about what they then called Palladium).
In the meantime, people who encrypt their hard drives, or partitions on their hard drives, have to realize that the encryption gives them less protection than they probably believe. It protects against someone confiscating or stealing their computer and then trying to get at the data. It does not protect against an attacker who has access to your computer over a period of time during which you use it, too.
EDITED TO ADD (10/23): A few readers have pointed out that BitLocker, the one thing that has come out of Microsoft's Trusted Computing initiative in the seven-plus years they've been working on it, can prevent these sorts of attacks if the computer has a TPM module, version 1.2 or later, on the motherboard. (Note: Not all computers do.) I actually knew that; I just didn't remember it.
EDITED TO ADD (11/12): Peter Kleissner's Stoned Boot attacks on TrueCrypt.
James Bamford -- author of The Shadow Factory: The NSA from 9/11 to the Eavesdropping on America -- writes about the NSA's new data center in Utah as he reviews another book, The Secret Sentry: The Untold History of the National Security Agency:
Just how much information will be stored in these windowless cybertemples? A clue comes from a recent report prepared by the MITRE Corporation, a Pentagon think tank. "As the sensors associated with the various surveillance missions improve," says the report, referring to a variety of technical collection methods, "the data volumes are increasing with a projection that sensor data volume could potentially increase to the level of Yottabytes (10^24 bytes) by 2015." Roughly equal to about a septillion (1,000,000,000,000,000,000,000,000) pages of text, numbers beyond Yottabytes haven't yet been named. Once vacuumed up and stored in these near-infinite "libraries," the data are then analyzed by powerful infoweapons, supercomputers running complex algorithmic programs, to determine who among us may be -- or may one day become -- a terrorist.
Of course, that yottabyte number is hyperbole. The problem with all of that data is that there's no time to process it. Think of it as trying to drink from a fire hose. The NSA has to make lightning-fast real-time decisions about what to save for later analysis. And there's not a lot of time for later analysis; more data is coming constantly at the same fire-hose rate.
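To see why, run the numbers. Even granting an implausibly fast analysis pipeline -- the one-terabyte-per-second figure below is invented for illustration -- a single pass over a yottabyte takes geologic time:

```python
# Back-of-the-envelope: how long to make one pass over a yottabyte?
YOTTABYTE = 10**24                 # bytes
rate = 10**12                      # bytes/second -- a made-up, generous 1 TB/s
seconds = YOTTABYTE / rate
years = seconds / (60 * 60 * 24 * 365)
print(f"{years:,.0f} years")       # about 31,710 years
```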
Bamford's entire article is worth reading. He summarizes some of the things he talks about in his book: the inability of the NSA to predict national security threats (9/11 being one such failure) and the manipulation of intelligence data for political purposes.
According to the Telegraph:
Mr Ballmer said: "We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don't think from a word-of-mouth perspective we ever recovered from that."
Vista's failure, and Ballmer's blaming of security, is a bit of "be careful what you wish for." Vista (codenamed "Longhorn" during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and Windows 2000, Microsoft halted all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft's history.
There was also the problem of Vista's endless security warnings. They were almost always false alarms, and ignoring them had no adverse effects. So users ignored them, and the warnings ended up being nothing but an annoyance.
Security warnings are often a way for the developer to avoid making a decision. "We don't know what to do here, so we'll put up a warning and ask the user." But unless the users have the information and the expertise to make the decision, they're not going to be able to. We need user interfaces that only put up warnings when it matters.
I never upgraded to Vista. I'm hoping Windows 7 is worth upgrading to. We'll see.
EDITED TO ADD (10/22): Another opinion.
From the Courier-Mail:
A man who established a sophisticated network of peepholes and cameras to spy on his flatmates has escaped a jail sentence after police were unable to crack an encryption code on his home computer.
There was a similar story in 2007. Then, I wrote:
Why is it that we all -- myself included -- believe these stories? Why are we so quick to assume that the TSA is a bunch of jack-booted thugs, officious and arbitrary and drunk with power?
EDITED TO ADD (11/12): Follow-up by the woman who posted the original story. She claims that the TSA's video is incomplete, and omits the part where she is separated from her son. I don't believe her.
All it takes is a computer that can track every card:
The anti-card-counter system uses cameras to watch players and keep track of the actual "count" of the cards, the same way a player would. It also measures how much each player is betting on each hand, and it syncs up the two data points to look for patterns in the action. If a player is betting big when the count is indeed favorable, and keeping his chips to himself when it's not, he's fingered by the computer... and, in the real world, he'd probably receive a visit from a burly dude in a bad suit, too.
Of course it does; it's just a signal-to-noise problem.
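The "syncing up" of the two data points is essentially a correlation test. A toy version, with invented numbers -- a counter's bets track the running hi-lo count, a tourist's don't:

```python
# Toy version of the detection logic: correlate the running hi-lo count
# with each player's bet size. All numbers are made up for illustration.
from statistics import correlation  # Python 3.10+

running_count = [-2, 0, 1, 3, 5, 4, 2, -1, 0, 6]    # count before each hand
counter_bets  = [10, 10, 25, 50, 100, 100, 25, 10, 10, 150]
tourist_bets  = [25, 50, 10, 25, 10, 100, 25, 50, 10, 25]

print(correlation(running_count, counter_bets))  # ~0.93: flagged
print(correlation(running_count, tourist_bets))  # ~0.06: just noise
```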
I have long been impressed with the casino industry's ability to, in the case of blackjack, convince the gambling public that using strategy equals cheating.
Nice article summing up six years of Microsoft Patch Tuesdays:
The total number of flaws disclosed and patched by the software maker so far this year stands at around 160, more than the 155 or so that Microsoft reported for all of 2008. The number of flaws reported in Microsoft products over the last two years is more than double the number of flaws disclosed in 2004 and 2005, the first two full years of Patch Tuesdays.
I wrote about the "patch treadmill," pointing out that there are simply too many patches and that it's impossible to keep up:
Security professionals are quick to blame system administrators who don't install every patch. "They should have updated their systems; it's their own fault when they get hacked." This is beginning to feel a lot like blaming the victim. "He should have known not to walk down that deserted street; it's his own fault he was mugged." "She should never have dressed that provocatively; it's her own fault she was attacked." Perhaps such precautions should have been taken, but the real blame lies elsewhere.
Patching is essentially an impossible problem. A patch needs to be incredibly well-tested. It has to work, without tweaking, on every configuration of the software out there. And for security reasons, it needs to be pushed out to users within days -- hours, if possible. These two requirements are mutually contradictory: you can't have a piece of software that is both well-tested and quickly written.
Before October 2003, Microsoft's patching was a mess. Patches weren't well-tested. They broke systems so frequently that many sysadmins wouldn't install them without extensive testing. There were jokes that a Microsoft patch was indistinguishable from a DoS attack.
In 2003, Microsoft went to a once-a-month patching cycle, and I think it's been a resounding success. Microsoft's patches are much better tested. They're much less likely to break other things. And, as a result, many more people have turned on automatic update, meaning that many more people have their patches up to date. The downside is that the window of exposure -- the time period between a vulnerability's release and the availability of a patch -- is longer. Patch Tuesdays might be the best we can do, but the whole patching system is fundamentally broken. This is what I wrote last year:
The real lesson is that the patch treadmill doesn't work, and it hasn't for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won't prevent every vulnerability, but it's much more secure -- and cheaper -- than the patch treadmill we're all on now.
Investigators scoured social networking sites such as Facebook and MySpace but initially could find no trace of him and were unable to pin down his location in Mexico.
It's easy to say "so dumb," and it would be true, but what's interesting is how people just don't think through the privacy implications of putting their information on the Internet. Facebook is how we interact with friends, and we think of it in the frame of interacting with friends. We don't think that our employers might be looking -- they're not our friends! -- that the information will be around forever, or that it might be abused. Privacy isn't salient; chatting with friends is.
A few years ago, a company began to sell a liquid with identification codes suspended in it. The idea was that you would paint it on your stuff as proof of ownership. I commented that I would paint it on someone else's stuff, then call the police.
I was reminded of this recently when a group of Israeli scientists demonstrated that it's possible to fabricate DNA evidence. So now, instead of leaving your own DNA at a crime scene, you can leave fabricated DNA. And it isn't even necessary to fabricate. In Charlie Stross's novel Halting State, the bad guys foul a crime scene by blowing around the contents of a vacuum cleaner bag, containing the DNA of dozens, if not hundreds, of people.
This kind of thing has been going on for ever. It's an arms race, and when technology changes, the balance between attacker and defender changes. But when automated systems do the detecting, the results are different. Face recognition software can be fooled by cosmetic surgery, or sometimes even just a photograph. And when fooling them becomes harder, the bad guys fool them on a different level. Computer-based detection gives the defender economies of scale, but the attacker can use those same economies of scale to defeat the detection system.
Google, for example, has anti-fraud systems that detect and shut down advertisers who try to inflate their revenue by repeatedly clicking on their own AdSense ads. So people built bots to repeatedly click on the AdSense ads of their competitors, trying to convince Google to kick them out of the system.
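The structural problem is easy to see in miniature: the detector observes only the clicks, not who generated them. A toy sketch, with invented numbers and thresholds:

```python
# A toy fraud detector of the kind described: flag publishers whose ads get
# clicked at an implausible rate. The detector can't tell self-clicks from a
# competitor's bot clicks -- which is exactly what makes framing possible.
impressions = {"honest_pub": 100_000, "framed_pub": 100_000}
clicks      = {"honest_pub": 1_200,   "framed_pub": 1_200}

# A competitor's bot adds 8,000 clicks to the framed publisher's own ads.
clicks["framed_pub"] += 8_000

THRESHOLD = 0.05  # flag anyone with a click-through rate above 5%
for pub in impressions:
    ctr = clicks[pub] / impressions[pub]
    print(pub, f"CTR={ctr:.1%}", "FLAGGED" if ctr > THRESHOLD else "ok")
# framed_pub gets flagged (9.2% CTR) even though its owner did nothing.
```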
Similarly, when Google started penalizing a site's search engine rankings for having "bad neighbors" -- backlinks from link farms, adult or gambling sites, or blog spam -- people engaged in sabotage: they built link farms and left blog comment spam linking to their competitors' sites.
The same sort of thing is happening on Yahoo Answers. Initially, companies would leave answers pushing their products, but Yahoo started policing this. So people have written bots to report abuse on all their competitors. There are Facebook bots doing the same sort of thing.
Last month, Google introduced Sidewiki, a browser feature that lets you read and post comments on virtually any webpage. People and industries are already worried about the effects unrestrained commentary might have on their businesses, and how they might control the comments. I'm sure Google has sophisticated systems ready to detect commercial interests that try to take advantage of the system, but are they ready to deal with commercial interests that try to frame their competitors? And do we want to give one company the power to decide which comments should rise to the top and which get deleted?
Whenever you build a security system that relies on detection and identification, you invite the bad guys to subvert the system so it detects and identifies someone else. Sometimes this is hard -- leaving someone else's fingerprints on a crime scene is hard, as is using a mask of someone else's face to fool a guard watching a security camera -- and sometimes it's easy. But when automated systems are involved, it's often very easy. It's not just hardened criminals that try to frame each other, it's mainstream commercial interests.
With systems that police internet comments and links, there's money involved in commercial messages -- so you can be sure some people will take advantage of it. This is the arms race. Build a detection system, and the bad guys try to frame someone else. Build a detection system to detect framing, and the bad guys try to frame someone else framing someone else. Build a detection system to detect framing of framing, and, well, there's no end, really. Commercial speech is on the internet to stay; we can only hope that commercial interests don't pollute the social systems we use so badly that they're no longer useful.
This essay originally appeared in The Guardian.
Zachary's offense? [He's six years old.] Taking a camping utensil that can serve as a knife, fork and spoon to school. He was so excited about recently joining the Cub Scouts that he wanted to use it at lunch. School officials concluded that he had violated their zero-tolerance policy on weapons, and Zachary was suspended and now faces 45 days in the district's reform school.
The problem, of course, is that the global rule trumps any situational common sense, any discretion. But granting discretion requires those in overall charge to trust the people below them, who have more detailed knowledge of the situation. It's CYA security -- the same thing you see at airports. Those on the scene can't be blamed for making a bad decision as long as they follow the rules, no matter how stupid the rules are or how little they apply to the situation.
I'm just going to quote without comment:
About the file: the text message file encrypted with a symmetric key combine 3 modes
Anyone have any ideas?
Good essay: "Malware to crimeware: How far have they gone, and how do we catch up?" ;login:, August 2009:
I have surveyed over a decade of advances in delivery of malware. Over this period, attackers have shifted to using complex, multi-phase attacks based on subtle social engineering tactics, advanced cryptographic techniques to defeat takeover and analysis, and highly targeted attacks that are intended to fly below the radar of current technical defenses. I will show how malicious technology combined with social manipulation is used against us and conclude that this understanding might even help us design our own combination of technical and social mechanisms to better protect us.
While paints blocking lower frequencies have been available for some time, Mr Ohkoshi's technology is the first to absorb frequencies transmitting at 100GHz (gigahertz). Signals carrying a larger amount of data -- such as wireless internet -- travel at a higher frequency than, for example, FM radio.
Gallery of virtual art.
Pretty clever (for a pig, that is).
EDITED TO ADD (10/10): Better link for video.
Yesterday, DHS Secretary Janet Napolitano said that the U.S. needed to hire 1,000 cybersecurity experts over the next three years. Bob Cringely doubts that there even are 1,000 cybersecurity experts out there to hire.
I suppose it depends on what she meant by "expert."
This is just silly:
Beaver Stadium is a terrorist target. It is most likely the No. 1 target in the region. As such, it deserves security measures commensurate with such a designation, but is the stadium getting such security?
Actually, the Brooklyn Bridge plot failed because the plotters were idiots and the plot -- cutting through cables with blowtorches -- was dumb. That, and the all-too-common police informant who egged the plotters on.
But never mind that. Beaver Stadium is Pennsylvania State University's football stadium, and this article argues that it's a potential terrorist target that needs 24/7 police protection.
The problem with that kind of reasoning is that it makes no sense. As I said in an article that will appear in New Internationalist:
To be sure, reasonable arguments can be made that some terrorist targets are more attractive than others: aeroplanes because a small bomb can result in the death of everyone aboard, monuments because of their national significance, national events because of television coverage, and transportation because of the numbers of people who commute daily. But there are literally millions of potential targets in any large country (there are five million commercial buildings alone in the US), and hundreds of potential terrorist tactics; it's impossible to defend every place against everything, and it's impossible to predict which tactic and target terrorists will try next.
Defending individual targets only makes sense if the number of potential targets is few. If there are seven terrorist targets and you defend five of them, you seriously reduce the terrorists' ability to do damage. But if there are a million terrorist targets and you defend five of them, the terrorists won't even notice. I tend to dislike security measures that merely cause the bad guys to make a minor change in their plans.
And the expense would be enormous. Add up these secondary terrorist targets -- stadiums, theaters, churches, schools, malls, office buildings, anyplace where a lot of people are packed together -- and the number is probably around 200,000, including Beaver Stadium. Full-time police protection requires people -- round-the-clock coverage takes roughly five officers per post, once you account for shifts, weekends, and vacations -- so that's 1,000,000 policemen. At an encumbered cost of $100,000 per policeman per year, probably a low estimate, that's a total annual cost of $100B. (That's about what we're spending each year in Iraq.) On the other hand, hiring one out of every 300 Americans to guard our nation's infrastructure would solve our unemployment problem. And since policemen get health care, our health care problem as well. Just make sure you don't accidentally hire a terrorist to guard against terrorists -- that would be embarrassing.
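For anyone who wants to check the arithmetic (same assumptions as above -- roughly five officers to staff one post around the clock, $100,000 per officer-year):

```python
# Back-of-the-envelope check of the figures in the text.
targets = 200_000
officers_per_post = 5           # 24/7 coverage: shifts, weekends, vacations
cost_per_officer = 100_000      # dollars per officer-year, fully encumbered

officers = targets * officers_per_post
annual_cost = officers * cost_per_officer
print(f"{officers:,} officers, ${annual_cost / 1e9:.0f}B per year")
# 1,000,000 officers, $100B per year
```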
The whole idea is nonsense. As I've been saying for years, what works is investigation, intelligence, and emergency response:
We need to defend against the broad threat of terrorism, not against specific movie plots. Security is most effective when it doesn't make arbitrary assumptions about the next terrorist act. We need to spend more money on intelligence and investigation: identifying the terrorists themselves, cutting off their funding, and stopping them regardless of what their plans are. We need to spend more money on emergency response: lessening the impact of a terrorist attack, regardless of what it is. And we need to face the geopolitical consequences of our foreign policy and how it helps or hinders terrorism.
Songhua Xu presented an interesting idea: measuring pen angle and pressure to render beautiful, flower-like visual versions of a handwritten signature. You could argue that signatures are already a visual form, nicely identifiable and universal. However, with the added data about pen pressure and angle, the authors were able to create visual signatures that offer potentially greater security, assuming you can learn to read them.
This is interesting:
Since then, his scams have tended to take place in luxury hotels around the world.
Doesn't the hotel staff ask for ID before doing something like that?
At a demonstration of the technology this week, project manager Robert P. Burns said the idea is to track a set of involuntary physiological reactions that might slip by a human observer. These occur when a person harbors malicious intent—but not when someone is late for a flight or annoyed by something else, he said, citing years of research into the psychology of deception.
I have a lot of respect for Paul Ekman's opinion on the matter:
"I can understand why there's an attempt being made to find a way to replace or improve on what human observers can do: the need is vast, for a country as large and porous as we are. However, I'm by no means convinced that any technology, any hardware will come close to doing what a highly trained human observer can do,'" said Ekman, who directs a company that trains government workers, including for the Transportation Security Administration, to detect suspicious behavior.
Witnesses are much more accurate at identifying criminals when computers, rather than police officers, guide the identification process.
A major cause of miscarriages of justice could be avoided if computers, rather than detectives, guided witnesses through the identification of suspects. That's according to Brent Daugherty at the University of North Carolina in Charlotte and colleagues, who say that too often officers influence witnesses' choices.
Makes sense to me.
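What the computer buys you is a double-blind, randomized procedure: no administrator in the room who knows the answer and can leak it through tone or body language. A minimal sketch of the idea -- the function names and flow are invented for illustration:

```python
# Sketch of a computer-administered sequential lineup: photos are shown one
# at a time in a random order that no human in the room knows.
import random

def run_lineup(photos, get_response):
    """Show photos one at a time in random order; record yes/no responses."""
    order = photos[:]
    random.shuffle(order)  # neither witness nor officer knows the order
    return {p: get_response(p) for p in order}

# Example: a scripted witness who recognizes photo "C"
responses = run_lineup(["A", "B", "C", "D", "E", "F"], lambda p: p == "C")
print(responses)
```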
You'd think this would be obvious:
Douglas Havard, 27, serving six years for stealing up to £6.5million using forged credit cards over the internet, was approached after governors wanted to create an internal TV station but needed a special computer program written.
And you shouldn't give a prisoner who is a lockpicking expert access to the prison's keys, either. No, wait:
The blunder emerged a week after the Sunday Mirror revealed how an inmate at the same jail managed to get a key cut that opened every door.
Next week: inmate sharpshooters in charge of prison's gun locker.
This is brilliant:
The sophisticated hack uses a Trojan horse program installed on the victim's machine that alters html coding before it's displayed in the user's browser, to either erase evidence of a money transfer transaction entirely from a bank statement, or alter the amount of money transfers and balances.
If there's a moral here, it's that banks can't rely on the customer to detect fraud. But we already knew that.
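The principle is easy to demonstrate in miniature: anything rendered client-side can be rewritten before the victim sees it. A toy example with a made-up statement page -- the real Trojans do this inside the browser, and the bank's own records are untouched:

```python
# Toy illustration: "clean up" an HTML bank statement so the victim sees
# neither the fraudulent transfer nor the changed balance. The statement
# below is entirely made up.
statement = """\
<tr><td>2009-10-01</td><td>Payroll</td><td>+$2,000.00</td></tr>
<tr><td>2009-10-02</td><td>Transfer to mule account</td><td>-$1,500.00</td></tr>
<tr><td colspan="2">Balance</td><td>$500.00</td></tr>"""

doctored = "\n".join(
    row for row in statement.splitlines() if "mule account" not in row
).replace("$500.00", "$2,000.00")  # make the numbers add up again

print(doctored)  # what the victim's browser would render
```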
Wow. It's over 2,000 pages, so it'll take time to make any sense of. According to Ross Anderson, who's given it a quick look over, "it seems to be the bureaucratic equivalent of spaghetti code: a hodgepodge of things written by people from different backgrounds, and with different degrees of clue, in different decades."
The computer security stuff starts at page 1,531.
EDITED TO ADD (10/6): An article.
It's a security risk:
The crate was hoisted onto the flatbed with a 120-ton construction crane. For security reasons, there were no signs on the truck indicating that the cargo was a hippopotamus, the zoo said.
Does this make any sense? Has there ever been a zoo animal hijacking anywhere?
EDITED TO ADD (10/13): Kidnapped zoo animals.
If you were curious what the DHS knows about you.
For the U.N. General Assembly:
For those entranced by security theater, New York City is a sight to behold this week. A visit to one of the two centers of the action -- the Waldorf Astoria, where the presidents of China and Russia, the prime ministers of Israel and the Palestinian Authority, and the President of the United States are all staying. (Who gets the presidential suite? Our POTUS.) Getting to the Waldorf is a little intimidating, which is the point. Wade through the concrete barriers, the double-parked police cars, the NYPD mobile command post, a signals post, acreages of metal fencing, snipers, counter surveillance teams, FBI surveillance teams in street clothes, dodge traffic and a dignitary motorcade or two, and you're right at the front door of the hotel. A Secret Service agent from the Midwest gestured dismissively when a reporter showed him a press credential. "You don't need it. Just go in that door over there."
This is interesting:
Professor Gernot Heiser, the John Lions Chair in Computer Science in the School of Computer Science and Engineering and a senior principal researcher with NICTA, said for the first time a team had been able to prove with mathematical rigour that an operating-system kernel—the code at the heart of any computer or microprocessor—was 100 per cent bug-free and therefore immune to crashes and failures.
Don't expect this to be practical any time soon:
Verifying the kernel—known as the seL4 microkernel—involved mathematically proving the correctness of about 7,500 lines of computer code in a project taking an average of six people more than five years.
That's 250 lines of code verified per man-year. Both Linux and Windows have something like 50 million lines of code; verifying that would take 200,000 man-years, assuming no increased complexity resulting from the increased size. Clearly some efficiency improvements are required.
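The arithmetic, for the curious (figures from the text):

```python
# Scaling the seL4 verification effort to a full OS kernel.
verified_lines = 7_500
person_years = 6 * 5                   # six people, more than five years
rate = verified_lines / person_years   # = 250 lines per person-year

os_lines = 50_000_000                  # rough size of Linux or Windows
print(f"{os_lines / rate:,.0f} person-years")  # 200,000 person-years
```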
Reproducing keys from distant and angled photographs:
Those of you who carry your keys on a ring dangling from a belt loop, take note.
During a daring bank robbery in Sweden that involved a helicopter, the criminals disabled a police helicopter by placing a package with the word "bomb" near the helicopter hangar, thus engaging the full caution/evacuation procedure while they escaped.
I wrote about this exact sort of thing in Beyond Fear.
EDITED TO ADD (10/13): The attack was successfully carried off even though the Swedish police had been warned.