Schneier on Security
A blog covering security and security technology.
May 2011 Archives
That's what the U.S. destroyed after a malfunction in Pakistan during the bin Laden assassination. (For helicopters, "stealth" is less concerned with radar signatures and more concerned with acoustical quiet.)
There was some talk about Pakistan sending it to China, but they're returning it to the U.S. I presume that the Chinese got everything they needed quickly.
In my forthcoming book (available February 2012), I talk about various mechanisms for societal security: how we as a group protect ourselves from the "dishonest minority" within us. I have four types of societal security systems: moral systems, reputational systems, institutional systems (laws and their enforcement), and physical security systems.
We spend most of our effort in the third and fourth category. I am spending a lot of time researching how the first two categories work.
Given that, I was very interested in seeing an article by Dallas Boyd in Homeland Security Affairs: "Protecting Sensitive Information: The Virtue of Self-Restraint," where he basically says that people should not publish information that terrorists could use out of moral responsibility (he calls it "civic duty"). Ignore for a moment the debate about whether publishing information that could give the terrorists ideas is actually a bad idea -- I think it's not -- what Boyd is proposing is actually very interesting. He specifically says that censorship is bad and won't work, and wants to see voluntary self-restraint along with public shaming of offenders.
As an alternative to formal restrictions on communication, professional societies and influential figures should promote voluntary self-censorship as a civic duty. As this practice is already accepted among many scientists, it may be transferrable to members of other professions. As part of this effort, formal channels should be established in which citizens can alert the government to vulnerabilities and other sensitive information without exposing it to a wide audience. Concurrent with this campaign should be the stigmatization of those who recklessly disseminate sensitive information. This censure would be aided by the fact that many such people are unattractive figures whose writings betray their intellectual vanity. The public should be quick to furnish the opprobrium that presently escapes these individuals.
I don't think it will work, and I don't even think it's possible in this international day and age, but it's interesting to read the proposal.
Cyber criminals are getting aggressive with their social engineering tactics.
Val Christopherson said she received a telephone call last Tuesday from a man stating he was with an online security company who was receiving error messages from the computer at her Charleswood home.
All I know is what's in these two blog posts from Elcomsoft. Note that they didn't break AES-256; they figured out how to extract the keys from the hardware (iPhones, iPads). The company "will be releasing the product implementing this functionality for the exclusive use of law enforcement, forensic and intelligence agencies."
It's not something I know anything about -- actually, it's not something many people know about -- but I've posted some links about the security features of the U.S. presidential limousine. So it's amusing to watch the limo immobilized by a steep grade at the U.S. embassy in Dublin. (You'll get a glimpse of how thick the car doors are toward the end of the video.)
EDITED TO ADD (6/1): It was a spare; the president was not riding in it at the time.
EDITED TO ADD (6/13): Here's a video of President Bush's limo breaking down in Rome.
Proposed new rules in the U.S.
It's now available as a free download:
A free version of the Blackhole exploit kit has appeared online in a development that radically reduces the entry-level costs of getting into cybercrime.
SCADA systems -- computer systems that control industrial processes -- are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It's not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it's bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It's Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it's scarier.
Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details "before Siemens could patch the vulnerabilities."
Beresford wouldn't say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share "commonality" in their products. Beresford wouldn't reveal more details, but says he hopes to do so at a later date.
We've been living with full disclosure for so long that many people have forgotten what life was like before it was routine.
Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies -- who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.
I wrote that in 2007. Siemens is doing it right now:
Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs "were discovered while working under special laboratory conditions with unlimited access to protocols and controllers."
That's precisely the point. Me again from 2007:
Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers.... But that assumes that hackers can't discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.
With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.
I haven't written about Dropbox's security problems; too busy with the book. But here's an excellent summary article from The Economist.
The meta-issue is pretty simple. If you expect a cloud provider to do anything more interesting than simply store your files for you and give them back to you at a later date, they are going to have to have access to the plaintext. For most people -- Gmail users, Google Docs users, Flickr users, and so on -- that's fine. For some people, it isn't. Those people should probably encrypt their files themselves before sending them into the cloud.
EDITED TO ADD (6/13): Another security issue with Dropbox.
The Centers for Disease Control and Prevention weigh in on preparations for the zombie apocalypse.
TSA-style security is now so normal that it's part of a Disney ride:
The second room of the queue is now a security check area, similar to a TSA checkpoint. The two G-series droids are still there, G2-9T scanning luggage and G2-4T scanning passengers. For those attraction junkies, you'll remember that the G-series droids are so named because in the original Disneyland Park version of the ride, they were created by removing the "skins" from two of the goose animatronics from the soon-to-close America Sings attraction (Goose = "G" series). While we won't tell you why, you'll enjoy paying a lot of attention to what the scans of the luggage show is inside. When it's your turn to go through the passenger scan (a thermal body scan), you may be verbally accosted by a security droid. Also, keep an eye out in the queue for an earlier version of RX-24 ("Captain Rex") from the original Star Tours; he's labeled "defective" and has some familiar dialogue.
This is the new Star Tours ride at Walt Disney World in Orlando.
For years, an employee of Cubic Corp -- the company that makes the automatic fare card systems for most of the subway systems around the world -- forged and then sold monthly passes for the Boston MBTA system.
The scheme was discovered by accident:
Coakley said the alleged scheme was only discovered after a commuter rail operator asked a rider where he had bought his pass. When the rider said he'd purchased the pass on Craigslist, the operator became suspicious and confiscated the ticket.
Although you'd think the MBTA would poke around the net occasionally, looking for discount tickets being sold on places like Craigslist.
Cubic Transportation Systems said in a written statement that it is cooperating with authorities. "Our company has numerous safeguards designed to prevent fraudulent production or distribution of Charlie Tickets," the statement said, referring to the monthly MBTA passes.
It always amuses me when companies pretend the obvious isn't true in their press releases. "Someone completely broke our system." "Say that we have a lot of security." "But it didn't work." "Say it anyway; the press will just blindly report it."
To be fair, we don't know -- and probably never will -- how this proprietary system was broken. In this case, an insider did it. But did that insider just have access to the system specifications, or was access to blank ticket stock or specialized equipment necessary as well?
EDITED TO ADD (5/22): More details:
On March 11, a conductor on the commuter rail’s Providence/Stoughton Line did a double-take when a customer flashed a discolored monthly pass, its arrow an unusually light shade of orange. The fading, caused by inadvertent laundering, would have happened even if the pass were legitimate, but the customer, perhaps out of nervousness, volunteered that he had purchased it at a discount on Craigslist, Coakley said.
Auditing could have discovered the fraud much earlier:
A records check would have indicated that the serial numbers were not tied to accounts for paying customers. But the financially strapped MBTA, which handles thousands of passes and moves millions of riders a month, did not have practices in place to sniff out the small percentage of unauthorized passes in circulation, Davey said.
EDITED TO ADD (6/12): Good write-up.
From the Associated Press:
Bin Laden's system was built on discipline and trust. But it also left behind an extensive archive of email exchanges for the U.S. to scour. The trove of electronic records pulled out of his compound after he was killed last week is revealing thousands of messages and potentially hundreds of email addresses, the AP has learned.
I'm impressed. It's hard to maintain this kind of COMSEC discipline.
It was a slow, toilsome process. And it was so meticulous that even veteran intelligence officials have marveled at bin Laden's ability to maintain it for so long. The U.S. always suspected bin Laden was communicating through couriers but did not anticipate the breadth of his communications as revealed by the materials he left behind.
Entries due by the end of the month.
Scanning fingerprints from six feet away.
Slightly smaller than a square tissue box, AIRprint houses two 1.3 megapixel cameras and a source of polarized light. One camera receives horizontally polarized light, while the other receives vertically polarized light. When light hits a finger, the ridges of the fingerprint reflect one polarization of light, while the valleys reflect another. "That's where the real kicker is, because if you look at an image without any polarization, you can kind of see fingerprints, but not really well," says Burcham. By separating the vertical and the horizontal polarization, the device can overlap those images to produce an accurate fingerprint, which is fed to a computer for verification.
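The overlap step lends itself to a toy sketch: treat the two polarization images as intensity grids, difference them, and threshold. The numbers below are invented for illustration; the real device's image processing is surely more involved.

```python
# Illustrative only: combine two polarization images by differencing.
# Ridges reflect mostly one polarization, valleys mostly the other, so the
# difference image separates them far better than either image alone.

horizontal = [[10, 200, 10],   # bright where ridges reflect H-polarized light
              [200, 10, 200]]
vertical   = [[190, 20, 190],  # bright where valleys reflect V-polarized light
              [20, 190, 20]]

contrast = [[h - v for h, v in zip(hrow, vrow)]
            for hrow, vrow in zip(horizontal, vertical)]

# Positive values mark ridges, negative values mark valleys.
ridge_map = [[1 if c > 0 else 0 for c in row] for row in contrast]
print(ridge_map)  # [[0, 1, 0], [1, 0, 1]]
```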
No information on how accurate it is, but it'll only get better.
This FBI surveillance device, designed to be attached to a car, has been taken apart and analyzed.
A recent ruling by the 9th U.S. Circuit Court of Appeals affirms that it's legal for law enforcement to secretly place a tracking device on your car without a warrant, even if it's parked in a private driveway.
We learned to cook squid sous vide at 59°C when we were at Atelier in Canada. The cooking time and temperature we picked up produce squid which is meaty, juicy and rich in texture. Here we marinated the squid with mango pickle and then cooked them for three hours at 59°C. Then we cooled them down in an ice bath. Once cooled, we were able to score them and then sear them in olive oil. When the squid was good and brown we added butter to the pan, let it foam, and basted the squid. Then we removed the squid from the pan and added cabbage leaves to saute them in the juices. When the cabbage was blistered we dressed the squid and cabbage with fresh lemon juice. To bring the dish together we added a few spoonfuls of grilled yogurt.
These are what I get for giving interviews when I'm in a bad mood. For the record, I think Sony did a terrible job with its customers' security. I also think that most companies do a terrible job with customers' security, simply because there isn't a financial incentive to do better. And that most of us are pretty secure, despite that.
One of my biggest complaints with these stories is how little actual information we have. We often don't know if any data was actually stolen, only that hackers had access to it. We rarely know how the data was accessed: what sort of vulnerability was used by the hackers. We rarely know the motivations of the hackers: were they criminals, spies, kids, or someone else? We rarely know if the data is actually used for any nefarious purposes; it's generally impossible to connect a data breach with a corresponding fraud incident. Given all of that, it's impossible to say anything useful or definitive about the attack. But the press always wants definitive statements.
This is a pretty scary criminal tactic from Turkey. Burglars dress up as doctors, and ring doorbells handing out pills under one pretense or another. They're actually powerful sedatives, and when people take them they pass out, and the burglars can ransack the house.
According to the article, when the police tried the same trick with placebos, they got an 86% compliance rate.
Kind of like a real-world version of those fake anti-virus programs that actually contain malware.
Interesting blog post from EFF.
The stealing of hotel towels isn't a big problem in the scheme of world problems, but it can be expensive for hotels. Sure, we have moral prohibitions against stealing -- that'll prevent most people from stealing the towels. Many hotels put their name or logo on the towels. That works as a reputational societal security system; most people don't want their friends to see obviously stolen hotel towels in their bathrooms. Sometimes, though, this has the opposite effect: making towels and other items into souvenirs of the hotel and thus more desirable to steal. It's against the law to steal hotel towels, of course, but with the exception of large-scale thefts, the crime will never be prosecuted. (This might be different in third world countries. In 2010, someone was sentenced to three months in jail for stealing two towels from a Nigerian hotel.) The result is that more towels are stolen than hotels want. And for expensive resort hotels, those towels are expensive to replace.
The only thing left for hotels to do is take security into their own hands. One system that has become increasingly common is to set prices for towels and other items -- this is particularly common with bathrobes -- and charge the guest for them if they disappear from the rooms. This works with some things, but it's too easy for the hotel to lose track of how many towels a guest has in his room, especially if piles of them are available at the pool.
A more recent system, still not widespread, is to embed washable RFID chips into the towels and track them that way. The one data point I have for this is an anonymous Hawaii hotel that claims they've reduced towel theft from 4,000 a month to 750, saving $16,000 in replacement costs monthly.
Assuming the RFID tags are relatively inexpensive and don't wear out too quickly, that's a pretty good security trade-off.
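Those figures imply a rough per-towel replacement cost. A back-of-the-envelope check, assuming the $16,000 monthly saving is entirely avoided replacement cost:

```python
# Back-of-the-envelope check of the Hawaii hotel's figures.
# Assumes the $16,000/month saving is purely avoided replacement cost.
stolen_before = 4000    # towels stolen per month, pre-RFID
stolen_after = 750      # towels stolen per month, post-RFID
monthly_saving = 16000  # dollars

towels_saved = stolen_before - stolen_after      # 3,250 fewer thefts
cost_per_towel = monthly_saving / towels_saved   # implied replacement cost

print(f"Implied cost per towel: ${cost_per_towel:.2f}")  # about $4.92
```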
This blog post by Richard Clayton is worth reading.
The well-preserved tally stick was used in the Middle Ages to count the debts owed by the holder in a time when most people were unable to read or write.
Note the security built into this primitive contract system. Neither side can cheat -- alter the notches -- because if they do, the two sides won't match. I wonder what the dispute resolution system was: what happened when the two sides didn't match.
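The security property reduces to a matching check. A toy model, with made-up notch patterns:

```python
# Toy model of tally-stick security: the stick is notched, then split.
# Each party keeps one half; a debt claim is honored only if the halves match.

def split_tally(notches):
    """Both halves carry identical copies of the notch pattern."""
    return list(notches), list(notches)

def halves_match(stock, foil):
    return stock == foil

# Record a debt of five units and split the stick.
stock, foil = split_tally([1, 1, 1, 1, 1])

# Honest case: the halves agree.
assert halves_match(stock, foil)

# The debtor shaves off a notch; the mismatch exposes the tampering.
foil.pop()
assert not halves_match(stock, foil)
```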
EDITED TO ADD (5/14): In comments, lollardfish answers my question: "One then gets accused of fraud in court. In most circumstances, local power/reputation wins in fraud cases, since it's not about finding of fact but who do you trust."
"We're moving into an era of 'steal everything'," said David Emm, a senior security researcher for Kaspersky Labs.
As both data storage and data processing become cheaper, more and more data is collected and stored. An unanticipated effect of this is that more and more data can be stolen and used. As the article says, data minimization is the most effective security tool against this sort of thing. But -- of course -- it's not in the database owner's interest to limit the data it collects; it's in the interests of those whom the data is about.
This hack was conducted as a research project. It's unlikely it's being done in the wild:
In one attack, Wang and colleagues used a plug-in for the Firefox web browser to examine data being sent and received by the online retailer Buy.com. When users make a purchase, Buy.com directs them to PayPal. Once they have paid, PayPal sends Buy.com a confirmation message tagged with a code that identifies the transaction.
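This attack class works when the merchant accepts a confirmation message that isn't cryptographically bound to the specific order. A minimal sketch of the kind of check that closes the hole, using a shared-secret HMAC -- the field names and the secret are hypothetical, not PayPal's actual API:

```python
import hashlib
import hmac

SECRET = b"merchant-paypal-shared-secret"  # hypothetical shared key

def sign_confirmation(order_id: str, amount_cents: int, payee: str) -> str:
    """Tag a payment confirmation so it can't be replayed for another order."""
    msg = f"{order_id}|{amount_cents}|{payee}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_confirmation(order_id: str, amount_cents: int,
                        payee: str, tag: str) -> bool:
    expected = sign_confirmation(order_id, amount_cents, payee)
    return hmac.compare_digest(expected, tag)

# The confirmation verifies only against the exact order it was issued for.
tag = sign_confirmation("order-1001", 4999, "merchant-42")
assert verify_confirmation("order-1001", 4999, "merchant-42", tag)

# Replaying the same tag against a different order fails.
assert not verify_confirmation("order-1002", 4999, "merchant-42", tag)
```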
Three months ago, I announced that I was writing a book on why security exists in human societies. This is basically the book's thesis statement:
All complex systems contain parasites. In any system of cooperative behavior, an uncooperative strategy will be effective -- and the system will tolerate the uncooperatives -- as long as they're not too numerous or too effective. Thus, as a species evolves cooperative behavior, it also evolves a dishonest minority that takes advantage of the honest majority. If individuals within a species have the ability to switch strategies, the dishonest minority will never be reduced to zero. As a result, the species simultaneously evolves two things: 1) security systems to protect itself from this dishonest minority, and 2) deception systems to successfully be parasitic.
At this point, I have most of a first draft: 75,000 words. The tentative title is still "The Dishonest Minority: Security and its Role in Modern Society." I have signed a contract with Wiley to deliver a final manuscript in November for February 2012 publication. Writing a book is a process of exploration for me, and the final book will certainly be a little different -- and maybe even very different -- from what I wrote above. But that's where I am today.
And it's why my other writings continue to be sparse.
It literally blows holes in their heads:
In the study, led by Michel André of the Technical University of Catalonia in Barcelona, biologists exposed 87 individual cephalopods of four species -- Loligo vulgaris, Sepia officinalis, Octopus vulgaris and Illex coindeti -- to short sweeps of relatively low intensity, low frequency sound between 50 and 400 Hertz (Hz). Then they examined the animals' statocysts -- fluid-filled, balloon-like structures that help these invertebrates maintain balance and position in the water. André and his colleagues found that, immediately following exposure to low frequency sound, the cephalopods showed hair cell damage within the statocysts. Over time, nerve fibers became swollen and, eventually, large holes appeared.
There are live squid on the last Endeavour mission.
A scary development in rootkits:
Rootkits typically modify certain areas in the memory of the running operating system (OS) to hijack execution control from the OS. Doing so forces the OS to present inaccurate results to detection software (anti-virus, anti-rootkit).
Here's a clever Web app that locates your stolen camera by searching the EXIF data on public photo databases for your camera's serial number.
Exactly how did they confirm it was Bin Laden's body?
Officials compared the DNA of the person killed at the Abbottabad compound with the bin Laden "family DNA" to determine that the 9/11 mastermind had in fact been killed, a senior administration official said.
EDITED TO ADD (5/5): A better article.
It's not that the risk is greater, it's that the fear is greater. Data from New York:
There were 10,566 reports of suspicious objects across the five boroughs in 2010. So far this year, the total was 2,775 as of Tuesday compared with 2,477 through the same period last year.
Despite all the false alarms, the New York Police Department still wants to hear them:
"We anticipate that with increased public vigilance comes an increase in false alarms for suspicious packages," Kelly said at the Monday news conference. "This typically happens at times of heightened awareness. But we don't want to discourage the public. If you see something, say something."
That slogan, oddly enough, is owned by New York's transit authority.
I have a different opinion: "If you ask amateurs to act as front-line security personnel, you shouldn't be surprised when you get amateur security."
People have always come forward to tell the police when they see something genuinely suspicious, and should continue to do so. But encouraging people to raise an alarm every time they're spooked only squanders our security resources and makes no one safer.
"Refuse to be terrorized," people.
Wouldn't it be great if this were not a joke: the security contingency that was in place in the event that Kate Middleton tried to run away just before the wedding.
After protracted, top-secret negotiations between royal staff from Clarence House and representatives from the Metropolitan Police, MI5 and elements of the military, a compromise was agreed. In the event of Operation Pumpkin being put into effect Ms Middleton will be permitted to run out of Westminster Abbey with her bodyguards trailing discreetly at a distance. Plain-clothes undercover police, MI5 officers and SAS soldiers stationed in the crowd will form a mobile flying wedge ahead of her, clearing a path for the fugitive future princess to escape down.
I wonder what security would have done if she just took off and ran.
EDITED TO ADD (5/5): The double negative in the first sentence has confused some people. To be clear: the article quoted, and Operation Pumpkin in general, is fiction.
This is interesting:
When World Kitchen took over the Pyrex brand, it started making more products out of prestressed soda-lime glass instead of borosilicate. With pre-stressed, or tempered, glass, the surface is under compression from forces inside the glass. It is stronger than borosilicate glass, but when it's heated, it still expands as much as ordinary glass does. It doesn't shatter immediately, because the expansion first acts only to release some of the built-in stress. But only up to a point.
According to this article, students are no longer learning how to write in cursive. And, if they are learning it, they're forgetting how. Certainly the ubiquity of keyboards is leading to a decrease in writing by hand. Relevant to this blog, the article claims that this is making signatures easier to forge.
While printing might be legible, the less complex the handwriting, the easier it is to forge, said Heidi H. Harralson, a graphologist in Tucson. Even though handwriting can change -- and become sloppier -- as a person ages, people who are not learning or practicing it are at a disadvantage, Ms. Harralson said.
Maybe, but I'm skeptical. Everyone has a scrawl of some sort; mine has been completely illegible for years. But I don't see document forgery as a big risk; far bigger are the automatic authentication systems that don't have anything to do with traditional forgery.
Not a lot of details:
ElcomSoft research shows that image metadata and image data are processed independently with a SHA-1 hash function. There are two 160-bit hash values produced, which are later encrypted with a secret (private) key by using an asymmetric RSA-1024 algorithm to create a digital signature. Two 1024-bit (128-byte) signatures are stored in EXIF MakerNote tag 0x0097 (Color Balance).
Canon's system is just as bad, by the way.
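ElcomSoft's description has a simple shape: hash the metadata and the image data separately, then sign each hash. A miniature sketch using textbook RSA with toy Mersenne-prime parameters -- illustrative only, and nothing like Nikon's actual keys, padding, or key sizes:

```python
import hashlib

# Toy RSA parameters (two Mersenne primes, ~196-bit modulus) -- purely
# illustrative; real systems use properly generated 1024+ bit keys.
p = 2**89 - 1
q = 2**107 - 1
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent (Python 3.8+)

def sign(data: bytes) -> int:
    """Hash with SHA-1, then apply the RSA private operation (textbook RSA)."""
    h = int.from_bytes(hashlib.sha1(data).digest(), "big")
    return pow(h, d, n)

def verify(data: bytes, sig: int) -> bool:
    h = int.from_bytes(hashlib.sha1(data).digest(), "big")
    return pow(sig, e, n) == h

# Metadata and image data are hashed and signed independently, producing
# the two signatures stored alongside the image.
metadata, image_data = b"EXIF metadata bytes", b"JPEG image bytes"
sig_meta, sig_image = sign(metadata), sign(image_data)

assert verify(metadata, sig_meta) and verify(image_data, sig_image)
assert not verify(b"tampered image bytes", sig_image)
```

The weakness ElcomSoft exploited wasn't the math; it was that the signing key could be extracted from the camera.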
Fifteen years ago, I co-authored a paper on the problem. The idea was to use a hash chain to better deal with the possibility of a secret-key compromise.
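The hash-chain idea, in miniature: link each image's hash into a running chain, so a key compromised later can't be used to rewrite earlier entries without breaking every subsequent link. This sketch shows only the chaining, not the paper's full construction:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_chain(images):
    """Link each image to everything before it: tag_i = H(tag_{i-1} || H(image_i)).
    Altering any earlier image changes its link and every link after it."""
    tag = b"\x00" * 32  # public starting value
    tags = []
    for img in images:
        tag = h(tag + h(img))
        tags.append(tag)
    return tags

images = [b"shot-1", b"shot-2", b"shot-3"]
tags = build_chain(images)

# Tampering with an early image breaks its link and all later ones.
forged = build_chain([b"forged-1", b"shot-2", b"shot-3"])
assert tags[0] != forged[0] and tags[2] != forged[2]
```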
"ReallyVirtual" tweeted the bin Laden assassination without realizing it.
Earlier this month, the FBI seized control of the Coreflood botnet and shut it down:
According to the filing, ISC, under law enforcement supervision, planned to replace the servers with servers that it controlled, then collect the IP addresses of all infected machines communicating with the criminal servers, and send a remote "stop" command to infected machines to disable the Coreflood malware operating on them.
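The mechanics the filing describes amount to a sinkhole: infected machines check in with a server the good guys now control, which logs their addresses and answers with a disable command. A schematic sketch -- the message format and the literal "STOP" reply are invented here, not Coreflood's real protocol:

```python
# Schematic sinkhole: log each infected machine that checks in,
# and answer every check-in with a disable command.
# The wire format below is invented for illustration.

collected_ips = set()

def handle_checkin(source_ip: str, payload: bytes) -> bytes:
    """Record the bot's IP and reply with a disable command."""
    collected_ips.add(source_ip)
    return b"STOP"  # hypothetical command the malware would obey

# Three check-ins from two infected machines; the sinkhole logs both
# addresses and tells each bot to halt.
for ip in ["203.0.113.5", "198.51.100.7", "203.0.113.5"]:
    reply = handle_checkin(ip, b"beacon")
    assert reply == b"STOP"

assert collected_ips == {"203.0.113.5", "198.51.100.7"}
```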
This is a big deal; it's the first time the FBI has done something like this. My guess is that we're going to see a lot more of this sort of thing in the future; it's the obvious solution for botnets.
Not that the approach is without risks:
"Even if we could absolutely be sure that all of the infected Coreflood botnet machines were running the exact code that we reverse-engineered and convinced ourselves that we understood," said Chris Palmer, technology director for the Electronic Frontier Foundation, "this would still be an extremely sketchy action to take. It's other people's computers and you don't know what's going to happen for sure. You might blow up some important machine."
I just don't see this argument convincing very many people. Leaving Coreflood in place could blow up some important machine. And leaving Coreflood in place not only puts the infected computers at risk; it puts the whole Internet at risk. Minimizing the collateral damage is important, but this feels like a place where the interest of the Internet as a whole trumps the interest of those affected by shutting down Coreflood.
The problem as I see it is the slippery slope. Because next, the RIAA is going to want to remotely disable computers they feel are engaged in illegal file sharing. And the FBI is going to want to remotely disable computers they feel are encouraging terrorism. And so on. It's important to have serious legal controls on this counterattack sort of defense.