Schneier on Security
A blog covering security and security technology.
October 2010 Archives
From the Wall Street Journal:
Take "stranger danger," the classic Halloween horror. Even when I was a kid, back in the "Bewitched" and "Brady Bunch" costume era, parents were already worried about neighbors poisoning candy. Sure, the folks down the street might smile and wave the rest of the year, but apparently they were just biding their time before stuffing us silly with strychnine-laced Smarties.
I remember one year when I filled a few Pixie Stix with garlic powder. But that was a long time ago.
EDITED TO ADD (11/2): Interesting essay:
The precise methods of the imaginary Halloween sadist are especially interesting. Apples and home goods occasionally appear in the stories, but the most common culprit is regular candy. This crazed person would purchase candy, open the wrapper, and DO SOMETHING to it, something that would be designed to hurt the unsuspecting child. But also something that would be sufficiently obvious and clumsy that the vigilant parent could spot it (hence the primacy of candy inspection).
EDITED TO ADD (11/11): Wondermark comments.
The New York Times writes:
Despite the increased scrutiny of people and luggage on passenger planes since 9/11, there are far fewer safeguards for packages and bundles, particularly when loaded on cargo-only planes.
Well, of course. We've always known this. We've not worried about terrorism on cargo planes because it isn't very terrorizing. Packages aren't people. If a passenger plane blows up, it affects a couple of hundred people. If a cargo plane blows up, it just affects the crew.
Cargo that is loaded on to passenger planes should be subjected to the same level of security as passenger luggage. Cargo that is loaded onto cargo planes should be treated no differently from cargo loaded into ships, trains, trucks, and the trunks of cars.
Of course, now that the media is talking about cargo security, we have to "do something." (Something must be done. This is something. Therefore, we must do it.) But if we're so scared that we have to devote resources to this kind of terrorist threat, we've well and truly lost.
EDITED TO ADD (10/30): The plot -- it's still unclear how serious it was -- wasn't uncovered by any security screening, but by intelligence gathering:
Intelligence officials were onto the suspected plot for days, officials said. The packages in England and Dubai were discovered after Saudi Arabian intelligence picked up information related to Yemen and passed it on to the U.S., two officials said.
This is how you fight terrorism: not by defending against specific threats, but through intelligence, investigation, and emergency response.
Interesting television program from UK Channel 4.
Okay, it's not TED. It's one of the independent regional TED events: TEDxPSU. My talk was "Reconceptualizing Security," a condensation of the hour-long talk into 18 minutes.
Good blog post.
They're not worth it:
In seven years, New Orleans' crime camera program has yielded six indictments: three for crimes caught on video and three for bribes and kickbacks a vendor is accused of paying a former city official to sell the cameras to City Hall.
Old -- but recently released -- document discussing the bugging of the Russian embassy in 1940. The document also mentions bugging the embassies of France, Germany, Italy, and Japan.
Firesheep is a new Firefox plugin that makes it easy for you to hijack other people's social network connections. Basically, Facebook authenticates clients with cookies. If someone is using a public WiFi connection, the cookies are sniffable. Firesheep uses WinPcap to capture and display the authentication information for accounts it sees, allowing you to hijack the connection.
Slides from the Toorcon talk.
Protect yourself by forcing the authentication to happen over TLS. Or stop logging in to Facebook from public networks.
EDITED TO ADD (10/27): To protect against this attack, you have to encrypt the entire session -- not just the initial authentication.
EDITED TO ADD (11/17): Blacksheep detects Firesheep.
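The server-side half of that defense amounts to marking the session cookie so the browser will only ever send it over TLS. A minimal sketch using Python's standard library (the cookie name and token value are hypothetical, not Facebook's):

```python
from http.cookies import SimpleCookie

# Build a session cookie that the browser will only transmit over TLS.
cookie = SimpleCookie()
cookie["session"] = "abc123"          # hypothetical session token
cookie["session"]["secure"] = True    # never sent over plaintext HTTP
cookie["session"]["httponly"] = True  # not readable from page JavaScript

# The resulting Set-Cookie header line:
print(cookie.output())
```

Without the Secure flag, the browser happily attaches the cookie to every plaintext HTTP request -- which is exactly what Firesheep sniffs.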
Excellent article from The New Yorker.
It's a long list. These items are not online; they're at the National Archives and Records Administration in College Park, MD. You can either request copies by mail under FOIA (at 75 cents per page) or come in person. There, you can read and scan them for free, or photocopy them for about 20 cents a page.
While the notion that a few animals produce polarization signals and use them in communication is not new, Mäthger and Hanlon’s findings present the first anatomical evidence for a “hidden communication channel” that can remain masked by typical camouflage patterns. Their results suggest that it might be possible for squid to send concealed polarized signals to one another while staying camouflaged to fish or mammalian predators, most of which do not have polarization vision.
I was interviewed last week at RSA Europe.
Once a user has logged into FaceTime, anyone with access to the machine can change the user's Apple ID password without knowing the old password.
Of course, it's just as easy to change it back, if the victim notices.
EDITED TO ADD (11/9): It's been fixed.
Inspector Richard Haycock told local newspapers that the possible use of the car lock jammers would help explain a recent spate of thefts from vehicles that have occurred without leaving any signs of forced entry.
I thought car door locks weren't much of a deterrent to a professional car thief.
EDITED TO ADD (10/22): The thieves are not stealing cars, they're stealing things left inside the cars.
EDITED TO ADD (11/10): Related paper.
I am the program chair for WEIS 2011, which is to be held next June in Washington, DC.
Submissions are due at the end of February.
Please forward and repost the call for papers.
This isn't good:
Intelligent Integration Systems (IISi), a small Boston-based software development firm, alleges that their Geospatial Toolkit and Extended SQL Toolkit were pirated by Massachusetts-based Netezza for use by a government client. Subsequent evidence and court proceedings revealed that the "government client" seeking assistance with Predator drones was none other than the Central Intelligence Agency.
The obvious joke is that this is what you get when you go with the low bidder, but it doesn't have to be that way. And there's nothing special about this being a government procurement; any bespoke IT procurement needs good contractual oversight.
EDITED TO ADD (11/10): Another article.
When he's out and about near his Denver home, former Broncos quarterback John Elway has come up with a novel way to travel incognito—he wears his own jersey. "I do that all the time here," the 50-year-old Hall of Famer told me. "I go to the mall that way. They know it's not me because they say there's no way Elway would be wearing his own jersey in the mall. So it actually is the safest thing to do."
Of course, now everybody knows.
This is clever:
The tool is called PinDr0p, and works by analysing the various characteristic noise artifacts left in audio by the different types of voice network -- cellular, VoIP etc. For instance, packet loss leaves tiny gaps in audio signals, too brief for the human ear to detect, but quite perceptible to the PinDr0p algorithms. Vishers and others wishing to avoid giving away the origin of a call will often route a call through multiple different network types.
This system can be used to differentiate telephone calls from your bank from telephone calls from someone in Nigeria pretending to be from your bank.
The PinDr0p analysis can't produce an IP address or geographical location for a given caller, but once it has a few calls via a given route, it can subsequently recognise further calls via the same route with a high degree of accuracy: 97.5 per cent following three calls and almost 100 per cent after five.
Unless your bank is outsourcing its customer support to Russia, of course.
The GIT researchers hope to develop a database of different signatures which would let their system provide a geolocation as well as routing information in time.
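The packet-loss artifact the article describes -- tiny gaps in the audio, too brief to hear -- is the kind of feature such a system might extract. A toy sketch of that one idea (this is not PinDr0p's actual algorithm; the function, thresholds, and signal are all invented for illustration):

```python
def count_tiny_gaps(samples, silence=1e-3, max_gap=40):
    """Count runs of near-silence short enough to be inaudible --
    a hypothetical proxy for VoIP packet-loss artifacts."""
    gaps, run = 0, 0
    for s in samples:
        if abs(s) < silence:
            run += 1
        else:
            if 0 < run <= max_gap:  # short dropout: count it
                gaps += 1
            run = 0
    if 0 < run <= max_gap:
        gaps += 1
    return gaps

# A toy "signal": a steady tone interrupted by two brief dropouts.
signal = [0.5] * 100 + [0.0] * 10 + [0.5] * 100 + [0.0] * 10 + [0.5] * 100
print(count_tiny_gaps(signal))
```

A real classifier would combine many such features across codecs and network types; this only shows why a VoIP hop leaves a measurable fingerprint at all.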
Statement from the researchers.
India is writing its own operating system so it doesn't have to rely on Western technology:
India's Defence Research and Development Organisation (DRDO) wants to build an OS, primarily so India can own the source code and architecture. That will mean the country won't have to rely on Western operating systems that it thinks aren't up to the job of thwarting cyber attacks. The DRDO specifically wants to design and develop its own OS that is hack-proof to prevent sensitive data from being stolen.
On the one hand, this is great. We could use more competition in the OS market -- as more and more applications move into the cloud and are only accessed via an Internet browser, OS compatibility matters less and less -- and an OS that brands itself as "more secure" can only help. But this security-by-obscurity thinking just isn't sound:
"The only way to protect it is to have a home-grown system, the complete architecture ... source code is with you and then nobody knows what's that," he added.
The only way to protect it is to design and implement it securely. Keeping control of your source code didn't magically make Windows secure, and it won't make this Indian OS secure.
Interesting new technology.
Squarehead's new system is like bullet-time for sound. 325 microphones sit in a carbon-fiber disk above the stadium, and a wide-angle camera looks down on the scene from the center of this disk. All the operator has to do is pinpoint a spot on the court or field using the screen, and the Audioscope works out how far that spot is from each of the mics, corrects for delay and then synchronizes the audio from all 325 of them. The result is a microphone that can pick out the pop of a bubblegum bubble in the middle of a basketball game....
Some copycat imitated this xkcd cartoon in Sweden, hand writing an SQL injection attack onto a paper ballot. Even though the ballot was manually entered into the vote database, the attack (and the various other hijinks) failed. This time.
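The stunt fails against any system that treats voter input as data rather than as SQL. A minimal illustration with a parameterized query (the table and the ballot payload are hypothetical, in the spirit of the cartoon):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (candidate TEXT)")

# A hand-written ballot entry mimicking the stunt (hypothetical payload):
ballot = "R; DROP TABLE votes; --"

# The "?" placeholder treats the entire string as a value,
# so the embedded DROP TABLE never executes as SQL.
conn.execute("INSERT INTO votes (candidate) VALUES (?)", (ballot,))

stored = conn.execute("SELECT candidate FROM votes").fetchone()[0]
print(stored)  # the payload is stored verbatim; the table survives
```

Had the ballot text been concatenated directly into a SQL string, the same input could have dropped the table -- which is presumably what the vote-database operators guarded against.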
They're tracking a college student in Silicon Valley. He's 20, partially Egyptian, and studying marketing at Mission College. He found the tracking device attached to his car. Near as he could tell, what he did to warrant the FBI's attention is be the friend of someone who did something to warrant the FBI's attention.
Afifi retrieved the device from his apartment and handed it over, at which point the agents asked a series of questions: did he know anyone who traveled to Yemen or was affiliated with overseas training? One of the agents produced a printout of a blog post that Afifi’s friend Khaled allegedly wrote a couple of months ago. It had "something to do with a mall or a bomb," Afifi said. He hadn’t seen it before and doesn’t know the details of what it said. He found it hard to believe Khaled meant anything threatening by the post.
Here's the Reddit post:
bombing a mall seems so easy to do. i mean all you really need is a bomb, a regular outfit so you arent the crazy guy in a trench coat trying to blow up a mall and a shopping bag. i mean if terrorism were actually a legitimate threat, think about how many fucking malls would have blown up already.. you can put a bag in a million different places, there would be no way to foresee the next target, and really no way to prevent it unless CTU gets some intel at the last minute in which case every city but LA is fucked...so...yea...now i'm surely bugged : /
This weird story poses three sets of questions.
Remember, the Ninth Circuit Court recently ruled that the police do not need a warrant to attach one of these things to your car. That ruling holds true only for the Ninth Circuit right now; the Supreme Court will probably rule on this soon.
Meanwhile, the ACLU is getting involved:
Brian Alseth from the American Civil Liberties Union in Washington state contacted Afifi after seeing pictures of the tracking device posted online and told him the ACLU had been waiting for a case like this to challenge the ruling.
Police spent about 10,000 hours poring over footage from some 1,500 security cameras around Dubai. Using face-recognition software, electronic-payment records, receipts and interviews with taxi drivers and hotel staff, they put together a list of suspects and publicized it.
Seems ubiquitous electronic surveillance is no match for a sufficiently advanced adversary.
Here's my essay on biometrics, from 1999.
In Chapel Hill, NC.
We do nothing, first and foremost, because there is nothing we can do. Unless the State Department gets specific—e.g., "don't go to the Eiffel Tower tomorrow"—information at that level of generality is completely meaningless. Unless we are talking about weapons of mass destruction, the chances of being hit by a car while crossing the street are still greater than the chances of being on the one plane or one subway car that comes under attack. Besides, nobody living or working in a large European city (or even a small one) can indefinitely avoid coming within close proximity of "official and private" structures affiliated with U.S. interests—a Hilton hotel, an Apple computer store—not to mention subways, trains, airplanes, boats, and all other forms of public transportation.
EDITED TO ADD (10/13): Another article.
Sounds like it was easy:
Last week, the D.C. Board of Elections and Ethics opened a new Internet-based voting system for a weeklong test period, inviting computer experts from all corners to prod its vulnerabilities in the spirit of "give it your best shot." Well, the hackers gave it their best shot -- and midday Friday, the trial period was suspended, with the board citing "usability issues brought to our attention."
My primary worry about contests like this is that people will think a positive result means something. If a bunch of students can break into a system after a couple of weeks of attempts, we know it's insecure. But just because a system withstands a test like this doesn't mean it's secure. We don't know who tried. We don't know what they tried. We don't know how long they tried. And we don't know if someone who tries smarter, harder, and longer could break the system.
Computer security experts are often surprised at which stories get picked up by the mainstream media. Sometimes it makes no sense. Why this particular data breach, vulnerability, or worm and not others? Sometimes it's obvious. In the case of Stuxnet, there's a great story.
As the story goes, the Stuxnet worm was designed and released by a government--the U.S. and Israel are the most common suspects--specifically to attack the Bushehr nuclear power plant in Iran. How could anyone not report that? It combines computer attacks, nuclear power, spy agencies and a country that's a pariah to much of the world. The only problem with the story is that it's almost entirely speculation.
Here's what we do know: Stuxnet is an Internet worm that infects Windows computers. It primarily spreads via USB sticks, which allows it to get into computers and networks not normally connected to the Internet. Once inside a network, it uses a variety of mechanisms to propagate to other machines within that network and gain privilege once it has infected those machines. These mechanisms include both known and patched vulnerabilities, and four "zero-day exploits": vulnerabilities that were unknown and unpatched when the worm was released. (All the infection vulnerabilities have since been patched.)
Stuxnet doesn't actually do anything on those infected Windows computers, because they're not the real target. What Stuxnet looks for is a particular model of Programmable Logic Controller (PLC) made by Siemens (the press often refers to these as SCADA systems, which is technically incorrect). These are small embedded industrial control systems that run all sorts of automated processes: on factory floors, in chemical plants, in oil refineries, at pipelines--and, yes, in nuclear power plants. These PLCs are often controlled by computers, and Stuxnet looks for Siemens SIMATIC WinCC/Step 7 controller software.
If it doesn't find one, it does nothing. If it does, it infects it using yet another unknown and unpatched vulnerability, this one in the controller software. Then it reads and changes particular bits of data in the controlled PLCs. It's impossible to predict the effects of this without knowing what the PLC is doing and how it is programmed, and that programming can be unique based on the application. But the changes are very specific, leading many to believe that Stuxnet is targeting a specific PLC, or a specific group of PLCs, performing a specific function in a specific location--and that Stuxnet's authors knew exactly what they were targeting.
It's already infected more than 50,000 Windows computers, and Siemens has reported 14 infected control systems, many in Germany. (These numbers were certainly out of date as soon as I typed them.) We don't know of any physical damage Stuxnet has caused, although there are rumors that it was responsible for the failure of India's INSAT-4B satellite in July. We believe that it did infect the Bushehr plant.
All the anti-virus programs detect and remove Stuxnet from Windows systems.
Stuxnet was first discovered in late June, although there's speculation that it was released a year earlier. As worms go, it's very complex and got more complex over time. In addition to the multiple vulnerabilities that it exploits, it installs its own driver into Windows. These have to be signed, of course, but Stuxnet used a stolen legitimate certificate. Interestingly, the stolen certificate was revoked on July 16, and a Stuxnet variant with a different stolen certificate was discovered on July 17.
Over time the attackers swapped out modules that didn't work and replaced them with new ones--perhaps as Stuxnet made its way to its intended target. Those certificates first appeared in January. USB propagation, in March.
Stuxnet has two ways to update itself. It checks back to two control servers, one in Malaysia and the other in Denmark, but also uses a peer-to-peer update system: When two Stuxnet infections encounter each other, they compare versions and make sure they both have the most recent one. It also has a kill date of June 24, 2012. On that date, the worm will stop spreading and delete itself.
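The described peer-to-peer behavior -- two infections meet, compare versions, and both end up with the newer one, subject to the kill date -- can be sketched as follows. This is a speculative reconstruction for illustration, not Stuxnet's actual code; all names and structures are invented:

```python
from datetime import date

KILL_DATE = date(2012, 6, 24)  # after this date, the worm stops spreading

def p2p_sync(peer_a, peer_b, today):
    """Sketch of the reported peer-to-peer update: when two infections
    encounter each other, both end up with the newer payload."""
    if today >= KILL_DATE:
        return None  # past the kill date: no further spreading or updating
    newer = max(peer_a, peer_b, key=lambda p: p["version"])
    peer_a["version"] = peer_b["version"] = newer["version"]
    peer_a["payload"] = peer_b["payload"] = newer["payload"]
    return newer["version"]

a = {"version": 3, "payload": "v3"}
b = {"version": 5, "payload": "v5"}
p2p_sync(a, b, date(2010, 10, 1))
print(a["version"], b["version"])  # both peers now at version 5
```

The design point is resilience: even if the two control servers are taken down, newer variants still propagate between infections.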
We don't know who wrote Stuxnet. We don't know why. We don't know what the target is, or if Stuxnet reached it. But you can see why there is so much speculation that it was created by a government.
Stuxnet doesn't act like a criminal worm. It doesn't spread indiscriminately. It doesn't steal credit card information or account login credentials. It doesn't herd infected computers into a botnet. It uses multiple zero-day vulnerabilities. A criminal group would be smarter to create different worm variants and use one in each. Stuxnet performs sabotage. It doesn't threaten sabotage, like a criminal organization intent on extortion might.
Stuxnet was expensive to create. Estimates are that it took 8 to 10 people six months to write. There's also the lab setup--surely any organization that goes to all this trouble would test the thing before releasing it--and the intelligence gathering to know exactly how to target it. Additionally, zero-day exploits are valuable. They're hard to find, and they can only be used once. Whoever wrote Stuxnet was willing to spend a lot of money to ensure that whatever job it was intended to do would be done.
None of this points to the Bushehr nuclear power plant in Iran, though. Best I can tell, this rumor was started by Ralph Langner, a security researcher from Germany. He labeled his theory "highly speculative," and based it primarily on the facts that Iran had an unusually high number of infections (the rumor that it had the most infections of any country seems not to be true), that the Bushehr nuclear plant is a juicy target, and that some of the other countries with high infection rates--India, Indonesia, and Pakistan--are countries where the same Russian contractor involved in Bushehr is also involved. This rumor moved into the computer press and then into the mainstream press, where it became the accepted story, without any of the original caveats.
Once a theory takes hold, though, it's easy to find more evidence. The word "myrtus" appears in the worm: an artifact that the compiler left, possibly by accident. That's the myrtle plant. Of course, that doesn't mean that druids wrote Stuxnet. According to the story, it refers to Queen Esther, also known as Hadassah; she saved the Persian Jews from genocide in the 4th century B.C. "Hadassah" means "myrtle" in Hebrew.
Stuxnet also sets a registry value of "19790509" to alert new copies of Stuxnet that the computer has already been infected. It's rather obviously a date, but instead of looking at the gazillion things--large and small--that happened on that date, the story insists it refers to the date Persian Jew Habib Elghanian was executed in Tehran for spying for Israel.
Sure, these markers could point to Israel as the author. On the other hand, Stuxnet's authors were uncommonly thorough about not leaving clues in their code; the markers could have been deliberately planted by someone who wanted to frame Israel. Or they could have been deliberately planted by Israel, who wanted us to think they were planted by someone who wanted to frame Israel. Once you start walking down this road, it's impossible to know when to stop.
Another number found in Stuxnet is 0xDEADF007. Perhaps that means "Dead Fool" or "Dead Foot," a term that refers to an airplane engine failure. Perhaps this means Stuxnet is trying to cause the targeted system to fail. Or perhaps not. Still, a targeted worm designed to cause a specific sabotage seems to be the most likely explanation.
If that's the case, why is Stuxnet so sloppily targeted? Why doesn't Stuxnet erase itself when it realizes it's not in the targeted network? When it infects a network via USB stick, it's supposed to only spread to three additional computers and to erase itself after 21 days--but it doesn't do that. A mistake in programming, or a feature in the code not enabled? Maybe we're not supposed to reverse engineer the target. By allowing Stuxnet to spread globally, its authors committed collateral damage worldwide. From a foreign policy perspective, that seems dumb. But maybe Stuxnet's authors didn't care.
My guess is that Stuxnet's authors, and its target, will forever remain a mystery.
This essay originally appeared on Forbes.com.
My alternate explanations for Stuxnet were cut from the essay. Here they are:
Note that some of these alternate explanations overlap.
EDITED TO ADD (10/7): Symantec published a very detailed analysis. It seems like one of the zero-day vulnerabilities wasn't a zero-day after all. Good CNet article. More speculation, without any evidence. Decent debunking. Alternate theory, that the target was the uranium centrifuges in Natanz, Iran.
From the Journal of Homeland Security and Emergency Management: "Politics or Risks? An Analysis of Homeland Security Grant Allocations to the States."
Abstract: In the days following the September 11 terrorist attacks on the United States, the nation's elected officials created the USA Patriot Act. The act included a grant program for the 50 states that was intended to assist them with homeland security and preparedness efforts. However, not long after its passage, critics charged the Department of Homeland Security with allocating the grant funds on the basis of "politics" rather than "risk." This study analyzes the allocation of funds through all seven of the grant subprograms for the years 2003 through 2006. Conducting a linear regression analysis for each year, our research indicates that the total per capita amounts are inversely related to risk factors but are not related at all to partisan political factors between 2003 and 2005. In 2006, Congress changed the formula with the intention of increasing the relationship between allocations and risk. However, our findings reveal that this change did not produce the intended effect and the allocations were still negatively related to risk and unrelated to partisan politics.
I'm not sure I buy the methodology, but there it is.
This will help some.
At least two rival systems plan to put unique codes on packages containing antimalarials and other medications. Buyers will be able to text the code to a phone number on the package and get an immediate reply of "NO" or "OK," with the drug's name, expiration date, and other information.
To defeat the system, the counterfeiter has to copy the codes. If the stores selling to customers are in on the scam, every package can carry the same code. If not, there have to be enough different codes that the store doesn't notice duplications. Presumably, numbers that are known to have been copied are added to the database, so the counterfeiters need to keep updating their codes. And presumably the codes are cryptographically hard to predict, so the only way to keep updating them is to copy them from legitimate products.
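One way to get codes that are "cryptographically hard to predict" is to derive them as a keyed MAC over each package's serial number, and to flag serials that are queried twice. A sketch of such a scheme (the key, serial format, and reply strings are assumptions for illustration, not the actual deployed systems):

```python
import hmac, hashlib

SECRET_KEY = b"manufacturer-secret"  # hypothetical signing key

def package_code(serial: str) -> str:
    """Derive a short verification code from a package serial number.
    Without the key, codes for unseen serials are unpredictable."""
    mac = hmac.new(SECRET_KEY, serial.encode(), hashlib.sha256)
    return mac.hexdigest()[:10].upper()

seen = set()  # serials already texted in; a repeat suggests a copied code

def verify(serial: str, code: str) -> str:
    if serial in seen:
        return "NO"  # same code showing up twice: likely counterfeit
    seen.add(serial)
    ok = hmac.compare_digest(package_code(serial), code)
    return "OK" if ok else "NO"

code = package_code("PKG-0001")   # hypothetical serial
print(verify("PKG-0001", code))   # first query: OK
print(verify("PKG-0001", code))   # replayed code: NO
```

This matches the attack analysis above: a counterfeiter can't generate fresh valid codes without the key, so copying codes off legitimate packages -- and hoping no one queries the same serial twice -- is the remaining option.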
Another attack would be to intercept the verification system. A man-in-the-middle attack against the phone number or the website would be difficult, but presumably the verification information would be on the object itself. It would be easy to swap in a fake phone number that would verify anything.
It'll be interesting to see how the counterfeiters get around this security measure.
New research: "Attacks and Design of Image Recognition CAPTCHAs."
Abstract. We systematically study the design of image recognition CAPTCHAs (IRCs) in this paper. We first review and examine all IRC schemes known to us and evaluate each scheme against the practical requirements in CAPTCHA applications, particularly in large-scale real-life applications such as Gmail and Hotmail. Then we present a security analysis of the representative schemes we have identified. For the schemes that remain unbroken, we present our novel attacks. For the schemes for which known attacks are available, we propose a theoretical explanation why those schemes have failed. Next, we provide a simple but novel framework for guiding the design of robust IRCs. Then we propose an innovative IRC called Cortcha that is scalable to meet the requirements of large-scale applications. Cortcha relies on recognizing an object by exploiting its surrounding context, a task that humans can perform well but computers cannot. An infinite number of types of objects can be used to generate challenges, which can effectively disable the learning process in machine learning attacks. Cortcha does not require the images in its image database to be labeled. Image collection and CAPTCHA generation can be fully automated. Our usability studies indicate that, compared with Google's text CAPTCHA, Cortcha yields a slightly higher human accuracy rate but on average takes more time to solve a challenge.
The paper attacks IMAGINATION (designed at Penn State around 2005) and ARTiFACIAL (designed at MSR Redmond around 2004).
I regularly say that security decisions are primarily made for non-security reasons. This article about the placement of sky marshals on airplanes is an excellent example. Basically, the airlines would prefer they fly coach instead of first class.
Airline CEOs met recently with TSA administrator John Pistole and officials from the Federal Air Marshal Service requesting the TSA to reconsider the placement of marshals based on current security threats.
When I list the few improvements to airline security since 9/11, I don't include sky marshals.
EDITED TO ADD (10/9): An article from The Economist.
Not their online behavior at work, but their online behavior in life.
Using automation software that slogs through Facebook, Twitter, Flickr, YouTube, LinkedIn, blogs, and "thousands of other sources," the company develops a report on the "real you" -- not the carefully crafted you in your resume. The service is called Social Intelligence Hiring. The company promises a 48-hour turn-around.
This is being sold using fear:
...company spokespeople emphasize liability. What happens if one of your employees freaks out, comes to work and starts threatening coworkers with a samurai sword? You'll be held responsible because all of the signs of such behavior were clear for all to see on public Facebook pages. That's why you should scan every prospective hire and run continued scans on every existing employee.
Okay, so this isn't a normal blog post.
It's not about security.
If you're interested in a copy, it's only $15 -- including shipping anywhere in the world.
If you're in Minneapolis, come to the Renaissance Festival tomorrow to hear us play -- I'm not going to be there on Sunday.
Click to order Brother Seamus' "Hale and Sound." Order it here; the PayPal button on the CD's webpage doesn't work.
If we frame this discussion as a war discussion, then what you do when there's a threat of war is you call in the military and you get military solutions. You get lockdown; you get an enemy that needs to be subdued. If you think about these threats in terms of crime, you get police solutions. And as we have this debate, not just on stage, but in the country, the way we frame it, the way we talk about it; the way the headlines read, determine what sort of solutions we want, make us feel better. And so the threat of cyberwar is being grossly exaggerated and I think it's being done for a reason. This is a power grab by government. What Mike McConnell didn't mention is that grossly exaggerating a threat of cyberwar is incredibly profitable.
More of my writings on cyberwar, and the debate, here.
This is a list of master's theses from the Naval Postgraduate School's Center for Homeland Defense and Security, this year.
Some interesting stuff in there.
Powered by Movable Type. Photo at top by Per Ervland.
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.