Schneier on Security
A blog covering security and security technology.
November 2005 Archives
Daniel Solove on Google and privacy:
A New York Times editorial observes:At a North Carolina strangulation-murder trial this month, prosecutors announced an unusual piece of evidence: Google searches allegedly done by the defendant that included the words "neck" and "snap." The data were taken from the defendant's computer, prosecutors say. But it might have come directly from Google, which -- unbeknownst to many users -- keeps records of every search on its site, in ways that can be traced back to individuals.
Solove goes on to argue that if companies like Google want to collect people's data (even if people are willing to supply it), the least they can do is fight for greater protections against government access to that data. While this won't address all the problems, it would be a step forward to see companies like Google use their power to foster meaningful legislative change.
EDITED TO ADD (12/3): Here's an op ed from The Boston Globe on the same topic.
Now here's a good idea:
US intelligence chief John Negroponte announced Tuesday the creation of a new CIA-managed center to exploit publicly available information for intelligence purposes.
This sentence jumped out at me in an otherwise pedestrian article on criminal fraud:
"Fraud is fundamentally fuelling the growth of organised crime in the UK, earning more from fraud than they do from drugs," Chris Hill, head of fraud at the Norwich Union, told BBC News.
I'll bet that most of that involves the Internet to some degree.
And then there's this:
Global cybercrime turned over more money than drug trafficking last year, according to a US Treasury advisor. Valerie McNiven, an advisor to the US government on cybercrime, claimed that corporate espionage, child pornography, stock manipulation, phishing fraud and copyright offences cause more financial harm than the trade in illegal narcotics such as heroin and cocaine.
This doesn't bode well for computer security in general.
Police assisted by U.S. Secret Service agents on Sunday broke up a network capable of printing millions of dollars a month of excellent quality counterfeit money and arrested five suspects during a raid on a remote village in northwest Colombia, officials said.
It's a big industry there:
Fernandez said Valle del Cauca, of which Cali is the state capital, has turned into a center of global counterfeiting. "Entire families are dedicated to falsifying and trafficking money."
Colombia is thought to produce more than 40 percent of fake money circulating around the world.
They actually think this is a good idea:
Miami police announced Monday they will stage random shows of force at hotels, banks and other public places to keep terrorists guessing and remind people to be vigilant.
Boy, is this one a mess. How does "in-your-face" affect getting the people on your side? What happens if someone refuses to show an ID? What good is demanding an ID in the first place? And if I were writing a movie plot, I would plan my terrorist attack for a different part of town when the police were out playing pretend.
The response from the ACLU of Florida is puzzling, though. Let's hope he just didn't understand what was being planned.
EDITED TO ADD (11/29): This article is in error.
EDITED TO ADD (11/30): more info.
What is it with this week? I can't turn around without seeing another dumb movie-plot threat:
A Thai minister has claimed that by returning missed calls on their cell phones people from the Muslim-majority southern provinces could unintentionally trigger bombs set by Islamic militants.
This has got to be the most bizarre movie-plot threat to date: alien viruses downloaded via the SETI project:
In his [Richard Carrigan, a particle physicist at the US Fermi National Accelerator Laboratory in Illinois] report, entitled "Do potential Seti signals need to be decontaminated?", he suggests the Seti scientists may be too blasé about finding a signal. "In science fiction, all the aliens are bad, but in the world of science, they are all good and simply want to get in touch." His main concern is that, intentionally or otherwise, an extra-terrestrial signal picked up by the Seti team could cause widespread damage to computers if released on to the internet without being checked.
Here's his website.
Although you have to admit, it could make a cool movie.
EDITED TO ADD (12/16): Here's a good rebuttal.
Want to make the country safer from terrorism? Take the money now being wasted on national ID cards, massive data mining projects, fingerprinting foreigners, airline passenger profiling, etc., and use it to fund worldwide efforts to interdict terrorist funding:
The government's efforts to help foreign nations cut off the supply of money to terrorists, a critical goal for the Bush administration, have been stymied by infighting among American agencies, leadership problems and insufficient financing, a new Congressional report says.
One unidentified Treasury official quoted anonymously in the report said that the intergovernmental process for deterring terrorist financing abroad is "broken" and that the State Department "creates obstacles rather than coordinates effort." A State Department official countered that the real problem lies in the Treasury Department's reluctance to accept the State Department's leadership in the process.
More nonsense in the name of defending ourselves from terrorism:
The Defense Department has expanded its programs aimed at gathering and analyzing intelligence within the United States, creating new agencies, adding personnel and seeking additional legal authority for domestic security activities in the post-9/11 world.
The police and the military have fundamentally different missions. The police protect citizens. The military attacks the enemy. When you start giving police powers to the military, citizens start looking like the enemy.
We gain a lot of security because we separate the functions of the police and the military, and we will all be much less safe if we allow those functions to blur. This kind of thing worries me far more than terrorist threats.
The European music industry is lobbying the European Parliament, demanding things that the RIAA can only dream about:
The music and film industries are demanding that the European parliament extends the scope of proposed anti-terror laws to help them prosecute illegal downloaders. In an open letter to MEPs, companies including Sony BMG, Disney and EMI have asked to be given access to communications data - records of phone calls, emails and internet surfing - in order to take legal action against pirates and filesharers. Current proposals restrict use of such information to cases of terrorism and organised crime.
Our society definitely needs a serious conversation about the fundamental freedoms we are sacrificing in a misguided attempt to keep us safe from terrorism. It feels both surreal and sickening to have to defend our fundamental freedoms against those who want to stop people from sharing music. How is it possible that we can contemplate so much damage to our society simply to protect the business model of a handful of companies?
Chris Hoofnagle is the West Coast Director for EPIC. It's his list.
I've been working for some time on writing easy-to-understand guides for protecting privacy. Here's my "top 10" things you can do with very little money or effort to protect your privacy.
Do you own shares of a Janus mutual fund? Can you vote your shares through a website called vote.proxy-direct.com? If so, you can vote the shares of others.
If you have a valid proxy number, you can add 1300 to the number to get another valid proxy number. Once entered, you get another person's name, address, and account number at Janus! You could then vote their shares too.
Definitely a great resource for identity thieves.
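The underlying flaw is a classic one: account identifiers that are sequential, so anyone holding one valid number can compute others. Here's a minimal sketch of the difference between guessable and unguessable identifiers -- the 1300 offset comes from the report above; the example proxy number and function names are hypothetical:

```python
import secrets

# The flaw: proxy numbers are sequential, so any valid number
# reveals others by simple arithmetic (here, adding 1300).
def next_guessable_proxy(proxy_number: int) -> int:
    """Given one valid proxy number, guess another (the reported flaw)."""
    return proxy_number + 1300

# The fix: identifiers with no arithmetic relationship to each other.
# A 128-bit random token cannot be derived from a neighbor's.
def issue_unguessable_proxy() -> str:
    return secrets.token_hex(16)  # 32 hex chars, 128 bits of entropy

if __name__ == "__main__":
    print(next_guessable_proxy(1000017))  # trivially guessed: 1001317
    print(issue_unguessable_proxy())      # unrelated random token
```

A website handing out access based on a number a visitor can increment is security by obliviousness, not by design.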
Recently I have been hearing some odd "Twofish has been broken" rumors. I thought I'd quell them once and for all.
Rumors of the death of Twofish have been greatly exaggerated.
The analysis in question is by Shiho Moriai and Yiqun Lisa Yin, who published their results in Japan in 2000. Recently, someone either got a copy of the paper or heard about the results, and rumors started spreading.
Here's the actual paper. It presents no cryptanalytic attacks, only some hypothesized differential characteristics. Moriai and Yin discovered byte-sized truncated differentials for 12- and 16-round Twofish (the full cipher has 16 rounds), but were unable to use them in any sort of attack. They also discovered a larger, 5-round truncated differential. No one has been able to convert these differentials into an attack, and Twofish is nowhere near broken. On the other hand, they are excellent and interesting results -- and it's a really good paper.
In more detail, here are the paper's three results:
1. A byte-sized truncated differential for 12-round Twofish.
2. A byte-sized truncated differential for the full 16-round Twofish.
3. A larger, 5-round truncated differential.
The paper theorizes that all of these characteristics might be useful in an attack, but I would be very careful about drawing any conclusions. It can be very tricky to go from single-path characteristics whose probability is much smaller than the chances of it happening by chance in an ideal cipher, to a real attack. The problem is in the part where you say "let's just assume all other paths behave randomly." Often the other paths do not behave randomly, and attacks that look promising fall flat on their faces.
We simply don't know whether these truncated differentials would be useful in a distinguishing attack. But what we do know is that even if everything works out perfectly to the cryptanalyst's benefit, and if an attack is possible, then such an attack is likely to require a totally unrealistic number of chosen plaintexts. 2^100 plaintexts is something like a billion billion DVDs' worth of data, or a T1 line running for a million times the age of the universe. (Note that these numbers might be off by a factor of 1,000 or so. But honestly, who cares? The numbers are so huge as to be irrelevant.) And even with all that data, a distinguishing attack is not the same as a key recovery attack.
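If you want to check that arithmetic yourself, here's the back-of-envelope version. The DVD capacity, T1 speed, and age of the universe are my assumed reference points; as I said, factors of 1,000 are beside the point:

```python
# Back-of-envelope check on the 2^100 chosen-plaintext figure.
BLOCK_BYTES = 16                  # Twofish has a 128-bit block
total_bytes = (2 ** 100) * BLOCK_BYTES

DVD_BYTES = 4.7e9                 # single-layer DVD
dvds = total_bytes / DVD_BYTES

T1_BYTES_PER_SEC = 1.544e6 / 8    # 1.544 Mbit/s T1 line
UNIVERSE_SECONDS = 13.8e9 * 365.25 * 24 * 3600
t1_universes = total_bytes / T1_BYTES_PER_SEC / UNIVERSE_SECONDS

# Comes out to roughly 4e21 DVDs and 1e8 universe-ages -- a few
# orders of magnitude beyond the figures quoted above, which is
# exactly the factor-of-1,000 slop already acknowledged.
print(f"{dvds:.1e} DVDs")
print(f"{t1_universes:.1e} universe-ages on a T1")
```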
Again, I am not trying to belittle the results. Moriai and Yin did some great work here, and they deserve all kinds of credit for it. But even from a theoretical perspective, Twofish isn't even remotely broken. There have been no extensions to these results since they were published five years ago. The best Twofish cryptanalysis is still the work we did during the design process: available on the Twofish home page.
The United States is highly vulnerable to attack from electronic pulses caused by a nuclear blast in space, according to a new book on threats to U.S. security.
The "single most serious national-security challenge." Absolutely nothing more serious.
I'm the first to admit that I don't know anything about Australian politics. I don't know who Amanda Vanstone is, what she stands for, and what other things she's said about any other topic.
But I happen to think she's right about airline security:
In a wide-ranging speech to Adelaide Rotarians, Senator Vanstone dismissed many commonwealth security measures as essentially ineffective. "To be tactful about these things, a lot of what we do is to make people feel better as opposed to actually achieve an outcome," Senator Vanstone said.
During her Adelaide speech, Senator Vanstone implied the use of plastic cutlery on planes to thwart terrorism was foolhardy.
Implied? I'll say it outright: it's stupid. For all its faults, Northwest Airlines still gives me a real metal knife, and I'm always pleased when it does; American Airlines still gives me a plastic one, and I'm always annoyed.
"Has it ever occurred to you that you just smash your wine glass and jump at someone, grab the top of their head and put it in their carotid artery and ask anything?" Senator Vanstone told her audience of about 100 Rotarians. "And believe me, you will have their attention. I think of this every time I see more money for the security agencies."
Okay, so maybe that was a bit graphic for the Rotarians. But her comments are basically right, and don't deserve this kind of response:
"(Her) extraordinary outburst that airport security was a sham to make the public feel good has made a mockery of the Howard Government's credibility in this important area of counter-terrorism," Mr Bevis said yesterday. "And for Amanda Vanstone to once again put her foot in her mouth while John Howard is overseas for serious talks on terrorism is appalling. She should apologise and quit, or if the Prime Minister can't shut her up he should sack her."
But Mr. Bevis, airport security is largely a sham to make the public feel better about flying. And if your Prime Minister doesn't know that, then you should worry about how serious his talks will be.
Vanstone has been defending herself:
Vanstone rejected calls from the Labor Party opposition for her resignation over the comments they said trivialised an important issue, saying she was not ridiculing security measures.
Plastic knives on airplanes drive me crazy too, and they don't do anything to improve our security against terrorism. I know nothing about Vanstone and her policies, but she has this one right.
Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year's Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database -- probably close to a million people overall -- that the FBI's computers analyzed, looking for links to known terrorists. Of course, no terrorist attack occurred and no plot was discovered: The intelligence was wrong.
A typical American citizen spending the holidays in Vegas might be surprised to learn that the FBI collected his personal data, but this kind of thing is increasingly common. Since 9/11, the FBI has been collecting all sorts of personal information on ordinary Americans, and it shows no signs of letting up.
The FBI has two basic tools for gathering information on large groups of Americans. Both were created in the 1970s to gather information solely on foreign terrorists and spies. Both were greatly expanded by the USA Patriot Act and other laws, and are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism. Together, they represent an enormous increase in police power in the United States.
The first are FISA warrants (sometimes called Section 215 warrants, after the section of the Patriot Act that expanded their scope). These are issued in secret, by a secret court. The second are national security letters, less well known but much more powerful, and which FBI field supervisors can issue all by themselves. The exact numbers are secret, but a recent Washington Post article estimated that 30,000 letters each year demand telephone records, banking data, customer data, library records, and so on.
In both cases, the recipients of these orders are prohibited by law from disclosing the fact that they received them. And two years ago, Attorney General John Ashcroft rescinded a 1995 guideline that this information be destroyed if it is not relevant to whatever investigation it was collected for. Now, it can be saved indefinitely, and disseminated freely.
September 2005, Rotterdam. The police had already identified some of the 250 suspects in a soccer riot from the previous April; most of the others had been captured on video but remained unidentified. In an effort to identify them, the police sent text messages to 17,000 phones known to have been in the vicinity of the riots, asking anyone with information to contact the police. The result was more evidence, and more arrests.
The differences between the Rotterdam and Las Vegas incidents are instructive. The Rotterdam police needed specific data for a specific purpose. Its members worked with federal justice officials to ensure that they complied with the country's strict privacy laws. They obtained the phone numbers without any names attached, and deleted them immediately after sending the single text message. And their actions were public, widely reported in the press.
On the other hand, the FBI has no judicial oversight. With only a vague hinting that a Las Vegas attack might occur, the bureau vacuumed up an enormous amount of information. First its members tried asking for the data; then they turned to national security letters and, in some cases, subpoenas. There was no requirement to delete the data, and there is every reason to believe that the FBI still has it all. And the bureau worked in secret; the only reason we know this happened is that the operation leaked.
These differences illustrate four principles that should guide our use of personal information by the police. The first is oversight: In order to obtain personal information, the police should be required to show probable cause, and convince a judge to issue a warrant for the specific information needed. Second, minimization: The police should only get the specific information they need, and not any more. Nor should they be allowed to collect large blocks of information in order to go on "fishing expeditions," looking for suspicious behavior. The third is transparency: The public should know, if not immediately then eventually, what information the police are getting and how it is being used. And fourth, destruction. Any data the police obtains should be destroyed immediately after its court-authorized purpose is achieved. The police should not be able to hold on to it, just in case it might become useful at some future date.
This isn't about our ability to combat terrorism; it's about police power. Traditional law already gives police enormous power to peer into the personal lives of people, to use new crime-fighting technologies, and to correlate that information. But unfettered police power quickly resembles a police state, and checks on that power make us all safer.
As more of our lives become digital, we leave an ever-widening audit trail in our wake. This information has enormous social value -- not just for national security and law enforcement, but for purposes as mundane as using cell-phone data to track road congestion, and as important as using medical data to track the spread of diseases. Our challenge is to make this information available when and where it needs to be, but also to protect the principles of privacy and liberty our country is built on.
This essay originally appeared in the Minneapolis Star-Tribune.
I'm just not able to keep up with all the twists and turns in this story. (My previous posts are here, here, here, and here, but a way better summary of the events is on BoingBoing: here, here, and here. Actually, you should just read every post on the topic in Freedom to Tinker. This is also worth reading.)
Many readers pointed out to me that the DMCA is one of the reasons antivirus companies aren't able to disable invasive copy-protection systems like Sony's rootkit: it may very well be illegal for them to do so. (Adam Shostack made this point.)
And it turns out you can easily defeat the rootkit:
With a small bit of tape on the outer edge of the CD, the PC then treats the disc as an ordinary single-session music CD and the commonly used music "rip" programs continue to work as usual.
The fallout from this has been simply amazing. I've heard from many sources that the anti-copy-protection forces in Sony and other companies have newly found power, and that copy-protection has been set back years. Let's hope that the entertainment industry realizes that digital copy protection is a losing game here, and starts trying to make money by embracing the characteristics of digital technology instead of fighting against them. I've written about that here and here (both from 2001).
Even Foxtrot has a cartoon on the topic.
I think I'm done here. Others are covering this much more extensively than I am. Unless there's a new twist that I simply have to comment on....
EDITED TO ADD (11/21): The EFF is suing Sony. (The page is a good summary of the whole saga.)
EDITED TO ADD (11/22): Here's a great idea; Sony can use a feature of the rootkit to inform infected users that they're infected.
As it turns out, there's a clear solution: A self-updating messaging system already built into Sony's XCP player. Every time a user plays a XCP-affected CD, the XCP player checks in with Sony's server. As Russinovich explained, usually Sony's server sends back a null response. But with small adjustments on Sony's end -- just changing the output of a single script on a Sony web server -- the XCP player can automatically inform users of the software improperly installed on their hard drives, and of their resulting rights and choices.
This is so obviously the right thing to do. My guess is that it'll never happen.
The suit is also the first filed under the state’s spyware law of 2005. It alleges the company surreptitiously installed the spyware on millions of compact music discs (CDs) that consumers inserted into their computers when they play the CDs, which can compromise the systems.
And here's something I didn't know: the rootkit consumes 1% - 2% of CPU time, whether or not you're playing a Sony CD. You'd think there would be a "theft of services" lawsuit in there somewhere.
EDITED TO ADD (11/30): Business Week has a good article on the topic.
The amazing story of Doris Payne:
Never did she grab the jewels and run. That wasn't her way. Instead, she glided in, engaged the clerk in one of her stories, confused them and easily slipped away with a diamond ring, usually to a waiting taxi cab.
Don't think that she never got caught:
She wasn’t always so lucky. She’s been arrested more times than she can remember. One detective said her arrest report is more than 6 feet long — she’s done time in Ohio, Kentucky, West Virginia, Colorado and Wisconsin. Still, the arrests are really "just the tip of the iceberg," said FBI supervisory special agent Paul G. Graupmann.
I regularly get anonymous e-mail from people exposing software vulnerabilities. This one looks interesting.
Beta testers have discovered a serious security flaw that exposes a site created using Net Objects Fusion 9 (NOF9) that has the potential to expose an entire site to hacking, including passwords and log in info for that site. The vulnerability exists for any website published using versioning (that is, all sites using nPower).
I don't use NOF9, and I haven't tested this vulnerability. Can someone do so and get back to me? And if it is a real problem, spread the word. I don't know yet if Website Pros prefers to pay lawyers to suppress information rather than pay developers to fix software vulnerabilities.
Coming soon to airports:
Tested in Russia, the two-stage GK-1 voice analyser requires that passengers don headphones at a console and answer "yes" or "no" into a microphone to questions about whether they are planning something illicit.
In general, I prefer security systems that are invasive yet anonymous to ones that are based on massive databases. And automatic systems that divide people into "probably fine" and "investigate a bit more" categories seem like a good use of technology. But I have no idea whether this system works (there is a lot of evidence that it does not), what the false positive and false negative rates are (this article states a completely useless 12% false positive rate), or how easy it would be to learn to fool the system. And in all of these trade-off discussions, the devil is in the details.
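To see why a false positive rate by itself is useless, run some illustrative numbers -- everything below except the 12% is an assumption on my part. Even a system that charitably catches every single attacker buries them under false alarms:

```python
# Base-rate arithmetic for a screening system with a 12% false
# positive rate. All numbers besides the 12% are illustrative.
passengers = 1_000_000
attackers = 10                     # assumed, and almost certainly generous
false_positive_rate = 0.12
false_negative_rate = 0.0          # charitably assume it catches everyone

false_alarms = (passengers - attackers) * false_positive_rate
true_alarms = attackers * (1 - false_negative_rate)

precision = true_alarms / (true_alarms + false_alarms)
print(f"{false_alarms:,.0f} false alarms")    # ~120,000
print(f"{precision:.6f} of alarms are real")  # fewer than 1 in 10,000
```

Without knowing the base rate of actual attackers, "12% false positives" tells you almost nothing about whether the system is usable.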
Security theater humor.
This Iowa prison break illustrates an important security principle:
State Sen. Gene Fraise said he was told by prison officials that the inmates somehow got around a wire that is supposed to activate an alarm when touched. The wall also had razor wire, he said.
Guards = dynamic security. Tripwires = static security. Dynamic security is better than static security.
Unfortunately, some people simply don't understand the fundamentals of security:
State Rep. Lance Horbach, a Republican, criticized Fraise for suggesting budget cuts were a factor in the escape.
Actually, you should be putting guards in the guard towers.
Western Union has been the conduit of a lot of fraud. But since they're not the victim, they don't care much about security. It's an externality to them. It took a lawsuit to convince them to take security seriously.
Western Union, one of the world's most frequently used money transfer services, will begin warning its customers against possible fraud in their transactions.
The case for identity cards has been branded "bogus" after an ex-MI5 chief said they might not help fight terror.
A Canadian reporter was able to get phone records for the personal and professional accounts held by Canadian Privacy Commissioner Jennifer Stoddart through an American data broker, locatecell.com. The security concerns are obvious.
Canada has an exception in its privacy laws that allows newspapers to do this type of investigative reporting. My guess is that the absence of a similar exception here is the only reason we haven't seen an American reporter pull the phone records of one of our government officials.
More evidence that hackers are migrating into crime:
Since then, organised crime units have continued to provide a fruitful income for a group of hackers that are effectively on their payroll. Their willingness to pay for hacking expertise has also given rise to a new subset of hackers. These are not hardcore criminals in pursuit of defrauding a bank or duping thousands of consumers. In one sense, they are the next generation of hackers that carry out their activities in pursuit of credibility from their peers and the 'buzz' of hacking systems considered to be unbreakable.
This is my sixth column for Wired.com:
EDITED TO ADD (11/17): Slashdotted.
I'm glad to see that someone wrote this article. For a long time now, I've been saying that the rate of identity theft has been grossly overestimated: too many things are counted as identity theft that are just traditional fraud. Here's some interesting data to back that claim up:
Multiple surveys have found that around 20 percent of Americans say they have been beset by identity theft. But what exactly is identity theft?
Identity theft is a serious crime, and it's a major growth industry in the criminal world. But we do everyone a disservice when we count things as identity theft that really aren't.
Can a cell phone detect if it is stolen by measuring the gait of the person carrying it?
Researchers at the VTT Technical Research Centre of Finland have developed a prototype of a cell phone that uses motion sensors to record a user's walking pattern of movement, or gait. The device then periodically checks to see that it is still in the possession of its legitimate owner, by measuring the current stride and comparing it against that stored in its memory.
Clever, as long as you realize that there are going to be a lot of false alarms. This seems okay:
If the phone suspects it has fallen into the wrong hands, it will prompt the user for a password if they attempt to make calls or access its memory.
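For the curious, here's a toy sketch of the matching idea. This is not VTT's actual algorithm -- the features, the distance metric, and the threshold are all illustrative assumptions -- but it shows where the false alarms come from:

```python
import math

# A toy version of gait matching: compare a stride "signature" (here, a
# list of per-step accelerometer magnitudes) against a stored template.

def distance(a: list[float], b: list[float]) -> float:
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def gait_matches(template: list[float], current: list[float],
                 threshold: float = 1.0) -> bool:
    # The threshold is the whole trade-off: looser means fewer false
    # alarms but weaker theft detection; tighter means the reverse.
    return distance(template, current) <= threshold

owner_template = [1.0, 1.2, 1.1, 1.3, 1.0]
owner_today    = [1.1, 1.1, 1.2, 1.3, 0.9]  # same person, slight variation
thief          = [0.6, 1.9, 0.5, 2.0, 0.7]  # a different stride entirely

print(gait_matches(owner_template, owner_today))  # True
print(gait_matches(owner_template, thief))        # False
```

Real gaits vary with shoes, fatigue, luggage, and terrain, which is why any workable threshold is going to let through a lot of borderline cases -- hence the password fallback.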
Sony already said that they're stopping production of CDs with the embedded rootkit. Now they're saying that they will pull the infected disks from stores and offer free exchanges to people who inadvertently bought them.
Sony BMG Music Entertainment said Monday it will pull some of its most popular CDs from stores in response to backlash over copy-protection software on the discs.
That's good news, but there's more bad news. The patch Sony is distributing to remove the rootkit opens a huge security hole:
The root of the problem is a serious design flaw in Sony’s web-based uninstaller. When you first fill out Sony’s form to request a copy of the uninstaller, the request form downloads and installs a program – an ActiveX control created by the DRM vendor, First4Internet – called CodeSupport. CodeSupport remains on your system after you leave Sony’s site, and it is marked as safe for scripting, so any web page can ask CodeSupport to do things. One thing CodeSupport can be told to do is download and install code from an Internet site. Unfortunately, CodeSupport doesn’t verify that the downloaded code actually came from Sony or First4Internet. This means any web page can make CodeSupport download and install code from any URL without asking the user’s permission.
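The missing control is straightforward: verify that downloaded code actually came from the vendor before running it. Here's a minimal sketch of the idea, with a pinned SHA-256 digest standing in for the publisher-signature check a real updater should perform:

```python
import hashlib

# CodeSupport would fetch and run code from any URL with no check that
# it came from the vendor. The minimal missing control, sketched with a
# pinned SHA-256 digest (a real updater should verify a code signature,
# which also allows the payload to change across versions).

def install_update(payload: bytes, expected_sha256: str) -> bool:
    actual = hashlib.sha256(payload).hexdigest()
    if actual != expected_sha256:
        return False  # refuse: this is not the payload we expected
    # ... only now would the updater write or execute the payload ...
    return True

genuine = b"vendor update v1.2"
pinned = hashlib.sha256(genuine).hexdigest()  # shipped with the client

print(install_update(genuine, pinned))           # True
print(install_update(b"attacker code", pinned))  # False
```

Marking an unauthenticated downloader as "safe for scripting" hands that install_update capability, minus the check, to every web page on the Internet.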
Even more interesting is that there may be at least half a million infected computers:
Using statistical sampling methods and a secret feature of XCP that notifies Sony when its CDs are placed in a computer, [security researcher Dan] Kaminsky was able to trace evidence of infections in a sample that points to the probable existence of at least one compromised machine in roughly 568,200 networks worldwide. This does not reflect a tally of actual infections, however, and the real number could be much higher.
I say "may be at least" because the data doesn't smell right to me. Look at the list of infected titles, and estimate what percentage of CD buyers will play them on their computers; does that seem like half a million sales to you? It doesn't to me, although I readily admit that I don't know the music business. Their methodology seems sound, though:
Kaminsky discovered that each of these requests leaves a trace that he could follow and track through the internet's domain name system, or DNS. While this couldn't directly give him the number of computers compromised by Sony, it provided him the number and location (both on the net and in the physical world) of networks that contained compromised computers. That is a number guaranteed to be smaller than the total of machines running XCP.
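In other words, he's de-duplicating by network, not by machine. Here's a small sketch of why that undercounts -- the /24 granularity and the addresses are made up for illustration:

```python
import ipaddress

# Kaminsky counted networks whose DNS resolvers had looked up Sony's
# phone-home domain, not infected machines. Many machines share one
# resolver, so collapsing observed addresses to networks gives a
# guaranteed undercount of infections.

def unique_networks(resolver_ips: list[str], prefix: int = 24) -> int:
    nets = {ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
            for ip in resolver_ips}
    return len(nets)

# Five machines' lookups observed, but two pairs share a network:
observed = ["198.51.100.7", "198.51.100.200",  # same /24
            "203.0.113.5", "203.0.113.9",      # same /24
            "192.0.2.33"]

print(unique_networks(observed))  # 3 networks, from 5 machines
```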
In any case, Sony's rapid fall from grace is a great example of the power of blogs; it's been fifteen days since Mark Russinovich first posted about the rootkit. In that time the news spread like a firestorm, first through the blogs, then to the tech media, and then into the mainstream media.
Hans Bethe was one of the first nuclear scientists, a member of the Manhattan Project, and a political activist. In this article about him, there's a great quote:
Sometimes insistence on 100 percent security actually impairs our security, while the bold decision -- though at the time it seems to involve some risk -- will give us more security in the long run.
The NSA has a site for kids.
Crypto Cat, Decipher Dog, and friends.
There's a new report from Sandia National Laboratories (written with Lawrence Berkeley National Laboratory) titled "Guidelines to Improve Airport Preparedness Against Chemical and Biological Terrorism." It's classified, but there's an unclassified version available. (Press release. Unclassified report.)
I haven't read it yet, but it looks interesting.
Hidden metadata is in the news again. The New York Times reported that an unsigned Microsoft Word document being circulated by the Democratic National Committee was actually written by, wait for it, the Democratic National Committee.
Okay, so that's not much of a revelation, but it does serve to remind us that there can be all sorts of unintended information hidden in Microsoft Office documents. The particular bit of unintended information that precipitated this news story is the metadata.
Metadata is information on who created the file, what it was originally called, etc. To see your metadata, open a file, go to the "File" menu, and choose "Properties."
I'll bet at least some of you will be really surprised by what's in there. Not because it's secret, but because it has nothing to do with you or your document. That's because metadata follows the file, and not its contents.
Here's what I do when I want to create a MS Word document. Maybe it's a file I've written, and maybe it's a file I received from someone else. I find some other document that has basically the same style I want, open it up, delete all the contents, and save it under a new filename. MS Word doesn't change the metadata, so whatever was in the "Title," "Subject," "Author," "Company," and other fields of the original document remains in my new document. This means that occasionally those metadata fields are filled with information I've never seen before, from who knows where. I'm sure I'm not the only one who uses this trick to avoid dealing with MS Word stylesheets. So metadata is much less of a smoking gun than many make it out to be.
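You can see for yourself how metadata travels with the file. The 2005-era .doc format buries these properties in OLE streams, but the same idea is easy to demonstrate with the newer .docx format, which is just a zip archive with the metadata in docProps/core.xml. This sketch (the title and author values are made up) builds a minimal stand-in file in memory and reads the fields back with nothing but the standard library:

```python
import io
import zipfile
import xml.etree.ElementTree as ET

# A .docx file is a zip archive; its "Properties" fields live in
# docProps/core.xml. Build a minimal stand-in archive in memory, then
# read back the metadata fields the document itself never shows you.
NS = {"dc": "http://purl.org/dc/elements/1.1/",
      "cp": "http://schemas.openxmlformats.org/package/2006/metadata/core-properties"}

CORE_XML = (
    '<cp:coreProperties xmlns:cp="{cp}" xmlns:dc="{dc}">'
    "<dc:title>Q3 Budget</dc:title>"
    "<dc:creator>someone-you-never-met</dc:creator>"
    "</cp:coreProperties>"
).format(cp=NS["cp"], dc=NS["dc"])

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as z:
    z.writestr("docProps/core.xml", CORE_XML)

with zipfile.ZipFile(buf) as z:
    root = ET.fromstring(z.read("docProps/core.xml"))
    title = root.find("dc:title", NS).text
    author = root.find("dc:creator", NS).text

print(title, "/", author)  # metadata that follows the file, not its contents
```

Point it at a real .docx instead of the in-memory stand-in and you'll often find an author and title from a document you've never seen.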
I don't mean this to minimize the problem of hidden data in Microsoft Office documents. It's not just the metadata, but comments, deleted parts of the document, even parts of other documents (it's happened).
I have two recommendations regarding Microsoft Office and hidden data. The first is to realize that programs like Word and Excel are designed for authoring documents, not for publishing them. Get into the habit of saving your documents as PDF before distributing them. (Although if you're going to redact a PDF, be smart about it or you'll have similar problems.)
The second is to install Microsoft's tool for deleting hidden data. (It works for Office 2003; there are third-party tools for older versions.) Or at least read the page about deleting private data in MS Office files. And follow through on actually deleting the data.
This probably won't work for many of us, though. The last sentence of the article explains why:
"The real scandal here," Mr. Max told The Los Angeles Times after Democrats expressed outrage over the White House's fingerprints on the testimony, "is that after 15 years of using Microsoft Word, I don't know how to turn off 'track changes.'"
Here's a report that the CIA slipped software bugs to the Soviets in the 1980s:
In January 1982, President Ronald Reagan approved a CIA plan to sabotage the economy of the Soviet Union through covert transfers of technology that contained hidden malfunctions, including software that later triggered a huge explosion in a Siberian natural gas pipeline, according to a new memoir by a Reagan White House official.
A CIA article from 1996 also describes this.
EDITED TO ADD (11/14): Marcus Ranum wrote about this.
Abstract: Among a fringe community of paranoids, aluminum helmets serve as the protective measure of choice against invasive radio signals. We investigate the efficacy of three aluminum helmet designs on a sample group of four individuals. Using a $250,000 network analyser, we find that although on average all helmets attenuate invasive radio frequencies in either directions (either emanating from an outside source, or emanating from the cranium of the subject), certain frequencies are in fact greatly amplified. These amplified frequencies coincide with radio bands reserved for government use according to the Federal Communication Commission (FCC). Statistical evidence suggests the use of helmets may in fact enhance the government's invasive abilities. We theorize that the government may in fact have started the helmet craze for this reason.
And a rebuttal:
A recent MIT study calls into question the effectiveness of Aluminum Foil Deflector Beanies. However, there are serious flaws in this study, not the least of which is a complete mischaracterization of the process of psychotronic mind control. I theorize that the study is, in fact, NWO propaganda designed to spread FUD against deflector beanie technology, and aluminum shielding in general, in order to disembeanie paranoids, leaving them open to mind control.
Counter-terrorist vigilantes, or people just trying to make flying safer?
The Sky Posse is an organization, not affiliated with anyone official, of people who vow to fight back in the event of an airplane hijacking. Members wear cool-looking pins, made to resemble a Western sheriff's badge, with the slogan "Ready to Roll."
Kind of silly, but I guess there's no harm.
Here's the story, edited to add lots of news.
There will be lawsuits. (Here's the first.) Police are getting involved. There's a Trojan that uses Sony's rootkit to hide. And today Sony temporarily halted production of CDs protected with this technology.
Sony really overreached this time. I hope they get slapped down hard for it.
EDITED TO ADD (13 Nov): More information on uninstalling the rootkit. And Microsoft will update its security tools to detect and remove the rootkit. That makes a lot of sense. If Windows crashes because of this -- and others of this ilk -- Microsoft will be blamed.
Here's an Illinois bill that:
Provides that it is unlawful to possess, use, or allow to be used, any materials, hardware, or software specifically designed or primarily used for the reading of encrypted language from the bar code or magnetic strip of an official Illinois Identification Card, Disabled Person Identification Card, driver's license, or permit.
Full text is here.
If you’ll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.
Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly -- less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.
The worm started spreading on Sunday, 14 August. Honestly, it wasn’t much of a big deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is personally affected by something, it’s much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn’t think it was worth all the press coverage.
By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other -- stealing "owned" computers back and forth. If your network was infected, it was a mess.
Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm’s creation was not a hacker, but rather a criminal looking to profit.
The nature of worms has changed in the past few years. Previously, hackers looking for prestige or just wanting to cause damage were responsible for most worms. Today, they’re increasingly written or commissioned by criminals. By taking over computers, worms can send spam, launch denial-of-service extortion attacks, or search for credit-card numbers and other personal information.
What could you have done beforehand to protect yourself against Zotob and its kin? "Install the patch" is the obvious answer, but it’s not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install -- at least Microsoft Windows system patches -- large corporate networks can’t. Far too often, patches cause other things to break.
It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, it was the most important thing a sysadmin could do to secure the network.
Microsoft had six new patches available on 9 August, three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would have actually been critical? With the next patch release, will you know which ones you can put off and for which ones you need to drop everything, test, and install across your network?
Given that it’s impossible to know what’s coming beforehand, how you respond to an actual worm largely determines your defense’s effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Because it’s impossible to know beforehand what the necessary response should be, you need a process for that response. Employees come and go, so the only thing that ensures a continuity of effective security is a process. You need accurate and timely information to fuel this process. And finally, you need experts to decipher the information, determine what to do, and implement a solution.
The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don’t think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.
From a Business Week story:
During July 13-26, stocks and mutual funds had been sold, and the proceeds wired out of his account in six transactions of nearly $30,000 apiece. Murty, a 64-year-old nuclear engineering professor at North Carolina State University, could only think it was a mistake. He hadn't sold any stock in months.
That last clause is critical. E*Trade insists it did nothing wrong. It executed $174,000 in fraudulent transactions, but it did nothing wrong. It sold stocks without the knowledge or consent of the owner of those stocks, but it did nothing wrong.
Now, quite possibly, E*Trade did nothing wrong legally. There may very well be a paragraph buried in whatever agreement this guy signed that says something like: "You agree that any trade request that comes to us with the right password, whether it came from you or not, will be processed." But therein lies the market failure. Until we fix that, these losses are an externality to E*Trade. They'll only fix the problem up to the point where customers aren't leaving them in droves, not to the point where the customers' stocks are secure.
I'm a former Marine I in Afghanistan. Silly string has served me well in Combat especially in looking for IADs, simply put, booby traps. When you spray the silly string in dark areas, especially when you doing house to house fighting. On many occasions the silly string has saved me and my men's lives.
When you spray the string it just spreads everywhere and when it sets it lays right on the wire. Even in a dark room the string stands out revealing the trip wire.
She said about half the hotels use shared network media (i.e., a hub versus an Ethernet switch), so any plaintext password you transmit is sniffable by any like-minded person in the hotel. Most wireless access points are shared media as well; even networks requiring a WEP key often let users who share that key sniff each other's passwords.
I am interested in analyzing that password database. What percentage of those passwords are English words? What percentage are in the common password dictionaries? What percentage use mixed case, or numbers, or punctuation? What's the frequency distribution of different password lengths?
Real password data is hard to come by. There's an interesting research paper in that data.
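The questions above are straightforward to answer once you have the data. Here's a sketch of that analysis in Python, run against a toy list standing in for a real password database:

```python
# Password-statistics sketch: length distribution, mixed case, digits,
# punctuation. The sample list below is illustrative, not real data.
import string
from collections import Counter

passwords = ["password", "letmein", "Tr0ub4dor", "123456", "P@ssw0rd!"]

def classify(pw):
    """Character-class features of a single password."""
    return {
        "has_upper": any(c.isupper() for c in pw),
        "has_digit": any(c.isdigit() for c in pw),
        "has_punct": any(c in string.punctuation for c in pw),
    }

# Frequency distribution of password lengths.
length_dist = Counter(len(pw) for pw in passwords)

# How many passwords mix upper and lower case?
mixed_case = sum(1 for pw in passwords
                 if any(c.isupper() for c in pw)
                 and any(c.islower() for c in pw))

# How many contain at least one digit?
with_digits = sum(1 for pw in passwords if classify(pw)["has_digit"])

print("length distribution:", dict(length_dist))
print("mixed case: %d/%d" % (mixed_case, len(passwords)))
print("with digits: %d/%d" % (with_digits, len(passwords)))
```

Checking against a dictionary of English words or common passwords is the same pattern: load the wordlist into a set and count membership.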
A team at the German Federal Agency for Information Technology Security has factored a 193-digit number. (Note that this is not a record; in May a 200-digit number was factored. But there's a cash prize associated with this one.)
The links do a good job explaining the news and giving context.
Here's an excellent use for cameras:
Now, to help better examine how Tasers are used, manufacturer Taser International Inc. has developed a Taser Cam, which company executives hope will illuminate why Tasers are needed -- and add another layer of accountability for any officer who would abuse the weapon.
It's the same idea as having cameras record all police interrogations, or record all police-car stops. It helps protect the populace against police abuse, and helps protect the police from accusations of abuse.
This is where cameras do good: when they lessen a power imbalance. Imagine if they were continuously recording the actions of elected officials -- when they were acting in their official capacity, that is.
Of course, cameras are only as useful as their data. If critical recordings are "lost," then there's no accountability. The system is pretty kludgy:
The Taser Cam records in black and white but is equipped with infrared technology to record images in very low light. The camera will have at least one hour of recording time, the company said, and the video can be downloaded to a computer over a USB cable.
How soon before the cameras simply upload their recordings, in real time, to some trusted vault somewhere?
EDITED TO ADD: CNN has a story.
Now this is a surprise. Richard Clarke advised New York City to perform those pointless subway searches:
Mr. Clarke, a former counterterrorism adviser to two presidents, received widespread attention last year for his criticism of President Bush's response to the Sept. 11 attacks, detailed in a searing memoir and in security testimony before the 9/11 Commission.
Seems that his goal wasn't to deter terrorism, but simply to move it from the New York City subways to another target: the Boston subways, perhaps?
"Obviously you want to catch people with bombs on their back, but there is a value to a program that doesn't stop everyone and isn't compulsory," he said in a deposition.
This essay outlines what he really thinks:
Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.
He's against making vendors liable for defects in their products, unlike every other industry:
I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.
And he closes with:
In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.
That first sentence, I think, nicely sums up what's wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense not to produce secure software products than it does to produce secure software products. Any solution needs to address that fundamental market failure, instead of simply wishing it were true.
The Washington Post reports that the FBI has been obtaining and reviewing records of ordinary Americans in the name of the war on terror through the use of national security letters that gag the recipients.
Merritt's entire post is worth reading.
The ACLU has been actively litigating the legality of the National Security Letters. Their latest press release is here.
EDITED TO ADD: Here's a good personal story of someone's FBI file.
EDITED TO ADD: Several people have written to tell me that the CapitolHillBlue website, above, is not reliable. I don't know one way or the other, but consider yourself warned.
Here's some good news from Microsoft:
In an eight-page document released on Capitol Hill today, Microsoft outlined a series of steps it would like to see Congress take to preempt a growing number of state laws that impose varying requirements on the collection, use, storage and disclosure of personal information.
According to the press release:
[Microsoft's senior vice president and general counsel Brad] Smith described four core principles that Microsoft believes should be the foundation of any federal legislation on data privacy:
Here's Microsoft's document, with a bunch more details.
With this kind of thing, the devil is in the details. But it's definitely a good start. Certainly Microsoft has become more pro-privacy in recent years.
I think this is a harbinger of the future:
A high roller walks into the casino, ever so mindful of the constant surveillance cameras. Wanting to avoid sales pitches and other unwanted attention, he pays cash at each table and anonymously moves around frequently to discourage people who are trying to track his movements.
On the one hand, the technology isn't very interesting; it's probably just a camera and some OCR software optimized for driver's licenses. But what is interesting is that the technology is available as a mass-market product.
Where else do you routinely show your ID? Who else might want all that information for marketing purposes?
It's at MIT:
MIT's newly upgraded wireless network -- extended this month to cover the entire school -- doesn't merely get you online in study halls, stairwells or any other spot on the 9.4 million square foot campus. It also provides information on exactly how many people are logged on at any given location at any given time.
WiFi is certainly a good technology for this sort of massive surveillance. It's an open and well-standardized technology that allows anyone to go into the surveillance business. Bluetooth is a similar technology: open and easy to use. Cell phone technologies, on the other hand, are closed and proprietary. RFID might be the preferred surveillance technology of the future, depending on how open and standardized it becomes.
Whatever the technology, privacy is a serious concern:
While every device connected to the campus network via Wi-Fi is visible on the constantly refreshed electronic maps, the identity of the users is confidential unless they volunteer to make it public.
This fascinating research paper discusses the vulnerabilities of the U.S. Navy Fleet Broadcast System in the 1980s. If you remember, John Walker and cohorts handed the Soviets the secrets that allowed them to eavesdrop on this system.
Here's a paper on Oracle's password hashing algorithm. It isn't very good.
In this paper the authors examine the mechanism used in Oracle databases for protecting users' passwords. We review the algorithm used for generating password hashes, and show that the current mechanism presents a number of weaknesses, making it straightforward for an attacker with limited resources to recover a user's plaintext password from the hashed value. We also describe how to implement a password recovery tool using off-the-shelf software. We conclude by discussing some possible attack vectors and recommendations to mitigate this risk.
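One of the structural weaknesses the paper describes is that Oracle case-folds the username and password before hashing, which collapses every case variant of a password into a single hash. Here's a sketch of why that matters, using SHA-256 as a stand-in digest rather than Oracle's actual DES-based construction:

```python
# Illustrative sketch, NOT Oracle's real algorithm: case-folding before
# hashing throws away entropy, so "Tiger", "TIGER", and "tiger" all
# produce the same hash, shrinking the space an attacker must search.
import hashlib

def weak_style_hash(username: str, password: str) -> str:
    # Oracle concatenated username and password and uppercased the
    # result; we mimic that step but substitute SHA-256 for the digest.
    data = (username + password).upper().encode()
    return hashlib.sha256(data).hexdigest()

# Any case variant of the credentials collapses to one hash:
h1 = weak_style_hash("scott", "Tiger")
h2 = weak_style_hash("SCOTT", "tiger")
```

Note also that using the username as the only salt means the same username/password pair hashes identically across every database, which makes precomputed attacks practical.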
My fifth column for Wired:
The State Department has done a great job addressing specific security and privacy concerns, but its lack of technical skills is hurting it. The collision-avoidance ID is just one example of where, apparently, the State Department didn't have enough of the expertise it needed to do this right.
I've often said that security discussions are rarely about security. Here's a story that illustrates that.
A New Jersey mother doesn't like her child's school bus stopping at McDonald's on Friday mornings. Apparently unable to come up with a cogent argument against these stops (which seems odd to me, honestly, as I can think of several), she invokes movie-plot security threats:
"I think they all like it," Tyler [the mother] said. "They are anywhere from 9th to 12th graders. They don't really think about the point that it could be a dangerous situation. They just think it's breakfast."
The University of Regensburg in Germany has released authentication software that makes use of the fact that each person's typing behavior is unique. It works by requesting that the person who seeks access to a computer or a password-protected file type a short passage on an ordinary keyboard: the longer the passage, the more reliable the authentication.
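The Regensburg system's internals aren't described here, but the basic idea of keystroke dynamics is simple: record the timing between keystrokes during enrollment, then compare a candidate's timings against the stored profile. Everything in this sketch (the distance measure, the threshold, the timing values) is an illustrative assumption, not the university's actual algorithm:

```python
# Keystroke-dynamics sketch: compare inter-key timing rhythms.
def inter_key_intervals(timestamps):
    """Gaps (ms) between successive keystroke timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def mean_abs_difference(profile, sample):
    return sum(abs(p - s) for p, s in zip(profile, sample)) / len(profile)

def authenticate(profile_times, sample_times, threshold_ms=40):
    """Accept if the sample's typing rhythm is close to the profile's."""
    profile = inter_key_intervals(profile_times)
    sample = inter_key_intervals(sample_times)
    if len(profile) != len(sample):
        return False  # different passage lengths: can't compare
    return mean_abs_difference(profile, sample) < threshold_ms

# Enrollment: timestamps (ms) recorded while the legitimate user typed.
enrolled = [0, 120, 250, 390, 500]
genuine  = [0, 130, 245, 400, 495]   # similar rhythm -> accepted
impostor = [0, 60, 300, 330, 610]    # different rhythm -> rejected
```

This is also why a longer passage authenticates more reliably: more intervals mean the average washes out one-off hesitations.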
The afternoon started with three brand new hash functions: FORK-256, DHA-256, and VSH. VSH (Very Smooth Hash) was the interesting one; it's based on factoring and the discrete logarithm problem, like public-key encryption, and not on bit-twiddling like symmetric encryption. I have no idea if it's any good, but it's cool to see something so different.
I think we need different. So many of our hash functions look pretty much the same: MD4, MD5, SHA-0, SHA-1, RIPE-MD, HAVAL, SHA-256, SHA-512. And everything is basically a block cipher in Davies-Meyer mode. I want some completely different designs. I want hash functions based on stream ciphers. I want more functions based on number theory.
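For the curious, the Davies-Meyer mode mentioned above builds a compression function from a block cipher by letting each message block act as the key and feeding the chaining state forward: H_i = E(m_i, H_{i-1}) XOR H_{i-1}. A toy Python sketch, with a deliberately weak stand-in cipher (not anything real), shows the structure:

```python
# Toy Davies-Meyer illustration. The "cipher" below is a weak keyed
# permutation -- do not use this for anything real; only the chaining
# structure (shared by MD5, SHA-1, SHA-256) is the point.
MASK = 0xFFFFFFFFFFFFFFFF  # 64-bit chaining values

def toy_cipher(key_block: bytes, state: int) -> int:
    """A keyed permutation of 64-bit values -- NOT a real block cipher."""
    k = int.from_bytes(key_block, "big")
    x = state
    for _ in range(4):
        x = (x * 0x5DEECE66D + k) & MASK  # odd multiplier: invertible mod 2^64
        x ^= x >> 17                      # xor-shift: also invertible
    return x

def davies_meyer(message: bytes, iv: int = 0x0123456789ABCDEF) -> int:
    # Naive zero padding (a real design pads with the message length).
    message += b"\x00" * (-len(message) % 8)
    h = iv
    for i in range(0, len(message), 8):
        h = toy_cipher(message[i:i + 8], h) ^ h  # the Davies-Meyer feed-forward
    return h

digest = davies_meyer(b"hello world")
```

The feed-forward XOR is what makes the construction one-way even though the underlying cipher is invertible: recovering H_{i-1} from H_i would require inverting E without knowing its input.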
The final session was an open discussion about what to do next. There was much debate about how soon we need a new hash function, how long we should rely on SHA-1 or SHA-256, etc.
Hashing is hard. At an ultra-high, hand-waving level, it takes a lot more clock cycles per message byte to hash than it does to encrypt. No one has any theory to account for this, but it seems that the lack of any secrets in a hash function makes it a harder problem. This may be an artifact of our lack of knowledge, but I think there's a grain of fundamental truth buried here.
And hash functions are used everywhere. Hash functions are the workhorse of cryptography; they're sprinkled all over security protocols. They're used all the time, in all sorts of weird ways, for all sorts of weird purposes. We cryptographers think of them as good hygiene, kind of like condoms.
So we need a fast answer for immediate applications.
We also need "SHA2," whatever that will look like. And a design competition is the best way to get a SHA2. (Niels Ferguson pointed out that the AES process was the best cryptographic invention of the past decade.)
Unfortunately, we're in no position to have an AES-like competition to replace SHA right now. We simply don't know enough about designing hash functions. What we need is research, random research all over the map. Designs beget analyses beget designs beget analyses.... Right now we need a bunch of mediocre hash function designs. We need a posse of hotshot graduate students breaking them and making names for themselves. We need new tricks and new tools. Hash functions are a hot area of research right now, but anything we can do to stoke that will pay off in the future.
NIST is thinking of hosting another hash workshop right after Crypto next year. That would be a good thing.
I need to get to work on a hash function based on Phelix.
This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.
Lots of interesting ideas, but no real discussion about trade-offs. But it's the trade-offs that are important. It's easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest's dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I'm making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it's the correct analysis here as well.
Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn't be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we've just been too optimistic about how strong hash functions are.
The other thing we need to be doing is providing answers to developers. It's not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They'll need an answer in a year. They'll need an answer in four years. Maybe the answers will be the same, and maybe they'll be different. But if we don't give them answers, they'll make something up. They won't wait for us.
And while it's true that we don't have any real theory of hash functions, and it's true that anything we choose will be based partly on faith, we have no choice but to choose.
And finally, I think we need to stimulate research more. Whether it's a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses.... We need a whole bunch of new hash functions to beat up; that's how we'll learn to design better ones.
Mark Russinovich discovered a rootkit on his system. After much analysis, he determined that the rootkit had been installed as part of the DRM software bundled with a CD he bought. The package cannot be uninstalled. Even worse, it actively cloaks itself from process listings and the file system.
At that point I knew conclusively that the rootkit and its associated files were related to the First 4 Internet DRM software Sony ships on its CDs. Not happy having underhanded and sloppily written software on my system I looked for a way to uninstall it. However, I didn’t find any reference to it in the Control Panel’s Add or Remove Programs list, nor did I find any uninstall utility or directions on the CD or on First 4 Internet’s site. I checked the EULA and saw no mention of the fact that I was agreeing to have software put on my system that I couldn't uninstall. Now I was mad.
Removing the rootkit kills Windows.
Could Sony have violated the Computer Misuse Act in the UK? If this isn't clearly disclosed in the EULA, they have exceeded their privilege on the customer's system by installing a rootkit to hide their software.
Certainly Mark has a reasonable lawsuit against Sony in the U.S.
EDITED TO ADD: The Washington Post is covering this story.
Sony lies about their rootkit:
November 2, 2005 - This Service Pack removes the cloaking technology component that has been recently discussed in a number of articles published regarding the XCP Technology used on SONY BMG content protected CDs. This component is not malicious and does not compromise security. However to alleviate any concerns that users may have about the program posing potential security vulnerabilities, this update has been released to enable users to remove this component from their computers.
Their update does not remove the rootkit; it just gets rid of the $sys$ cloaking.
Ed Felten has a great post on the issue:
The update is more than 3.5 megabytes in size, and it appears to contain new versions of almost all the files included in the initial installation of the entire DRM system, as well as creating some new files. In short, they're not just taking away the rootkit-like function -- they're almost certainly adding things to the system as well. And once again, they're not disclosing what they're doing.
World of Warcraft hackers have confirmed that the hiding capabilities of Sony BMG's content-protection software can make tools made for cheating in the online world impossible to detect.
EDITED TO ADD: F-Secure makes a good point:
A member of our IT security team pointed out a quite chilling thought about what might happen if record companies continue adding rootkit-based copy protection into their CDs.
EDITED TO ADD: Declan McCullagh has a good essay on the topic. There will be lawsuits.
EDITED TO ADD: The Italian police are getting involved.
EDITED TO ADD: Here's a Trojan that uses Sony's rootkit to hide.
EDITED TO ADD: Sony temporarily halts production of CDs protected with this technology.
From The New Scientist:
The hyper-secretive US National Security Agency -- the government’s eavesdropping arm -- appears to be having its patent applications increasingly blocked by the Pentagon. And the grounds for this are for reasons of national security, reveals information obtained under a freedom of information request.
EDITED TO ADD: This story is wrong.