Blog: November 2005 Archives

Google and Privacy

Daniel Solove on Google and privacy:

A New York Times editorial observes:

At a North Carolina strangulation-murder trial this month, prosecutors announced an unusual piece of evidence: Google searches allegedly done by the defendant that included the words “neck” and “snap.” The data were taken from the defendant’s computer, prosecutors say. But it might have come directly from Google, which—unbeknownst to many users—keeps records of every search on its site, in ways that can be traced back to individuals.

This is an interesting fact—Google keeps records of every search in a way that can be traced back to individuals. The op-ed goes on to say:

Google has been aggressive about collecting information about its users’ activities online. It stores their search data, possibly forever, and puts “cookies” on their computers that make it possible to track those searches in a personally identifiable way—cookies that do not expire until 2038. Its e-mail system, Gmail, scans the content of e-mail messages so relevant ads can be posted. Google’s written privacy policy reserves the right to pool what it learns about users from their searches with what it learns from their e-mail messages, though Google says it won’t do so. . . .

The government can gain access to Google’s data storehouse simply by presenting a valid warrant or subpoena. . . .

This is an important point. No matter what Google’s privacy policy says, the fact that it maintains information about people’s search activity enables the government to gather that data, often with a mere subpoena, which provides virtually no protection to privacy—and sometimes without even a subpoena.

Solove goes on to argue that if companies like Google want to collect people’s data (even if people are willing to supply it), the least they can do is fight for greater protections against government access to that data. While this won’t address all the problems, it would be a step forward to see companies like Google use their power to foster meaningful legislative change.

EDITED TO ADD (12/3): Here’s an op-ed from The Boston Globe on the same topic.

Posted on November 30, 2005 at 3:08 PM · 60 Comments

Open-Source Intelligence

Now here’s a good idea:

US intelligence chief John Negroponte announced Tuesday the creation of a new CIA-managed center to exploit publicly available information for intelligence purposes.

The so-called Open Source Center will gather and analyze information from a host of sources, from the Internet and commercial databases to newspapers, radio, video, maps, publications and conference reports.

Posted on November 30, 2005 at 10:42 AM · 39 Comments

Cybercrime Pays

This sentence jumped out at me in an otherwise pedestrian article on criminal fraud:

“Fraud is fundamentally fuelling the growth of organised crime in the UK, earning more from fraud than they do from drugs,” Chris Hill, head of fraud at the Norwich Union, told BBC News.

I’ll bet that most of that involves the Internet to some degree.

And then there’s this:

Global cybercrime turned over more money than drug trafficking last year, according to a US Treasury advisor. Valerie McNiven, an advisor to the US government on cybercrime, claimed that corporate espionage, child pornography, stock manipulation, phishing fraud and copyright offences cause more financial harm than the trade in illegal narcotics such as heroin and cocaine.

This doesn’t bode well for computer security in general.

Posted on November 30, 2005 at 6:05 AM · 21 Comments

Counterfeiting Ring in Colombia

Interesting:

Police assisted by U.S. Secret Service agents on Sunday broke up a network capable of printing millions of dollars a month of excellent quality counterfeit money and arrested five suspects during a raid on a remote village in northwest Colombia, officials said.

It’s a big industry there:

Fernandez said Valle del Cauca, of which Cali is the state capital, has turned into a center of global counterfeiting. “Entire families are dedicated to falsifying and trafficking money.”

And:

Colombia is thought to produce more than 40 percent of fake money circulating around the world.

Posted on November 29, 2005 at 4:29 PM · 27 Comments

Miami Police Stage "Random Shows of Force"

They actually think this is a good idea:

Miami police announced Monday they will stage random shows of force at hotels, banks and other public places to keep terrorists guessing and remind people to be vigilant.

Deputy Police Chief Frank Fernandez said officers might, for example, surround a bank building, check the IDs of everyone going in and out and hand out leaflets about terror threats.

“This is an in-your-face type of strategy. It’s letting the terrorists know we are out there,” Fernandez said.

The operations will keep terrorists off guard, Fernandez said. He said al-Qaida and other terrorist groups plot attacks by putting places under surveillance and watching for flaws and patterns in security.

Boy, is this one a mess. How does an “in-your-face” strategy get people on your side? What happens if someone refuses to show an ID? What good is demanding an ID in the first place? And if I were writing a movie plot, I would plan my terrorist attack for a different part of town while the police were out playing pretend.

The response from the ACLU of Florida is puzzling, though. Let’s hope its spokesman just didn’t understand what was being planned.

EDITED TO ADD (11/29): This article is in error.

EDITED TO ADD (11/30): more info.

Posted on November 29, 2005 at 1:07 PM · 52 Comments

Missed Cellphone Calls as Bomb Triggers

What is it with this week? I can’t turn around without seeing another dumb movie-plot threat:

A Thai minister has claimed that by returning missed calls on their cell phones people from the Muslim-majority southern provinces could unintentionally trigger bombs set by Islamic militants.

Thai authorities have begun tracing cell phone calls in a bid to track down suspects who use mobiles to detonate bombs across three provinces along the Malaysian border.

But the minister for information and communication warned that militants could try to foil the two-week-old cell phone registry by calling a random number, hanging up and then wiring the handset to a bomb.

If someone returned the call, the bomb would blow up and authorities would trace the call to an innocent person, Sora-at Klinpratum told reporters.

Posted on November 29, 2005 at 10:01 AM · 46 Comments

A Science-Fiction Movie-Plot Threat

This has got to be the most bizarre movie-plot threat to date: alien viruses downloaded via the SETI project:

In his [Richard Carrigan, a particle physicist at the US Fermi National Accelerator Laboratory in Illinois] report, entitled “Do potential Seti signals need to be decontaminated?”, he suggests the Seti scientists may be too blasé about finding a signal. “In science fiction, all the aliens are bad, but in the world of science, they are all good and simply want to get in touch.” His main concern is that, intentionally or otherwise, an extra-terrestrial signal picked up by the Seti team could cause widespread damage to computers if released on to the internet without being checked.

Here’s his website.

Although you have to admit, it could make a cool movie.

EDITED TO ADD (12/16): Here’s a good rebuttal.

Posted on November 29, 2005 at 7:16 AM · 59 Comments

Interdicting Terrorist Funding

Want to make the country safer from terrorism? Take the money now being wasted on national ID cards, massive data mining projects, fingerprinting foreigners, airline passenger profiling, etc., and use it to fund worldwide efforts to interdict terrorist funding:

The government’s efforts to help foreign nations cut off the supply of money to terrorists, a critical goal for the Bush administration, have been stymied by infighting among American agencies, leadership problems and insufficient financing, a new Congressional report says.

More than four years after the Sept. 11 attacks, “the U.S. government lacks an integrated strategy” to train foreign countries and provide them with technical assistance to shore up their financial and law enforcement systems against terrorist financing, according to the report prepared by the Government Accountability Office, an investigative arm of Congress.

More:

One unidentified Treasury official quoted anonymously in the report said that the intergovernmental process for deterring terrorist financing abroad is “broken” and that the State Department “creates obstacles rather than coordinates effort.” A State Department official countered that the real problem lies in the Treasury Department’s reluctance to accept the State Department’s leadership in the process.

In another problem area, private contractors used by the Treasury Department and other agencies have been allowed to draft proposed laws in foreign countries for curbing terrorist financing, even though Justice Department officials voiced strong concerns that contractors should not be allowed to play such an active role in the legislative process.

The contractors’ work at times produced legislative proposals that had “substantial deficiencies,” the report said.

The administration has made cutting off money to terrorists one of the main prongs in its attack against Al Qaeda and other terrorist groups. It has seized tens of millions of dollars in American accounts and assets linked to terrorist groups, prodded other countries to do the same, and is now developing a program to gain access to and track potentially hundreds of millions of international bank transfers into the United States.

But experts in the field say the results have been spotty, with few clear dents in Al Qaeda’s ability to move money and finance terrorist attacks. The Congressional report, a follow-up to a 2003 report that offered a similarly bleak assessment, buttresses those concerns.

Posted on November 28, 2005 at 9:44 PM · 17 Comments

Giving the U.S. Military the Power to Conduct Domestic Surveillance

More nonsense in the name of defending ourselves from terrorism:

The Defense Department has expanded its programs aimed at gathering and analyzing intelligence within the United States, creating new agencies, adding personnel and seeking additional legal authority for domestic security activities in the post-9/11 world.

The moves have taken place on several fronts. The White House is considering expanding the power of a little-known Pentagon agency called the Counterintelligence Field Activity, or CIFA, which was created three years ago. The proposal, made by a presidential commission, would transform CIFA from an office that coordinates Pentagon security efforts—including protecting military facilities from attack—to one that also has authority to investigate crimes within the United States such as treason, foreign or terrorist sabotage or even economic espionage.

The Pentagon has pushed legislation on Capitol Hill that would create an intelligence exception to the Privacy Act, allowing the FBI and others to share information gathered about U.S. citizens with the Pentagon, CIA and other intelligence agencies, as long as the data is deemed to be related to foreign intelligence. Backers say the measure is needed to strengthen investigations into terrorism or weapons of mass destruction.

The police and the military have fundamentally different missions. The police protect citizens. The military attacks the enemy. When you start giving police powers to the military, citizens start looking like the enemy.

We gain a lot of security because we separate the functions of the police and the military, and we will all be much less safe if we allow those functions to blur. This kind of thing worries me far more than terrorist threats.

Posted on November 28, 2005 at 2:11 PM · 40 Comments

European Terrorism Law and Music Downloaders

The European music industry is lobbying the European Parliament, demanding things that the RIAA can only dream about:

The music and film industries are demanding that the European parliament extends the scope of proposed anti-terror laws to help them prosecute illegal downloaders. In an open letter to MEPs, companies including Sony BMG, Disney and EMI have asked to be given access to communications data – records of phone calls, emails and internet surfing – in order to take legal action against pirates and filesharers. Current proposals restrict use of such information to cases of terrorism and organised crime.

Our society definitely needs a serious conversation about the fundamental freedoms we are sacrificing in a misguided attempt to keep us safe from terrorism. It feels both surreal and sickening to have to defend our fundamental freedoms against those who want to stop people from sharing music. How is it possible that we can contemplate so much damage to our society simply to protect the business model of a handful of companies?

Posted on November 27, 2005 at 12:20 PM · 82 Comments

Vote Someone Else's Shares

Do you own shares of a Janus mutual fund? Can you vote your shares through a website called vote.proxy-direct.com? If so, you can vote the shares of others.

If you have a valid proxy number, you can add 1300 to the number to get another valid proxy number. Once entered, you get another person’s name, address, and account number at Janus! You could then vote their shares too.

It’s easy.

Probably illegal.

Definitely a great resource for identity thieves.

Certainly pathetic.
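
The root mistake is using small sequential numbers as the only authentication. The standard fix is to issue identifiers from a space too large to enumerate, with no arithmetic relationship between them. Here’s a minimal sketch of that idea in Python (hypothetical; this is not how proxy-direct.com actually works):

    import secrets

    def issue_proxy_token() -> str:
        # 128 bits of randomness: knowing one token tells you nothing about any
        # other, unlike sequential numbers where "number + 1300" is also valid.
        return secrets.token_urlsafe(16)

    # Server-side, tokens are random keys into the voter table; any token that
    # isn't in the table is simply rejected.
    issued = {issue_proxy_token(): "voter record" for _ in range(3)}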

Posted on November 24, 2005 at 10:41 AM · 41 Comments

Twofish Cryptanalysis Rumors

Recently I have been hearing some odd “Twofish has been broken” rumors. I thought I’d quell them once and for all.

Rumors of the death of Twofish have been greatly exaggerated.

The analysis in question is by Shiho Moriai and Yiqun Lisa Yin, who published their results in Japan in 2000. Recently, someone either got a copy of the paper or heard about the results, and rumors started spreading.

Here’s the actual paper. It presents no cryptanalytic attacks, only some hypothesized differential characteristics. Moriai and Yin discovered byte-sized truncated differentials for 12- and 16-round Twofish (the full cipher has 16 rounds), but were unable to use them in any sort of attack. They also discovered a larger, 5-round truncated differential. No one has been able to convert these differentials into an attack, and Twofish is nowhere near broken. On the other hand, they are excellent and interesting results—and it’s a really good paper.

In more detail, here are the paper’s three results:

  1. The authors show a 12-round truncated differential characteristic that predicts that the 2nd byte of the ciphertext difference will be 0 when the plaintext difference is all-zeros except for its last byte. They say the characteristic holds with probability 2^-40.9. Note that for an ideal cipher, we expect the 2nd byte of ciphertext to be 0 with probability 2^-8, just by chance. Of course, 2^-8 is much, much larger than 2^-40.9. Therefore, this is not particularly useful in a distinguishing attack.

    One possible interpretation of their result would be to conjecture that the 2nd byte of ciphertext difference will be 0 with probability 2^-8 + 2^-40.9 for Twofish, but only 2^-8 for an ideal cipher. Their characteristic is just one path. If one is lucky, perhaps all other paths behave randomly and contribute an additional 2^-8 term to the total probability of getting a 0 in the 2nd byte of ciphertext difference. Perhaps. One might conjecture that, anyway.

    It is not at all clear whether this conjecture is true, and the authors are careful not to claim it. If it were true, it might lead to a theoretical distinguishing attack using 2^75 chosen plaintexts or so (very rough estimate; a back-of-the-envelope version of this calculation appears after this list). But I’m not at all sure that the conjecture is true.

  2. They show a 16-round truncated differential that predicts that the 2nd byte of the ciphertext difference will be 0 (under the same input difference). Their characteristic holds with probability 2^-57.3 (they say). Again, this is not very useful.

    Analogously to the first result, one might conjecture that the 2nd byte of the ciphertext difference will be 0 with probability 2^-8 + 2^-57.3 for Twofish, but probability 2^-8 for an ideal cipher. If this were true, one might be able to mount a distinguishing attack with 2^100 chosen plaintexts or so (another very rough estimate). But I have no idea whether the conjecture is true.

  3. They also show a 5-round truncated differential characteristic that predicts that the input difference that is non-zero everywhere except in its 9th byte will lead to an output difference of the same form. This characteristic has probability 2^-119.988896, they say (but they also say that they have made some approximations, and the actual probabilities can be a little smaller or a little larger). Compared to an ideal cipher, where one would expect this to happen by chance with probability 2^-120, this isn’t very interesting. It’s hard to imagine how this could be useful in a distinguishing attack.
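
For the curious, here is where estimates of that order come from. A standard rule of thumb says that distinguishing an event of probability p + eps from one of probability p, with constant advantage, takes on the order of p(1-p)/eps^2 samples. A quick back-of-the-envelope check in Python, assuming (as the conjectures above do) that all the other paths behave randomly:

    import math

    # Rule of thumb: distinguishing probability p + eps from p takes roughly
    # N = p * (1 - p) / eps**2 samples.
    p = 2**-8                     # chance of a zero 2nd byte in an ideal cipher
    for rounds, log2_eps in ((12, -40.9), (16, -57.3)):
        eps = 2.0**log2_eps
        n = p * (1 - p) / eps**2
        print(f"{rounds}-round characteristic: about 2^{math.log2(n):.1f} texts")
    # Prints roughly 2^73.8 and 2^106.6, the same order of magnitude as the
    # rough 2^75 and 2^100 estimates in the text.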

The paper theorizes that all of these characteristics might be useful in an attack, but I would be very careful about drawing any conclusions. It can be very tricky to go from a single-path characteristic, whose probability is much smaller than the chance of the event happening randomly in an ideal cipher, to a real attack. The problem is in the part where you say “let’s just assume all other paths behave randomly.” Often the other paths do not behave randomly, and attacks that look promising fall flat on their faces.

We simply don’t know whether these truncated differentials would be useful in a distinguishing attack. But what we do know is that even if everything works out perfectly to the cryptanalyst’s benefit, and if an attack is possible, then such an attack is likely to require a totally unrealistic number of chosen plaintexts. 2^100 plaintexts is something like a billion billion DVDs’ worth of data, or a T1 line running for a million times the age of the universe. (Note that these numbers might be off by a factor of 1,000 or so. But honestly, who cares? The numbers are so huge as to be irrelevant.) And even with all that data, a distinguishing attack is not the same as a key-recovery attack.
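
Those magnitudes are easy to sanity-check. A quick calculation, assuming 16-byte Twofish blocks, 4.7 GB per DVD, and a 1.544 Mbit/s T1 line:

    block_bytes = 16                        # one Twofish plaintext block
    data = 2**100 * block_bytes             # bytes in 2^100 chosen plaintexts
    dvd = 4.7e9                             # bytes on a single-layer DVD
    t1 = 1.544e6 / 8                        # T1 line, bytes per second
    universe = 13.7e9 * 365.25 * 86400      # age of the universe, in seconds

    print(f"DVDs needed: {data / dvd:.1e}")                          # ~4.3e21
    print(f"T1 time / age of universe: {data / t1 / universe:.1e}")  # ~2.4e8
    # Both figures land within the stated factor-of-1,000 of "a billion billion
    # DVDs" (1e18) and "a million times the age of the universe".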

Again, I am not trying to belittle the results. Moriai and Yin did some great work here, and they deserve all kinds of credit for it. But even from a theoretical perspective, Twofish isn’t even remotely broken. There have been no extensions to these results since they were published five years ago. The best Twofish cryptanalysis is still the work we did during the design process: available on the Twofish home page.

Posted on November 23, 2005 at 12:15 PM · 38 Comments

Today's Movie-Plot Threat: Electronic Pulses from Space

No. Really:

The United States is highly vulnerable to attack from electronic pulses caused by a nuclear blast in space, according to a new book on threats to U.S. security.

A single nuclear weapon carried by a ballistic missile and detonated a few hundred miles over the United States would cause “catastrophe for the nation” by damaging electricity-based networks and infrastructure, including computers and telecommunications, according to “War Footing: 10 Steps America Must Take to Prevail in the War for the Free World.”

“This is the single most serious national-security challenge and certainly the least known,” said Frank J. Gaffney Jr. of the Center for Security Policy, a former Pentagon official and lead author of the book, which includes contributions by 34 security and intelligence specialists.

The “single most serious national-security challenge.” Absolutely nothing more serious.

Sheesh.

Posted on November 23, 2005 at 7:39 AM · 57 Comments

Australian Minister's Sensible Comments on Airline Security Spark Outcry

I’m the first to admit that I don’t know anything about Australian politics. I don’t know who Amanda Vanstone is, what she stands for, and what other things she’s said about any other topic.

But I happen to think she’s right about airline security:

In a wide-ranging speech to Adelaide Rotarians, Senator Vanstone dismissed many commonwealth security measures as essentially ineffective. “To be tactful about these things, a lot of what we do is to make people feel better as opposed to actually achieve an outcome,” Senator Vanstone said.

And:

During her Adelaide speech, Senator Vanstone implied the use of plastic cutlery on planes to thwart terrorism was foolhardy.

Implied? I’ll say it outright. It’s stupid. For all its faults, I’m always pleased when Northwest Airlines gives me a real metal knife, and I am always annoyed when American Airlines still gives me a plastic one.

“Has it ever occurred to you that you just smash your wine glass and jump at someone, grab the top of their head and put it in their carotid artery and ask anything?” Senator Vanstone told her audience of about 100 Rotarians. “And believe me, you will have their attention. I think of this every time I see more money for the security agencies.”

The Immigration Minister also told of a grisly conversation with Mr Howard during a discussion on increased spending on national security.

Senator Vanstone said: “I asked him if I was able to get on a plane with an HB pencil, which you are able to, and I further asked him if I went down and came and grabbed him by the front of the head and stabbed the HB pencil into your eyeball and wiggled it around down to your brain area, do you think you’d be focusing? He’s thinking, she’s gone mad again.”

Okay, so maybe that was a bit graphic for the Rotarians. But her comments are basically right, and don’t deserve this kind of response:

“(Her) extraordinary outburst that airport security was a sham to make the public feel good has made a mockery of the Howard Government’s credibility in this important area of counter-terrorism,” Mr Bevis said yesterday. “And for Amanda Vanstone to once again put her foot in her mouth while John Howard is overseas for serious talks on terrorism is appalling. She should apologise and quit, or if the Prime Minister can’t shut her up he should sack her.”

But Mr. Bevis, airport security is largely a sham to make the public feel better about flying. And if your Prime Minister doesn’t know that, then you should worry about how serious his talks will be.

Vanstone has been defending herself:

Vanstone rejected calls from the Labor Party opposition for her resignation over the comments they said trivialised an important issue, saying she was not ridiculing security measures.

“If the day has come when a minister can’t say what every other Australian says and that is that plastic knives drive us crazy, I think we’re in desperate straits,” the minister told commercial radio on Monday.

Vanstone said she did not believe the security measures should be scrapped.

“What I have said is that putting a plastic knife on a plane doesn’t necessarily make you very much safer. Bear in mind there are other things that are on planes,” she said.

“People should not feel that because plastic knives are there, the world has dramatically changed—because there are still HB pencils.”

Plastic knives on airplanes drive me crazy too, and they don’t do anything to improve our security against terrorism. I know nothing about Vanstone and her policies, but she has this one right.

Posted on November 22, 2005 at 1:41 PM · 97 Comments

Surveillance and Oversight

Christmas 2003, Las Vegas. Intelligence hinted at a terrorist attack on New Year’s Eve. In the absence of any real evidence, the FBI tried to compile a real-time database of everyone who was visiting the city. It collected customer data from airlines, hotels, casinos, rental car companies, even storage locker rental companies. All this information went into a massive database—probably close to a million people overall—that the FBI’s computers analyzed, looking for links to known terrorists. Of course, no terrorist attack occurred and no plot was discovered: The intelligence was wrong.

A typical American citizen spending the holidays in Vegas might be surprised to learn that the FBI collected his personal data, but this kind of thing is increasingly common. Since 9/11, the FBI has been collecting all sorts of personal information on ordinary Americans, and it shows no signs of letting up.

The FBI has two basic tools for gathering information on large groups of Americans. Both were created in the 1970s to gather information solely on foreign terrorists and spies. Both were greatly expanded by the USA Patriot Act and other laws, and are now routinely used against ordinary, law-abiding Americans who have no connection to terrorism. Together, they represent an enormous increase in police power in the United States.

The first are FISA warrants (sometimes called Section 215 warrants, after the section of the Patriot Act that expanded their scope). These are issued in secret, by a secret court. The second are national security letters, less well known but much more powerful, which FBI field supervisors can issue all by themselves. The exact numbers are secret, but a recent Washington Post article estimated that 30,000 letters each year demand telephone records, banking data, customer data, library records, and so on.

In both cases, the recipients of these orders are prohibited by law from disclosing the fact that they received them. And two years ago, Attorney General John Ashcroft rescinded a 1995 guideline requiring that this information be destroyed if it is not relevant to whatever investigation it was collected for. Now it can be saved indefinitely, and disseminated freely.

September 2005, Rotterdam. The police had already identified some of the 250 suspects in a soccer riot from the previous April; most of the others had been captured on video but remained unidentified. In an effort to identify them, the police sent text messages to 17,000 phones known to have been in the vicinity of the riots, asking that anyone with information contact the police. The result was more evidence, and more arrests.

The differences between the Rotterdam and Las Vegas incidents are instructive. The Rotterdam police needed specific data for a specific purpose. Its members worked with federal justice officials to ensure that they complied with the country’s strict privacy laws. They obtained the phone numbers without any names attached, and deleted them immediately after sending the single text message. And their actions were public, widely reported in the press.

On the other hand, the FBI has no judicial oversight. With only a vague hint that a Las Vegas attack might occur, the bureau vacuumed up an enormous amount of information. First its members tried asking for the data; then they turned to national security letters and, in some cases, subpoenas. There was no requirement to delete the data, and there is every reason to believe that the FBI still has it all. And the bureau worked in secret; the only reason we know this happened is that the operation leaked.

These differences illustrate four principles that should govern the police’s use of personal information. The first is oversight: In order to obtain personal information, the police should be required to show probable cause, and convince a judge to issue a warrant for the specific information needed. The second is minimization: The police should get only the specific information they need, and no more. Nor should they be allowed to collect large blocks of information in order to go on “fishing expeditions,” looking for suspicious behavior. The third is transparency: The public should know, if not immediately then eventually, what information the police are getting and how it is being used. And the fourth is destruction: Any data the police obtain should be destroyed immediately after its court-authorized purpose is achieved. The police should not be able to hold on to it, just in case it might become useful at some future date.

This isn’t about our ability to combat terrorism; it’s about police power. Traditional law already gives police enormous power to peer into the personal lives of people, to use new crime-fighting technologies, and to correlate that information. But unfettered police power quickly resembles a police state, and checks on that power make us all safer.

As more of our lives become digital, we leave an ever-widening audit trail in our wake. This information has enormous social value—not just for national security and law enforcement, but for purposes as mundane as using cell-phone data to track road congestion, and as important as using medical data to track the spread of diseases. Our challenge is to make this information available when and where it needs to be, but also to protect the principles of privacy and liberty our country is built on.

This essay originally appeared in the Minneapolis Star-Tribune.

Posted on November 22, 2005 at 6:06 AM · 35 Comments

The Sony Rootkit Saga Continues

I’m just not able to keep up with all the twists and turns in this story. (My previous posts are here, here, here, and here, but a way better summary of the events is on BoingBoing: here, here, and here. Actually, you should just read every post on the topic in Freedom to Tinker. This is also worth reading.)

Many readers pointed out to me that the DMCA is one of the reasons antivirus companies aren’t able to disable invasive copy-protection systems like Sony’s rootkit: it may very well be illegal for them to do so. (Adam Shostack made this point.)

Here are two posts about the rootkit before Russinovich posted about it.

And it turns out you can easily defeat the rootkit:

With a small bit of tape on the outer edge of the CD, the PC then treats the disc as an ordinary single-session music CD and the commonly used music “rip” programs continue to work as usual.

(Original here.)

The fallout from this has been simply amazing. I’ve heard from many sources that the anti-copy-protection forces in Sony and other companies have newly found power, and that copy-protection has been set back years. Let’s hope that the entertainment industry realizes that digital copy protection is a losing game here, and starts trying to make money by embracing the characteristics of digital technology instead of fighting against them. I’ve written about that here and here (both from 2001).

Even Foxtrot has a cartoon on the topic.

I think I’m done here. Others are covering this much more extensively than I am. Unless there’s a new twist that I simply have to comment on….

EDITED TO ADD (11/21): The EFF is suing Sony. (The page is a good summary of the whole saga.)

EDITED TO ADD (11/22): Here’s a great idea: Sony can use a feature of the rootkit to inform infected users that they’re infected.

As it turns out, there’s a clear solution: A self-updating messaging system already built into Sony’s XCP player. Every time a user plays a XCP-affected CD, the XCP player checks in with Sony’s server. As Russinovich explained, usually Sony’s server sends back a null response. But with small adjustments on Sony’s end—just changing the output of a single script on a Sony web server—the XCP player can automatically inform users of the software improperly installed on their hard drives, and of their resulting rights and choices.

This is so obviously the right thing to do. My guess is that it’ll never happen.

Texas is suing Sony. According to the official statement:

The suit is also the first filed under the state’s spyware law of 2005. It alleges the company surreptitiously installed the spyware on millions of compact music discs (CDs) that consumers inserted into their computers when they play the CDs, which can compromise the systems.

And here’s something I didn’t know: the rootkit consumes 1–2% of CPU time, whether or not you’re playing a Sony CD. You’d think there would be a “theft of services” lawsuit in there somewhere.

EDITED TO ADD (11/30): Business Week has a good article on the topic.

Posted on November 21, 2005 at 4:34 PM · 37 Comments

Reminiscences of a 75-Year-Old Jewel Thief

The amazing story of Doris Payne:

Never did she grab the jewels and run. That wasn’t her way. Instead, she glided in, engaged the clerk in one of her stories, confused them and easily slipped away with a diamond ring, usually to a waiting taxi cab.

Don’t think that she never got caught:

She wasn’t always so lucky. She’s been arrested more times than she can remember. One detective said her arrest report is more than 6 feet long—she’s done time in Ohio, Kentucky, West Virginia, Colorado and Wisconsin. Still, the arrests are really “just the tip of the iceberg,” said FBI supervisory special agent Paul G. Graupmann.

Posted on November 21, 2005 at 3:00 PM · 26 Comments

Possible Net Objects Fusion 9 Vulnerability

I regularly get anonymous e-mail from people exposing software vulnerabilities. This one looks interesting.

Beta testers have discovered a serious security flaw in sites created using Net Objects Fusion 9 (NOF9) that has the potential to expose an entire site to hacking, including passwords and login info for that site. The vulnerability exists for any website published using versioning (that is, all sites using nPower).

The vulnerability is easy to exploit. In your browser enter:
http://domain.com/_versioning_repository_/rollbacklog.xml

Now enter:
http://domain.com/_versioning_repository_/n.zip, where n is the number you got from rollbacklog.xml.

Then, open Fusion and create a new site from the d/l’ed template. Edit and republish.

This means that anyone can edit a NOF9 site and get any usernames and passwords involved in it. Every site published using versioning in NOF9 is exposed.

Website Pros has refused to fix the hole. The only concession that they have made is to put a warning in the publishing dialog box telling the user to “Please make sure your profiles repository are [sic] stored in a secure area of your remote server.”

I don’t use NOF9, and I haven’t tested this vulnerability. Can someone do so and get back to me? And if it is a real problem, spread the word. I don’t know yet if Website Pros prefers to pay lawyers to suppress information rather than pay developers to fix software vulnerabilities.
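
If you run an NOF9 site and want to check yourself, here is a minimal read-only probe, assuming the URL layout quoted above. Only point it at sites you are responsible for.

    import sys
    import urllib.error
    import urllib.request

    def repository_exposed(base_url: str) -> bool:
        """Check whether the NOF9 versioning repository is publicly fetchable."""
        url = base_url.rstrip("/") + "/_versioning_repository_/rollbacklog.xml"
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.getcode() == 200
        except (urllib.error.HTTPError, urllib.error.URLError):
            return False    # 403/404 or unreachable: not exposed at that path

    if __name__ == "__main__":
        print(repository_exposed(sys.argv[1]))    # e.g. http://example.com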

Posted on November 21, 2005 at 12:31 PM · 13 Comments

Automatic Lie Detector

Coming soon to airports:

Tested in Russia, the two-stage GK-1 voice analyser requires that passengers don headphones at a console and answer “yes” or “no” into a microphone to questions about whether they are planning something illicit.

The software will almost always pick up uncontrollable tremors in the voice that give away liars or those with something to hide, say its designers at Israeli firm Nemesysco.

Fascinating.

In general, I prefer security systems that are invasive yet anonymous to ones that are based on massive databases. And automatic systems that divide people into “probably fine” and “investigate a bit more” categories seem like a good use of technology. I have no idea whether this system works (there is a lot of evidence that it does not), what the false positive and false negative rates are (this article states a 12% false positive rate, which by itself is a useless number; see the sketch below), or how easy it would be to learn how to fool the system. And in all of these trade-off discussions, the devil is in the details.
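
To see why the 12% number is meaningless without a base rate, run the arithmetic. All of the numbers below are made up for illustration; none come from the article:

    # Base-rate arithmetic with assumed, illustrative numbers.
    prevalence = 1 / 10_000_000   # fraction of fliers actually planning an attack
    tpr = 0.90                    # assume the machine catches 90% of real liars
    fpr = 0.12                    # the article's 12% false-positive rate

    true_alarms = prevalence * tpr
    false_alarms = (1 - prevalence) * fpr
    ppv = true_alarms / (true_alarms + false_alarms)
    print(f"Chance a flagged passenger is a real threat: {ppv:.1e}")  # ~7.5e-07
    # Even with a 90%-accurate detector, a million passengers produce about
    # 120,000 false alarms and, at this prevalence, 0.09 real hits.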

Posted on November 21, 2005 at 8:07 AM · 26 Comments

Prisons and Guards

This Iowa prison break illustrates an important security principle:

State Sen. Gene Fraise said he was told by prison officials that the inmates somehow got around a wire that is supposed to activate an alarm when touched. The wall also had razor wire, he said.

“The only thing I know for sure is they went over the wall in the southwest corner with a rope and a grappling hook they fashioned out of metal from somewhere,” Fraise said.

Fred Scaletta, a Corrections Department spokesman, said the inmates used upholstery webbing, a material used by inmates who make furniture at a shop inside the prison, to scale the wall. The guard tower in that section of the prison was unmanned at the time because of budget cuts, he said.

“I don’t want to say I told you so, but those towers were put there for security, and when you don’t man those towers, that puts a hole in your security,” Fraise said.

Guards = dynamic security. Tripwires = static security. Dynamic security is better than static security.

Unfortunately, some people simply don’t understand the fundamentals of security:

State Rep. Lance Horbach, a Republican, criticized Fraise for suggesting budget cuts were a factor in the escape.

“In reality, we should explore why the taut wire system failed to alert guards and security staff that these two convicts were attempting to escape,” he said.

Actually, in reality you should be putting guards in the guard towers.

Posted on November 18, 2005 at 3:34 PM · 42 Comments

Fraud and Western Union

Western Union has been the conduit of a lot of fraud. But since they’re not the victim, they don’t care much about security. It’s an externality to them. It took a lawsuit to convince them to take security seriously.

Western Union, one of the world’s most frequently used money transfer services, will begin warning its customers against possible fraud in their transactions.

Persuading consumers to send wire transfers, particularly to Canada, has been a popular method for con artists. Recent scams include offering consumers counterfeit cashier’s checks, advance-fee loans and phony lottery winnings.

More than $113 million was swindled in 2002 from U.S. residents through wire transfer fraud to Canada alone, according to a survey conducted by investigators in seven states.

Washington was one of 10 states that negotiated an $8.5 million settlement with Western Union. Most of the settlement would fund a national program to counsel consumers against telemarketing fraud.

In addition to the money, the company has agreed to increase fraud awareness at more than 50,000 locations, develop a computer program that would spot likely fraud-induced transfers before they are completed and block transfers from specific consumers to specific recipients when the company receives fraud information from state authorities.

Posted on November 18, 2005 at 11:06 AM

Ex-MI5 Chief Calls ID Cards "Useless"

Refreshing candor:

The case for identity cards has been branded “bogus” after an ex-MI5 chief said they might not help fight terror.

Dame Stella Rimington has said most documents could be forged and this would render ID cards “useless”.

[…]

She said: “ID cards have possibly some purpose.

“But I don’t think that anybody in the intelligence services, particularly in my former service, would be pressing for ID cards.

“My angle on ID cards is that they may be of some use but only if they can be made unforgeable – and all our other documentation is quite easy to forge.

“If we have ID cards at vast expense and people can go into a back room and forge them they are going to be absolutely useless.

“ID cards may be helpful in all kinds of things but I don’t think they are necessarily going to make us any safer.”

Posted on November 18, 2005 at 6:48 AM · 26 Comments

U.S. Compromises Canadian Privacy

A Canadian reporter was able to get phone records for the personal and professional accounts held by Canadian Privacy Commissioner Jennifer Stoddart through an American data broker, locatecell.com. The security concerns are obvious.

Canada has an exception in its privacy laws that allows newspapers to do this type of investigative reporting. My guess is that the absence of a similar exception in U.S. law is the only reason we haven’t seen an American reporter pull the phone records of one of our government officials.

Posted on November 17, 2005 at 2:32 PM · 21 Comments

Hackers and Criminals

More evidence that hackers are migrating into crime:

Since then, organised crime units have continued to provide a fruitful income for a group of hackers that are effectively on their payroll. Their willingness to pay for hacking expertise has also given rise to a new subset of hackers. These are not hardcore criminals in pursuit of defrauding a bank or duping thousands of consumers. In one sense, they are the next generation of hackers that carry out their activities in pursuit of credibility from their peers and the ‘buzz’ of hacking systems considered to be unbreakable.

Where they come into contact with serious criminals is through underworld forums and chatrooms, where their findings are published and they are paid effectively for their intellectual property. This form of hacking – essentially ‘hacking for hire’ – is becoming more common with hackers trading zero-day exploit information, malcode, bandwidth, identities and toolkits underground for cash. So a hacker might package together a Trojan that defeats the latest version of an anti-virus client and sell that to a hacking community sponsored by criminals.

Posted on November 17, 2005 at 12:25 PM · 16 Comments

Sony's DRM Rootkit: The Real Story

This is my sixth column for Wired.com:

It’s a David and Goliath story of the tech blogs defeating a mega-corporation.

On Oct. 31, Mark Russinovich broke the story in his blog: Sony BMG Music Entertainment distributed a copy-protection scheme with music CDs that secretly installed a rootkit on computers. This software tool is run without your knowledge or consent—if it’s loaded on your computer with a CD, a hacker can gain and maintain access to your system and you wouldn’t know it.

The Sony code modifies Windows so you can’t tell it’s there, a process called “cloaking” in the hacker world. It acts as spyware, surreptitiously sending information about you to Sony. And it can’t be removed; trying to get rid of it damages Windows.
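
According to the published analyses, the cloak hid anything whose name begins with “$sys$”, which suggested a quick self-test. Here is a sketch of that probe, under that assumption (hypothetical code, not Russinovich’s):

    import os
    import tempfile

    def xcp_cloak_active(dirpath: str = tempfile.gettempdir()) -> bool:
        """Create a $sys$-prefixed file and see if it vanishes from listings."""
        name = "$sys$probe.txt"
        path = os.path.join(dirpath, name)
        with open(path, "w") as f:
            f.write("probe")
        try:
            # The cloak filtered $sys$ names out of directory enumeration, so
            # a probe file that fails to show up indicates an active rootkit.
            return name not in os.listdir(dirpath)
        finally:
            # Access by exact name still worked, so cleanup succeeds either way.
            os.remove(path)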

This story was picked up by other blogs (including mine), followed by the computer press. Finally, the mainstream media took it up.

The outcry was so great that on Nov. 11, Sony announced it was temporarily halting production of that copy-protection scheme. That still wasn’t enough—on Nov. 14 the company announced it was pulling copy-protected CDs from store shelves and offered to replace customers’ infected CDs for free.

But that’s not the real story here.

It’s a tale of extreme hubris. Sony rolled out this incredibly invasive copy-protection scheme without ever publicly discussing its details, confident that its profits were worth modifying its customers’ computers. When its actions were first discovered, Sony offered a “fix” that didn’t remove the rootkit, just the cloaking.

Sony claimed the rootkit didn’t phone home when it did. On Nov. 4, Thomas Hesse, Sony BMG’s president of global digital business, demonstrated the company’s disdain for its customers when he said, “Most people don’t even know what a rootkit is, so why should they care about it?” in an NPR interview. Even Sony’s apology only admits that its rootkit “includes a feature that may make a user’s computer susceptible to a virus written specifically to target the software.”

However, imperious corporate behavior is not the real story either.

This drama is also about incompetence. Sony’s latest rootkit-removal tool actually leaves a gaping vulnerability. And Sony’s rootkit—designed to stop copyright infringement—itself may have infringed on copyright. As amazing as it might seem, the code seems to include an open-source MP3 encoder in violation of that library’s license agreement. But even that is not the real story.

It’s an epic of class-action lawsuits in California and elsewhere, and the focus of criminal investigations. The rootkit has even been found on computers run by the Department of Defense, to the Department of Homeland Security’s displeasure. While Sony could be prosecuted under U.S. cybercrime law, no one thinks it will be. And lawsuits are never the whole story.

This saga is full of weird twists. Some pointed out how this sort of software would degrade the reliability of Windows. Someone created malicious code that used the rootkit to hide itself. A hacker used the rootkit to avoid the spyware of a popular game. And there were even calls for a worldwide Sony boycott. After all, if you can’t trust Sony not to infect your computer when you buy its music CDs, can you trust it to sell you an uninfected computer in the first place? That’s a good question, but—again—not the real story.

It’s yet another situation where Macintosh users can watch, amused (well, mostly) from the sidelines, wondering why anyone still uses Microsoft Windows. But certainly, even that is not the real story.

The story to pay attention to here is the collusion between big media companies who try to control what we do on our computers and computer-security companies who are supposed to be protecting us.

Initial estimates are that more than half a million computers worldwide are infected with this Sony rootkit. Those are amazing infection numbers, making this one of the most serious internet epidemics of all time—on a par with worms like Blaster, Slammer, Code Red and Nimda.

What do you think of your antivirus company, the one that didn’t notice Sony’s rootkit as it infected half a million computers? And this isn’t one of those lightning-fast internet worms; this one has been spreading since mid-2004. Because it spread through infected CDs, not through internet connections, they didn’t notice? This is exactly the kind of thing we’re paying those companies to detect—especially because the rootkit was phoning home.

But much worse than not detecting it before Russinovich’s discovery was the deafening silence that followed. When a new piece of malware is found, security companies fall over themselves to clean our computers and inoculate our networks. Not in this case.

McAfee didn’t add detection code until Nov. 9, and as of Nov. 15 it doesn’t remove the rootkit, only the cloaking device. The company admits on its web page that this is a lousy compromise. “McAfee detects, removes and prevents reinstallation of XCP.” That’s the cloaking code. “Please note that removal will not impair the copyright-protection mechanisms installed from the CD. There have been reports of system crashes possibly resulting from uninstalling XCP.” Thanks for the warning.

Symantec’s response to the rootkit has, to put it kindly, evolved. At first the company didn’t consider XCP malware at all. It wasn’t until Nov. 11 that Symantec posted a tool to remove the cloaking. As of Nov. 15, it is still wishy-washy about it, explaining that “this rootkit was designed to hide a legitimate application, but it can be used to hide other objects, including malicious software.”

The only thing that makes this rootkit legitimate is that a multinational corporation put it on your computer, not a criminal organization.

You might expect Microsoft to be the first company to condemn this rootkit. After all, XCP corrupts Windows’ internals in a pretty nasty way. It’s the sort of behavior that could easily lead to system crashes—crashes that customers would blame on Microsoft. But it wasn’t until Nov. 13, when public pressure was just too great to ignore, that Microsoft announced it would update its security tools to detect and remove the cloaking portion of the rootkit.

Perhaps the only security company that deserves praise is F-Secure, the first and the loudest critic of Sony’s actions. And Sysinternals, of course, which hosts Russinovich’s blog and brought this to light.

Bad security happens. It always has and it always will. And companies do stupid things; always have and always will. But the reason we buy security products from Symantec, McAfee and others is to protect us from bad security.

I truly believed that even in the biggest and most-corporate security company there are people with hackerish instincts, people who will do the right thing and blow the whistle. That all the big security companies, with over a year’s lead time, would fail to notice or do anything about this Sony rootkit demonstrates incompetence at best, and lousy ethics at worst.

Microsoft I can understand. The company is a fan of invasive copy protection—it’s being built into the next version of Windows. Microsoft is trying to work with media companies like Sony, hoping Windows becomes the media-distribution channel of choice. And Microsoft is known for watching out for its business interests at the expense of those of its customers.

What happens when the creators of malware collude with the very companies we hire to protect us from that malware?

We users lose, that’s what happens. A dangerous and damaging rootkit gets introduced into the wild, and half a million computers get infected before anyone does anything.

Who are the security companies really working for? It’s unlikely that this Sony rootkit is the only example of a media company using this technology. Which security company has engineers looking for the others who might be doing it? And what will they do if they find one? What will they do the next time some multinational company decides that owning your computers is a good idea?

These questions are the real story, and we all deserve answers.

EDITED TO ADD (11/17): Slashdotted.

EDITED TO ADD (11/19): Details of Sony’s buyback program. And more GPL code was stolen and used in the rootkit.

Posted on November 17, 2005 at 9:08 AM

Identity Theft Over-Reported

I’m glad to see that someone wrote this article. For a long time now, I’ve been saying that the rate of identity theft has been grossly overestimated: too many things are counted as identity theft that are just traditional fraud. Here’s some interesting data to back that claim up:

Multiple surveys have found that around 20 percent of Americans say they have been beset by identity theft. But what exactly is identity theft?

The Identity Theft and Assumption Deterrence Act of 1998 defines it as the illegal use of someone’s “means of identification”—including a credit card. So if you lose your card and someone else uses it to buy a candy bar, technically you have been the victim of identity theft.

Of course misuse of lost, stolen or surreptitiously copied credit cards is a serious matter. But it shouldn’t force anyone to hide in a cave.

Federal law caps our personal liability at $50, and even that amount is often waived. That’s why surveys have found that about two-thirds of people classified as identity theft victims end up paying nothing out of their own pockets.

The more pernicious versions of identity theft, in which fraudsters use someone else’s name to open lines of credit or obtain government documents, are much rarer.

Consider a February survey of 1,866 people nationwide, conducted for the insurer Chubb Corp. Nearly 21 percent said they had been an identity theft victim in the previous year.

But when the questioners asked about specific circumstances—and broadened the time frame beyond just the previous year—the percentages diminished. About 12 percent said a collection agency had demanded payment for purchases they hadn’t made. Some 8 percent said fraudulent checks had been drawn against their accounts.

In both cases, the survey didn’t ask whether a faulty memory or a family member—rather than a shadowy criminal—turned out to be the culprit.

It wouldn’t be uncommon. In a 2005 study by Synovate, a research firm, half of self-described victims blamed relatives, friends, neighbors or in-home employees.

When Chubb’s report asked whether people had suffered the huge headache of finding that someone else had taken out loans in their name, 2.4 percent—one in 41 people—said yes.

So what about the claim that 10 million Americans are hit every year, a number often used to pitch credit monitoring services? That statistic, which would amount to about one in 22 adults, also might not be what it seems.

The figure arose in a 2003 report by Synovate commissioned by the Federal Trade Commission. A 2005 update by Synovate put the figure closer to 9 million.

Both totals include misuse of existing credit cards.

Subtracting that, the identity theft numbers were still high but not as frightful: The FTC report determined that fraudsters had opened new accounts or committed similar misdeeds in the names of 3.2 million Americans in the previous year.

The average victim lost $1,180 and wasted 60 hours trying to resolve the problem. Clearly, it’s no picnic.

But there was one intriguing nugget deep in the report.

Some 38 percent of identity theft victims said they hadn’t bothered to notify anyone—not the police, not their credit card company, not a credit bureau. Even when fraud losses purportedly exceeded $5,000, the kept-it-to-myself rate was 19 percent.

Perhaps some people decide that raising a stink over a wrongful charge isn’t worth the trouble. Even so, the finding made the overall validity of the data seem questionable to Fred Cate, an Indiana University law professor who specializes in privacy and security issues.

“That’s not identity theft,” he said. “I’m just confident if you saw a charge that wasn’t yours, you’d contact somebody.”

Identity theft is a serious crime, and it’s a major growth industry in the criminal world. But we do everyone a disservice when we count things as identity theft that really aren’t.

Posted on November 16, 2005 at 1:21 PM · 36 Comments

Stride-Based Security

Can a cell phone detect if it is stolen by measuring the gait of the person carrying it?

Researchers at the VTT Technical Research Centre of Finland have developed a prototype of a cell phone that uses motion sensors to record a user’s walking pattern of movement, or gait. The device then periodically checks to see that it is still in the possession of its legitimate owner, by measuring the current stride and comparing it against that stored in its memory.

Clever, as long as you realize that there are going to be a lot of false alarms. This seems okay:

If the phone suspects it has fallen into the wrong hands, it will prompt the user for a password if they attempt to make calls or access its memory.
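
The matching step is essentially template correlation against accelerometer data. Here is a toy sketch of the idea; the researchers haven’t published their algorithm here, so everything below is an assumption:

    import numpy as np

    def stride_similarity(template: np.ndarray, walk: np.ndarray) -> float:
        """Peak normalized cross-correlation between two acceleration traces.
        The walk trace must be at least as long as the stored template."""
        t = (template - template.mean()) / (template.std() + 1e-9)
        w = (walk - walk.mean()) / (walk.std() + 1e-9)
        return float((np.correlate(w, t, mode="valid") / len(t)).max())

    THRESHOLD = 0.6   # made-up value; a real system tunes this against false alarms

    def owner_still_carrying(template: np.ndarray, walk: np.ndarray) -> bool:
        # Below threshold, the phone falls back to prompting for a password.
        return stride_similarity(template, walk) >= THRESHOLD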

Posted on November 16, 2005 at 6:26 AM · 29 Comments

Still More on Sony's DRM Rootkit

This story is just getting weirder and weirder (previous posts here and here).

Sony already said that they’re stopping production of CDs with the embedded rootkit. Now they’re saying that they will pull the infected discs from stores and offer free exchanges to people who inadvertently bought them.

Sony BMG Music Entertainment said Monday it will pull some of its most popular CDs from stores in response to backlash over copy-protection software on the discs.

Sony also said it will offer exchanges for consumers who purchased the discs, which contain hidden files that leave them vulnerable to computer viruses when played on a PC.

That’s good news, but there’s more bad news. The patch Sony is distributing to remove the rootkit opens a huge security hole:

The root of the problem is a serious design flaw in Sony’s web-based uninstaller. When you first fill out Sony’s form to request a copy of the uninstaller, the request form downloads and installs a program – an ActiveX control created by the DRM vendor, First4Internet – called CodeSupport. CodeSupport remains on your system after you leave Sony’s site, and it is marked as safe for scripting, so any web page can ask CodeSupport to do things. One thing CodeSupport can be told to do is download and install code from an Internet site. Unfortunately, CodeSupport doesn’t verify that the downloaded code actually came from Sony or First4Internet. This means any web page can make CodeSupport download and install code from any URL without asking the user’s permission.
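
The missing ingredient is code authentication: an updater should refuse to run anything that is not cryptographically signed by its publisher, no matter what URL it came from. Here is a minimal sketch of that check (illustrative only, not First4Internet’s actual mechanism), using Ed25519 from the Python cryptography library:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side, done once and offline: sign the update payload.
    signing_key = Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()     # this key ships inside the client
    payload = b"update code"
    signature = signing_key.sign(payload)

    # Client side: execute nothing that fails verification.
    def safe_to_install(blob: bytes, sig: bytes) -> bool:
        try:
            verify_key.verify(sig, blob)
            return True
        except InvalidSignature:
            return False

    assert safe_to_install(payload, signature)
    assert not safe_to_install(b"attacker code", signature)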

Even more interesting is that there may be at least half a million infected computers:

Using statistical sampling methods and a secret feature of XCP that notifies Sony when its CDs are placed in a computer, [security researcher Dan] Kaminsky was able to trace evidence of infections in a sample that points to the probable existence of at least one compromised machine in roughly 568,200 networks worldwide. This does not reflect a tally of actual infections, however, and the real number could be much higher.

I say “may be at least” because the data doesn’t smell right to me. Look at the list of infected titles, and estimate what percentage of CD buyers will play them on their computers; does that seem like half a million sales to you? It doesn’t to me, although I readily admit that I don’t know the music business. Their methodology seems sound, though:

Kaminsky discovered that each of these requests leaves a trace that he could follow and track through the internet’s domain name system, or DNS. While this couldn’t directly give him the number of computers compromised by Sony, it provided him the number and location (both on the net and in the physical world) of networks that contained compromised computers. That is a number guaranteed to be smaller than the total of machines running XCP.

His research technique is called DNS cache snooping, a method of nondestructively examining patterns of DNS use. Luis Grangeia invented the technique, and Kaminsky became famous in the security community for refining it.

Kaminsky asked more than 3 million DNS servers across the net whether they knew the addresses associated with the Sony rootkit—connected.sonymusic.com, updates.xcp-aurora.com and license.suncom2.com. He uses a “non-recursive DNS query” that allows him to peek into a server’s cache and find out if anyone else has asked that particular machine for those addresses recently.

If the DNS server said yes, it had a cached copy of the address, which means that at least one of its client computers had used it to look up Sony’s digital-rights-management site. If the DNS server said no, then Kaminsky knew for sure that no Sony-compromised machines existed behind it.

The results have surprised Kaminsky himself: 568,200 DNS servers knew about the Sony addresses. With no other reason for people to visit them, that points to one or more computers behind those DNS servers that are Sony-compromised. That’s one in six DNS servers, across a statistical sampling of a third of the 9 million DNS servers Kaminsky estimates are on the net.
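The probe itself is only a few lines. Here is a sketch using the dnspython library; the resolver address is a placeholder, and reproducing this at scale would need rate limiting and the server operators’ permission:

    import dns.flags
    import dns.message
    import dns.query
    import dns.rcode

    SONY_NAMES = ["connected.sonymusic.com", "updates.xcp-aurora.com",
                  "license.suncom2.com"]

    def cache_has(server_ip, name):
        # Build a query with Recursion Desired cleared: the server may
        # answer only from its cache, never by resolving on our behalf.
        query = dns.message.make_query(name, "A")
        query.flags &= ~dns.flags.RD
        response = dns.query.udp(query, server_ip, timeout=2)
        # An answer despite RD=0 means some client already looked it up.
        return response.rcode() == dns.rcode.NOERROR and bool(response.answer)

    for name in SONY_NAMES:
        print(name, cache_has("192.0.2.53", name))  # placeholder resolver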

In any case, Sony’s rapid fall from grace is a great example of the power of blogs; it’s been fifteen days since Mark Russinovich first posted about the rootkit. In that time the news spread like a firestorm, first through the blogs, then to the tech media, and then into the mainstream media.

Posted on November 15, 2005 at 3:16 PM57 Comments

Airport Security Against Chemical and Biological Terrorism

There’s a new report from Sandia National Laboratories (written with Lawrence Berkeley National Laboratory) titled “Guidelines to Improve Airport Preparedness Against Chemical and Biological Terrorism.” It’s classified, but there’s an unclassified version available. (Press release. Unclassified report.)

I haven’t read it yet, but it looks interesting.

Posted on November 14, 2005 at 3:19 PM13 Comments

Cold War Software Bugs

Here’s a report that the CIA slipped software bugs to the Soviets in the 1980s:

In January 1982, President Ronald Reagan approved a CIA plan to sabotage the economy of the Soviet Union through covert transfers of technology that contained hidden malfunctions, including software that later triggered a huge explosion in a Siberian natural gas pipeline, according to a new memoir by a Reagan White House official.

A CIA article from 1996 also describes this.

EDITED TO ADD (11/14): Marcus Ranum wrote about this.

Posted on November 14, 2005 at 8:04 AM30 Comments

The Security of Tin Foil Hats

Really:

Abstract: Among a fringe community of paranoids, aluminum helmets serve as the protective measure of choice against invasive radio signals. We investigate the efficacy of three aluminum helmet designs on a sample group of four individuals. Using a $250,000 network analyser, we find that although on average all helmets attenuate invasive radio frequencies in either directions (either emanating from an outside source, or emanating from the cranium of the subject), certain frequencies are in fact greatly amplified. These amplified frequencies coincide with radio bands reserved for government use according to the Federal Communication Commission (FCC). Statistical evidence suggests the use of helmets may in fact enhance the government’s invasive abilities. We theorize that the government may in fact have started the helmet craze for this reason.

And a rebuttal:

A recent MIT study [1] calls into question the effectiveness of Aluminum Foil Deflector Beanies. However, there are serious flaws in this study, not the least of which is a complete mischaracterization of the process of psychotronic mind control. I theorize that the study is, in fact, NWO propaganda designed to spread FUD against deflector beanie technology, and aluminum shielding in general, in order to disembeanie paranoids, leaving them open to mind control.

Posted on November 12, 2005 at 10:43 AM36 Comments

Sky Posse

Counter-terrorist vigilantes, or people just trying to make flying safer?

The Sky Posse is an organization, unaffiliated with any official body, of people who vow to fight back in the event of an airplane hijacking. Members wear cool-looking pins, made to resemble a Western sheriff’s badge, with the slogan “Ready to Roll.”

Kind of silly, but I guess there’s no harm.

Posted on November 11, 2005 at 3:42 PM50 Comments

More on Sony's DRM Rootkit

Here’s the story, edited to add lots of news.

There will be lawsuits. (Here’s the first.) Police are getting involved. There’s a Trojan that uses Sony’s rootkit to hide. And today Sony temporarily halted production of CDs protected with this technology.

Sony really overreached this time. I hope they get slapped down hard for it.

EDITED TO ADD (13 Nov): More information on uninstalling the rootkit. And Microsoft will update its security tools to detect and remove the rootkit. That makes a lot of sense. If Windows crashes because of this—and others of this ilk—Microsoft will be blamed.

Posted on November 11, 2005 at 12:23 PM27 Comments

Ownership of Mag Stripe Readers May be Illegal

Here’s an Illinois bill that:

Provides that it is unlawful to possess, use, or allow to be used, any materials, hardware, or software specifically designed or primarily used for the reading of encrypted language from the bar code or magnetic strip of an official Illinois Identification Card, Disabled Person Identification Card, driver’s license, or permit.

Full text is here.

Posted on November 11, 2005 at 11:45 AM31 Comments

The Zotob Worm

If you’ll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.

Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly—less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.

The worm started spreading on Sunday, 14 August. Honestly, it wasn’t much of a big deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is personally affected by something, it’s much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn’t think it was worth all the press coverage.

By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other—stealing “owned” computers back and forth. If your network was infected, it was a mess.

Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm’s creation was not a hacker, but rather a criminal looking to profit.

The nature of worms has changed in the past few years. Previously, hackers looking for prestige or just wanting to cause damage were responsible for most worms. Today, they’re increasingly written or commissioned by criminals. By taking over computers, worms can send spam, launch denial-of-service extortion attacks, or search for credit-card numbers and other personal information.

What could you have done beforehand to protect yourself against Zotob and its kin? “Install the patch” is the obvious answer, but it’s not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install—at least Microsoft Windows system patches—large corporate networks can’t. Far too often, patches cause other things to break.

It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, it was the most important thing a sysadmin could do to secure the network.

Microsoft had six new patches available on 9 August, three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would have actually been critical? With the next patch release, will you know which ones you can put off and for which ones you need to drop everything, test, and install across your network?

Given that it’s impossible to know what’s coming beforehand, how you respond to an actual worm largely determines your defense’s effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Because it’s impossible to know beforehand what the necessary response should be, you need a process for that response. Employees come and go, so the only thing that ensures a continuity of effective security is a process. You need accurate and timely information to fuel this process. And finally, you need experts to decipher the information, determine what to do, and implement a solution.

The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don’t think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.

This essay will appear in the November/December 2005 issue of IEEE Security & Privacy.

Posted on November 11, 2005 at 7:46 AM17 Comments

Fraudulent Stock Transactions

From a Business Week story:

During July 13-26, stocks and mutual funds had been sold, and the proceeds wired out of his account in six transactions of nearly $30,000 apiece. Murty, a 64-year-old nuclear engineering professor at North Carolina State University, could only think it was a mistake. He hadn’t sold any stock in months.

Murty dialed E*Trade the moment its call center opened at 7 a.m. A customer service rep urged him to change his password immediately. Too late. E*Trade says the computer in Murty’s Cary (N.C.) home lacked antivirus software and had been infected with code that enabled hackers to grab his user name and password.

The cybercriminals, pretending to be Murty, directed E*Trade to liquidate his holdings. Then they had the brokerage wire the proceeds to a phony account in his name at Wells Fargo Bank. The New York-based online broker says the wire instructions appeared to be legit because they contained the security code the company e-mailed to Murty to execute the transaction. But the cyberthieves had gained control of Murty’s e-mail, too.

E*Trade recovered some of the money from the Wells Fargo account and returned it to Murty. In October, the Indian-born professor reached what he calls a satisfactory settlement with the firm, which says it did nothing wrong.

That last clause is critical. E*Trade insists it did nothing wrong. It executed $174,000 in fraudulent transactions, but it did nothing wrong. It sold stocks without the knowledge or consent of the owner of those stocks, but it did nothing wrong.

Now quite possibly, E*Trade did nothing wrong legally. There may very well be a paragraph buried in whatever agreement this guy signed that says something like: “You agree that any trade request that comes to us with the right password, whether it came from you or not, will be processed.” But that’s the market failure. Until we fix it, these losses are an externality to E*Trade. They’ll only fix the problem up to the point where customers aren’t leaving them in droves, not to the point where the customers’ stocks are secure.

Posted on November 10, 2005 at 2:40 PM47 Comments

Military Uses for Silly String

Really:

I’m a former Marine; I served in Afghanistan. Silly string has served me well in combat, especially in looking for IEDs, simply put, booby traps, when you spray the silly string in dark areas, especially when you’re doing house-to-house fighting. On many occasions the silly string has saved me and my men’s lives.

And:

When you spray the string it just spreads everywhere and when it sets it lays right on the wire. Even in a dark room the string stands out revealing the trip wire.

Posted on November 10, 2005 at 7:59 AM43 Comments

Sniffing Passwords is Easy

From InfoWorld:

She said about half the hotels use shared network media (i.e., a hub versus an Ethernet switch), so any plain text password you transmit is sniffable by any like-minded person in the hotel. Most wireless access points are shared media as well; even networks requiring a WEP key often allow the common users to sniff each other’s passwords.

She said the average number of passwords collected in an overnight hotel stay was 118, if you throw out the 50 percent of connections that used an Ethernet switch and did not broadcast passwords.

The vast majority, 41 percent, were HTTP-based passwords, followed by e-mail (SMTP, POP2, IMAP) at 40 percent. The last 19 percent were composed of FTP, ICQ, SNMP, SIP, Telnet, and a few other types.

As a security professional, my friend often attends security conferences and teaches security classes. She noted that the number of passwords she collected in these venues was higher on average than in non-security locations. The very people who are supposed to know more about security than anyone appeared to have a higher-than-normal level of remote access back to their companies, but weren’t using any type of password protection.

At one conference, she listened to one of the world’s foremost Cisco security experts as his laptop broadcast 12 different log-in types and passwords during the presentation. Ouch!

I am interested in analyzing that password database. What percentage of those passwords are English words? What percentage are in the common password dictionaries? What percentage use mixed case, or numbers, or punctuation? What’s the frequency distribution of different password lengths?

Real password data is hard to come by. There’s an interesting research paper in that data.
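The analysis itself would take minutes once someone has the data; a sketch, assuming one captured password per line and a lowercase wordlist, with both file names invented:

    import string
    from collections import Counter

    def password_stats(passwords, dictionary):
        n = len(passwords)
        frac = lambda pred: sum(1 for p in passwords if pred(p)) / n
        return {
            "english_word": frac(lambda p: p.lower() in dictionary),
            "mixed_case": frac(lambda p: any(c.islower() for c in p)
                               and any(c.isupper() for c in p)),
            "has_digit": frac(lambda p: any(c.isdigit() for c in p)),
            "has_punctuation": frac(lambda p: any(c in string.punctuation for c in p)),
            "length_distribution": Counter(len(p) for p in passwords),
        }

    with open("captured_passwords.txt") as f:
        passwords = [line.strip() for line in f if line.strip()]
    with open("wordlist.txt") as f:
        dictionary = {line.strip().lower() for line in f}
    print(password_stats(passwords, dictionary))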

Posted on November 9, 2005 at 2:39 PM48 Comments

Taser Cam

Here’s an excellent use for cameras:

Now, to help better examine how Tasers are used, manufacturer Taser International Inc. has developed a Taser Cam, which company executives hope will illuminate why Tasers are needed—and add another layer of accountability for any officer who would abuse the weapon.

The Taser Cam is an audio and video recorder that attaches to the butt of the gun and starts taping when the weapon is turned on. It continues recording until the weapon is turned off. The Taser doesn’t have to be fired to use the camera.

It’s the same idea as having cameras record all police interrogations, or all police-car stops. It helps protect the populace against police abuse, and helps protect the police against accusations of abuse.

This is where cameras do good: when they lessen a power imbalance. Imagine if they were continuously recording the actions of elected officials—when they were acting in their official capacity, that is.

Of course, cameras are only as useful as their data. If critical recordings are “lost,” then there’s no accountability. The system is pretty kludgy:

The Taser Cam records in black and white but is equipped with infrared technology to record images in very low light. The camera will have at least one hour of recording time, the company said, and the video can be downloaded to a computer over a USB cable.

How soon before the cameras simply upload their recordings, in real time, to some trusted vault somewhere?

EDITED TO ADD: CNN has a story.

Posted on November 9, 2005 at 8:46 AM34 Comments

Richard Clarke Advised New York City Subway Searches

Now this is a surprise. Richard Clarke advised New York City to perform those pointless subway searches:

Mr. Clarke, a former counterterrorism adviser to two presidents, received widespread attention last year for his criticism of President Bush’s response to the Sept. 11 attacks, detailed in a searing memoir and in security testimony before the 9/11 Commission.

Unknown to the public, until recently, was Mr. Clarke’s role in advising New York City officials in helping to devise the “container inspection program” that the Police Department began in July after two attacks on the transit system in London.

Seems that his goal wasn’t to deter terrorism, but simply to move it from the New York City subways to another target; perhaps the Boston subways?

“Obviously you want to catch people with bombs on their back, but there is a value to a program that doesn’t stop everyone and isn’t compulsory,” he said in a deposition.

Mr. Clarke later added, “The goal here is to impart to the terrorists a sense that there is an enhanced security program, to deter them from going into the New York subway and choosing that as a target.”

Posted on November 8, 2005 at 12:49 PM27 Comments

Howard Schmidt on Software Vulnerabilities

Howard Schmidt was misquoted in the article that spurred my rebuttal.

This essay outlines what he really thinks:

Like it or not, the hard work of developers often takes the brunt of malicious hacker attacks.

Many people know that developers are often under intense pressure to deliver more features on time and under budget. Few developers get the time to review their code for potential security vulnerabilities. When they do get the time, they often don’t have secure-coding training and lack the automated tools to prevent hackers from using hundreds of common exploit techniques to trigger malicious attacks.

So what can software vendors do? In a sense, a big part of the answer is relatively old fashioned; the developers need to be accountable to their employers and provided with incentives, better tools and proper training.

Unlike vendors in every other industry, he’s against making software vendors liable for defects in their products:

I always have been, and continue to be, against any sort of liability actions as long as we continue to see market forces improve software. Unfortunately, introducing vendor liability to solve security flaws hurts everybody, including employees, shareholders and customers, because it raises costs and stifles innovation.

After all, when companies are faced with large punitive judgments, a frequent step is often to cut salaries, increase prices or even reduce employees. This is not good for anyone.

And he closes with:

In the end, what security requires is the same attention any business goal needs. Employers should expect their employees to take pride in and own a certain level of responsibility for their work. And employees should expect their employers to provide the tools and training they need to get the job done. With these expectations established and goals agreed on, perhaps the software industry can do a better job of strengthening the security of its products by reducing software vulnerabilities.

That first sentence, I think, nicely sums up what’s wrong with his argument. If security is to be a business goal, then it needs to make business sense. Right now, it makes more business sense not to produce secure software products than it does to produce secure software products. Any solution needs to address that fundamental market failure, instead of simply wishing it were true.

Posted on November 8, 2005 at 7:34 AM57 Comments

The FBI is Spying on Us

From TalkLeft:

The Washington Post reports that the FBI has been obtaining and reviewing records of ordinary Americans in the name of the war on terror through the use of national security letters that gag the recipients.

Merritt’s entire post is worth reading.

The closing:

The ACLU has been actively litigating the legality of the National Security Letters. Their latest press release is here.

Also, the ACLU is less critical than I am of activity taking place in Congress now where conferees of the Senate and House are working out a compromise version of Patriot Act extension legislation that will resolve differences in versions passed by each in the last Congress. The ACLU reports that the Senate version contains some modest improvements respecting your privacy rights while the House version contains further intrusions. There is still time to contact the conferees. The ACLU provides more information and a sample letter here.

History shows that once new power is granted to the government, it rarely gives it back. Even if you wouldn’t recognize a terrorist if he were standing in front of you, let alone consort with one, now is the time to raise your voice.

EDITED TO ADD: Here’s a good personal story of someone’s FBI file.

EDITED TO ADD: Several people have written to tell me that the CapitolHillBlue website, above, is not reliable. I don’t know one way or the other, but consider yourself warned.

Posted on November 7, 2005 at 3:13 PM22 Comments

Microsoft Calls for National Privacy Law

Here’s some good news from Microsoft:

In an eight-page document released on Capitol Hill today, Microsoft outlined a series of steps it would like to see Congress take to preempt a growing number of state laws that impose varying requirements on the collection, use, storage and disclosure of personal information.

According to the press release:

[Microsoft’s senior vice president and general counsel Brad] Smith described four core principles that Microsoft believes should be the foundation of any federal legislation on data privacy:

  • Create a baseline standard across all organizations and industries for offline and online data collection and storage. This federal standard should pre-empt state laws and, as much as possible, be consistent with privacy laws around the world.
  • Increase transparency regarding the collection, use and disclosure of personal information. This would include a range of notification and access functions, such as simplified, consumer-friendly privacy notices and features that permit individuals to access and manage their personal information collected online.
  • Provide meaningful levels of control over the use and disclosure of personal information. This approach should balance a requirement for organizations to obtain individuals’ consent before using and disclosing information with the need to make the requirements flexible for businesses, while avoiding bombarding consumers with excessive and unnecessary levels of choice.
  • Ensure a minimum level of security for personal information in storage and transit. A federal standard should require organizations to take reasonable steps to secure and protect critical data against unauthorized access, use, disclosure, modification and loss of personal information.

Here’s Microsoft’s document, with a bunch more details.

With this kind of thing, the devil is in the details. But it’s definitely a good start. Certainly Microsoft has become more pro-privacy in recent years.

Posted on November 7, 2005 at 12:06 PM27 Comments

Instantaneous Data Grabbing

I think this is a harbinger of the future:

A high roller walks into the casino, ever so mindful of the constant surveillance cameras. Wanting to avoid sales pitches and other unwanted attention, he pays cash at each table and anonymously moves around frequently to discourage people who are trying to track his movements.

After a few hours of losses, he goes to the cashier and asks for a cash advance off of his credit card. The card tells the casino his name, but not much else. As is required by card issuers, the cashier asks for some other identification, such as a driver’s license. That license offers the casino a ton of CRM identification goodies, but the cashier is only supposed to glance at the picture and the name to verify identity and hand the license—and its info treasure trove—back to the gambler.

Not any more, at least if a Minneapolis company called Cash Systems Inc. has anything to say about it. The firm was recently awarded a U.S. patent for a device that can grab all of the data of almost any U.S. driver’s license in seconds and instantly dump it into a casino’s CRM system.

On the one hand, the technology isn’t very interesting; it’s probably just a camera and some OCR software optimized for driver’s licenses. But what is interesting is that the technology is available as a mass-market product.
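However the device captures it (OCR of the front, or the barcode and magnetic stripe on the back), the data on a U.S. driver’s license is a set of structured AAMVA fields, and dumping them into a CRM system takes a few lines. A sketch with an illustrative subset of element IDs; the real IDs vary by AAMVA version and jurisdiction:

    # Illustrative subset of AAMVA element IDs; treat these as assumptions.
    AAMVA_FIELDS = {
        "DAQ": "license_number", "DCS": "family_name", "DAC": "given_name",
        "DBB": "date_of_birth", "DAG": "street", "DAI": "city",
        "DAJ": "state", "DAK": "postal_code",
    }

    def parse_license(raw):
        # Turn newline-separated AAMVA elements into a CRM-ready record.
        record = {}
        for line in raw.splitlines():
            code, value = line[:3], line[3:].strip()
            if code in AAMVA_FIELDS:
                record[AAMVA_FIELDS[code]] = value
        return record

    sample = "DAQ1234567\nDCSDOE\nDACJOHN\nDBB19600101\nDAICHICAGO\nDAJIL"
    print(parse_license(sample))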

Where else do you routinely show your ID? Who else might want all that information for marketing purposes?

Posted on November 7, 2005 at 7:45 AM35 Comments

A 24/7 Wireless Tracking Network

It’s at MIT:

MIT’s newly upgraded wireless network—extended this month to cover the entire school—doesn’t merely get you online in study halls, stairwells or any other spot on the 9.4 million square foot campus. It also provides information on exactly how many people are logged on at any given location at any given time.

It even reveals a user’s identity if the individual has opted to make that data public.

MIT researchers did this by developing electronic maps that track across campus, day and night, the devices people use to connect to the network, whether they’re laptops, wireless PDAs or even Wi-Fi equipped cell phones.
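Once the network logs associations, building the map is a trivial aggregation; a toy sketch with an invented log schema:

    from collections import defaultdict

    # Invented schema: (access_point, device_mac, identity if opted in, else None)
    log = [
        ("bldg7-ap3", "00:0d:93:12:34:56", None),
        ("bldg7-ap3", "00:0d:93:ab:cd:ef", "rpell"),
        ("library-ap1", "00:11:24:77:88:99", None),
    ]

    occupancy = defaultdict(list)
    for ap, mac, identity in log:
        occupancy[ap].append(identity or "anonymous")

    for ap, users in sorted(occupancy.items()):
        print("%s: %d device(s): %s" % (ap, len(users), ", ".join(users)))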

WiFi is certainly a good technology for this sort of massive surveillance. It’s an open and well-standardized technology that allows anyone to go into the surveillance business. Bluetooth is a similar technology: open and easy to use. Cell phone technologies, on the other hand, are closed and proprietary. RFID might be the preferred surveillance technology of the future, depending on how open and standardized it becomes.

Whatever the technology, privacy is a serious concern:

While every device connected to the campus network via Wi-Fi is visible on the constantly refreshed electronic maps, the identity of the users is confidential unless they volunteer to make it public.

Those students, faculty and staff who opt in are essentially agreeing to let others track them.

“This raises some serious privacy issues,” Ratti said. “But where better than to work these concerns out but on a research campus?”

Rich Pell, a 21-year-old electrical engineering senior from Spartanburg, S.C., was less than enthusiastic about the new system’s potential for people monitoring. He predicted not many fellow students would opt into that.

“I wouldn’t want all my friends and professors tracking me all the time. I like my privacy,” he said. “I can’t think of anyone who would think that’s a good idea. Everyone wants to be out of contact now and then.”

Posted on November 4, 2005 at 12:44 PM25 Comments

Oracle's Password Hashing

Here’s a paper on Oracle’s password hashing algorithm. The algorithm isn’t very good.

In this paper we examine the mechanism used in Oracle databases for protecting users’ passwords. We review the algorithm used for generating password hashes, and show that the current mechanism presents a number of weaknesses, making it straightforward for an attacker with limited resources to recover a user’s plaintext password from the hashed value. We also describe how to implement a password recovery tool using off-the-shelf software. We conclude by discussing some possible attack vectors and recommendations to mitigate this risk.
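From memory, the scheme the paper analyzes works roughly as sketched below, using the pycryptodome library; treat the padding and character-expansion details as illustrative rather than authoritative. The comments mark the weaknesses that make cracking easy:

    from Crypto.Cipher import DES

    MAGIC_KEY = bytes.fromhex("0123456789ABCDEF")  # fixed, public key

    def oracle_hash(username, password):
        # Weakness 1: case is discarded, shrinking the password space.
        text = (username + password).upper().encode("utf-16-be")
        text += b"\x00" * (-len(text) % 8)  # zero-pad to a DES block
        # Two DES-CBC passes; the last block of pass one keys pass two.
        c1 = DES.new(MAGIC_KEY, DES.MODE_CBC, iv=b"\x00" * 8).encrypt(text)
        c2 = DES.new(c1[-8:], DES.MODE_CBC, iv=b"\x00" * 8).encrypt(text)
        # Weakness 2: the username is the only "salt," so an attacker can
        # precompute hashes of common passwords per account name.
        return c2[-8:].hex().upper()

    print(oracle_hash("SYSTEM", "MANAGER"))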

Posted on November 3, 2005 at 1:20 PM23 Comments

The Security of RFID Passports

My fifth column for Wired:

The State Department has done a great job addressing specific security and privacy concerns, but its lack of technical skills is hurting it. The collision-avoidance ID is just one example of where, apparently, the State Department didn’t have enough of the expertise it needed to do this right.

Of course it can fix the problem, but the real issue is how many other problems like this are lurking in the details of its design. We don’t know, and I doubt the State Department knows either. The only way to vet its design, and to convince us that RFID is necessary, would be to open it up to public scrutiny.

The State Department’s plan to issue RFID passports by October 2006 is both precipitous and risky. It made a mistake designing this behind closed doors. There needs to be some pretty serious quality assurance and testing before deploying this system, and this includes careful security evaluations by independent security experts. Right now the State Department has no intention of doing that; it’s already committed to a scheme before knowing if it even works or if it protects privacy.

My previous entries on RFID passports are here, here, and here.

Posted on November 3, 2005 at 8:30 AM73 Comments

Using Security Arguments to Further Agenda

I’ve often said that security discussions are rarely about security. Here’s a story that illustrates that.

A New Jersey mother doesn’t like her child’s school bus stopping at McDonald’s on Friday mornings. Apparently unable to come up with a cogent argument against these stops (which seems odd to me, honestly, as I can think of several), she invokes movie-plot security threats:

“I think they all like it,” Tyler [the mother] said. “They are anywhere from 9th to 12th graders. They don’t really think about the point that it could be a dangerous situation. They just think it’s breakfast.”

Tyler wants the stops to, well, stop before a student is hit by someone speeding into the drive-thru or before a robbery occurs and her son and other students are inside.

Posted on November 2, 2005 at 2:13 PM24 Comments

NIST Hash Workshop Liveblogging (5)

The afternoon started with three brand new hash functions: FORK-256, DHA-256, and VSH. VSH (Very Smooth Hash) was the interesting one; it’s based on factoring and the discrete logarithm problem, like public-key encryption, and not on bit-twiddling like symmetric encryption. I have no idea if it’s any good, but it’s cool to see something so different.

I think we need different. So many of our hash functions look pretty much the same: MD4, MD5, SHA-0, SHA-1, RIPE-MD, HAVAL, SHA-256, SHA-512. And everything is basically a block cipher in Davies-Meyer mode. I want some completely different designs. I want hash functions based on stream ciphers. I want more functions based on number theory.
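For readers who haven’t seen it, Davies-Meyer is a one-line construction: H_i = E(m_i, H_{i-1}) XOR H_{i-1}, where the message block keys the cipher and the chaining value is the plaintext. A toy illustration with AES-128 standing in for the dedicated internal cipher (real designs add padding and message-length encoding, omitted here):

    from Crypto.Cipher import AES

    def davies_meyer(state, msg_block):
        # The message block is the *key*; the chaining value is the plaintext.
        enc = AES.new(msg_block, AES.MODE_ECB).encrypt(state)
        return bytes(a ^ b for a, b in zip(enc, state))

    state = b"\x00" * 16                     # illustrative fixed IV
    for block in [b"A" * 16, b"B" * 16]:     # 16-byte blocks key AES-128
        state = davies_meyer(state, block)
    print(state.hex())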

The final session was an open discussion about what to do next. There was much debate about how soon we need a new hash function, how long we should rely on SHA-1 or SHA-256, etc.

Hashing is hard. At an ultra-high, hand-waving level, it takes a lot more clock cycles per message byte to hash than it does to encrypt. No one has any theory to account for this, but it seems like the lack of any secrets in a hash function makes it a harder problem. This may be an artifact of our lack of knowledge, but I think there’s a grain of fundamental truth buried here.

And hash functions are used everywhere. Hash functions are the workhorse of cryptography; they’re sprinkled all over security protocols. They’re used all the time, in all sorts of weird ways, for all sorts of weird purposes. We cryptographers think of them as good hygiene, kind of like condoms.

So we need a fast answer for immediate applications.

We also need “SHA2,” whatever that will look like. And a design competition is the best way to get a SHA2. (Niels Ferguson pointed out that the AES process was the best cryptographic invention of the past decade.)

Unfortunately, we’re in no position to have an AES-like competition to replace SHA right now. We simply don’t know enough about designing hash functions. What we need is research, random research all over the map. Designs beget analyses beget designs beget analyses…. Right now we need a bunch of mediocre hash function designs. We need a posse of hotshot graduate students breaking them and making names for themselves. We need new tricks and new tools. Hash functions are a hot area of research right now, but anything we can do to stoke that will pay off in the future.

NIST is thinking of hosting another hash workshop right after Crypto next year. That would be a good thing.

I need to get to work on a hash function based on Phelix.

Posted on November 1, 2005 at 3:43 PM27 Comments

NIST Hash Workshop Liveblogging (4)

This morning we heard a variety of talks about hash function design. All are esoteric and interesting, and too subtle to summarize here. Hopefully the papers will be online soon; keep checking the conference website.

Lots of interesting ideas, but no real discussion about trade-offs. But it’s the trade-offs that are important. It’s easy to design a good hash function, given no performance constraints. But we need to trade off performance with security. When confronted with a clever idea, like Ron Rivest’s dithering trick, we need to decide if this is a good use of time. The question is not whether we should use dithering. The question is whether dithering is the best thing we can do with (I’m making these numbers up) a 20% performance degradation. Is dithering better than adding 20% more rounds? This is the kind of analysis we did when designing Twofish, and it’s the correct analysis here as well.

Bart Preneel pointed out the obvious: if SHA-1 had double the number of rounds, this workshop wouldn’t be happening. If MD5 had double the number of rounds, that hash function would still be secure. Maybe we’ve just been too optimistic about how strong hash functions are.

The other thing we need to be doing is providing answers to developers. It’s not enough to express concern about SHA-256, or wonder how much better the attacks on SHA-1 will become. Developers need to know what hash function to use in their designs. They need an answer today. (SHA-256 is what I tell people.) They’ll need an answer in a year. They’ll need an answer in four years. Maybe the answers will be the same, and maybe they’ll be different. But if we don’t give them answers, they’ll make something up. They won’t wait for us.

And while it’s true that we don’t have any real theory of hash functions, and it’s true that anything we choose will be based partly on faith, we have no choice but to choose.

And finally, I think we need to stimulate research more. Whether it’s a competition or a series of conferences, we need new ideas for design and analysis. Designs beget analyses beget designs beget analyses…. We need a whole bunch of new hash functions to beat up; that’s how we’ll learn to design better ones.

Posted on November 1, 2005 at 11:19 AM14 Comments

Sony Secretly Installs Rootkit on Computers

Mark Russinovich discovered a rootkit on his system. After much analysis, he discovered that the rootkit was installed as a part of the DRM software linked with a CD he bought. The package cannot be uninstalled. Even worse, the package actively cloaks itself from process listings and the file system.

At that point I knew conclusively that the rootkit and its associated files were related to the First 4 Internet DRM software Sony ships on its CDs. Not happy having underhanded and sloppily written software on my system I looked for a way to uninstall it. However, I didn’t find any reference to it in the Control Panel’s Add or Remove Programs list, nor did I find any uninstall utility or directions on the CD or on First 4 Internet’s site. I checked the EULA and saw no mention of the fact that I was agreeing to have software put on my system that I couldn’t uninstall. Now I was mad.

Removing the rootkit kills Windows.

Could Sony have violated the Computer Misuse Act in the UK? If this isn’t clearly in the EULA, they have exceeded their privilege on the customer’s system by installing a rootkit to hide their software.

Certainly Mark has a reasonable lawsuit against Sony in the U.S.

EDITED TO ADD: The Washington Post is covering this story.

Sony lies about their rootkit:

November 2, 2005 – This Service Pack removes the cloaking technology component that has been recently discussed in a number of articles published regarding the XCP Technology used on SONY BMG content protected CDs. This component is not malicious and does not compromise security. However to alleviate any concerns that users may have about the program posing potential security vulnerabilities, this update has been released to enable users to remove this component from their computers.

Their update does not remove the rootkit; it just gets rid of the $sys$ cloaking.
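The cloaking trick itself is conceptually simple: hook the calls that enumerate files and processes, and drop any entry whose name starts with $sys$. A user-mode Python analogy (the real rootkit patches kernel system-call tables; everything here is illustrative):

    import os

    HIDE_PREFIX = "$sys$"
    _real_listdir = os.listdir

    def cloaked_listdir(path="."):
        # Filter the true listing before the caller ever sees it.
        return [e for e in _real_listdir(path) if not e.startswith(HIDE_PREFIX)]

    os.listdir = cloaked_listdir  # every caller of os.listdir is now lied to

This is also why the World of Warcraft trick below works: name your cheat files $sys$-something, and Sony’s driver hides them from everyone, including the game’s detection software.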

Ed Felten has a great post on the issue:

The update is more than 3.5 megabytes in size, and it appears to contain new versions of almost all the files included in the initial installation of the entire DRM system, as well as creating some new files. In short, they’re not just taking away the rootkit-like function—they’re almost certainly adding things to the system as well. And once again, they’re not disclosing what they’re doing.

No doubt they’ll ask us to just trust them. I wouldn’t. The companies still assert—falsely—that the original rootkit-like software “does not compromise security” and “[t]here should be no concern” about it. So I wouldn’t put much faith in any claim that the new update is harmless. And the companies claim to have developed “new ways of cloaking files on a hard drive”. So I wouldn’t derive much comfort from carefully worded assertions that they have removed “the … component … that has been discussed”.

And you can use the rootkit to avoid World of Warcraft spyware.

World of Warcraft hackers have confirmed that the hiding capabilities of Sony BMG’s content protection software can make tools made for cheating in the online world impossible to detect.


EDITED TO ADD: F-Secure makes a good point:

A member of our IT security team pointed out quite chilling thought about what might happen if record companies continue adding rootkit based copy protection into their CDs.

In order to hide from the system a rootkit must interface with the OS on very low level and in those areas theres no room for error.

It is hard enough to program something on that level, without having to worry about any other programs trying to do something with same parts of the OS.

Thus if there would be two DRM rootkits on the same system trying to hook same APIs, the results would be highly unpredictable. Or actually, a system crash is quite predictable result in such situation.

EDITED TO ADD: Declan McCullagh has a good essay on the topic. There will be lawsuits.

EDITED TO ADD: The Italian police are getting involved.

EDITED TO ADD: Here’s a Trojan that uses Sony’s rootkit to hide.

EDITED TO ADD: Sony temporarily halts production of CDs protected with this technology.

Posted on November 1, 2005 at 10:17 AM82 Comments

Secret NSA Patents

From New Scientist:

The hyper-secretive US National Security Agency—the government’s eavesdropping arm—appears to be having its patent applications increasingly blocked by the Pentagon. And the grounds for this are for reasons of national security, reveals information obtained under a freedom of information request.

Most Western governments can prevent the granting (and therefore publishing) of patents on inventions deemed to contain sensitive information of use to an enemy or terrorists. They do so by issuing a secrecy order barring publication and even discussion of certain inventions.

Experts at the US Patent and Trademark Office perform an initial security screening of all patent applications and then army, air force and navy staff at the Pentagon’s Defense Technology Security Administration (DTSA) makes the final decision on what is classified and what is not.

Now figures obtained from the USPTO under a freedom of information request by the Federation of American Scientists show that the NSA had nine of its patent applications blocked in the financial year to March 2005 against five in 2004, and none in each of the three years up to 2003.

EDITED TO ADD: This story is wrong.

Posted on November 1, 2005 at 7:46 AM16 Comments
