Blog: August 2005 Archives

276 British Spies

The website Cryptome has a list of 276 MI6 agents:

This combines three lists of MI6 officers published here on 13 May 1999 (116 names), 21 August 2005 (74 names), and 27 August 2005 (121 names).

While none of the 311 names appeared on all three lists…35 names appeared on two lists, leaving 276 unique names.
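
The arithmetic is simple inclusion-exclusion: each of the 35 names that appeared on exactly two lists was counted twice among the 311 raw entries, so each gets subtracted once. A trivial check in Python:

    raw_entries = 116 + 74 + 121   # the three published lists
    duplicates = 35                # names appearing on exactly two lists
    print(raw_entries, raw_entries - duplicates)   # 311 276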

According to Silicon.com:

It is not the first time this kind of information has been published on the internet and Foreign Office policy is to neither confirm nor deny the accuracy of such lists. But a spokesman slammed its publication for potentially putting lives in danger.

On the other hand:

The website is run by John Young, who “welcomes” secret documents for publication and recently said there was a “need to name as many intelligence officers and agents as possible”.

He said: “It is disinformation that naming them places their life in jeopardy. Not identifying them places far more lives in jeopardy from their vile secret operations and plots.”

Discuss.

Posted on August 31, 2005 at 2:28 PM • 64 Comments

Trusted Computing Best Practices

The Trusted Computing Group (TCG) is an industry consortium that is trying to build more secure computers. They have a lot of members, although the board of directors consists of Microsoft, Sony, AMD, Intel, IBM, Sun, HP, and two smaller companies that are voted in on a rotating basis.

The basic idea is that you build a computer from the ground up securely, with a core hardware “root of trust” called a Trusted Platform Module (TPM). Applications can run securely on the computer, can communicate with other applications and their owners securely, and can be sure that no untrusted applications have access to their data or code.
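
To make the “root of trust” idea concrete, here is a minimal sketch, in Python and purely illustrative, of the measurement chain a TPM supports: each boot stage is hashed into a Platform Configuration Register (PCR) before it gets control, so the final register value attests to everything that ran. The boot-stage names are made up; the extend operation mirrors the TPM 1.2 design.

    import hashlib

    def extend(pcr: bytes, measurement: bytes) -> bytes:
        # TPM-style extend: the register can only be folded forward,
        # never set directly, so its value commits to the whole chain.
        return hashlib.sha1(pcr + measurement).digest()

    pcr = b"\x00" * 20  # PCRs reset to zero at power-on
    for stage in [b"firmware image", b"boot loader", b"os kernel"]:
        pcr = extend(pcr, hashlib.sha1(stage).digest())

    # A verifier comparing this value against a known-good one detects
    # a change to any stage in the chain.
    print(pcr.hex())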

This sounds great, but it’s a double-edged sword. The same system that prevents worms and viruses from running on your computer might also stop you from using any legitimate software that your hardware or operating system vendor simply doesn’t like. The same system that prevents spyware from accessing your data files might also stop you from copying audio and video files. The same system that ensures that all the patches you download are legitimate might also prevent you from, well, doing pretty much anything.

(Ross Anderson has an excellent FAQ on the topic. I wrote about it back when Microsoft called it Palladium.)

In May, the Trusted Computing Group published a best practices document: “Design, Implementation, and Usage Principles for TPM-Based Platforms.” Written for users and implementers of TCG technology, the document tries to draw a line between good uses and bad uses of this technology.

The principles that TCG believes underlie the effective, useful, and acceptable design, implementation, and use of TCG technologies are the following:

  • Security: TCG-enabled components should achieve controlled access to designated critical secured data and should reliably measure and report the system’s security properties. The reporting mechanism should be fully under the owner’s control.
  • Privacy: TCG-enabled components should be designed and implemented with privacy in mind and adhere to the letter and spirit of all relevant guidelines, laws, and regulations. This includes, but is not limited to, the OECD Guidelines, the Fair Information Practices, and the European Union Data Protection Directive (95/46/EC).
  • Interoperability: Implementations and deployments of TCG specifications should facilitate interoperability. Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.
  • Portability of data: Deployment should support established principles and practices of data ownership.
  • Controllability: Each owner should have effective choice and control over the use and operation of the TCG-enabled capabilities that belong to them; their participation must be opt-in. Subsequently, any user should be able to reliably disable the TCG functionality in a way that does not violate the owner’s policy.
  • Ease-of-use: The nontechnical user should find the TCG-enabled capabilities comprehensible and usable.

It’s basically a good document, although there are some valid criticisms. I like that the document clearly states that coercive use of the technology (forcing people to use digital rights management systems, for example) is inappropriate:

The use of coercion to effectively force the use of the TPM capabilities is not an appropriate use of the TCG technology.

I like that the document tries to protect user privacy:

All implementations of TCG-enabled components should ensure that the TCG technology is not inappropriately used for data aggregation of personal information.

I wish that interoperability were more strongly enforced. The language has too much wiggle room for companies to break interoperability under the guise of security:

Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.

That sounds good, but what does “security” mean in that context? Security of the user against malicious code? Security of big media against people copying music and videos? Security of software vendors against competition? The big problem with TCG technology is that it can be used to further all three of these “security” goals, and this document is where “security” should be better defined.

Complaints aside, it’s a good document and we should all hope that companies follow it. Compliance is totally voluntary, but it’s the kind of document that governments and large corporations can point to and demand that vendors follow.

But there’s something fishy going on. Microsoft is doing its best to stall the document, and to ensure that it doesn’t apply to Vista (formerly known as Longhorn), Microsoft’s next-generation operating system.

The document was first written in the fall of 2003, and went through the standard review process in early 2004. Microsoft delayed the adoption and publication of the document, demanding more review. Eventually the document was published in June of this year (with a May date on the cover).

Meanwhile, the TCG built a purely software version of the specification: Trusted Network Connect (TNC). Basically, it’s a TCG system without a TPM.

The best practices document doesn’t apply to TNC, because Microsoft (as a member of the TCG board of directors) blocked it. The excuse is that the document hadn’t been written with software-only applications in mind, so it shouldn’t apply to software-only TCG systems.

This is absurd. The document outlines best practices for how the system is used. There’s nothing in it about how the system works internally. There’s nothing unique to hardware-based systems, nothing that would be different for software-only systems. You can go through the document yourself and replace all references to “TPM” or “hardware” with “software” (or, better yet, “hardware or software”) in five minutes. There are about a dozen changes, and none of them make any meaningful difference.

The only reason I can think of for all this Machiavellian maneuvering is that the TCG board of directors is making sure that the document doesn’t apply to Vista. If the document isn’t published until after Vista is released, then obviously it doesn’t apply.

Near as I can tell, no one is following this story. No one is asking why TCG best practices apply to hardware-based systems if they’re writing software-only specifications. No one is asking why the document doesn’t apply to all TCG systems, since it’s obviously written without any particular technology in mind. And no one is asking why the TCG is delaying the adoption of any software best practices.

I believe the reason is Microsoft and Vista, but clearly there’s some investigative reporting to be done.

(A version of this essay previously appeared on CNet’s News.com and ZDNet.)

EDITED TO ADD: This comment completely misses my point. Which is odd; I thought I was pretty clear.

EDITED TO ADD: There is a thread on Slashdot on the topic.

EDITED TO ADD: The Sydney Morning Herald republished this essay. Also “The Age.”

Posted on August 31, 2005 at 8:27 AM • 58 Comments

Unintended Information Revelation

Here’s a new Internet data-mining research program with a cool name: Unintended Information Revelation:

Existing search engines process individual documents based on the number of times a key word appears in a single document, but UIR constructs a concept chain graph used to search for the best path connecting two ideas within a multitude of documents.

To develop the method, researchers used the chapters of the 9/11 Commission Report to establish concept ontologies – lists of terms of interest in the specific domains relevant to the researchers: aviation, security and anti-terrorism issues.

“A concept chain graph will show you what’s common between two seemingly unconnected things,” said Srihari. “With regular searches, the input is a set of key words, the search produces a ranked list of documents, any one of which could satisfy the query.

“UIR, on the other hand, is a composite query, not a keyword query. It is designed to find the best path, the best chain of associations between two or more ideas. It returns to you an evidence trail that says, ‘This is how these pieces are connected.'”

The hope is to develop the core algorithms exposing veiled paths through documents generated by different individuals or organisations.
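
The article gives few algorithmic details, but the core idea, finding a short evidence path that links two concepts through a pile of documents, can be sketched with an ordinary graph search. Everything below (the toy corpus and the co-occurrence linking rule) is my own illustrative assumption, not UIR’s actual method:

    from collections import defaultdict, deque
    from itertools import combinations

    # Toy corpus; in UIR the concepts would come from domain ontologies.
    docs = [
        {"flight school", "visa"},
        {"visa", "wire transfer"},
        {"wire transfer", "charity front"},
    ]

    # Link two concepts if they co-occur in any document.
    graph = defaultdict(set)
    for doc in docs:
        for a, b in combinations(doc, 2):
            graph[a].add(b)
            graph[b].add(a)

    def concept_chain(start, goal):
        """Breadth-first search: the shortest chain of associations."""
        queue, seen = deque([[start]]), {start}
        while queue:
            path = queue.popleft()
            if path[-1] == goal:
                return path
            for nxt in graph[path[-1]]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(concept_chain("flight school", "charity front"))
    # ['flight school', 'visa', 'wire transfer', 'charity front']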

I’m a big fan of research, and I’m glad to see it being done. But I hope there is a lot of discussion and debate before we deploy something like this. I want to be convinced that the false positives don’t make it useless as an intelligence-gathering tool.

Posted on August 30, 2005 at 12:53 PM • 15 Comments

Tamper-Evident Paper Mailings

We’ve all received them in the mail: envelopes from banks with PINs, access codes, or other secret information. The letters are somewhat tamper-proof, but mostly they’re designed to be tamper-evident: if someone opens the letter and reads the information, you’re going to know. The security devices include fully sealed packaging, and black inks that obscure the secret information if you hold the envelope up to the light.

Researchers from Cambridge University have been looking at the security inherent in these systems, and they’ve written a paper that outlines how to break them:

Abstract. Tamper-evident laser-printed PIN mailers are used by many institutions to issue PINs and other secrets to individuals in a secure manner. Such mailers are created by printing the PIN using a normal laser printer, but on to special stationery and using a special font. The background of the stationery disguises the PIN so that it cannot be read with the naked eye without tampering. We show that currently deployed PIN mailer technology (used by the major UK banks) is vulnerable to trivial attacks that reveal the PIN without tampering. We describe image processing attacks, where a colour difference between the toner and the stationery “masking pattern” is exploited. We also describe angled light attacks, where the reflective properties of the toner and stationery are exploited to allow the naked eye to separate the PIN from the backing pattern. All laser-printed mailers examined so far have been shown insecure.
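
As a rough reconstruction of the image-processing attack the abstract describes (my own sketch, not the authors’ code): if the masking pattern is printed in colored ink and the PIN in black toner, then a per-pixel test that keeps only pixels dark in every color channel strips the mask away. The file names and threshold are hypothetical.

    from PIL import Image

    # Hypothetical scan of a PIN mailer: black toner digits under a
    # colored masking pattern.
    scan = Image.open("mailer_scan.png").convert("RGB")
    width, height = scan.size
    pixels = scan.load()

    THRESH = 80  # toner is dark in *all* channels; colored ink is not

    out = Image.new("1", scan.size, 1)  # start with a white page
    for y in range(height):
        for x in range(width):
            r, g, b = pixels[x, y]
            if r < THRESH and g < THRESH and b < THRESH:
                out.putpixel((x, y), 0)  # keep only the toner pixels

    out.save("pin_recovered.png")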

According to a researcher website:

It should be noted that we sat on this report for about 9 months, and the various manufacturers all have new products which address to varying degrees the issues raised in the report.

BBC covered the story.

Posted on August 30, 2005 at 7:59 AM • 20 Comments

Identity Thief Steals House

From Plastic:

James Cook left on a business trip to Florida, and his wife Paula went to Oklahoma to care for her sick mother. When the two returned to Frisco, Texas, several days later, their keys didn’t work. The locks on the house had been changed.

They spent their first night back sleeping in a walk-in closet, with a steel pipe ready to cold-cock any intruders. The next day, they met the man who thought he owned their house, because he had made a US$12,000 down payment to someone named Carlos Ramirez. The Cooks went to the Denton County Courthouse and checked their title. Someone had forged Paula Cook’s maiden name, Paula Smart, and transferred the deed to Carlos Ramirez. Paula’s identity was not only stolen, but the thief also stole her house. Even the police said they’ve never seen a case like this one, but suspect the criminal was able to steal the identity and the house with just Mrs. Cook’s Social Security number, driver’s license number and a copy of her signature.

This is a perfect example of the sort of fraud issue that a national ID card won’t solve. The problem is not that identity credentials are too easy to forge. The problem is that the criminal needed nothing more than “Mrs. Cook’s Social Security number, driver’s license number and a copy of her signature.” And the solution isn’t a harder-to-forge card; the solution is to make the procedure for transferring real-estate ownership more onerous. If the Denton County Courthouse had better transaction authentication procedures, the particulars of identity authentication—a national ID, a state driver’s license, biometrics, or whatever—wouldn’t matter.

If we are ever going to solve identity theft, we need to think about it properly. The problem isn’t misused identity information; the problem is fraudulent transactions.

Posted on August 29, 2005 at 7:42 AM • 57 Comments

Privacy Risks of Used Cell Phones

Ignore the corporate sleaziness by Cingular for the moment—they sold used cell phones meant for charity—and focus on the privacy implications. Cingular didn’t erase any of the personal information on the used phones they sold.

This reminds me of Simson Garfinkel’s analysis of used hard drives. He found that 90% of them contained old data, some of it very private and interesting.

Erasing data is one of the big problems of the information age. We know how to do it, but it takes time and we mostly don’t bother. And sadly, these kinds of privacy violations are more the norm than the exception. I don’t think it will get better unless Cingular becomes liable for violating its customers’ privacy like that.
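
For ordinary magnetic disks, “how to do it” is just overwriting the data before disposal. A minimal sketch, with the caveat that flash memory and wear-leveling file systems can keep old copies around no matter what you write (the file name is hypothetical):

    import os

    def overwrite_file(path: str, passes: int = 3) -> None:
        """Overwrite a file in place with random data, then delete it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                f.write(os.urandom(size))
                f.flush()
                os.fsync(f.fileno())
        os.remove(path)

    overwrite_file("old_phone_backup.dat")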

EDITED TO ADD: I already wrote about the risks of losing small portable devices.

Posted on August 26, 2005 at 2:58 PM • 45 Comments

Peggy Noonan and Movie-Plot Terrorist Threats

Peggy Noonan is opposed to the current round of U.S. base closings because, well, basically because she thinks they’ll be useful if the government ever has to declare martial law.

I don’t know anything about military bases, and what should be closed or remain open. What’s interesting to me is that her essay is a perfect example of thinking based on movie-plot threats:

Among the things we may face over the next decade, as we all know, is another terrorist attack on American soil. But let’s imagine the next one has many targets, is brilliantly planned and coordinated. Imagine that there are already 100 serious terror cells in the U.S., two per state. The members of each cell have been coming over, many but not all crossing our borders, for five years. They’re working jobs, living lives, quietly planning.

Imagine they’re planning that on the same day in the not-so-distant future, they will set off nuclear suitcase bombs in six American cities, including Washington, which will take the heaviest hit. Hundreds of thousands may die; millions will be endangered. Lines will go down, and to make it worse the terrorists will at the same time execute the cyberattack of all cyberattacks, causing massive communications failure and confusion. There will be no electricity; switching and generating stations will also have been targeted. There will be no word from Washington; the extent of the national damage will be as unknown as the extent of local damage is clear. Daily living will become very difficult, and for months—food shortages, fuel shortages.

Let’s make it worse. On top of all that, on the day of the suitcase nukings, a half dozen designated cells will rise up and assassinate national, state and local leaders. There will be chaos, disorder, widespread want; law-enforcement personnel, or what remains of them, will be overwhelmed and outmatched.

Impossibly grim? No, just grim. Novelistic? Sure. But if you’d been a novelist on Sept. 10, 2001, and dreamed up a plot in which two huge skyscrapers were leveled, the Pentagon was hit, and the wife of the solicitor general of the United States was desperately phoning him from a commercial jet that had been turned into a missile, you would have been writing something wild and improbable that nonetheless happened a day later.

And all this of course is just one scenario. The madman who runs North Korea could launch a missile attack on the United States tomorrow, etc. There are limitless possibilities for terrible trouble.

This game of “let’s imagine” really does stir up emotions, but it’s not the way to plan national security policy. There’s a movie plot to justify any possible national policy, and another to render that same policy ineffectual.

This of course is pure guessing on my part. I can’t prove it with data.

That’s precisely the problem.

Posted on August 26, 2005 at 11:37 AM • 59 Comments

U.S. Government Computers Attacked from China

From the Washington Post:

Web sites in China are being used heavily to target computer networks in the Defense Department and other U.S. agencies, successfully breaching hundreds of unclassified networks, according to several U.S. officials.

Classified systems have not been compromised, the officials added. But U.S. authorities remain concerned because, as one official said, even seemingly innocuous information, when pulled together from various sources, can yield useful intelligence to an adversary….

“The scope of this thing is surprisingly big,” said one of four government officials who spoke separately about the incidents, which stretch back as far as two or three years and have been code-named Titan Rain by U.S. investigators. All officials insisted on anonymity, given the sensitivity of the matter.

Whether the attacks constitute a coordinated Chinese government campaign to penetrate U.S. networks and spy on government databanks has divided U.S. analysts. Some in the Pentagon are said to be convinced of official Chinese involvement; others see the electronic probing as the work of other hackers simply using Chinese networks to disguise the origins of the attacks.

Posted on August 26, 2005 at 7:59 AM • 28 Comments

Actors Playing New York City Policemen

Did you know you could be arrested for carrying a police uniform in New York City?

With security tighter in the Big Apple since Sept. 11, 2001, the union that represents TV and film actors has begun advising its New York-area members to stop buying police costumes or carrying them to gigs, even if their performances require them.

The Screen Actors Guild said in a statement posted on its Web site on Friday that “an apparent shift in city policy” may put actors at risk of arrest if they are stopped while carrying anything that looks too much like a real police uniform.

The odds that an actor might be stopped and questioned on his or her way to work went up this month when police began conducting random searches of passengers’ bags in New York’s subway system. The guild said two of its members had been detained by security personnel at an airport and a courthouse in recent months for possessing police costumes.

This seems like overkill to me. I understand that a police uniform is an authentication device—not a very good one, but one nonetheless—and we want to make it harder for the bad guys to get one. But there’s no reason to prohibit screen or stage actors from having police uniforms if it’s part of their job. This seems similar to the laws surrounding lockpicks: you can be arrested for carrying them without a good reason, but locksmiths are allowed to own the tools of their trade.

Here’s another bit from the article:

Under police department rules, real officers must be on hand any time an actor dons a police costume during a TV or film production.

I guess that’s to prevent the actor from actually impersonating a policeman. But how often does that actually happen? Is this a good use of police manpower?

Does anyone know how other cities and countries handle this?

Posted on August 25, 2005 at 12:52 PM • 61 Comments

A Socio-Technical Approach to Internet Security

Interesting research grant from the NSF:

Technical security measures are often breached through social means, but little research has tackled the problem of system security in the context of the entire socio-technical system, with the interactions between the social and technical parts integrated into one model. Similar problems exist in the field of system safety, but recently a new accident model has been devised that uses a systems-theoretic approach to understand accident causation. Systems theory allows complex relationships between events and the system as a whole to be taken into account, so this new model permits an accident to be considered not simply as arising from a chain of individual component failures, but from the interactions among system components, including those that have not failed.

This exploratory research will examine how this new approach to safety can be applied to Internet security, using worms as a first example. The long-term goal is to create a general model of trustworthiness that can incorporate both safety and security, along with system modeling tools and analysis methods that can be used to create more trustworthy socio-technical systems. This research provides a unique opportunity to link two research disciplines, safety and security, that have many commonalities but, up to now, relatively little communication or interaction.

Posted on August 25, 2005 at 7:38 AM • 7 Comments

Cameras in the New York City Subways

New York City is spending $212 million on surveillance technology: 1,000 video cameras and 3,000 motion sensors for the city’s subways, bridges, and tunnels.

Why? Why, given that cameras didn’t stop the London train bombings? Why, when there is no evidence that cameras are effective at reducing either terrorism or crime, and every reason to believe that they are ineffective?

One reason is that it’s the “movie plot threat” of the moment. (You can hear the echoes of the movie plots when you read the various quotes in the news stories.) The terrorists bombed a subway in London, so we need to defend our subways. The other reason is that New York City officials are erring on the side of caution. If nothing happens, then it was only money. But if something does happen, they won’t keep their jobs unless they can show they did everything possible. And technological solutions just make everyone feel better.

If I had $212 million to spend to defend against terrorism in the U.S., I would not spend it on cameras in the New York City subways. If I had $212 million to defend New York City against terrorism, I would not spend it on cameras in the subways. This is nothing more than security theater against a movie plot threat.

On the plus side, the money will also go for a new radio communications system for subway police, and will enable cell phone service in underground stations, but not tunnels.

Posted on August 24, 2005 at 1:10 PM • 75 Comments

Ambient Radiation Sensors

Here’s a piece of interesting research out of Ohio State: it’s a passive sensor that could be cheaper, better, and less intrusive than technologies like backscatter X-rays:

“Unlike X-ray machines or radar instruments, the sensor doesn’t have to generate a signal to detect objects – it spots them based on how brightly they reflect the natural radiation that is all around us every day.”

“It’s basically just a really bad tunnel diode,” he explained. “I thought, heck, we can make a bad diode! We made lots of them back when we were figuring out how to make good ones.”

First millimeter-wave detection systems, and now this. There’s some interesting research in remote sensing going on, and there are sure to be some cool security applications.

Posted on August 24, 2005 at 8:17 AM • 12 Comments

Bluetooth Spam

Advertisers are beaming unwanted content to Bluetooth phones at a distance of 100 meters.

Sure, it’s annoying, but worse, there are serious security risks. Don’t believe this:

Furthermore, there is no risk of downloading viruses or other malware to the phone, says O’Regan: “We don’t send applications or executable code.” The system uses the phone’s native download interface so they should be able to see the kind of file they are downloading before accepting it, he adds.

This company might not send executable code, but someone else certainly could. And what percentage of people who use Bluetooth phones can recognize “the kind of file they are downloading”?

We’ve already seen two ways to steal data from Bluetooth devices. And we know that more and more sensitive data is being stored on these small devices, increasing the risk. This is almost certainly another avenue for attack.

Posted on August 23, 2005 at 12:24 PM • 39 Comments

The Kutztown 13

Thirteen Pennsylvania high-school kids—the Kutztown 13—are being charged with felonies:

They’re being called the Kutztown 13—a group of high schoolers charged with felonies for bypassing security with school-issued laptops, downloading forbidden internet goodies and using monitoring software to spy on district administrators.

The students, their families and outraged supporters say authorities are overreacting, punishing the kids not for any heinous behavior—no malicious acts are alleged—but rather because they outsmarted the district’s technology workers….

The trouble began last fall after the district issued some 600 Apple iBook laptops to every student at the high school about 50 miles northwest of Philadelphia. The computers were loaded with a filtering program that limited Internet access. They also had software that let administrators see what students were viewing on their screens.

But those barriers proved easily surmountable: The administrative password that allowed students to reconfigure computers and obtain unrestricted Internet access was easy to obtain. A shortened version of the school’s street address, the password was taped to the backs of the computers.

The password got passed around and students began downloading such forbidden programs as the popular iChat instant-messaging tool.

At least one student viewed pornography. Some students also turned off the remote monitoring function and turned the tables on their elders, using it to view administrators’ own computer screens.

There’s more to the story, though. Here’s some good commentary on the issue:

What the parents don’t mention—but the school did in a press release—is that it wasn’t as if the school came down with the Hammer of God out of nowhere.

These kids were caught and punished for doing this stuff, and their parents informed.

Over and over.

Quoth the release:

“Unfortunately, after repeated warnings and disciplinary actions, a few students continued to misuse the school-issued laptops to varying degrees. The disciplinary actions included detentions, in-school suspensions, loss of Internet access, and loss of computer privileges. After each disciplinary action, parents received either written notification or telephone calls.”

What was the parents’ reaction to those disciplinary actions? Some of them complained that—despite signing a document agreeing to the acceptable use policy—the kids should be able to do whatever they wanted to with the free machines.

“We signed it, but we didn’t mean it”?

Yes, the kids should be punished. No, a felony conviction is not the way to punish them.

The problem is that the punishment doesn’t fit the crime. Breaking the rules is what kids do. Society needs to deal with that, yes, but it needs to deal with that in a way that doesn’t ruin lives. Deterrence is critical if we are to ever have a lawful society on the internet, but deterrence has to come from rational prosecution. This simply isn’t rational.

EDITED TO ADD (2 Sep): It seems that charges have been dropped.

Posted on August 22, 2005 at 6:56 AM • 83 Comments

Airline Security, Trade-offs, and Agenda

All security decisions are trade-offs, and smart security trade-offs are ones where the security you get is worth what you have to give up. This sounds simple, but it isn’t. There are differences between perceived risk and actual risk, differences between perceived security and actual security, and differences between perceived cost and actual cost. And beyond that, there are legitimate differences in trade-off analysis. Any complicated security decision affects multiple players, and each player evaluates the trade-off from his or her own perspective.

I call this “agenda,” and it is one of the central themes of Beyond Fear. It is clearly illustrated in the current debate about rescinding the prohibition against small pointy things on airplanes. The flight attendants are against the change. Reading their comments, you can clearly see their subjective agenda:

“As the front-line personnel with little or no effective security training or means of self defense, such weapons could prove fatal to our members,” Patricia A. Friend, international president of the Association of Flight Attendants, said in a letter to Edmund S. “Kip” Hawley, the new leader of the Transportation Security Administration. “They may not assist in breaking through a flightdeck door, but they could definitely lead to the deaths of flight attendants and passengers”….

The flight attendants, whose union represents 46,000 members, said that easing the ban on some prohibited items could pose a safety risk on board the aircraft and lead to incidents that terrorize passengers even if they do not involve a hijacking.

“Even a plane that is attacked and results in only a few deaths would seriously jeopardize the progress we have all made in restoring confidence of the flying public,” Friend said in her letter. “We urge you to reconsider allowing such dangerous items—which have no place in the cabin of an aircraft in the first place—to be introduced into our workplace.”

The flight attendants are not evaluating the security countermeasure from a global perspective. They’re not trying to figure out what the optimal level of risk is, what sort of trade-offs are acceptable, and what security countermeasures most efficiently achieve that trade-off. They’re looking at the trade-off from their perspective: they get more benefit from the countermeasure than the average flier because it’s their workplace, and the cost of the countermeasure is borne largely by the passengers.

There is nothing wrong with flight attendants evaluating airline security from their own agenda. I’d be surprised if they didn’t. But understanding agenda is essential to understanding how security decisions are made.

Posted on August 19, 2005 at 12:48 PM • 61 Comments

Infants on the Terrorist Watch List

Imagine you’re in charge of airport security. You have a watch list of terrorist names, and you’re supposed to give anyone on that list extra scrutiny. One day someone shows up for a flight whose name is on that list. They’re an infant.

What do you do?

If you have even the slightest bit of sense, you realize that an infant can’t be a terrorist. So you let the infant through, knowing that it’s a false alarm. But if you have no flexibility in your job, if you have to follow the rules regardless of how stupid they are, if you have no authority to make your own decisions, then you detain the baby.

EDITED TO ADD: I know what the article says about the TSA rules:

The Transportation Security Administration, which administers the lists, instructs airlines not to deny boarding to children under 12—or select them for extra security checks—even if their names match those on a list.

Whether the rules are being followed or ignored is beside my point. The screener is detaining babies because he thinks that’s what the rules require. He’s not permitted to exercise his own common sense.

Security works best when well-trained people have the authority to make decisions, not when poorly-trained people are slaves to the rules (whether real or imaginary). Rules provide CYA security, but not security against terrorism.

Posted on August 19, 2005 at 8:03 AM • 37 Comments

Zotob and Variants

I’ve been reading the massive press coverage about Zotob (technical details are here, here, and here), and can’t figure out what the big deal is. Yes, it propagates in Windows 2000 without user intervention, which is always nastier. It uses a Microsoft plug-and-play vulnerability, which is somewhat interesting. But the only reason I can think of that CNN did rolling coverage on it is that CNN was hit by it.

Posted on August 18, 2005 at 7:57 AM • 47 Comments

New Cryptanalytic Results Against SHA-1

Xiaoyun Wang, one of the team of Chinese cryptographers that successfully broke SHA-0 and SHA-1, along with Andrew Yao and Frances Yao, announced new results against SHA-1 yesterday at Crypto’s rump session. (Actually, Adi Shamir announced the results in their name, since she and her student did not receive U.S. visas in time to attend the conference.)

Shamir presented few details—and there’s no paper—but the time complexity of the new attack is 2^63. (Their previous result was 2^69; brute force is 2^80.) He did say that he expected Wang and her students to improve the result over the next few months: the modifications to their published attack are still new, and there is no reason to believe that 2^63 is anything like a lower limit.

But an attack that’s faster than 2^64 is a significant milestone. We’ve already done massive computations with complexity 2^64. Now that the SHA-1 collision search is squarely in the realm of feasibility, some research group will try to implement it. Writing working software will both uncover hidden problems with the attack and illuminate hidden improvements. And while a paper describing an attack against SHA-1 is damaging, software that produces actual collisions is even more so.
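
One reason working code matters more than a paper: verifying a claimed collision is trivial, so anyone can confirm the break independently. A sketch of the check, plus the complexity arithmetic (no public colliding pair exists yet, of course):

    import hashlib

    def is_sha1_collision(m1: bytes, m2: bytes) -> bool:
        """True if two different messages hash to the same SHA-1 value."""
        return m1 != m2 and hashlib.sha1(m1).digest() == hashlib.sha1(m2).digest()

    # The new attack cuts the work from 2**69 to 2**63 hash operations,
    # a factor of 64; a generic birthday search would take 2**80.
    print(2**69 // 2**63, 2**80 // 2**63)   # 64 131072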

The story of SHA-1 is not over. Again, I repeat the saying I’ve heard comes from inside the NSA: “Attacks always get better; they never get worse.”

Meanwhile, NIST is holding a workshop in late October to discuss what the security community should do now. The NIST Hash Function Workshop should be interesting, indeed. (Here is one paper that examines the effect of these attacks on S/MIME, TLS, and IPsec.)

EDITED TO ADD: Here are Xiaoyun Wang’s two papers from Crypto this week: “Efficient Collision Search Attacks on SHA-0” and “Finding Collisions in the Full SHA-1.” And here are the rest of her papers.

Posted on August 17, 2005 at 2:06 PM • 66 Comments

Chinese Cryptographers Denied U.S. Visas

Chinese cryptographer Xiaoyun Wang, the woman who broke SHA-1 last year, was unable to attend the Crypto conference to present her paper on Monday. The U.S. government didn’t give her a visa in time:

On Monday, she was scheduled to explain her discovery in a keynote address to an international group of researchers meeting in California.

But a stand-in had to take her place, because she was not able to enter the country. Indeed, only one of nine Chinese researchers who sought to enter the country for the conference received a visa in time to attend.

Sadly, this is now common:

Although none of the scientists were officially denied visas by the United States Consulate, officials at the State Department and National Academy of Sciences said this week that the situation was not uncommon.

Lengthy delays in issuing visas are now routine, they said, particularly for those involved in sensitive scientific and technical fields.

These delays can make it impossible for some foreign researchers to attend U.S. conferences. There are researchers who need to have their paper accepted before they can apply for a visa. But the paper review and selection process, done by the program committee in the months before the conference, doesn’t finish early enough. Conferences can move the submission and selection deadlines earlier, but that just makes the conference less current.

In Wang’s case, she applied for her visa in early July. So did her student. Dingyi Pei, another Chinese researcher who is organizing Asiacrypt this year, applied for his in early June. (I don’t know about the others.) Wang has not received her visa, and Pei got his just yesterday.

This kind of thing hurts cryptography, and hurts national security. The visa restrictions were designed to protect American advanced technologies from foreigners, but in this case they’re having the opposite effect. We are all more secure because there is a vibrant cryptography research community in the U.S. and the world. By prohibiting Chinese cryptographers from attending U.S. conferences, we’re only hurting ourselves.

NIST is sponsoring a workshop on hash functions (sadly, it’s being referred to as a “hash bash”) in October. I hope Wang gets a visa for that.

Posted on August 17, 2005 at 11:53 AM • 46 Comments

Cryptographically-Secured Murder Confession

From the Associated Press:

Joseph Duncan III is a computer expert who bragged online, days before authorities believe he killed three people in Idaho, about a tell-all journal that would not be accessed for decades, authorities say.

Duncan, 42, a convicted sex offender, figured technology would catch up in 30 years, “and then the world will know who I really was, and what I really did, and what I really thought,” he wrote May 13.

Police seized Duncan’s computer equipment from his Fargo apartment last August, when they were looking for evidence in a Detroit Lakes, Minn., child molestation case.

At least one compact disc and a part of his hard drive were encrypted well enough that one of the region’s top computer forensic specialists could not access it, The Forum reported Monday.

This is the kind of story that the government likes to use to illustrate the dangers of encryption. How can we allow people to use strong encryption, they ask, if it means not being able to convict monsters like Duncan?

But how is this different from Duncan speaking the confession when no one was able to hear? Or writing it down and hiding it where no one could ever find it? Or not saying anything at all? If the police can’t convict him without this confession—which we have only his word even exists—then maybe he’s innocent?

Technologies have good and bad uses. Encryption, telephones, cars: they’re all used by both honest citizens and by criminals. For almost all technologies, the good far outweighs the bad. Banning a technology because the bad guys use it, denying everyone else the beneficial uses of that technology, is almost always a bad security trade-off.

EDITED TO ADD: Looking at the details of the encryption, it’s certainly possible that the authorities will break the diary. It probably depends on how random a key Duncan chose, although possibly on whether or not there’s an implementation error in the cryptographic software. If I had more details, I could speculate further.
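
How much key randomness matters is simple arithmetic: a key derived from a passphrase has only as much entropy as the passphrase itself, and that, not the cipher, bounds the brute-force effort. The character-set sizes below are my own illustrative assumptions:

    import math

    def passphrase_bits(charset_size: int, length: int) -> float:
        """Entropy in bits of a uniformly random passphrase."""
        return length * math.log2(charset_size)

    print(passphrase_bits(26, 8))    # 8 lowercase letters: ~37.6 bits, searchable
    print(passphrase_bits(95, 20))   # 20 printable characters: ~131 bits, out of reach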

Posted on August 15, 2005 at 2:17 PM • 56 Comments

Terrorists, Steganography, and False Alarms

Remember all those stories about the terrorists hiding messages in television broadcasts? They were all false alarms:

The first sign that something was amiss came a few days before Christmas Eve 2003. The US department of homeland security raised the national terror alert level to “high risk”. The move triggered a ripple of concern throughout the airline industry and nearly 30 flights were grounded, including long hauls between Paris and Los Angeles and subsequently London and Washington.

But in recent weeks, US officials have made a startling admission: the key intelligence that prompted the security alert was seriously flawed. CIA analysts believed they had detected hidden terrorist messages in al-Jazeera television broadcasts that identified flights and buildings as targets. In fact, what they had seen were the equivalent of faces in clouds – random patterns all too easily over-interpreted.

It’s a signal-to-noise issue. If you look at enough noise, you’re going to find signal just by random chance. It’s only signal that rises above random chance that’s valuable.
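
The base-rate arithmetic is worth spelling out: scan enough broadcasts with an imperfect detector and false alarms are guaranteed. The numbers below are made up for illustration:

    # A hypothetical stego detector with a 0.1% false-positive rate, run
    # over a million broadcast frames that contain no messages at all:
    false_positive_rate = 0.001
    frames_scanned = 1_000_000
    print(false_positive_rate * frames_scanned)   # 1000 "hidden messages," all noise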

And the whole notion of terrorists using steganography to embed secret messages was ludicrous from the beginning. It makes no sense to communicate with terrorist cells this way, given the wide variety of more efficient anonymous communications channels.

I first wrote about this in September of 2001.

Posted on August 15, 2005 at 11:03 AM • 23 Comments

Secure Flight News

According to Wired News, the DHS is looking for someone in Congress to sponsor a bill that eliminates congressional oversight over the Secure Flight program.

The bill would allow them to go ahead with the program regardless of GAO’s assessment. (Current law requires them to meet ten criteria set by Congress; the most recent GAO report said that they did not meet nine of them.) The bill would allow them to use commercial data even though they have not demonstrated its effectiveness. (The DHS funding bill passed by both the House and the Senate prohibits them from using commercial data during passenger screening, because there have been absolutely no test results showing that it is effective.)

In this new bill, all that would be required to go ahead with Secure Flight would be for Secretary Chertoff to say so:

Additionally, the proposed changes would permit Secure Flight to be rolled out to the nation’s airports after Homeland Security chief Michael Chertoff certifies the program will be effective and not overly invasive. The current bill requires independent congressional investigators to make that determination.

Looks like the DHS, being unable to comply with the law, is trying to change it. This is a rogue program that needs to be stopped.

In other news, the TSA has deleted about three million personal records it used for Secure Flight testing. This seems like a good idea, but it prevents people from knowing what data the government had on them—in violation of the Privacy Act.

Civil liberties activist Bill Scannell says it’s difficult to know whether TSA’s decision to destroy records so swiftly is a housecleaning effort or something else.

“Is the TSA just such an incredibly efficient organization that they’re getting rid of things that are no longer needed?” Scannell said. “Or is this a matter of the destruction of evidence?”

Scannell says it’s a fair question to ask in light of revelations that the TSA already violated the Privacy Act last year when it failed to fully disclose the scope of its testing for Secure Flight and its collection of commercial data on individuals.

My previous essay on Secure Flight is here.

Posted on August 15, 2005 at 9:43 AM • 13 Comments

E-Mail Interception Decision Reversed

Is e-mail in transit communications or data in storage? Seems like a basic question, but the answer matters a lot to the police. A U.S. federal Appeals Court has ruled that the interception of e-mail in temporary storage violates the federal wiretap act, reversing an earlier court opinion.

The case and associated privacy issues are summarized here. Basically, different privacy laws protect electronic communications in transit and data in storage; the former is protected much more than the latter. E-mail stored by the sender or the recipient is obviously data in storage. But what about e-mail on its way from the sender to the receiver? On the one hand, it’s obviously communications in transit. But the other side argued that it’s actually stored on various computers as it wends its way through the Internet; hence it’s data in storage.

The initial court decision in this case held that e-mail in transit is just data in storage. Judge Lipez wrote an inspired dissent in the original opinion. In the rehearing en banc (before more judges), he wrote the majority opinion, which overturned the earlier one.

The opinion itself is long, but well worth reading. It’s well reasoned, and reflects extraordinary understanding and attention to detail. And a great last line:

If the issue presented be “garden-variety”… this is a garden in need of a weed killer.

I participated in an Amicus Curiae (“friend of the court”) brief in the case. Here’s another amicus brief by six civil liberties organizations.

There’s a larger issue here, and it’s the same one that the entertainment industry used to greatly expand copyright law in cyberspace. They argued that every time a copyrighted work is moved from computer to computer, or CD-ROM to RAM, or server to client, or disk drive to video card, a “copy” is being made. This ridiculous definition of “copy” has allowed them to exert far greater legal control over how people use copyrighted works.

Posted on August 15, 2005 at 7:59 AM • 13 Comments

The Devil's Infosec Dictionary

I want “The Devil’s Infosec Dictionary” to be funnier. And I wish the entry that mentions me—”Cryptography: The science of applying a complex set of mathematical algorithms to sensitive data with the aim of making Bruce Schneier exceedingly rich”—were more true.

In any case, I’ll bet the assembled here can come up with funnier infosec dictionary definitions. Post them as comments here, and—if there are enough good ones—I’ll collect them up on a single page.

Posted on August 13, 2005 at 10:48 AM • 106 Comments

Fingerprinting Paper

This could make an enormous difference in security against forgeries:

The scientists built a laser scanner that sweeps across the surface of paper, cardboard, or plastic, recording all of the unique microscopic imperfections that are a natural part of manufacturing such materials.

This scan serves as a fingerprint which, the scientists said, has two surprising properties: The fingerprints are robust, surviving scorching, dousing in water, crumpling, and scribbling over with pens. And these fingerprints depend on structures that are so complex and so small—on the scale of between one tenth and one ten-thousandth the diameter of a human hair—that nobody on the planet will be able to copy one for the foreseeable future. Unlike other methods such as using holograms or special inks, the fingerprint is already there.

Scientific American has more details:

All nonreflective surfaces are rough on a microscopic level. James D. R. Buchanan and his colleagues at Imperial College London report today in the journal Nature on the potential for this characteristic to “provide strong, in-built, hidden security for a wide range of paper, plastic or cardboard objects.” Using a focused laser to scan a variety of objects, the team measured how the light scattered at four different angles. By calculating how far the light moved from a mean value, and transforming the fluctuations into ones and zeros, the researchers developed a unique fingerprint code for each object. The scanning of two pieces of paper from the same pack yielded two different identifiers, whereas the fingerprint for one sheet stayed the same even after three days of regular use. Furthermore, when the team put the paper through its paces—screwing it into a tight ball, submerging it in cold water, baking it at 180 degrees Celsius, among other abuses—its fingerprint remained easily recognizable.

The team calculates that the odds of two pieces of paper having indistinguishable fingerprints are less than 10^-72. For smoother surfaces such as matte-finished plastic cards, the probability increases, but only to 10^-20. “Our findings open the way to a new and much simpler approach to authentication and tracking,” co-author Russell Cowburn remarks. “This is a system so secure that not even the inventors would be able to crack it since there is no known manufacturing process for copying surface imperfections at the necessary level of precision.”
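
The quoted scheme (threshold the scatter fluctuations against their mean to get ones and zeros, then compare codes) translates directly into a few lines. This is my own sketch of that general idea, not the authors’ implementation; two scans whose codes differ in few enough bits would be declared the same object.

    def fingerprint(readings):
        """Threshold laser-scatter readings against their mean."""
        mean = sum(readings) / len(readings)
        return [1 if r > mean else 0 for r in readings]

    def hamming_fraction(a, b):
        return sum(x != y for x, y in zip(a, b)) / len(a)

    # Two scans of the same sheet should differ in only a few bits;
    # scans of different sheets should disagree on about half of them.
    scan1 = fingerprint([0.31, 0.97, 0.12, 0.55, 0.88, 0.20])
    scan2 = fingerprint([0.30, 0.95, 0.14, 0.57, 0.86, 0.22])
    print(hamming_fraction(scan1, scan2))   # 0.0 -- a match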

To ensure the security of currency, you could fingerprint every bill and store the fingerprints in a large database. Or you could digitally sign the fingerprint and print it on the bill itself. The fingerprint is large enough to use as an encryption key, which opens up a bunch of other security possibilities.

This idea isn’t new. I remember currency anti-counterfeiting research in which fiber-optic bits were added to the paper pulp, and a “fingerprint” was taken using a laser. It didn’t work then, but it was clever.

Posted on August 12, 2005 at 10:30 AM • 44 Comments

TSA and Spam

A reader sent this to me. He’s corresponding with the TSA about getting his name off the watch list, and was told that he should turn off his e-mail spam filter.

——Original Message——

From: <> [mailto:tsa-donotreply@tsa.dot.gov]

Sent: Monday, August 01, 2005 11:46 AM

To: ((Name Deleted))

Subject: Your e-mail has been received

Please do not respond to this automated response.

Your e-mail has been received by the Transportation Security Administration’s (TSA) Contact Center. Our goal is to respond as quickly as possible. However, at times, high volumes sometimes delay our response. We appreciate your patience. You may also find the answer to your question on our web site at www.tsa.gov .

To ensure that you are able to receive a response from the TSA Contact Center, we recommend that Spam filters be disabled and that your email account have ample space to receive large files and/or attachments.

Posted on August 12, 2005 at 8:15 AM • 21 Comments

UK Border Security

The Register comments on the government using a border-security failure to push for national ID cards:

The Government spokesman the media could get hold of last weekend, leader of the House of Commons Geoff Hoon, said that the Government was looking into whether there should be “additional” passport checks on Eurostar, and added that the matter showed the need for identity cards because “it’s vitally important that we know who is coming in as well as going out.” Meanwhile the Observer reported plans by ministers to accelerate the introduction of the e-borders system in order to increase border security.

So shall we just sum that up? A terror suspect appears to have fled the country by the simple expedient of walking past an empty desk, and the Government’s reaction is not to put somebody at the desk, or to find out why, during one of the biggest manhunts London has ever seen, it was empty in the first place. No, the Government’s reaction is to explain its abject failure to play with the toys it’s got by calling for bigger, more expensive toys sooner. Asked about passport checks at Waterloo on Monday of this week, the Prime Minister’s spokeswoman said we do have passport checks—which actually we do, sort of. But, as we’ll explain shortly, we also have empty desks to go with them.

Posted on August 11, 2005 at 1:28 PM • 20 Comments

The MD5 Defense

This is interesting:

A team of Chinese maths enthusiasts have thrown NSW’s speed cameras system into disarray by cracking the technology used to store data about errant motorists.

The NRMA has called for a full audit of the way the state’s 110 enforcement cameras are used after a motorist escaped a conviction by claiming that data was vulnerable to hackers.

A Sydney magistrate, Laurence Lawson, threw out the case because the Roads and Traffic Authority failed to find an expert to testify that its speed camera images were secure.

The motorist’s defence lawyer, Denis Mirabilis, argued successfully that an algorithm known as MD5, which is used to store the time, date, place, numberplate and speed of cars caught on camera, was a discredited piece of technology.

It’s true that MD5 is broken. On the other hand, it’s almost certainly true that the speed cameras were correct. If there’s any lesson here, it’s that theoretical security is important in legal proceedings.

I think that’s a good thing.
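
Assuming the cameras simply store an MD5 digest alongside each record as a tamper seal (the article gives no details, so the scheme and record format below are my guesses), the setup looks like this. The legal argument is that known collision attacks undermine such a seal, even though forging a specific, plausible-looking record was still beyond practical reach in 2005:

    import hashlib

    def seal(record: str) -> str:
        """Integrity tag as the cameras plausibly compute it (a guess)."""
        return hashlib.md5(record.encode()).hexdigest()

    record = "2005-08-11|07:52|Hume Hwy|ABC-123|127km/h"
    tag = seal(record)

    # Verification: any change to the record changes the tag...
    assert seal(record) == tag
    # ...unless someone can find a colliding record, which the MD5
    # collision results show is no longer a safe assumption.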

Posted on August 11, 2005 at 7:52 AM • 41 Comments

Xbox Security

Interesting article: “The Hidden Boot Code of the Xbox, or How to fit three bugs in 512 bytes of security code.”

Microsoft wanted to lock out both pirated games and unofficial games, so they built a chain of trust on the Xbox from the hardware to the execution of the game code. Only code authorized by Microsoft could run on the Xbox. The link between hardware and software in this chain of trust is the hidden “MCPX” boot ROM. The article discusses that ROM.

Lots of kindergarten security mistakes.
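
In outline, a chain of trust works like this sketch (mine, not Microsoft’s code): every stage carries the digest of the next stage and verifies it before jumping there. The Xbox bugs were in how its 512 bytes of boot code did, or failed to do, exactly this; the execute stand-in below is hypothetical.

    import hashlib

    def execute(code: bytes) -> None:
        print("running stage:", code.decode())  # stand-in for a real jump

    def run_stage(code: bytes, expected_digest: bytes) -> None:
        # Verify before execute: the whole point of the chain of trust.
        if hashlib.sha1(code).digest() != expected_digest:
            raise RuntimeError("unauthorized code -- halt the console")
        execute(code)

    # The boot ROM holds the digest of the loader; the loader in turn
    # holds the digest of the kernel, and so on down to the game code.
    loader = b"second-stage loader"
    run_stage(loader, hashlib.sha1(loader).digest())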

Posted on August 10, 2005 at 1:00 PM • 23 Comments

Stealing Imaginary Things

There’s a new Trojan that tries to steal World of Warcraft passwords.

That reminded me about this article, about people paying programmers to find exploits to make virtual money in multiplayer online games, and then selling the proceeds for real money.

And here’s a page about ways people steal fake money in the online game Neopets, including cookie grabbers, fake login pages, fake contests, social engineering, and pyramid schemes.

I regularly say that every form of theft and fraud in the real world will eventually be duplicated in cyberspace. Perhaps every method of stealing real money will eventually be used to steal imaginary money, too.

Posted on August 10, 2005 at 7:36 AM • 28 Comments

RFID Passport Security Revisited

I’ve written previously (including this op ed in the International Herald Tribune) about RFID chips in passports. An article in today’s USA Today (the paper version has a really good graphic) summarizes the latest State Department proposal, and it looks pretty good. They’re addressing privacy concerns, and they’re doing it right.

The most important feature they’ve included is an access-control system for the RFID chip. The data on the chip is encrypted, and the key is printed on the passport. The officer swipes the passport through an optical reader to get the key, and then the RFID reader uses the key to communicate with the RFID chip. This means that the passport-holder can control who has access to the information on the chip; someone cannot skim information from the passport without first opening it up and reading the information inside. Good security.
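
Here is a sketch of that access-control flow. It is my own simplification (the real ICAO Basic Access Control derives 3DES keys through a more involved procedure, and the challenge protocol differs), but it shows the property that matters: the reader can compute the chip’s key only from data printed inside the passport.

    import hashlib
    import hmac

    def chip_key(printed_data: str) -> bytes:
        # Key material derived from what an optical scan of the open
        # passport provides. (Simplified; real BAC is more elaborate.)
        return hashlib.sha1(printed_data.encode()).digest()[:16]

    mrz = "P<UTOERIKSSON<<ANNA<MARIA<<<<<L898902C3"  # sample MRZ line
    key = chip_key(mrz)

    # Reader proves knowledge of the key to the chip; a skimmer that
    # never saw the open passport cannot compute it.
    challenge = b"\x00" * 8  # chip's nonce (illustrative)
    print(hmac.new(key, challenge, hashlib.sha1).hexdigest())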

The new design also includes a thin radio shield in the cover, protecting the chip when the passport is closed. More good security.

Assuming that the RFID passport works as advertised (a big “if,” I grant you), then I am no longer opposed to the idea. And, more importantly, we have an example of an RFID identification system with good privacy safeguards. We should demand that any other RFID identification cards have similar privacy safeguards.

EDITED TO ADD: There’s more information in a Wired story:

The 64-KB chips store a copy of the information from a passport’s data page, including name, date of birth and a digitized version of the passport photo. To prevent counterfeiting or alterations, the chips are digitally signed….

“We are seriously considering the adoption of basic access control,” [Frank] Moss [the State Department’s deputy assistant secretary for passport services] said, referring to a process where chips remain locked until a code on the data page is first read by an optical scanner. The chip would then also transmit only encrypted data in order to prevent eavesdropping.

So it sounds like this access-control mechanism is not definite. In any case, I believe the system described in the USA Today article is a good one.

Posted on August 9, 2005 at 1:27 PM • 79 Comments

The Myth of Panic

This New York Times op ed argues that panic is largely a myth. People feel stressed but they behave rationally, and it only gets called “panic” because of the stress.

If our leaders are really planning for panic, in the technical sense, then they are at best wasting resources on a future that is unlikely to happen. At worst, they may be doing our enemies’ work for them – while people are amazing under pressure, it cannot help to have predictions of panic drummed into them by supposed experts.

It can set up long-term foreboding, causing people to question whether they have the mettle to handle terrorists’ challenges. Studies have found that when interpreting ambiguous situations, people look to one another for cues. Panicky warnings can color the cues that people draw from one another when interpreting ambiguous situations, like seeing a South Asian-looking man with a backpack get on a bus.

Nor can it help if policy makers talk about possible draconian measures (like martial law and rigidly policed quarantines) to control the public and deny its right to manage its own affairs. The very planning for such measures can alienate citizens and the authorities from each other.

Whatever its source, the myth of panic is a threat to our welfare. Given the difficulty of using the term precisely and the rarity of actual panic situations, the cleanest solution is for the politicians and the press to avoid the term altogether. It’s time to end chatter about “panic” and focus on ways to support public resilience in an emergency.

Posted on August 9, 2005 at 7:25 AM • 25 Comments

Wireless Interception Distance Records

Don’t believe wireless distance limitations. Again and again they’re proven wrong.

At DefCon earlier this month, a group was able to set up an unamplified 802.11 network at a distance of 124.9 miles.

The record holders relied on more than just a pair of wireless laptops. The equipment required for the feat, according to the event website, included a “collection of homemade antennas, surplus 12 foot satellite dishes, home-welded support structures, scaffolds, ropes and computers”.

Bad news for those of us who rely on physical distance to secure our wireless networks.

Even more important, the world record for communicating with a passive RFID device was set at 69 feet. (Pictures here.) Remember that the next time someone tells you that it’s impossible to read RFID identity cards at a distance.

Whenever you hear a manufacturer talk about a distance limitation for any wireless technology—wireless LANs, RFID, Bluetooth, anything—assume he’s wrong. If he’s not wrong today, he will be in a couple of years. Assume that someone who spends some money and effort building more sensitive technology can do much better, and that it will take less money and effort over the years. Technology always gets better; it never gets worse. If something is difficult and expensive now, it will get easier and cheaper in the future.
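The arithmetic explains why. Free-space path loss grows only logarithmically with distance, so every extra 6 dB of antenna gain doubles the range. Here is a back-of-the-envelope sketch in Python (the gain and sensitivity figures are illustrative assumptions, not measurements from the record attempt):

    import math

    def fspl_db(distance_km, freq_mhz):
        # Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 32.44
        return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

    d_km = 124.9 * 1.609344                    # the DefCon record distance
    print(f"{fspl_db(d_km, 2400.0):.1f} dB")   # about 146 dB at 2.4 GHz

    # A stock laptop-to-laptop link tolerates very roughly 100-110 dB of loss
    # (illustrative). Two 12-foot dishes, each somewhere around 30-35 dBi,
    # add up to 70 dB of gain and close the gap with no amplifier at all.

Under those assumptions the record isn’t surprising at all; it’s just patient engineering.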

Posted on August 8, 2005 at 1:37 PM • 35 Comments

Orlando Airport's CLEAR Program

Orlando Airport is piloting a new pre-screening program called CLEAR. The idea is that you pay $80 a year and subject yourself to a background check, and then you can use a faster security line at airports.

I’ve already written about this idea, back when Steven Brill first started talking about it:

My primary security concerns surrounding this system stem from what it’s trying to do. In his writings and speaking, Brill is very careful to explain that these are not “trusted traveler cards.” He calls them “verified identity cards.” But the only purpose of his card is to divide people into two lines—a fast line and a slow line, a “search less” line and a “search more” line, or whatever….

The reality is that the existence of the card creates a third, and very dangerous, category: bad guys with the card. Timothy McVeigh would have been able to get one of these cards. The DC sniper and the Unabomber would have been able to get this card. Any terrorist mole who hasn’t done anything yet and is being saved for something big would be able to get this card. Some of the 9/11 terrorists would have been able to get this card. These are people who are deemed trustworthy by the system even though they are not.

And even worse, the system lets terrorists test the system beforehand. Imagine you’re in a terrorist cell. Twelve of you apply for the card, but only four of you get it. Those four not only have a card that lets them go through the easy line at security checkpoints; they also know that they’re not on any terrorist watch lists. Which four do you think will be going on the mission? By “pre-approving” trust, you’re building a system that is easier to exploit.

Nothing in this program is different from what I wrote about last year. According to their website:

Your Membership will be continuously reviewed by TSA’s ongoing Security Threat Assessment Process. If your security status changes, your Membership will be immediately deactivated and you will receive a notification email of your status change as well as a refund of the unused portion of your annual enrollment fee.

Think about it. For $80 a year, any potential terrorist can be automatically notified if the Department of Homeland Security is on to him. Such a deal.

Posted on August 8, 2005 at 8:03 AM • 36 Comments

Low-Tech Loitering Countermeasure

Amazingly, this works:

To clear out undesirables, opera and classical music have been piped into Canadian parks, Australian railway stations, 7-Eleven parking lots and, most recently, London Underground stops.

According to most reports, it works. Figures from the British capital released in January showed robberies in the subway down by 33 percent, assaults on staff by 25 percent and vandalism of trains and stations by 37 percent. Sources in other locales have reported fewer muggings and drug deals. London authorities now plan to expand the playing of Mozart, Vivaldi, Handel and opera (sung by Pavarotti) from three tube stations to an additional 35.

It’s not new:

But as Kahle points out, “It’s well known within the industry that classical music discourages teen loitering. It was first used by 7-11 stores across the country over a decade ago.”

Note that this does not reduce loitering, but moves it around. But if you’re the owner of a 7-Eleven, you don’t care if kids are loitering at the store down the block. You just don’t want them loitering at your store.

Posted on August 6, 2005 at 7:46 AM • 31 Comments

London Bombing Details

Interesting details about the bombs used in the 7/7 London bombings:

The NYPD officials said investigators believe the bombers used a peroxide-based explosive called HMDT, or hexamethylene triperoxide diamine. HMDT can be made using ordinary ingredients like hydrogen peroxide (hair bleach), citric acid (a common food preservative) and heat tablets (sometimes used by the military for cooking).

HMDT degrades at room temperature, so the bombers preserved it in a way that offered an early warning sign, said Michael Sheehan, deputy commissioner of counterterrorism at the nation’s largest police department.

“In the flophouse where this was built in Leeds, they had commercial grade refrigerators to keep the materials cool,” Sheehan said, describing the setup as “an indicator of a problem.”

Among the other details cited by Sheehan:

The bombers transported the explosives in beverage coolers tucked in the backs of two cars to the outskirts of London.

Investigators believe the three bombs that exploded in the subway were detonated by cell phones that had alarms set to 8:50 a.m.

For those of you upset that the police divulged the recipe—citric acid, hair bleach, and food heater tablets—the details are already out there.

And here are some images of home-made explosives seized in the various raids after the bombings.

Normally this kind of information would be classified, but presumably the London (and U.S.) governments feel that the more people that know about this, the better. Anyone owning a commercial-grade refrigerator without a good reason should expect a knock on his door.

Posted on August 5, 2005 at 4:03 PM • 39 Comments

New Windows Vulnerability

There’s a new Windows 2000 vulnerability:

A serious flaw has been discovered in a core component of Windows 2000, with no possible work-around until it gets fixed, a security company said.

The vulnerability in Microsoft’s operating system could enable remote intruders to enter a PC via its Internet Protocol address, Marc Maiffret, chief hacking officer at eEye Digital Security, said on Wednesday. As no action on the part of the computer user is required, the flaw could easily be exploited to create a worm attack, he noted.

What may be particularly problematic with this unpatched security hole is that a work-around is unlikely, he said.

“You can’t turn this (vulnerable) component off,” Maiffret said. “It’s always on. You can’t disable it. You can’t uninstall.”

Don’t fail to notice the sensationalist explanation from eEye. This is what I call a “publicity attack” (note that the particular example in that essay is wrong): it’s an attempt by eEye Digital Security to get publicity for their company. Yes, I’m sure it’s a bad vulnerability. Yes, I’m sure Microsoft should have done more to secure their systems. But eEye isn’t blameless in this; they’re searching for vulnerabilities that make good press releases.

Posted on August 5, 2005 at 2:25 PM • 12 Comments

U.S. Crypto Export Controls

Rules on exporting cryptography outside the United States have been renewed:

President Bush this week declared a national emergency based on an “extraordinary threat to the national security.”

This might sound like a code-red, call-out-the-national-guard, we-lost-a-suitcase-nuke type of alarm, but in reality it’s just a bureaucratic way of ensuring that the Feds can continue to control the export of things like computer hardware and encryption products.

And it happens every year or so.

If Bush didn’t sign that “national emergency” paperwork, then the Commerce Department’s Bureau of Industry and Security would lose some of its regulatory power. That’s because Congress never extended the Export Administration Act after it lapsed (it’s complicated).

President Clinton did the same thing. Here’s a longer version of his “national emergency” executive order from 1994.

As a side note, encryption export rules have been dramatically relaxed since the oppressive early days of Janet “Evil PCs” Reno, Al “Clipper Chip” Gore, and Louis “ban crypto” Freeh. But they still exist. Here’s a summary.

To be honest, I don’t know what the rules are these days. I think there is a blanket exemption for mass-market software products, but I’m not sure. I haven’t a clue what the hardware requirements are. But certainly something is working right; we’re seeing more strong encryption in more software—and not just encryption software.

Posted on August 5, 2005 at 7:17 AM • 26 Comments

Shoot-to-Kill Revisited

I’ve already written about the police “shoot-to-kill” policy in the UK in response to the terrorist bombings last month, explaining why it’s a bad security trade-off. Now the International Association of Chiefs of Police has issued new guidelines that also recommend a shoot-to-kill policy.

What might cause a police officer to think you’re a suicide bomber, and then shoot you in the head?

The police organization’s behavioral profile says such a person might exhibit “multiple anomalies,” including wearing a heavy coat or jacket in warm weather or carrying a briefcase, duffel bag or backpack with protrusions or visible wires. The person might display nervousness, an unwillingness to make eye contact or excessive sweating. There might be chemical burns on the clothing or stains on the hands. The person might mumble prayers or be “pacing back and forth in front of a venue.”

Is that all that’s required?

The police group’s guidelines also say the threat to officers does not have to be “imminent,” as police training traditionally teaches. Officers do not have to wait until a suspected bomber makes a move, another traditional requirement for police to use deadly force. An officer just needs to have a “reasonable basis” to believe that the suspect can detonate a bomb, the guidelines say.

Does anyone actually think they’re safer if a policy like this is put into effect?

EDITED TO ADD: For reference:

The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no Warrants shall issue, but upon probable cause, supported by Oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.

But what does a 215-year-old document know?

Posted on August 4, 2005 at 3:08 PM • 125 Comments

Caches of Explosives Hidden in Moscow

Here’s a post-Cold War risk that I hadn’t considered before:

Construction workers involved in building a new hotel just across from the Kremlin were surprised to find 250 kg of TNT buried deep beneath the old Moskva Hotel that had just been demolished to make way for a new one. Police astonished Muscovites further when they said that the 12 boxes of explosives lodged in the basement could have been there for half a century.

And now, new evidence points to the possibility that Moscow could be dotted with such explosive caches—planted by the secret police in the early days of World War II.

Posted on August 4, 2005 at 7:58 AM • 27 Comments

More Lynn/Cisco Information

There’s some new information on last week’s Lynn/Cisco/ISS story: Mike Lynn gave an interesting interview to Wired. Here’s some news about the FBI’s investigation. And here’s a video of Cisco/ISS ripping pages out of the BlackHat conference proceedings.

Someone is setting up a legal defense fund for Lynn. Send donations via PayPal to Abaddon@IO.com. (Does anyone know the URL?) According to BoingBoing, donations not used to defend Lynn will be donated to the EFF.

Copies of Lynn’s talk have popped up on the Internet, but some have been removed due to legal cease-and-desist letters from ISS attorneys, like this one. Currently, Lynn’s slides are here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here. (The list is from BoingBoing.) Note that the presentation above is not the same as the one Lynn gave at BlackHat. The presentation at BlackHat didn’t have the ISS logo at the bottom, as the one on the Internet does. Also, the critical code components were blacked out. (Photographs of Lynn’s actual presentation slides were available here, but have been removed due to legal threats from ISS.)

There has been a bunch of commentary and analysis on the whole story. Business Week completely missed the point. Larry Seltzer at eWeek is more balanced.

Hackers are working overtime to reconstruct Lynn’s attack and write an exploit. This, of course, means that we’re in much more danger of there being a worm that makes use of this vulnerability.

The sad thing is that we could have avoided this. If Cisco and ISS had simply let Lynn present his work, it would have been just another obscure presentation amongst the sea of obscure presentations that is BlackHat. By attempting to muzzle Lynn, the two companies ensured that 1) the vulnerability was the biggest story of the conference, and 2) some group of hackers would turn the vulnerability into exploit code just to get back at them.

EDITED TO ADD: Jennifer Granick is Lynn’s attorney, and she has blogged about what happened at BlackHat and DefCon. And photographs of the slides Lynn actually used for his talk are here (for now, at least). Is it just me, or does it seem like ISS is pursuing this out of malice? With Cisco I think it was simple stupidity, but I think it’s malice with ISS.

EDITED TO ADD: I don’t agree with Ira Winkler’s comments, either.

EDITED TO ADD: ISS defends itself.

EDITED TO ADD: More commentary.

EDITED TO ADD: Nice rebuttal to Winkler’s essay.

Posted on August 3, 2005 at 1:31 PM • 28 Comments

Technological Parenting

Salon has an interesting article about parents turning to technology to monitor their children, instead of to other people in their community.

“What is happening is that parents now assume the worst possible outcome, rather than seeing other adults as their allies,” says Frank Furedi, a professor of sociology at England’s University of Kent and the author of “Paranoid Parenting.” “You never hear stories about asking neighbors to care for kids or coming together as community. Instead we become insular, privatized communities, and look for technological solutions to what are really social problems.” Indeed, while our parents’ generation was taught to “honor thy neighbor,” the mantra for today’s kids is “stranger danger,” and the message is clear—expect the worst of anyone unfamiliar—anywhere, and at any time.

This is security based on fear, not reason. And I think people who act this way make their families less safe.

EDITED TO ADD: Here’s a link to the book Paranoid Parenting.

Posted on August 3, 2005 at 8:38 AM • 42 Comments

Eavesdropping on Bluetooth Automobiles

This is impressive:

This new tool is called The Car Whisperer and allows people equipped with a Linux laptop and a directional antenna to inject audio into, and record audio from, passing cars that have an unconnected Bluetooth hands-free unit running. Many manufacturers use a standard passkey, which is often the only authentication needed to connect.

The tool lets you interact with other drivers when traveling, or maybe talk to that pushy Audi driver right behind you 😉. It also lets you eavesdrop on conversations inside the car by accessing the microphone.
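Finding targets takes nothing exotic. As a rough sketch (assuming the PyBluez library, and covering only device discovery, not the audio injection itself), the scan might look like this:

    import bluetooth  # PyBluez

    # Hands-free units advertise the Audio/Video major device class (0x04);
    # this scan merely flags candidates within radio range.
    for addr, name, device_class in bluetooth.discover_devices(
            duration=8, lookup_names=True, lookup_class=True):
        if ((device_class >> 8) & 0x1F) == 0x04:   # bits 8-12: major class
            print(f"Possible hands-free unit: {name} ({addr})")

    # The Car Whisperer then pairs using the manufacturer's standard passkey
    # (often "0000" or "1234") and opens an SCO audio link to the unit.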

EDITED TO ADD: Another article.

Posted on August 2, 2005 at 1:41 PM • 32 Comments

RFID Cards for U.S. Visitors

The Department of Homeland Security is testing a program to issue RFID identity cards to visitors entering the U.S.

They’ll have to carry the wireless devices as a way for border guards to access the electronic information stored inside a document about the size of a large index card.

Visitors to the U.S. will get the card the first time they cross the border and will be required to carry the document on subsequent crossings to and from the States.

Border guards will be able to access the information electronically from 12 metres away to enable those carrying the devices to be processed more quickly.

According to the DHS:

The technology will be tested at a simulated port this spring. By July 31, 2005, the testing will begin at the ports of Nogales East and Nogales West in Arizona; Alexandria Bay in New York; and, Pacific Highway and Peace Arch in Washington. The testing or “proof of concept” phase is expected to continue through the spring of 2006.

I know nothing about the details of this program or about the security of the cards. Even so, the long-term implications of this kind of thing are very chilling.

Posted on August 2, 2005 at 6:39 AM • 63 Comments

Hacking Hotel Infrared Systems

From Wired:

A vulnerability in many hotel television infrared systems can allow a hacker to obtain guests’ names and their room numbers from the billing system.

It can also let someone read the e-mail of guests who use web mail through the TV, putting business travelers at risk of corporate espionage. And it can allow an intruder to add or delete charges on a hotel guest’s bill or watch pornographic films and other premium content on their hotel TV without paying for it….

“No one thinks about the security risks of infrared because they think it’s used for minor things like garage doors and TV remotes,” Laurie said. “But infrared uses really simple codes, and they don’t put any kind of authentication (in it)…. If the system was designed properly, I shouldn’t be able to do what I can do.”
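Laurie’s point generalizes: once there is no authentication, the “attack” is just enumeration. A deliberately abstract sketch (send_ir_code and the ID range are hypothetical stand-ins; the real codes vary by hotel system):

    import time

    def send_ir_code(code):
        """Hypothetical stand-in for an IR transmit call (e.g., via LIRC).
        The receiving system cannot tell this from a legitimate remote."""
        ...

    # Unauthenticated numeric identifiers reduce security to a counting loop:
    # walk the guest-ID space, issuing the same queries the in-room TV would.
    for guest_id in range(1, 2000):    # illustrative range, not the real one
        send_ir_code(guest_id)
        time.sleep(0.05)               # pacing so the receiver keeps up

Obscuring the codes wouldn’t help; only actual authentication would.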

Posted on August 1, 2005 at 1:21 PM • 24 Comments

Plagiarism and Academia: Personal Experience

A paper published in the December 2004 issue of the SIGCSE Bulletin, “Cryptanalysis of some encryption/cipher schemes using related key attack,” by Khawaja Amer Hayat, Umar Waqar Anis, and S. Tauseef-ur-Rehman, is the same as a paper that John Kelsey, David Wagner, and I published in 1997.

It’s clearly plagiarism. Sentences have been reworded or summarized a bit and many typos have been introduced, but otherwise it’s the same paper. It’s copied, with the same section, paragraph, and sentence structure—right down to the same mathematical variable names. It has the same quirks in the way references are cited. And so on.

We wrote two papers on the topic; this is the second. They don’t list either of our papers in their bibliography. They do have a lurking reference to “[KSW96]” (the first of our two papers) in the body of their introduction and design principles, presumably copied from our text; but a full citation for “[KSW96]” isn’t in their bibliography. Perhaps they were worried that one of the referees would read the papers listed in their bibliography, and notice the plagiarism.

The three authors are from the International Islamic University in Islamabad, Pakistan. The third author, S. Tauseef-ur-Rehman, is a department head (and faculty member) in the Telecommunications Engineering Department at this Pakistani institution. If you believe his story—which is probably correct—he had nothing to do with the research, but just appended his name to a paper by two of his students. (This is not unusual; it happens all the time in universities all over the world.) But that doesn’t get him off the hook. He’s still responsible for anything he puts his name on.

And we’re not the only ones. The same three authors plagiarized this paper by French cryptographer Serge Vaudenay and others.

I wrote to the editor of the SIGCSE Bulletin, who removed the paper from their website and demanded official letters of admission and apology. (The apologies are at the bottom of this page.) They said that they would ban them from submitting again, but have since backpedaled. Mark Mandelbaum, Director of the Office of Publications at ACM, now says that ACM has no policy on plagiarism and that nothing additional will be done. I’ve also written to Springer-Verlag, the publisher of my original paper.

I don’t blame the journals for letting these papers through. I’ve refereed papers, and it’s pretty much impossible to verify that a piece of research is original. We’re largely self-policing.
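Automated checks can help referees, though. One standard technique is shingling: compare the sets of word n-grams in two papers. Light rewording leaves most five-word runs intact, so the overlap score stays high. A minimal sketch in Python:

    import re

    def shingles(text, n=5):
        # Set of word n-grams; near-duplicate texts share most of them
        words = re.findall(r"[a-z0-9]+", text.lower())
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def jaccard(a, b):
        sa, sb = shingles(a), shingles(b)
        return len(sa & sb) / len(sa | sb) if (sa | sb) else 0.0

    # A lightly reworded copy still scores far above chance:
    print(jaccard("the quick brown fox jumps over the lazy dog today",
                  "a quick brown fox jumps over the lazy dog today"))  # ~0.71

A score like that between two supposedly independent papers is a red flag worth a referee’s time.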

Mostly, the system works. These three have been found out, and should be fired and/or expelled. Certainly ACM should ban them from submitting anything, and I am very surprised at their claim that they have no policy with regards to plagiarism. Academic plagiarism is serious enough to warrant that level of response. I don’t know if the system works in Pakistan, though. I hope it does. These people knew the risks when they did it. And then they did it again.

If I sound angry, I’m not. I’m more amused. I’ve heard of researchers from developing countries resorting to plagiarism to pad their CVs, but I’m surprised to see it happen to me. I mean, really; if they were going to do this, wouldn’t it have been smarter to pick a more obscure author?

And it’s nice to know that our work is still considered relevant eight years later.

EDITED TO ADD: Another paper, “Analysis of Real-time Transport Protocol Security,” by Junaid Aslam, Saad Rafique, and S. Tauseef-ur-Rehman, has been plagiarized from this original: “Real-time Transport Protocol (RTP) security,” by Ville Hallivuori.

EDITED TO ADD: Ron Boisvert, the Co-Chair of the ACM Publications Board, has said this:

1. ACM has always been a champion for high ethical standards among computing professionals. Respecting intellectual property rights is certainly a part of this, as is clearly reflected in the ACM Code of Ethics.

2. ACM has always acted quickly and decisively to deal with allegations of plagiarism related to its publications, and remains committed to doing so in the future.

3. In the past, such incidents of plagiarism were rare. However, in recent years the number of such incidents has grown considerably. As a result, the ACM Publications Board has recently begun work to develop a more explicit policy on plagiarism. In doing so we hope to lay out (a) what constitutes plagiarism, as well as various levels of plagiarism, (b) ACM procedures for handling allegations of plagiarism, and (c) specific penalties which will be leveled against those found to have committed plagiarism at each of the identified levels. When this new “policy” is in place, we hope to widely publicize it in order to draw increased attention to this growing problem.

EDITED TO ADD: There’s a news story with some new developments.

EDITED TO ADD: Over the past couple of weeks, I have been getting repeated e-mails from people, presumably faculty and administrators of the International Islamic University, to close comments in this blog entry. The justification usually given is that there is an official investigation underway so there’s no longer any reason for comments, or that Tauseef has been fired so there’s no longer any reason for comments, or that the comments are harmful to the reputation of the university or the country.

I have responded that I will not close comments on this blog entry. I have deleted, and will continue to delete, posts that are incoherent or hostile (there have been examples of both).

Blog comments are anonymous. There is no way for me to verify the identity of posters, and I don’t try. I will continue to remove any post purporting to come from someone it doesn’t actually come from, but generally the only way I can figure that out is if the real person e-mails me and asks.

Otherwise, consider this a forum for anonymous free speech. The comments here are unvetted and unverified. They might be true, and they might be false. Readers are expected to understand that, and I believe for the most part they do.

In the United States, we have a saying that the antidote for bad speech is more speech. I invite anyone who disagrees with the comments on the page to post their own opinions.

Posted on August 1, 2005 at 6:07 AM • 465 Comments
