Blog: March 2005 Archives

ID Theft is Inescapable

The Register says what I’ve been saying all along:

While this is nothing new, there is an important observation here that’s worth emphasizing: none of these cases involved online transactions.

Many people innocently believe that they’re safe from credit card fraud and identity theft in the brick and mortar world. Nothing could be farther from the truth. The vast majority of incidents can be traced to skimming, dumpster diving, and just plain stupidity among those who “own” our personal data.

Only a small fraction of such incidents result from online transactions. Every time you pay by check, use a debit or credit card, or fill out an application for insurance, housing, credit, employment, or education, you lose control of sensitive data.

In the US, a merchant is at liberty to do anything he pleases with the information, and this includes selling it to a third party without your knowledge or permission, or entering it into a computerized database, possibly with lax access controls, and possibly connected to the Internet.

Sadly, Congress’s response has been to increase the penalties for identity theft, rather than to regulate access to, and use of, personal data by merchants, marketers, and data miners. Incredibly, the only person with absolutely no control over the collection, storage, security, and use of such sensitive information is its actual owner.

For this reason, it’s literally impossible for an individual to prevent identity theft and credit card fraud, and it will remain impossible until Congress sees fit to regulate the privacy invasion industry.

Posted on March 30, 2005 at 7:35 AM • 35 Comments

GAO's Report on Secure Flight

Sunday I blogged about the Transportation Security Administration’s Secure Flight program, and said that the Government Accountability Office would be issuing a report this week.

Here it is.

The AP says:

The government’s latest computerized airline passenger screening program doesn’t adequately protect travelers’ privacy, according to a congressional report that could further delay a project considered a priority after the Sept. 11 attacks.

Congress last year passed a law that said the Transportation Security Administration could spend no money to implement the program, called Secure Flight, until the Government Accountability Office reported that it met 10 conditions. Those include privacy protections, accuracy of data, oversight, cost and safeguards to ensure the system won’t be abused or accessed by unauthorized people.

The GAO found nine of the 10 conditions hadn’t yet been met and questioned whether Secure Flight would ultimately work.

Some tidbits:

  • TSA plans to include the capability for criminal checks within Secure Flight (p. 12).
  • The timetable has slipped by four months (p. 17).
  • TSA might not be able to get personally identifiable passenger data in PNRs because of costs to the industry and lack of money (p. 18).
  • TSA plans to staff intelligence analysts within the agency to identify false positives (p. 33).
  • The DHS Investment Review Board has withheld approval from the “Transportation Vetting Platform” (p. 39).
  • TSA doesn’t know how much the program will cost (p. 51).
  • The final privacy rule is to be issued in April (p. 56).

Any of you who read the report, please post other interesting tidbits as comments.

As you all probably know, I am a member of a working group to help evaluate the privacy of Secure Flight. While I believe that a program to match airline passengers against terrorist watch lists is a colossal waste of money that isn’t going to make us any safer, I said “…assuming that we need to implement a program of matching airline passengers with names on terrorism watch lists, Secure Flight is a major improvement—in almost every way—over what is currently in place.” I still believe that, but unfortunately I am prohibited by NDA from describing the improvements. I wish someone at TSA would get himself in front of reporters and do so.

Posted on March 28, 2005 at 7:03 PM • 5 Comments

Camouflage in Octopodes

From Nature.com:

Two tiny species of tropical octopus have demonstrated a remarkable disappearing trick. They adopt a two-armed “walk” that frees up their remaining six limbs to camouflage them as they slink away from trouble.

I have a fondness for security countermeasures in the natural world. As people, we try to figure out the most effective countermeasure for a given attack. Evolution works differently. A species tries different countermeasures at random, and stops at the first one that just barely works.

(I found this on BoingBoing.)

Posted on March 28, 2005 at 12:38 AM • 21 Comments

TSA Lied About Protecting Passenger Data

According to the AP:

The Transportation Security Administration misled the public about its role in obtaining personal information about 12 million airline passengers to test a new computerized system that screens for terrorists, according to a government investigation.

The report, released Friday by Homeland Security Department Acting Inspector General Richard Skinner, said the agency misinformed individuals, the press and Congress in 2003 and 2004. It stopped short of saying TSA lied.

I’ll say it: the TSA lied.

Here’s the report. It’s worth reading. And when you read it, keep in mind that it’s written by the DHS’s own Inspector General. I presume a more independent investigator would be even more severe. Not that the report isn’t severe, mind you.

Another AP article has more details:

The report cites several occasions where TSA officials made inaccurate statements about passenger data:

  • In September 2003, the agency’s Freedom of Information Act staff received hundreds of requests from Jet Blue passengers asking if the TSA had their records. After a cursory search, the FOIA staff posted a notice on the TSA Web site that it had no JetBlue passenger data. Though the FOIA staff found JetBlue passenger records in TSA’s possession in May, the notice stayed on the Web site for more than a year.
  • In November 2003, TSA chief James Loy incorrectly told the Governmental Affairs Committee that certain kinds of passenger data were not being used to test passenger prescreening.
  • In September 2003, a technology magazine reporter asked a TSA spokesman whether real data were used to test the passenger prescreening system. The spokesman said only fake data were used; the responses “were not accurate,” the report said.

There’s much more. The report reveals that TSA ordered Delta Air Lines to turn over passenger data in February 2002 to help the Secret Service determine whether terrorists or their associates were traveling in the vicinity of the Salt Lake City Olympics.

It also reveals that TSA used passenger data from JetBlue in the spring of 2003 to figure out how to change the number of people who would be selected for more screening under the existing system.

The report says that one of the TSA’s contractors working on passenger prescreening, Lockheed Martin, used a data sample from ChoicePoint.

The report also details how outside contractors used the data for their own purposes. And that “the agency neglected to inquire whether airline passenger data used by the vendors had been returned or destroyed.” And that “TSA did not consistently apply privacy protections in the course of its involvement in airline passenger data transfers.”

This is major stuff. It shows that the TSA lied to the public about its use of personal data again and again and again.

Right now the TSA is in a bit of a bind. It is prohibited by Congress from fielding Secure Flight until it meets a series of criteria. The Government Accountability Office is expected to release a report this week that details how the TSA has not met these criteria.

I’m not sure the TSA cares. It’s already announced plans to roll out Secure Flight.

With little fanfare, the Transportation Security Administration late last month announced plans to roll out in August its highly contentious Secure Flight program. Considered by some travel industry experts a foray into operational testing, rather than a viable implementation, the program will begin, in limited release, with two airlines not yet named by TSA.

My own opinions of Secure Flight are well-known. I am participating in a Working Group to help evaluate the privacy of Secure Flight. (I’ve blogged about it here and here.) We’ve met three times, and it’s unclear if we’ll ever meet again or if we’ll ever produce the report we’re supposed to. Near as I can tell, it’s all a big mess right now.

Edited to add: The GAO report is online (PDF format).

Posted on March 27, 2005 at 12:34 PM • 31 Comments

Anonymity and the Internet

From Slate:

Anonymice on Anonymity Wendy.Seltzer.org (“Musings of a techie lawyer”) deflates the New York Times‘ breathless Saturday (March 19) piece about the menace posed by anonymous access to Wi-Fi networks (“Growth of Wireless Internet Opens New Path for Thieves” by Seth Schiesel). Wi-Fi pirates around the nation are using unsecured hotspots to issue anonymous death threats, download child pornography, and commit credit card fraud, Schiesel writes. Then he plays the terrorist card.

But unsecured wireless networks are nonetheless being looked at by the authorities as a potential tool for furtive activities of many sorts, including terrorism. Two federal law enforcement officials said on condition of anonymity that while they were not aware of specific cases, they believed that sophisticated terrorists might also be starting to exploit unsecured Wi-Fi connections.

Never mind the pod of qualifiers swimming through those two sentences—”being looked at”; “potential tool”; “not aware of specific cases”; “might”—look at the sourcing. “Two federal law enforcement officials said on condition of anonymity. …” Seltzer points out the deep-dish irony of the Times citing anonymous sources about the imagined threats posed by anonymous Wi-Fi networks. Anonymous sources of unsubstantiated information, good. Anonymous Wi-Fi networks, bad.

This is the post from wendy.seltzer.org:

The New York Times runs an article in which law enforcement officials lament, somewhat breathlessly, that open wifi connections can be used, anonymously, by wrongdoers. The piece omits any mention of the benefits of these open wireless connections—no-hassle connectivity anywhere the “default” community network is operating, and anonymous browsing and publication for those doing good, too.

Without a hint of irony, however:

Two federal law enforcement officials said on condition of anonymity that while they were not aware of specific cases, they believed that sophisticated terrorists might also be starting to exploit unsecured Wi-Fi connections.

Yes, even law enforcement needs anonymity sometimes.

Open WiFi networks are a good thing. Yes, they allow bad guys to do bad things. But so do automobiles, telephones, and just about everything else you can think of. I like it when I find an open wireless network that I can use. I like it when my friends keep their home wireless network open so I can use it.

Scare stories like the New York Times one don’t help any.

Posted on March 25, 2005 at 12:49 PM • 20 Comments

Personal Information and Identity Theft

From BBC:

The chance to win theatre tickets is enough to make people give away their identity, reveals a survey.

Of those taking part 92% revealed details such as mother’s maiden name, first school and birth date.

Fraud due to impersonation—commonly called “identity theft”—works for two reasons. One, identity information is easy to obtain. And two, identity information is easy to use to commit fraud.

Studies like this show why attacking the first reason is futile; there are just too many ways to get the information. If we want to reduce the risks associated with identity theft, we have to make identity information less valuable. Too much of our security is based on identity, and it’s not working.

Posted on March 25, 2005 at 8:09 AM • 15 Comments

Voting and IDs

Very interesting story this morning on NPR, about new voter ID requirements in the state of Georgia.

Controversy Surrounds Georgia Voter ID Proposal

Minorities and the elderly in Georgia strongly oppose a proposal to require photo IDs for voters at the polls. Both groups say the plan, if enacted, would restrict their access to the polls. Supporters say they don’t want to stop anyone from voting, just safeguard the integrity of the election process. Susanna Capelouto of Georgia Public Broadcasting reports.

Those who advocate photo IDs at polling places forget that not everyone has one. Not everyone flies on airplanes. Not everyone has a driver’s license. If a photo ID is required to vote, it had better be 1) free, and 2) easily available everywhere to everyone. Otherwise it’s a poll tax.

Posted on March 24, 2005 at 4:05 PM • 33 Comments

The Silliness of Secrecy

This is a great article on some of the ridiculous effects of government secrecy. (Unfortunately, you have to register to read it.)

Ever since Sept. 11, 2001, the federal government has advised airplane pilots against flying near 100 nuclear power plants around the country or they will be forced down by fighter jets. But pilots say there’s a hitch in the instructions: aviation security officials refuse to disclose the precise location of the plants because they consider that “SSI”—Sensitive Security Information.

“The message is: ‘please don’t fly there, but we can’t tell you where there is,'” says Melissa Rudinger of the Aircraft Owners and Pilots Association, a trade group representing 60% of American pilots.

Determined to find a way out of the Catch-22, the pilots’ group sat down with a commercial mapping company, and in a matter of days plotted the exact geographical locations of the plants from data found on the Internet and in libraries. It made the information available to its 400,000 members on its Web site—until officials from the Transportation Security Administration asked them to take the information down. “Their concern was that [terrorists] mining the Internet could use it,” Ms. Rudinger says.

And:

For example, when a top Federal Aviation Administration official testified last year before the 9/11 commission, his remarks were broadcast live nationally. But when the administration included a transcript in a recent report on threats to commercial airliners, the testimony was heavily edited. “How do you redact something that is part of the public record?” asked Rep. Carolyn Maloney (D., N.Y.) at a recent hearing on the problems of government overclassification. Among the specific words blacked out was the seemingly innocuous phrase: “we are hearing this, this, this, this and this.”

Government officials could not explain why the words were withheld, other than to note that they were designated SSI.

Posted on March 24, 2005 at 9:48 AM • 21 Comments

Social Engineering and the IRS

Social engineering is still very effective:

More than one-third of Internal Revenue Service (IRS) employees and managers who were contacted by Treasury Department inspectors posing as computer technicians provided their computer login and changed their password, a government report said Wednesday.

This is a problem that two-factor authentication would significantly mitigate.

Posted on March 22, 2005 at 9:54 AM • 12 Comments

Radiation Detectors in Ports

According to Reuters:

The United States is stepping up investment in radiation detection devices at its ports to thwart attempts to smuggle a nuclear device or dirty bomb into the country, a Senate committee heard on Wednesday.

Robert Bonner, commissioner of U.S. Customs and Border Protection, told a Senate subcommittee on homeland security that since the first such devices were installed in May 2000, they had picked up over 10,000 radiation hits in vehicles or cargo shipments entering the country. All proved harmless.

It amazes me that 10,000 false alarms—instances where the security system failed—are being touted as proof that the system is working.

As an example of how the system was working, Bonner said that on Jan. 26, 2005, a machine got a hit from a South Korean vessel at the Los Angeles seaport. The radiation turned out to be emanating from the ship’s fire-extinguishing system and was no threat to safety.

That sounds like an example of how the system is not working to me. Sometimes I wish that those in charge of security actually understood security.

Posted on March 16, 2005 at 7:51 AM • 48 Comments

The Doghouse: Xavety

It’s been a long time since I doghoused any encryption products. CHADSEA (Chaotic Digital Signature, Encryption, and Authentication) isn’t as funny as some of the others, but it’s no less deserving.

Read their “Testing the Encryption Algorithm” section: “In order to test the reliability and statistical independency of the encryption, several different tests were performed, like signal-noise tests, the ENT test suite (Walker, 1998), and the NIST Statistical Test Suite (Ruhkin et al., 2001). These tests are quite comprehensive, so the description of these tests are subject of separate publications, which are also available on this website. Please, see the respective links.”

Yep. All they did to show that their algorithm was secure was a bunch of statistical tests. Snake oil for sure.
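For a sense of what such tests actually measure, here is a minimal sketch of the NIST frequency (monobit) test—just one of the suite they cite—written from the published test description. This is my own illustration, not Xavety’s or NIST’s code:

```python
import math

def monobit_test(bits: str) -> float:
    """NIST STS frequency (monobit) test: returns a p-value.
    It checks only that ones and zeros are roughly balanced --
    necessary for randomness, nowhere near sufficient for security."""
    n = len(bits)
    s = sum(1 if b == "1" else -1 for b in bits)
    return math.erfc(abs(s) / math.sqrt(n) / math.sqrt(2))

# A trivially predictable alternating stream passes with a perfect score:
print(monobit_test("01" * 5000))  # 1.0 -- "random" by this test, worthless as a cipher
```

Passing statistical tests shows that the output looks noise-like. It says nothing about whether someone who knows the algorithm can predict it.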

Posted on March 15, 2005 at 11:00 AM • 6 Comments

The Failure of Two-Factor Authentication

Two-factor authentication isn’t our savior. It won’t defend against phishing. It’s not going to prevent identity theft. It’s not going to secure online accounts from fraudulent transactions. It solves the security problems we had ten years ago, not the security problems we have today.

The problem with passwords is that they’re too easy to lose control of. People give them to other people. People write them down, and other people read them. People send them in e-mail, and that e-mail is intercepted. People use them to log into remote servers, and their communications are eavesdropped on. They’re also easy to guess. And once any of that happens, the password no longer works as an authentication token because you can’t be sure who is typing that password in.

Two-factor authentication mitigates this problem. If your password includes a number that changes every minute, or a unique reply to a random challenge, then it’s harder for someone else to intercept. You can’t write down the ever-changing part. An intercepted password won’t be good the next time it’s needed. And a two-factor password is harder to guess. Sure, someone can always give his password and token to his secretary, but no solution is foolproof.
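To make the “number that changes every minute” concrete, here’s a minimal sketch of how such a token might derive its code from a shared secret and the clock. The HMAC construction, the 60-second window, and the six-digit output are my illustrative choices, not any particular vendor’s design:

```python
import hmac, hashlib, time

def token_code(secret: bytes, interval: int = 60) -> str:
    """Derive a short-lived code from a shared secret and the current time.
    Token and server compute the same value; an eavesdropped code is
    useless once the interval rolls over."""
    counter = int(time.time()) // interval
    mac = hmac.new(secret, counter.to_bytes(8, "big"), hashlib.sha1).digest()
    return f"{int.from_bytes(mac[-4:], 'big') % 1_000_000:06d}"

print(token_code(b"shared-secret"))  # e.g. '409137'; different next minute
```

Note what this buys you: replay resistance and guess resistance. Note what it doesn’t: any assurance about who is on the other end of the connection right now—which is exactly the gap the attacks below exploit.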

These tokens have been around for at least two decades, but it’s only recently that they have gotten mass-market attention. AOL is rolling them out. Some banks are issuing them to customers, and even more are talking about doing it. It seems that corporations are finally waking up to the fact that passwords don’t provide adequate security, and are hoping that two-factor authentication will fix their problems.

Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.

Here are two new active attacks we’re starting to see:

  • Man-in-the-Middle attack. An attacker puts up a fake bank website and entices user to that website. User types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.

  • Trojan attack. Attacker gets Trojan installed on user’s computer. When user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transaction he wants.

See how two-factor authentication doesn’t solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.
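A minimal sketch of the man-in-the-middle case—everything here (the Bank stand-in, the handler name) is hypothetical illustration, not any real site’s code—just to show that the one-time code relays as easily as the password does:

```python
class Bank:
    """Stand-in for the real bank's login endpoint (illustration only)."""
    def login(self, user: str, password: str, otp: str) -> str:
        return f"authenticated-session-for-{user}"

real_bank = Bank()

def phishing_site_login(user: str, password: str, otp: str) -> str:
    # The fake site forwards everything the victim typed -- the
    # ever-changing code included -- to the real bank while the
    # code is still valid. The second factor relays as easily as the first.
    return real_bank.login(user, password, otp)

print(phishing_site_login("alice", "hunter2", "409137"))
```

The code proves the user holds the token; it does nothing to prove that the site the user is typing into is actually the bank.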

The real threat is fraud due to impersonation, and the tactics of impersonation will change in response to the defenses. Two-factor authentication will force criminals to modify their tactics, that’s all.

Recently I’ve seen examples of two-factor authentication using two different communications paths: call it “two-channel authentication.” One bank sends a challenge to the user’s cell phone via SMS and expects a reply via SMS. If you assume that all your customers have cell phones, then this results in a two-factor authentication process without extra hardware. And even better, the second authentication piece goes over a different communications channel than the first; eavesdropping is much, much harder.

But in this new world of active attacks, no one cares. An attacker using a man-in-the-middle attack is happy to have the user deal with the SMS portion of the log-in, since he can’t do it himself. And a Trojan attacker doesn’t care, because he’s relying on the user to log in anyway.

Two-factor authentication is not useless. It works for local login, and it works within some corporate networks. But it won’t work for remote authentication over the Internet. I predict that banks and other financial institutions will spend millions outfitting their users with two-factor authentication tokens. Early adopters of this technology may very well experience a significant drop in fraud for a while as attackers move to easier targets, but in the end there will be a negligible drop in the amount of fraud and identity theft.

This essay will appear in the April issue of Communications of the ACM.

Posted on March 15, 2005 at 7:54 AM • 132 Comments

Tracking Bot Networks

This is a fascinating piece of research on bot networks: networks of compromised computers that can be remotely controlled by an attacker. The paper details how bots and bot networks work, who uses them, how they are used, and how to track them.

From the conclusion:

In this paper we have attempted to demonstrate how honeynets can help us understand how botnets work, the threat they pose, and how attackers control them. Our research shows that some attackers are highly skilled and organized, potentially belonging to well organized crime structures. Leveraging the power of several thousand bots, it is viable to take down almost any website or network instantly. Even in unskilled hands, it should be obvious that botnets are a loaded and powerful weapon. Since botnets pose such a powerful threat, we need a variety of mechanisms to counter it.

Decentralized providers like Akamai can offer some redundancy here, but very large botnets can also pose a severe threat even against this redundancy. Taking down of Akamai would impact very large organizations and companies, a presumably high value target for certain organizations or individuals. We are currently not aware of any botnet usage to harm military or government institutions, but time will tell if this persists.

In the future, we hope to develop more advanced honeypots that help us to gather information about threats such as botnets. Examples include Client honeypots that actively participate in networks (e.g. by crawling the web, idling in IRC channels, or using P2P-networks) or modify honeypots so that they capture malware and send it to anti-virus vendors for further analysis. As threats continue to adapt and change, so must the security community.
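The observation technique the authors describe—idling in the botnet’s IRC command channel and logging what the herder sends—needs nothing exotic. A minimal sketch, where the server, nick, and channel are placeholders and a real honeynet drone would also have to mimic a bot’s responses:

```python
import socket

def idle_and_log(server: str, port: int, nick: str, channel: str) -> None:
    """Join an IRC channel and passively log everything said in it."""
    sock = socket.create_connection((server, port))
    send = lambda msg: sock.sendall((msg + "\r\n").encode())
    send(f"NICK {nick}")
    send(f"USER {nick} 0 * :{nick}")
    buf = b""
    while True:
        data = sock.recv(4096)
        if not data:
            break
        buf += data
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            text = line.decode(errors="replace")
            if text.startswith("PING"):
                send("PONG " + text.split(None, 1)[1])  # keep-alive
            elif " 376 " in text:                       # end of MOTD: safe to join
                send(f"JOIN {channel}")
            else:
                print(text)                             # log the herder's commands

# idle_and_log("irc.example.net", 6667, "observer", "#botchannel")
```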

Posted on March 14, 2005 at 10:46 AM • 9 Comments

Satellite Tracking Data Made Secret

Here’s another example of harmful government secrecy, ostensibly implemented as security against terrorism.

How an adversary might damage a spacecraft more than 100 miles up and moving at five miles per second—eight times faster than a rifle bullet—was not specified.

Good question, though.

But unclassified military or civilian communications satellites could, in theory, be jammed. And an adversary could use the unclassified data to know when a commercial imaging satellite, possibly operating under contract to the Department of Defense, would be flying overhead.

It might even be possible, through the process of elimination, for knowledgeable amateurs to ferret out the orbit of a classified spacecraft by comparing actual observations with the list of known, unclassified satellites.

Clearly I need to write a longer essay on “movie-plot” threats, and the wisdom of spending money and effort defending against them.

Posted on March 12, 2005 at 10:31 AM • 18 Comments

Speech-Activated Password Resets

This is a clever idea from Microsoft.

We know that people forget their passwords all the time, and I’ve already written about how secret questions as a backup password are a bad idea. Here’s a system where a voiceprint acts as a backup password. It’s a biometric password, which makes it good. Presumably the system prompts the user as to what to say, so the user can’t forget his voice password. And it’s hard to hack. (Yes, it’s possible to hack. But so is the password.)

But the real beauty of this system is that it doesn’t require a customer support person to deal with the user. I’ve seen statistics showing that 25% of all help desk calls come from people who have forgotten their passwords, that they cost something like $20 a call, and that they take an average of 10 minutes. A system like this provides good security and saves money.
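A back-of-the-envelope sketch of those numbers—the per-call figures are from the statistics above, while the staff size and call rate are my assumptions for illustration:

```python
# Figures from the statistics cited above:
password_reset_share = 0.25       # share of help desk calls
cost_per_call = 20.0              # dollars
# Assumed for illustration:
users = 10_000
calls_per_user_per_year = 4

resets = users * calls_per_user_per_year * password_reset_share
print(f"{resets:,.0f} reset calls/year costing ${resets * cost_per_call:,.0f}")
# -> 10,000 reset calls/year costing $200,000 -- the bill an automated
#    voiceprint reset would mostly make disappear.
```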

Posted on March 11, 2005 at 1:22 PM • 28 Comments

Melbourne Water-Supply Security Risk

Here’s a scary hacking target: the remote-control system for Melbourne’s water supply. According to TheAge:

Remote access to the Brooklyn pumping station and the rest of the infrastructure means the entire network can be controlled from any of seven main Melbourne Water sites, or by key staff such as Mr Woodland from home via a secure internet connection using Citrix’s Metaframe or a standard web browser.

SCADA systems are hard to hack, but SSL connections—at least, that’s what I presume they mean by “secure internet connection”—are much easier.

(Seen on Benambra.)

Posted on March 11, 2005 at 9:17 AM • 8 Comments

ChoicePoint Says "Please Regulate Me"

According to ChoicePoint’s most recent 8-K filing:

Based on information currently available, we estimate that approximately 145,000 consumers from 50 states and other territories may have had their personal information improperly accessed as a result of the recent Los Angeles incident and certain other instances of unauthorized access to our information products. Approximately 35,000 of these consumers are California residents, and approximately 110,000 are residents of other states. These numbers were determined by conducting searches of our databases that matched searches conducted by customers who we believe may have had unauthorized access to our information products on or after July 1, 2003, the effective date of the California notification law. Because our databases are constantly updated, our search results will never be identical to the search results of these customers.

Catch that? ChoicePoint actually has no idea if only 145,000 customers were affected by its recent security debacle. But it’s not doing any work to determine if more than 145,000 customers were affected—or if any customers before July 1, 2003 were affected—because there’s no law compelling it to do so.

I have no idea why ChoicePoint has decided to tape a huge “Please Regulate My Industry” sign to its back, but it’s increasingly obvious that it has. There’s a class-action shareholders’ lawsuit, but I don’t think that will be enough.

And, by the way, ChoicePoint’s database is riddled with errors.

Posted on March 9, 2005 at 2:54 PM • 40 Comments

Secrecy and Security

In my previous entry, I wrote about the U.S. government’s SSI classification. I meant it to be an analysis of the procedures of secrecy, not an analysis of secrecy as security.

I’ve previously written about the relationship between secrecy and security. I think secrecy hurts security in all but a few well-defined circumstances.

In recent years, the U.S. government has pulled a veil of secrecy over much of its inner workings, using security against terrorism as an excuse. The Director of the National Security Archive recently gave excellent testimony on the topic. It is worth reading both for his general conclusions and for his specific data.

The lesson of 9/11 is that we are losing protection by too much secrecy. The risk is that by keeping information secret, we make ourselves vulnerable. The risk is that when we keep our vulnerabilities secret, we avoid fixing them. In an open society, it is only by exposure that problems get fixed. In a distributed information networked world, secrecy creates risk—risk of inefficiency, ignorance, inaction, as in 9/11. As the saying goes in the computer security world, when the bug is secret, then only the vendor and the hacker know—and the larger community can neither protect itself nor offer fixes.

Posted on March 9, 2005 at 7:46 AM • 8 Comments

Sensitive Security Information (SSI)

For decades, the U.S. government has had systems in place for dealing with military secrets. Information is classified as either Confidential, Secret, Top Secret, or one of many “compartments” of information above Top Secret. Procedures for dealing with classified information were rigid: classified topics could not be discussed on unencrypted phone lines, classified information could not be processed on insecure computers, classified documents had to be stored in locked safes, and so on. The procedures were extreme because the assumed adversary was highly motivated, well-funded, and technically adept: the Soviet Union.

You might argue with the government’s decision to classify this and not that, or the length of time information remained classified, but if you assume the information needed to remain secret, then the procedures made sense.

In 1993, the U.S. government created a new classification of information—Sensitive Security Information—that was exempt from the Freedom of Information Act. The information under this category, as defined by a D.C. court, was limited to information related to the safety of air passengers. This was greatly expanded in 2002, when Congress deleted two words, “air” and “passengers,” and changed “safety” to “security.” Currently, there’s a lot of information covered under this umbrella.

The rules for SSI information are much more relaxed than the rules for traditional classified information. Before someone can have access to classified information, he must get a government clearance. Before someone can have access to SSI, he simply must sign an NDA. If someone discloses classified information, he faces criminal penalties. If someone discloses SSI, he faces civil penalties.

SSI can be sent unencrypted in e-mail; a simple password-protected file is enough. A person can take SSI home with him, read it on an airplane, and talk about it in public places. People entrusted with SSI shouldn’t disclose it to those unauthorized to know it, but it’s really up to the individual to make sure that doesn’t happen. It’s really more like confidential corporate information than government military secrets.

The U.S. government really had no choice but to establish this classification level, given the kind of information it needed to work with. For example, the terrorist “watch” list is SSI. If the list falls into the wrong hands, it would be bad for national security. But think about the number of people who need access to the list. Every airline needs a copy, so it can determine if any of its passengers are on the list. That’s not just domestic airlines, but foreign airlines as well—including foreign airlines that may not agree with American foreign policy. Police departments, both within this country and abroad, need access to the list. My guess is that more than 10,000 people have access to this list, and there’s no possible way to give them all a security clearance. Either the U.S. government relaxes the rules about who can have access to the list, or the list doesn’t get used in the way the government wants.

On the other hand, the threat is completely different. Military classification levels and procedures were developed during the Cold War, and reflected the Soviet threat. The terrorist adversary is much more diffuse, much less well-funded, much less technologically advanced. SSI rules really make more sense in dealing with this kind of adversary than the military rules.

I’m impressed with the U.S. government SSI rules. You can always argue about whether a particular piece of information needs to be kept secret, and how classifications like SSI can be used to conduct government in secret. But if you take secrecy as an assumption, SSI defines a reasonable set of secrecy rules against a new threat.

Background on SSI

TSA’s regulation on the protection of SSI

Controversies surrounding SSI

My essay explaining why secrecy is often bad for security

Posted on March 8, 2005 at 10:37 AM • 47 Comments

Remote Physical Device Fingerprinting

Here’s the abstract:

We introduce the area of remote physical device fingerprinting, or fingerprinting a physical device, as opposed to an operating system or class of devices, remotely, and without the fingerprinted device’s known cooperation. We accomplish this goal by exploiting small, microscopic deviations in device hardware: clock skews. Our techniques do not require any modification to the fingerprinted devices. Our techniques report consistent measurements when the measurer is thousands of miles, multiple hops, and tens of milliseconds away from the fingerprinted device, and when the fingerprinted device is connected to the Internet from different locations and via different access technologies. Further, one can apply our passive and semi-passive techniques when the fingerprinted device is behind a NAT or firewall, and also when the device’s system time is maintained via NTP or SNTP. One can use our techniques to obtain information about whether two devices on the Internet, possibly shifted in time or IP addresses, are actually the same physical device. Example applications include: computer forensics; tracking, with some probability, a physical device as it connects to the Internet from different public access points; counting the number of devices behind a NAT even when the devices use constant or random IP IDs; remotely probing a block of addresses to determine if the addresses correspond to virtual hosts, e.g., as part of a virtual honeynet; and unanonymizing anonymized network traces.

And an article. Really nice work.
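As I read the abstract, the key fact is that each machine’s clock crystal runs slightly fast or slow by a stable, characteristic amount, and TCP timestamps leak that rate to anyone watching the packets. Here’s a minimal sketch of the estimation idea with synthetic data and an ordinary least-squares fit; the authors use more robust fitting, and the 100 Hz timestamp clock here is an assumption:

```python
def estimate_skew_ppm(samples, hz=100):
    """Slope of remote clock vs. local clock, as deviation from 1.0 in ppm.
    `samples` is a list of (local_time_sec, tcp_timestamp_ticks)."""
    n = len(samples)
    xs = [t for t, _ in samples]
    ys = [ticks / hz for _, ticks in samples]   # remote ticks -> seconds
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return (slope - 1.0) * 1e6

# Synthetic device whose clock runs 37 ppm fast, sampled for an hour:
samples = [(t, round(t * (1 + 37e-6) * 100)) for t in range(0, 3600, 10)]
print(f"{estimate_skew_ppm(samples):+.1f} ppm")  # ~ +37.0
```

A few dozen parts per million, measurable repeatably from across the Internet, is enough to say “this is probably the same box I saw yesterday.”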

Posted on March 7, 2005 at 3:02 PM • 21 Comments

Flaw in Pin-Tumbler Locks

This paper by Barry Wels and Rop Gonggrijp describes a security flaw in pin-tumbler locks. The so-called “bump key” method will open a wide range of high-security locks in little time, without damaging them.

It’s about time physical locks were subjected to the same open security analysis that computer security systems have been. I would expect some major advances in technology as a result of all this work.

Posted on March 7, 2005 at 7:27 AM • 23 Comments

Banning Matches and Lighters on Airplanes

According to the Washington Post:

When Congress voted last year to prohibit passengers from bringing lighters and matches aboard commercial airplanes, it sounded like a reasonable idea for improving airline security.

But as airports and government leaders began discussing how to create flame-free airport terminals, the task became more complicated. Would newsstands and other small airport stores located beyond the security checkpoint have to stop selling lighters? Would airports have to ban smoking and close smoking lounges? How would security screeners detect matches in passengers’ pockets or carry-on bags when they don’t contain metal to set off the magnetometers? And what about arriving international travelers, who might have matches and lighters with them as they walk through the terminal?

It’s the silly security season out there. Given all of the things to spend money on to improve security, how this got to the top of anyone’s list is beyond me.

Posted on March 4, 2005 at 3:00 PM • 41 Comments

Garbage Cans that Spy on You

From The Guardian:

Though he foresaw many ways in which Big Brother might watch us, even George Orwell never imagined that the authorities would keep a keen eye on your bin.

Residents of Croydon, south London, have been told that the microchips being inserted into their new wheely bins may well be adapted so that the council can judge whether they are producing too much rubbish.

I call this kind of thing “embedded government”: hardware and/or software technology put inside of a device to make sure that we conform to the law.

And there are security risks.

If, for example, computer hackers broke in to the system, they could see sudden reductions in waste in specific households, suggesting the owners were on holiday and the house vacant.

To me, this is just another example of those implementing policy not being the ones who bear the costs. How long would the policy last if it were made clear to those implementing it that they would be held personally liable, even if only via their departmental budgets or careers, for any losses to residents if the database did get hacked?

Posted on March 4, 2005 at 10:32 AM • 21 Comments

Flaw in Winkhaus Blue Chip Lock

The Winkhaus Blue Chip Lock is a very popular, and expensive, 128-bit encrypted door lock. When you insert a key, there is a 128-bit challenge/response exchange between the key and the lock, and if the key is authorized, the lock pulls a small pin down through some sort of solenoid switch. This allows you to turn the lock.
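The post doesn’t say which algorithm the lock uses, but a generic 128-bit challenge/response between lock and key looks something like this sketch—HMAC is my stand-in cipher, and the function names are illustrative:

```python
import hmac, hashlib, secrets

SHARED_KEY = secrets.token_bytes(16)  # 128-bit secret stored in lock and key

def key_respond(challenge: bytes) -> bytes:
    """The key fob's half of the exchange."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()[:16]

def lock_cycle() -> bool:
    challenge = secrets.token_bytes(16)       # lock issues a fresh challenge
    response = key_respond(challenge)         # key answers
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()[:16]
    if hmac.compare_digest(response, expected):
        print("solenoid energized; pin retracted; lock turns")
        return True
    return False

print(lock_cycle())
```

The cryptography only decides whether to energize the solenoid. The pin itself is the enforcement point, which is why the flaw below bypasses all 128 bits of it.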

Unfortunately, it has a major security flaw. If you put a strong magnet near the lock, you can also pull this pin down, without authorization—without damage or any evidence.

The worst part is that Winkhaus is in denial about the problem, and is hoping it will just go away by itself. They’ve known about the flaw for at least six months, and have done nothing. They haven’t told any of their customers. If you ask them, they’ll say things like “it takes a very special magnet.”

From what I’ve heard, the only version that does not have this problem is the model without a built-in battery. In that model, the part with the solenoid switch faces the inside of the door instead of the outside. The internal battery is a weak spot, since you need to lift a small lid to replace it, so that side can never face the “outside” of the door—anyone could simply remove the batteries. With an external power supply you don’t have this problem, since one side of the lock is pure metal.

A video demonstration is available here.

Posted on March 2, 2005 at 3:00 PM • 21 Comments

Sensitive Information on Used Hard Drives

A research team bought over a hundred used hard drives for about a thousand dollars, and found more than half still contained personal and commercially sensitive information—some of it blackmail material.

People have repeated this experiment again and again, in a variety of countries, and the results have been pretty much the same. People don’t understand the risks of throwing away hard drives containing sensitive information.

What struck me about this story was the wide range of dirt they were able to dig up: insurance company records, a school’s file on its children, evidence of an affair, and so on. And although it cost them a grand to get this, they still had a grand’s worth of salable computer hardware at the end of their experiment.

Posted on March 2, 2005 at 9:40 AM • 28 Comments

ChoicePoint's CISO Speaks

Richard Baich, ChoicePoint’s CISO, is interviewed on SearchSecurity.com:

This is not an information security issue. My biggest concern is the impact this has on the industry from the standpoint that people are saying ChoicePoint was hacked. No we weren’t. This type of fraud happens every day.

Nice spin job, but it just doesn’t make sense. This isn’t a computer hack in the traditional sense, but it’s a social engineering hack of their system. Information security controls were compromised, and confidential information was leaked.

It’s created a media frenzy; this has been mislabeled a hack and a security breach. That’s such a negative impression that suggests we failed to provide adequate protection. Fraud happens every day. Hacks don’t.

So, ChoicePoint believes that providing adequate protection doesn’t include preventing this kind of attack.

I’m sure he’s exaggerating when he says that “this type of fraud happens every day” and “fraud happens every day,” but if it’s true then ChoicePoint has a huge information security problem.

Posted on March 1, 2005 at 10:45 AM • 17 Comments

Identity Theft out of Golf Lockers

When someone goes golfing in Japan, he’s given a locker in which to store his valuables. Generally, and at the golf course in question, these are electronic combination locks. The user selects a code himself and locks his valuables. Of course, there’s a back door—a literal one—to the lockers, in case someone forgets his unlock code. Furthermore, the back door allows the administrator of these lockers to read all the codes to all the lockers.

Here’s the scam: A group of thieves worked in conjunction with the locker administrator to open the lockers, copy the golfers’ debit cards, and replace them in their wallets and in their lockers before they were done golfing. In many cases, the golfers used the same code to lock their locker as their bank card PIN, so the thieves got those as well. Then the thieves stole a lot of money from multiple ATMs.

Several factors make this scam even worse. One, unlike in the U.S., ATM cards in Japan have no withdrawal limit: you can literally withdraw everything in the account. Two, the victims don’t know anything is wrong until they discover they have no money when they use their card somewhere. Three, the victims, since they play golf at these expensive courses, are usually very rich. And four, unlike in the United States, Japanese banks do not cover losses due to theft.

Posted on March 1, 2005 at 9:20 AM • 15 Comments
