Blog: September 2005 Archives

NSA Watch

Three things.

U.S. Patent #6,947,978:

Method for geolocating logical network addresses

Abstract: Method for geolocating logical network addresses on electronically switched dynamic communications networks, such as the Internet, using the time latency of communications to and from the logical network address to determine its location. Minimum round-trip communications latency is measured between numerous stations on the network and known network addressed equipment to form a network latency topology map. Minimum round-trip communications latency is also measured between the stations and the logical network address to be geolocated. The resulting set of minimum round-trip communications latencies is then correlated with the network latency topology map to determine the location of the network address to be geolocated.
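
The patent is essentially multilateration, with latency standing in for distance. Here is a toy sketch of the idea; the station coordinates, round-trip times, and propagation factor are invented for illustration, and the real method correlates against an empirically measured latency map rather than assuming a fixed propagation speed as this does.

    import itertools, math

    # Crude assumption: ~200 km of fiber per millisecond one way, so a
    # round trip of t ms suggests a distance of roughly (t / 2) * 200 km.
    KM_PER_MS_ONE_WAY = 200.0

    # Measuring stations at known (x, y) positions in km, each with the
    # minimum round-trip latency (ms) it observed to the target address:
    stations = {(0.0, 0.0): 3.6, (500.0, 0.0): 2.8, (0.0, 500.0): 4.2}

    def estimated_distance(rtt_ms):
        return (rtt_ms / 2.0) * KM_PER_MS_ONE_WAY

    def locate(grid_step=10.0):
        # Pick the grid point whose distances to the stations best match
        # the latency-derived estimates (a brute-force least squares).
        grid = [i * grid_step for i in range(51)]
        best, best_err = None, float("inf")
        for x, y in itertools.product(grid, grid):
            err = sum((math.dist((x, y), s) - estimated_distance(rtt)) ** 2
                      for s, rtt in stations.items())
            if err < best_err:
                best, best_err = (x, y), err
        return best

    print(locate())  # roughly (300, 200) with these made-up numbers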

“Fact Sheet NSA Suite B Cryptography”:

The entire suite of cryptographic algorithms is intended to protect both classified and unclassified national security systems and information. Because Suite B is also a subset of the cryptographic algorithms approved by the National Institute of Standards and Technology, Suite B is also suitable for use throughout government. NSA’s goal in presenting Suite B is to provide industry with a common set of cryptographic algorithms that they can use to create products that meet the widest range of US Government (USG) needs.

“The Case for Elliptic Curve Cryptography”:

Elliptic Curve Cryptography provides greater security and more efficient performance than the first-generation public key techniques (RSA and Diffie-Hellman) now in use. As vendors look to upgrade their systems, they should seriously consider the elliptic curve alternative for the computational and bandwidth advantages it offers at comparable security.
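
For concreteness, here is a minimal ECDH key agreement over P-256, one of the Suite B curves, using the third-party Python cryptography package; my illustration, not NSA sample code. The bandwidth advantage comes from key size: a 256-bit curve point offers security comparable to a far larger RSA or classic Diffie-Hellman modulus.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec
    from cryptography.hazmat.primitives.kdf.hkdf import HKDF

    # Each party generates a key pair on P-256 (SECP256R1):
    alice = ec.generate_private_key(ec.SECP256R1())
    bob = ec.generate_private_key(ec.SECP256R1())

    # Each side combines its private key with the other's public key:
    shared_a = alice.exchange(ec.ECDH(), bob.public_key())
    shared_b = bob.exchange(ec.ECDH(), alice.public_key())
    assert shared_a == shared_b

    # Derive a symmetric key (e.g., for AES, Suite B's block cipher):
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"handshake example").derive(shared_a)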

Posted on September 30, 2005 at 7:31 AM

Surveillance Via Cell Phones

It captures criminals:

Today, even murderers carry cell phones.

They may have left no witnesses, fingerprints or DNA. But if a murderer makes calls on a cell phone around the time of the crime (and they often do), they leave behind a trail of records that show not only who they called and at what time, but where they were when the call was made.

The cell phone records, which document what tower a caller was nearest when he dialed, can put a suspect at the scene of the crime with as much accuracy as an eyewitness. In urban areas crowded with cell towers, the records can pinpoint someone’s location within a few blocks.

Should a suspect tell detectives he was in another part of town the night of the murder, records from cell phone towers can smash his alibi, giving detectives leverage in an interview.
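
In its crudest form, the localization the article describes fits in a few lines: every tower that handled a call puts the phone inside that tower’s coverage area, and overlapping coverage can be summarized by a centroid. The coordinates below are invented.

    # Towers (latitude, longitude) that handled calls around the time in
    # question; a crude position estimate is the centroid of the towers.
    towers_hit = [(40.7580, -73.9855), (40.7614, -73.9776), (40.7505, -73.9934)]

    lat = sum(t[0] for t in towers_hit) / len(towers_hit)
    lon = sum(t[1] for t in towers_hit) / len(towers_hit)
    print(f"rough position: ({lat:.4f}, {lon:.4f})")  # a few blocks' accuracy, at best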

I am fine with the police using this tool, as long as the warrant process is there to ensure that they don’t abuse it.

Posted on September 29, 2005 at 11:36 AM

Jamming Aircraft Navigation Near Nuclear Power Plants

The German government wants to jam aircraft navigation equipment near nuclear power plants.

This certainly could help if terrorists want to fly an airplane into a nuclear power plant, but it feels like a movie-plot threat to me. On the other hand, this could make things significantly worse if an airplane flies near the nuclear power plant by accident. My guess is that the latter happens far more often than the former.

Posted on September 29, 2005 at 6:40 AM

The Doghouse: CryptIt

It’s been far too long since I’ve had one of these.

CryptIt looks like just another one-time pad snake-oil product:

Most file encryptions use methods that mathematically hash a password to a much larger number and rely on the time taken to reverse this process to prevent unauthorised decryption. Providing the key length is 128 bits or greater this method works well for most purposes, but since these methods do have predictable patterns they can be cracked. CPUs are increasing in speed at a fast rate and these encryption methods can be beaten given luck and/or enough computers. XorIt uses the XOR encryption method (also known as Vernam encryption) that can have keys the same size as the file to be encrypted. Thus, if you are encrypting a 5MB file, then you can have what is in effect a 40 Million bit key! This is virtually unbreakable by any computer, especially when you consider that the file must also be checked with each combination to see if it is decrypted. To put it another way, since XorIt gives no pass/fail results brute force methods are difficult to implement. In fact, if you use a good key file that is the same size or larger than the source and do not reuse the key file then it is impossible to decrypt the file, no matter how fast the computer is. Furthermore, the key file can be anything – a program, a swap file, an image of your cat or even a music file.
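
For the record, the scheme being sold here is trivial to implement, and just as trivial to break the moment the pad is reused or has structure; a short sketch of mine, not the vendor’s code:

    def xor_bytes(data: bytes, key: bytes) -> bytes:
        # "Vernam encryption": XOR each data byte with a key byte.
        return bytes(d ^ k for d, k in zip(data, key))

    key = b"an image of your cat, say"  # structured and guessable: NOT random
    c1 = xor_bytes(b"ATTACK AT DAWN", key)
    c2 = xor_bytes(b"RETREAT AT SIX", key)

    # Reuse the pad and the key cancels out of the XOR of two ciphertexts,
    # leaving plaintext XOR plaintext, which classic cryptanalysis unpicks:
    assert xor_bytes(c1, c2) == xor_bytes(b"ATTACK AT DAWN", b"RETREAT AT SIX")

A one-time pad is only a one-time pad when the key material is truly random, as long as the message, and never reused; a music file or a swap file is none of these things.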

Amazingly enough, some people still believe in this sort of nonsense. Before defending them, please read my essay on snake oil.

Posted on September 28, 2005 at 1:25 PM

The Beginnings of a U.S. Government DNA Database

From the Washington Post:

Suspects arrested or detained by federal authorities could be forced to provide samples of their DNA that would be recorded in a central database under a provision of a Senate bill to expand government collection of personal data.

The controversial measure was approved by the Senate Judiciary Committee last week and is supported by the White House, but has not gone to the floor for a vote. It goes beyond current law, which allows federal authorities to collect and record samples of DNA only from those convicted of crimes. The data are stored in an FBI-maintained national registry that law enforcement officials use to aid investigations, by comparing DNA from criminals with evidence found at crime scenes.

[…]

The provision, co-sponsored by Kyl and Sen. John Cornyn (R-Tex.), does not require the government to automatically remove the DNA data of people who are never convicted. Instead, those arrested or detained would have to petition to have their information removed from the database after their cases were resolved.

Posted on September 27, 2005 at 11:31 AM

Forging Low-Value Paper Certificates

Both Subway and Cold Stone Creamery have discontinued their frequent-purchaser programs because the paper documentation is too easy to forge. (The article says that forged Subway stamps are for sale on eBay.)

It used to be that the difficulty of counterfeiting paper was enough security for these sorts of low-value applications. Now that desktop publishing and printing are common, it’s not. Subway is implementing a system based on magnetic stripe cards instead. Anyone care to guess how long before that’s hacked?

Posted on September 27, 2005 at 7:43 AM

Fingerprint-Lock Failure in a Prison

So much for high-tech security:

Prison officers have been forced to abandon a new security system and return to the use of keys after the cutting-edge technology repeatedly failed.

The system, which is thought to have cost over £3 million, used fingerprint recognition to activate the locking system at the high-security Glenochil Prison near Tullibody, Clackmannanshire.

After typing in a PIN code, prison officers had to place their finger on a piece of glass. Once the print was recognised, they could then lock and unlock prison doors.

However, problems arose after a prisoner demonstrated to wardens that he could get through the system at will. Other prisoners had been doing the same for some time.

Unfortunately, the article doesn’t say how the prisoners hacked the system. Perhaps they lifted fingerprints off the readers with transparent tape. Or perhaps the valid latent fingerprints left on the readers by wardens could be activated somehow.

I would really like some more details here. Does it really make sense to have a tokenless access system in a prison? I don’t know enough to answer that question.

Posted on September 26, 2005 at 4:03 PM

Man Arrested for Being A Computer Nerd

In this disturbing story, a man is arrested in the London subways as a terrorist because, well, he was acting like a computer nerd.

At least the police didn’t shoot to kill.

EDITED TO ADD: This picture was supposedly taken in the London Tube a few weeks after the first set of bombings.

EDITED TO ADD: Snopes says that the picture is a fake.

Posted on September 26, 2005 at 12:12 PM

Secure Flight News

The TSA is not going to use commercial databases in its initial roll-out of Secure Flight, its airline screening program that matches passengers with names on the Watch List and No-Fly List. I don’t believe for a minute that they’re shelving plans to use commercial data permanently, but at least they’re delaying the process.

In other news, the report (also available here, here, and here) of the Secure Flight Privacy/IT Working Group is public. I was a member of that group, but honestly, I didn’t do any writing for the report. I had given up on the process, sick of not being able to get any answers out of TSA, and believed that the report would end up in somebody’s desk drawer, never to be seen again. I was stunned when I learned that the ASAC made the report public.

There’s a lot of stuff in the report, but I’d like to quote the section that outlines the basic questions that the TSA was unable to answer:

The SFWG found that TSA has failed to answer certain key questions about Secure Flight: First and foremost, TSA has not articulated what the specific goals of Secure Flight are. Based on the limited test results presented to us, we cannot assess whether even the general goal of evaluating passengers for the risk they represent to aviation security is a realistic or feasible one or how TSA proposes to achieve it. We do not know how much or what kind of personal information the system will collect or how data from various sources will flow through the system.

Until TSA answers these questions, it is impossible to evaluate the potential privacy or security impact of the program, including:

  • Minimizing false positives and dealing with them when they occur.
  • Misuse of information in the system.
  • Inappropriate or illegal access by persons with and without permissions.
  • Preventing use of the system and information processed through it for purposes other than airline passenger screening.

The following broadly defined questions represent the critical issues we believe TSA must address before we or any other advisory body can effectively evaluate the privacy and security impact of Secure Flight on the public.

  1. What is the goal or goals of Secure Flight? The TSA is under a Congressional mandate to match domestic airline passenger lists against the consolidated terrorist watch list. TSA has failed to specify with consistency whether watch list matching is the only goal of Secure Flight at this stage. The Secure Flight Capabilities and Testing Overview, dated February 9, 2005 (a non-public document given to the SFWG), states in the Appendix that the program is not looking for unknown terrorists and has no intention of doing so. On June 29, 2005, Justin Oberman (Assistant Administrator, Secure Flight/Registered Traveler) testified to a Congressional committee that “Another goal proposed for Secure Flight is its use to establish ‘Mechanisms for…violent criminal data vetting.’” Finally, TSA has never been forthcoming about whether it has an additional, implicit goal: the tracking of terrorism suspects (whose presence on the terrorist watch list does not necessarily signify intention to commit violence on a flight).

    While the problem of failing to establish clear goals for Secure Flight at a given point in time may arise from not recognizing the difference between program definition and program evolution, it is clearly an issue the TSA must address if Secure Flight is to proceed.

  2. What is the architecture of the Secure Flight system? The Working Group received limited information about the technical architecture of Secure Flight and none about how software and hardware choices were made. We know very little about how data will be collected, transferred, analyzed, stored or deleted. Although we are charged with evaluating the privacy and security of the system, we saw no statements of privacy policies and procedures other than Privacy Act notices published in the Federal Register for Secure Flight testing. No data management plan either for the test phase or the program as implemented was provided or discussed.
  3. Will Secure Flight be linked to other TSA applications? Linkage with other screening programs (such as Registered Traveler, Transportation Worker Identification and Credentialing (TWIC), and Customs and Border Patrol systems like U.S.-VISIT) that may operate on the same platform as Secure Flight is another aspect of the architecture and security question. Unanswered questions remain about how Secure Flight will interact with other vetting programs operating on the same platform; how it will ensure that its policies on data collection, use and retention will be implemented and enforced on a platform that also operates programs with significantly different policies in these areas; and how it will interact with the vetting of passengers on international flights.
  4. How will commercial data sources be used? One of the most controversial elements of Secure Flight has been the possible uses of commercial data. TSA has never clearly defined two threshold issues: what it means by “commercial data” and how it might use commercial data sources in the implementation of Secure Flight. TSA has never clearly distinguished among various possible uses of commercial data, which all have different implications.

    Possible uses of commercial data sometimes described by TSA include: (1) identity verification or authentication; (2) reducing false positives by augmenting passenger records indicating a possible match with data that could help distinguish an innocent passenger from someone on a watch list; (3) reducing false negatives by augmenting all passenger records with data that could suggest a match that would otherwise have been missed; (4) identifying sleepers, which itself includes: (a) identifying false identities; and (b) identifying behaviors indicative of terrorist activity. A fifth possibility has not been discussed by TSA: using commercial data to augment watch list entries to improve their fidelity. Assuming that identity verification is part of Secure Flight, what are the consequences if an identity cannot be verified with a certain level of assurance?

    It is important to note that TSA never presented the SFWG with the results of its commercial data tests. Until these test results are available and have been independently analyzed, commercial data should not be utilized in the Secure Flight program.

  5. Which matching algorithms work best? TSA never presented the SFWG with test results showing the effectiveness of algorithms used to match passenger names to a watch list. One goal of bringing watch list matching inside the government was to ensure that the best available matching technology was used uniformly. The SFWG saw no evidence that TSA compared different products and competing solutions. As a threshold matter, TSA did not describe to the SFWG its criteria for determining how the optimal matching solution would be determined. There are obvious and probably not-so-obvious tradeoffs between false positives and false negatives, but TSA did not explain how it reconciled these concerns.
  6. What is the oversight structure and policy for Secure Flight? TSA has not produced a comprehensive policy document for Secure Flight that defines oversight or governance responsibilities.
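
Question 5 above turns on the tradeoff between false positives and false negatives in name matching. As a toy illustration of why the algorithm and the threshold matter (this is not TSA’s algorithm, which was never shown to the SFWG):

    def levenshtein(a: str, b: str) -> int:
        # Edit distance: minimum insertions, deletions, and substitutions.
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def matches(passenger, watch_list, threshold=2):
        # Lower thresholds mean fewer false positives but more false negatives.
        name = passenger.casefold()
        return [w for w in watch_list if levenshtein(name, w.casefold()) <= threshold]

    print(matches("Jon Smith", ["John Smith", "Jane Smyth", "Robert Gray"]))
    # ['John Smith']; raising the threshold to 3 also flags 'Jane Smyth'.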

The members of the working group, and the signatories to the report, are Martin Abrams, Linda Ackerman, James Dempsey, Edward Felten, Daniel Gallington, Lauren Gelman, Steven Lilenthal, Anna Slomovic, and myself.

My previous posts about Secure Flight, and my involvement in the working group, are here, here, here, here, here, and here.

And in case you think things have gotten better, there’s a new story about how the no-fly list cost a pilot his job:

Cape Air pilot Robert Gray said he feels like he’s living a nightmare. Two months after he sued the federal government for refusing to let him take flight training courses so he could fly larger planes, he said yesterday, his situation has only worsened.

When Gray showed up for work a couple of weeks ago, he said Cape Air told him the government had placed him on its no-fly list, making it impossible for him to do his job. Gray, a Belfast native and British citizen, said the government still won’t tell him why it thinks he’s a threat.

“I haven’t been involved in any kind of terrorism, and I never committed any crime,” said Gray, 35, of West Yarmouth. He said he has never been arrested and can’t imagine what kind of secret information the government is relying on to destroy his life.

Remember what the no-fly list is. It’s a list of people who are so dangerous that they can’t be allowed to board an airplane under any circumstances, yet so innocent that they can’t be arrested—even under the provisions of the PATRIOT Act.

EDITED TO ADD: The U.S. Department of Justice Inspector General released a report last month on Secure Flight, basically concluding that the costs were out of control, and that the TSA didn’t know how much the program would cost in the future.

Here’s an article about some of the horrible problems people who have mistakenly found themselves on the no-fly list have had to endure. And another on what you can do if you find yourself on a list.

EDITED TO ADD: EPIC has received a bunch of documents about continued problems with false positives.

Posted on September 26, 2005 at 7:14 AM

Hurricane Security and Airline Security Collide

Here’s a story (quote is from the second page) where airline security is actually doing harm:

Long lines and chaos snarled evacuees when they tried to catch flights out from two of Houston’s airports. After about 100 federal security screeners failed to report to work Thursday, scores of passengers missed flights and waited for hours at sparsely monitored X-ray machines and luggage conveyors. Transportation Security Administration officials were at a loss for an explanation and scrambled to send in a team of replacement workers from Cleveland.

This isn’t an easy call, but sometimes the smartest thing to do in an emergency is to suspend security rules. Unfortunately, sometimes the bad guys count on that.

If I were in charge, I would have let people onto the airplanes. The trade-off makes sense to me.

Posted on September 23, 2005 at 9:10 PM

Searching Google for Unpublished Data

We all know that Google can be used to find all sorts of sensitive data, but here’s a new twist on that:

A Spanish astronomer has admitted he accessed internet telescope logs of another astronomer’s observations of a giant object orbiting beyond Neptune, but denies doing anything wrong.

Jose-Luis Ortiz of the Institute of Astrophysics of Andalusia in Granada told New Scientist that it was “perfectly legitimate” because he found the logs on a publicly available website via a Google search. But Mike Brown, the Caltech astronomer whose logs Ortiz uncovered, claims that accessing the information was at least “unethical” and may, if Ortiz misused the data, have crossed the line into scientific fraud.

Posted on September 23, 2005 at 1:43 PM

Verizon Monitoring Customers for Disney

This seems like a really bad idea.

Stepping up the battle against entertainment piracy, Verizon Communications Co. has entered a long-term programming deal that calls for the phone company to send a warning to Internet users suspected of pirating Disney’s content on its broadband services.

Under the deal, one of the first of its kind in the television industry, Disney will contact Verizon when the company suspects a Verizon customer of illegally downloading content. Without divulging names or addresses to Disney, Verizon will then alert the customer that he or she might be violating the law. Disney will be able to identify suspicious customers through an Internet coding system.

EDITED TO ADD: If you can’t read the Wall Street Journal link, another article.

Posted on September 23, 2005 at 7:24 AM

Judge Roberts, Privacy, and the Future

My second essay for Wired was published today. It’s about the future privacy rulings of the Supreme Court:

Recent advances in technology have already had profound privacy implications, and there’s every reason to believe that this trend will continue into the foreseeable future. Roberts is 50 years old. If confirmed, he could be chief justice for the next 30 years. That’s a lot of future.

Privacy questions will arise from government actions in the “War on Terror”; they will arise from the actions of corporations and individuals. They will include questions of surveillance, profiling and search and seizure. And the decisions of the Supreme Court on these questions will have a profound effect on society.

Posted on September 22, 2005 at 12:28 PM

Cameras Catch Dry Run of 7/7 London Terrorists

Score one for security cameras:

Newly released CCTV footage shows the 7 July London bombers staged a practice run nine days before the attack.

Detectives reconstructed the bombers’ movements after studying thousands of hours of film as part of the probe into the blasts which killed 52 people.

CCTV images show three of the bombers entering Luton station, before travelling to King’s Cross station where they are also pictured.

Officers are keen to find out if the men met anyone else on the day.

See also The New York Times.

Security cameras certainly aren’t useless. I just don’t think they’re worth it.

Posted on September 21, 2005 at 12:50 PM

Automobile Identity Theft

This scam was uncovered in Israel:

  1. Thief rents a car.
  2. An identical car, legitimately owned, is found and its “identity” stolen.
  3. The stolen identity is applied to the rented car, which is then offered for sale in a newspaper ad.
  4. Innocent buyer purchases the car from the thief as a regular private party sale.
  5. After a few days the thief steals the car back from the buyer and returns it to the rental shop.

In the end, the “new” owners claimed compensation for the theft, and most of the damage was absorbed by the insurers.

Clever.

Posted on September 21, 2005 at 7:45 AM

Major Security on a Minor Ferry

Is a ferry that transports 3000 cars a day (during the busy season) a national security risk?

Thousands of motorists who use the Jamestown-Scotland Ferry can expect more stringent screenings this week, when the state adds armed guards and thorough car searches.

More info here:

New, increased security measures are coming to the Jamestown-Scotland Ferry. Beginning July 1, security guards at the ferry will conduct random screening of passengers and their vehicles in an effort to prevent dangerous substances and devices from boarding the ferry. Commuters should prepare for a possible increase in the amount of time it takes to board the ferry once the screenings are in place; however, the ferries will depart on time according to schedule.

In accordance with the Maritime Transportation Security Act, VDOT will post security guards at the base of the bridge on each side of the James River to screen those traveling the ferry. Screening activities will vary and can include checking picture IDs of the driver and passengers, and inspection of the vehicle, including under the hood, trunk and undercarriage. Guards may also check the cargo areas of cars, trucks, campers and trailers.

The frequency and depth of screening at the Jamestown-Scotland Ferry will change with the Maritime Security level, which is set by the United States Coast Guard. In order to board the ferry, drivers and passengers must consent to the screening process.

How many ferries like this are in the U.S.? How many other potential targets of the same magnitude are there in the U.S.? How much would it cost to secure them all?

This just isn’t the way to go about it.

Posted on September 20, 2005 at 6:46 AM

DUI Cases Thrown Out Due to Closed-Source Breathalyzer

Really:

Hundreds of cases involving breath-alcohol tests have been thrown out by Seminole County judges in the past five months because the test’s manufacturer will not disclose how the machines work.

I think this is huge. (Think of the implications for voting systems, for one.) And it’s the right decision. Throughout history, the government has had to make a choice: prosecute, or keep its investigative methods secret. It couldn’t have both. If it wanted to keep its methods secret, it had to give up on prosecution.

People have the right to confront their accuser. And people have the right to a public trial. This is the correct decision, and we are all safer because of it.

Posted on September 16, 2005 at 6:46 AM

Research in Behavioral Risk Analysis

I am very interested in this kind of research:

Network Structure, Behavioral Considerations and Risk Management in Interdependent Security Games

Interdependent security (IDS) games model situations where each player has to determine whether or not to invest in protection or security against an uncertain event knowing that there is some chance s/he will be negatively impacted by others who do not follow suit. IDS games capture a wide variety of collective risk and decision-making problems that include airline security, corporate governance, computer network security and vaccinations against diseases. This research project will investigate the marriage of IDS models with network formation models developed from social network theory and apply these models to problems in network security. Behavioral and controlled experiments will examine how human participants actually make choices under uncertainty in IDS settings. Computational aspects of IDS models will also be examined. To encourage and induce individuals to invest in cost-effective protection measures for IDS problems, we will examine several risk management strategies designed to foster cooperative behavior that include providing risk information, communication with others, economic incentives, and tipping strategies.

The proposed research is interdisciplinary in nature and should serve as an exciting focal point for researchers in computer science, decision and management sciences, economics, psychology, risk management, and policy analysis. It promises to advance our understanding of decision-making under risk and uncertainty for problems that are commonly faced by individuals, organizations, and nations. Through advances in computational methods one should be able to apply IDS models to large-scale problems. The research will also focus on weak links in an interdependent system and suggest risk management strategies for reducing individual and societal losses in the interconnected world in which we live.
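
The core IDS tension shows up already in a two-player numerical example in the spirit of Kunreuther and Heal’s model; the parameter values below are invented, not taken from this research.

    def expected_cost(i_invest, j_invest, c, p, q, L):
        # Expected cost to player i:
        #   c: cost of investing in protection
        #   p: probability of a direct loss if i is unprotected
        #   q: probability an unprotected j passes a loss on to i
        #   L: size of the loss
        cost = c if i_invest else p * L
        if not j_invest:  # contagion from the other, unprotected player
            escape = 1.0 if i_invest else (1 - p)  # chance i wasn't already hit
            cost += escape * q * L
        return cost

    c, p, q, L = 40, 0.1, 0.8, 1000
    for other_invests in (True, False):
        invest = expected_cost(True, other_invests, c, p, q, L)
        dont = expected_cost(False, other_invests, c, p, q, L)
        print(f"other invests={other_invests}: invest={invest:.0f}, don't={dont:.0f}")
    # other invests=True:  invest=40,  don't=100  -> investing pays
    # other invests=False: invest=840, don't=820  -> protection unravels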

Posted on September 15, 2005 at 7:05 AM

Privacy Enhanced Computer Display

From the Mitsubishi Research Laboratories:

The privacy-enhanced computer display uses ferroelectric shutter glasses and a special device driver to produce a computer display which can be read only by the desired recipient, and not by an onlooker. The display alternately displays the desired information in one field, then the inverse image of the desired information in the next field, at up to 120 Hz refresh. The ferroelectric shutter glasses allow only the desired information to be viewed, while the inverse image causes unauthorized viewers to perceive only a flickering gray image, caused by the persistence of vision in the human visual system. It is also possible to use the system to “underlay” a private message on a public display system.
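
The arithmetic behind the flickering gray is worth seeing; a toy numpy check (my sketch, not the researchers’ code):

    import numpy as np

    frame = np.random.randint(0, 256, (4, 4))  # stand-in for the secret image
    inverse = 255 - frame                      # what the next field displays
    perceived = (frame + inverse) / 2          # unshuttered eye's temporal average
    print(perceived)                           # every pixel is 127.5: uniform gray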

Posted on September 13, 2005 at 1:22 PM

Snooping on Text by Listening to the Keyboard

Fascinating research out of Berkeley. Ed Felten has a good summary:

Li Zhuang, Feng Zhou, and Doug Tygar have an interesting new paper showing that if you have an audio recording of somebody typing on an ordinary computer keyboard for fifteen minutes or so, you can figure out everything they typed. The idea is that different keys tend to make slightly different sounds, and although you don’t know in advance which keys make which sounds, you can use machine learning to figure that out, assuming that the person is mostly typing English text. (Presumably it would work for other languages too.)

Read the rest.

The paper is on the Web. Here’s the abstract:

We examine the problem of keyboard acoustic emanations. We present a novel attack taking as input a 10-minute sound recording of a user typing English text using a keyboard, and then recovering up to 96% of typed characters. There is no need for a labeled training recording. Moreover the recognizer bootstrapped this way can even recognize random text such as passwords: In our experiments, 90% of 5-character random passwords using only letters can be generated in fewer than 20 attempts by an adversary; 80% of 10-character passwords can be generated in fewer than 75 attempts. Our attack uses the statistical constraints of the underlying content, English language, to reconstruct text from sound recordings without any labeled training data. The attack uses a combination of standard machine learning and speech recognition techniques, including cepstrum features, Hidden Markov Models, linear classification, and feedback-based incremental learning.
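
The first stage of such an attack, turning each keystroke’s audio into a cepstral feature vector for clustering and the HMM, looks roughly like this; a sketch of the general technique, not the authors’ code:

    import numpy as np

    def real_cepstrum(signal):
        # Real cepstrum: inverse FFT of the log magnitude spectrum.
        spectrum = np.fft.rfft(signal * np.hanning(len(signal)))
        return np.fft.irfft(np.log(np.abs(spectrum) + 1e-12))

    # Given an audio segment around each detected key press (say, 10 ms at
    # 44.1 kHz), keep the first few coefficients as the feature vector:
    rng = np.random.default_rng(0)
    keystroke = rng.standard_normal(441)  # stand-in for one keystroke's audio
    features = real_cepstrum(keystroke)[:32]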

Posted on September 13, 2005 at 8:13 AM

Israeli Barrier Around Gaza

Putting aside geopolitics for a minute (whether I call it a “wall” or a “fence” is a political decision, for example), it’s interesting to read the technical security details about the barrier the Israelis built around Gaza:

Remote control machine guns, robotic jeeps, a double fence, ditches and pillboxes along with digitally-linked commanders are all part of the IDF’s new 60-kilometer layered protection around the Gaza Strip.

[…]

The army has set up a large swath of land around the Strip for placing barbed wire coils, an electronic fence, and two patrol roads named Hoovers Alef and Hoovers Bet. There will also be a third patrol road a few hundred meters from the fence. All the land was “purchased” from the border settlements by the Defense Ministry. The army said it would allow farmers to work some of the land if possible.

Besides the barriers, the army has relocated over 50 cement pillboxes from their location inside the Gaza Strip to the new border. Some of these will be equipped with 50-caliber machine guns with laser sights that can be fired from control rooms equipped with monitors and radar along the border.

[…]

The IDF is also taking into account that the Palestinians may try to dig tunnels under the fence, but would not elaborate on steps it was taking to thwart such action.

On pages 207-8 of Beyond Fear, I wrote about the technical details of the Berlin Wall. This is far more sophisticated.

Posted on September 12, 2005 at 11:32 AM

Katrina and Security

I had an op ed published in the Minneapolis Star-Tribune today.

Toward a Truly Safer Nation
Published September 11, 2005

Leaving aside the political posturing and the finger-pointing, how did our nation mishandle Katrina so badly? After spending tens of billions of dollars on homeland security (hundreds of billions, if you include the war in Iraq) in the four years after 9/11, what did we do wrong? Why were there so many failures at the local, state and federal levels?

These are reasonable questions. Katrina was a natural disaster and not a terrorist attack, but that only matters before the event. Large-scale terrorist attacks and natural disasters differ in cause, but they’re very similar in aftermath. And one can easily imagine a Katrina-like aftermath to a terrorist attack, especially one involving nuclear, biological or chemical weapons.

Improving our disaster response was discussed in the months after 9/11. We were going to give money to local governments to fund first responders. We established the Department of Homeland Security to streamline the chains of command and facilitate efficient and effective response.

The problem is that we all got caught up in “movie-plot threats,” specific attack scenarios that capture the imagination and then the dollars. Whether it’s terrorists with box cutters or bombs in their shoes, we fear what we can imagine. We’re searching backpacks in the subways of New York, because this year’s movie plot is based on a terrorist bombing in the London subways.

Funding security based on movie plots looks good on television, and gets people reelected. But there are millions of possible scenarios, and we’re going to guess wrong. The billions spent defending airlines are wasted if the terrorists bomb crowded shopping malls instead.

Our nation needs to spend its homeland security dollars on two things: intelligence-gathering and emergency response. These two things will help us regardless of what the terrorists are plotting, and the second helps against both terrorist attacks and natural disasters.

Katrina demonstrated that we haven’t invested enough in emergency response. New Orleans police officers couldn’t talk with each other after power outages shut down their primary communications system—and there was no backup. The Department of Homeland Security, which was established in order to centralize federal response in a situation like this, couldn’t figure out who was in charge or what to do, and actively obstructed aid by others. FEMA did no better, and thousands died while turf battles were being fought.

Our government’s ineptitude in the aftermath of Katrina demonstrates how little we’re getting for all our security spending. It’s unconscionable that we’re wasting our money fingerprinting foreigners, profiling airline passengers, and invading foreign countries while emergency response at home goes underfunded.

Money spent on emergency response makes us safer, regardless of what the next disaster is, whether terrorist-made or natural.

This includes good communications on the ground, good coordination up the command chain, and resources—people and supplies—that can be quickly deployed wherever they’re needed.

Similarly, money spent on intelligence-gathering makes us safer, regardless of what the next disaster is. Against terrorism, that includes the NSA and the CIA. Against natural disasters, that includes the National Weather Service and the National Earthquake Information Center.

Katrina deftly illustrated homeland security’s biggest challenge: guessing correctly. The solution is to fund security that doesn’t rely on guessing. Defending against movie plots doesn’t make us appreciably safer. Emergency response does. It lessens the damage and suffering caused by disasters, whether man-made, like 9/11, or nature-made, like Katrina.

Posted on September 11, 2005 at 8:00 AM

Criminals Learn Forensic Science

Criminals are adapting to advances in forensic science:

There is an increasing trend for criminals to use plastic gloves during break-ins and condoms during rapes to avoid leaving their DNA at the scene. Dostie describes a murder case in which the assailant tried to wash away his DNA using shampoo. Police in Manchester in the UK say that car thieves there have started to dump cigarette butts from bins in stolen cars before they abandon them. “Suddenly the police have 20 potential people in the car,” says Rutty.

The article also talks about forensic-science television shows changing the expectations of jurors.

“Jurors who watch CSI believe that those scenarios, where forensic scientists are always right, are what really happens,” says Peter Bull, a forensic sedimentologist at the University of Oxford. It means that in court, juries are not impressed with evidence presented in cautious scientific terms.

Detective sergeant Paul Dostie, of Mammoth Lakes Police Department, California, found the same thing when he conducted a straw poll of forensic investigators and prosecutors. “They all agree that jurors expect more because of CSI shows,” he says. And the “CSI effect” goes beyond juries, says Jim Fraser, director of the Centre for Forensic Science at the University of Strathclyde, UK. “Oversimplification of interpretations on CSI has led to false expectations, especially about the speed of delivery of forensic evidence,” he says.

Posted on September 9, 2005 at 7:16 AM

Movie-Plot Threats

Wired.com just published an essay by me: “Terrorists Don’t Do Movie Plots.”

Sometimes it seems like the people in charge of homeland security spend too much time watching action movies. They defend against specific movie plots instead of against the broad threats of terrorism.

We all do it. Our imaginations run wild with detailed and specific threats. We imagine anthrax spread from crop dusters. Or a contaminated milk supply. Or terrorist scuba divers armed with almanacs. Before long, we’re envisioning an entire movie plot, without Bruce Willis saving the day. And we’re scared.

Psychologically, this all makes sense. Humans have good imaginations. Box cutters and shoe bombs conjure vivid mental images. “We must protect the Super Bowl” packs more emotional punch than the vague “we should defend ourselves against terrorism.”

The 9/11 terrorists used small pointy things to take over airplanes, so we ban small pointy things from airplanes. Richard Reid tried to hide a bomb in his shoes, so now we all have to take off our shoes. Recently, the Department of Homeland Security said that it might relax airplane security rules. It’s not that there’s a lessened risk of shoes, or that small pointy things are suddenly less dangerous. It’s that those movie plots no longer capture the imagination like they did in the months after 9/11, and everyone is beginning to see how silly (or pointless) they always were.

I’m now doing a bi-weekly column for them. I will post a link to the essays when they appear on the Wired.com site, and will reprint them in the next Crypto-Gram.

Posted on September 8, 2005 at 6:57 AM

A U.S. National Firewall

This seems like a really bad idea:

Government has the right—even the responsibility—to see that its laws and regulations are enforced. The Internet is no exception. When the Internet is being used on American soil, it should comply with American law. And if it doesn’t, then the government should be able to step in and filter the illegal sites and activities.

Posted on September 7, 2005 at 3:53 PM

Shoulder Surfing Keys

Here’s a criminal who “stole” keys, the physical metal ones, by examining images of them being used:

He surreptitiously videotaped letter carriers as they opened the boxes, zooming in on their keys. Lau used those images to calculate measurements for the grooves in the keys and created brass duplicates.

[…]

“The FBI is not aware of anything else like this,” bureau spokeswoman Jerri Williams said.
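
The “calculate measurements” step is essentially quantization: manufacturers cut keys to a small set of standard depths, so a cut depth read off a zoomed video frame maps to a discrete bitting number that any key machine can reproduce. The constants below are invented for illustration.

    ROOT_DEPTH = 0.335  # depth of the shallowest cut, in inches (assumed)
    INCREMENT = 0.015   # depth step between adjacent bitting numbers (assumed)

    def bitting_from_depths(depths_inches):
        # Quantize measured cut depths to the nearest bitting number.
        return [round((d - ROOT_DEPTH) / INCREMENT) for d in depths_inches]

    print(bitting_from_depths([0.335, 0.365, 0.380, 0.350, 0.395]))
    # [0, 2, 3, 1, 4]: enough to cut a working duplicate onto a blank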

Technology causes security imbalances. Sometimes those imbalances favor the defender, and sometimes they favor the attacker. What we have here is a new application of a technology by an attacker.

Very clever.

Posted on September 7, 2005 at 11:35 AM

Lance Armstrong Accused of Doping

Lance Armstrong has been accused of using a banned substance while racing the Tour de France. From a security perspective, this isn’t very interesting. Blood and urine tests are used to detect banned substances all the time. But what is interesting is that the urine sample was from 1999, and the test was done in 2005.

Back in 1999, there was no test for the drug EPO. Now there is. Someone took an old urine sample—who knew that they stored old urine samples?—and ran the new test.

This ability of a security mechanism to go back in time is interesting, and similar to police exhuming dead bodies for new forensic analysis, or a new cryptographic technique permitting decades-old encrypted messages to be read.

It also has some serious ramifications for athletes considering using banned substances. Not only do they have to evade any tests that exist today, but they have to at least think about how they could evade any tests that might be invented in the future. You could easily imagine athletes being stripped of their records, medals, and titles decades in the future after past transgressions are discovered.

On the other hand, athletes accused of using banned substances in the past have limited means by which to defend themselves. Perhaps they will start storing samples of their own blood and urine in escrow, year after year, so they’d have well-stored and untainted bodily fluids with which to refute charges of past transgressions.

Posted on September 7, 2005 at 6:32 AM

Identity Cards Don't Help

Emily Finch, of the University of East Anglia, has researched criminals and how they adapt their fraud techniques to identity cards, especially the “chip and PIN” system that is currently being adopted in the UK. Her analysis: the security measures don’t help:

“There are various strategies that fraudsters use to get around the pin problem,” she said. “One of the things that is very clear is that it is a difficult matter for a fraudster to get hold of somebody’s card and then find out the pin.

“So the focus has been changed to finding the pin first, which is very, very easy if you are prepared to break social convention and look when people type the number in at the point of sale.”

Reliance on the technology actually reduces security, because people stop paying attention:

“One of the things we found quite alarming was how much the human element has been taken out of point-of-sale transactions,” Dr Finch said. “Point-of-sale staff are told to look away when people put their pin number in; so they don’t check at all.”

[…]

Some strategies relied on trust. Another fraudster trick was to produce a stolen card and pretend to misremember the number and search for it on a piece of paper.

Imagine, she said, someone searching for a piece of paper and saying, “Oh yes, that’s my signature”; there would be instant suspicion.

But there was utter trust in the new technology to pick up a fraudulent transaction, and criminals exploited this trust to get around the problem of having to enter a pin number.

“You go in, you put the card in, you type any number because you don’t know what it is. It won’t go through. The fraudster—because fraudsters are so good with people—says, ‘Oh, it’s no good, I haven’t got the hang of this yet. I could have sworn that was my number… I’ve probably got it confused with my other card.’

“They chat for a bit. The sales assistant, who is either disinterested or sympathetic, falls back on the old system, and swipes the card through.

“Because a relationship of empathy has already been established, and because they have already become accustomed to averting their gaze when people put pin numbers in, they don’t check the signature at all.

“So fraud is actually easier. There is very little vigilance at the point of sale any more. Fraudsters know this and they are taking advantage of it.”

I’ve been saying this kind of thing for a while, and it’s nice to read about some research that backs it up.

Other articles on the research are here, here, and here.

Posted on September 6, 2005 at 4:07 PM

Security Lessons of the Response to Hurricane Katrina

There are many, large and small, but I want to mention two that I haven’t seen discussed elsewhere.

1. The aftermath of this tragedy reflects on how poorly we’ve been spending our homeland security dollars. Again and again, I’ve said that we need to invest in 1) intelligence gathering, and 2) emergency response. These two things will help us regardless of what the terrorists are plotting, and the second helps in the event of a natural disaster. (In general, the only difference between a manmade disaster and a natural one is the cause. After a disaster occurs, it doesn’t matter.) The response by DHS and FEMA was abysmal, and demonstrated how little we’ve been getting for all our security spending. It’s unconscionable that we’re wasting our money on national ID cards, airline passenger profiling, and foreign invasions rather than emergency response at home: communications, training, transportation, coordination.

2. Redundancy, and to a lesser extent, inefficiency, are good for security. Efficiency is brittle. Redundancy results in less-brittle systems, and provides defense in depth. We need multiple organizations with overlapping capabilities, all helping in their own way: FEMA, DHS, the military, the Red Cross, etc. We need overcapacity, in water pumping capabilities, communications, emergency supplies, and so on. I wrote about this back in 2001, in opposition to the formation of the Department of Homeland Security. The government’s response to Katrina demonstrates this yet again.

Posted on September 6, 2005 at 12:15 PM

Hogwarts Security

From Karl Lembke:

In the latest Harry Potter book, we see Hogwarts implementing security precautions in order to safeguard its students and faculty.

One step that was taken was that all the students were searched – wanded, in fact – to detect any harmful magic. In addition, all mail coming in or out was checked for harmful magic.

In spite of these precautions, two students are nearly killed by cursed items.

One of the items was a poisoned bottle of mead, which made it onto school grounds and into a professor’s office.

It turned out that packages sent from various addresses in the nearby town were not checked. The addresses were trusted, and anything received from them was considered safe. When a key person was compromised (in this case, by a mind-control spell), the trusted address was no longer trustworthy, and a gaping hole in security was created.

Of course, since everyone knew everything was checked on its way into the school, no one felt the need to take any special precautions.

The moral of the story is, inadequate security can be worse than no security at all.

And while we’re on the subject, can you really render a powerful wizard helpless simply by taking away his wand? And is taking away a powerful wizard’s wand really as simple as doing something to him at the same time he is doing something else?

One, this means that you’re dead if you’re outnumbered. All it would take is two synchronized wizards, both of much lower power, to defeat a powerful wizard. And two, it means that you’re dead if you’re taken by surprise or distracted.

This seems like an enormous hole in magical defenses, one that wizards would have worked feverishly to close up generations ago.

EDITED TO ADD: Here’s a page on trust in the series.

Posted on September 4, 2005 at 3:27 PM

The Keys to the Sydney Subway

Global secrets are generally considered poor security. The problems are twofold. One, you cannot apply any granularity to the security system; someone either knows the secret or does not. And two, global secrets are brittle. They fail badly; if the secret gets out, then the bad guys have a pretty powerful secret.

This is the situation right now in Sydney, where someone stole the master key that gives access to every train in the metropolitan area, and also starts them.

Unfortunately, this isn’t a thief who got lucky. It happened twice, and it’s possible that the keys were the target:

The keys, each of which could start every train, were taken in separate robberies within hours of each other from the North Shore Line, although police believed the thefts were unrelated, a RailCorp spokeswoman said.

The first incident occurred at Gordon station when the driver of an empty train was robbed of the keys by two balaclava-clad men shortly after midnight on Sunday morning.

The second theft took place at Waverton Station on Sunday night when a driver was robbed of a bag, which contained the keys, she said.

So, what can someone do with the master key to the Sydney subway? It’s more likely a criminal than a terrorist, but even so it’s definitely a serious issue:

A spokesman for RailCorp told the paper it was taking the matter “very seriously,” but would not change the locks on its trains.

Instead, as of Sunday night, it had increased security around its sidings, with more patrols by private security guards and transit officers.

The spokesman said a “range of security measures” meant a train could not be stolen, even with the keys.

I don’t know if RailCorp should change the locks. I don’t know the risk: whether that “range of security measures” only protects against train theft—an unlikely scenario, if you ask me—or other potential scenarios as well. And I don’t know how expensive it would be to change the locks.

Another problem with global secrets is that it’s expensive to recover from a security failure.

And this certainly isn’t the first time a master key fell into the wrong hands:

Mr Graham said there was no point changing any of the metropolitan railway key locks.

“We could change locks once a week but I don’t think it reduces in any way the security threat as such because there are 2000 of these particular keys on issue to operational staff across the network and that is always going to be, I think, an issue.”

A final problem with global secrets is that it’s simply too easy to lose control of them.

Moral: Don’t rely on global secrets.
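
Electronic access-control systems avoid exactly this failure mode with key diversification: each lock holds a key derived from the master plus its own identity, so a key lifted from one train opens one train, and only the back office ever holds the master. A sketch of the idea (physical locks like RailCorp’s obviously can’t do this):

    import hmac, hashlib

    MASTER = b"kept in the back office, never in a driver's bag"

    def train_key(train_id: str) -> bytes:
        # Per-train key = HMAC(master key, train identity).
        return hmac.new(MASTER, train_id.encode(), hashlib.sha256).digest()

    # Each train's reader stores only its own derived key:
    k42 = train_key("train-0042")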

Posted on September 1, 2005 at 8:06 AM
