Blog: June 2005 Archives

Diebold Opti-Scan Voting Machine

An analysis of Diebold’s Opti-Scan (paper ballot) voting machine.

Computer expert Harri Hursti gained control over Leon County memory cards, which handle the vote-reporting from the precincts. Dr. Herbert Thompson, a security expert, took control of the Leon County central tabulator by implanting a trojan horse-like script.

Two programmers can become a lone programmer, says Hursti, who has figured out a way to control the entire central tabulator by way of a single memory card swap, and also how to make tampered polling place tapes match tampered central tabulator results. This more complex approach is untested, but based on testing performed May 26, Hursti says he has absolutely no reason to believe it wouldn’t work.

Three memory card tests demonstrated successful manipulation of election results, and showed that 1990 and 2002 FEC-required safeguards are being violated in the Diebold version 1.94 opti-scan system.

Posted on June 30, 2005 at 7:57 AM

Sandia's New Wireless Technology

When dumb PR agents happen to good organizations:

Sandia Develops Secure Ultrawideband Wireless Network

The newly developed network, said the researchers, is compatible with existing Internet protocols, which means that current Internet applications will be able to use standard transmission techniques and even high-level encryption up to and beyond 256 bits, which is currently double the amount considered essential for secure Internet transactions.

Wow. 256 is a lot of bits. I wonder where they put them all.
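
For scale, here is the standard key-length arithmetic (my own aside, nothing from Sandia's release): each added bit doubles the keyspace, so 256 bits is not "double" the security of 128 bits; it is the square of it:

```latex
\[
  2^{128} \approx 3.4\times10^{38}
  \qquad\text{versus}\qquad
  2^{256} = \bigl(2^{128}\bigr)^{2} \approx 1.2\times10^{77}
  \ \text{possible keys.}
\]
```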

Posted on June 29, 2005 at 12:54 PM

Wired on Identity Theft

This is a good editorial from Wired on identity theft.

Following are the fixes we think Congress should make:

Require businesses to secure data and levy fines against those who don’t. Congress has mandated tough privacy and security standards for companies that handle health and financial data. But the rules for credit agencies are woefully inadequate. And they don’t cover other businesses and organizations that handle sensitive personal information, such as employers, academic institutions and data brokers. Congress should mandate strict privacy and security standards for anyone who handles sensitive information, and apply tough financial penalties against companies that fail to comply.

Require companies to encrypt all sensitive customer data. Any standard created to protect data should include technical requirements to scramble the data—both in storage and during transit when data is transferred from one place to another. Recent incidents involving unencrypted Bank of America and CitiFinancial data tapes that went missing while being transferred to backup centers make it clear that companies think encryption is necessary only in certain circumstances.

Keep the plan simple and provide authority and funds to the FTC to ensure legislation is enforced. Efforts to secure sensitive data in the health and financial industries led to laws so complicated and confusing that few have been able to follow them faithfully. And efforts to monitor compliance have been inadequate. Congress should develop simpler rules tailored to each specific industry segment, and give the FTC the necessary funding to enforce them.

Keep Social Security numbers for Social Security. Social Security numbers appear on medical and voter-registration forms as well as on public records that are available through a simple internet search. This makes it all too easy for a thief to obtain the single identifying number that can lead to financial ruin for victims. Americans need a different unique identifying number specifically for credit records, with guarantees that it will never be used for authentication purposes.

Force credit agencies to scrutinize credit-card applications and verify the identity of credit-card applicants. Giving Americans easy access to credit has superseded all other considerations in the cutthroat credit-card business, helping thieves open accounts in victims’ names. Congress needs to bring sane safeguards back into the process of approving credit—even if it means adding costs and inconveniencing powerful banking and financial interests.

Extend fraud alerts beyond 90 days. The Fair Credit Reporting Act allows anyone who suspects that their personal information has been stolen to place a fraud alert on their credit record. This currently requires a creditor to take “reasonable” steps to verify the identity of anyone who applies for credit in the individual’s name. It also requires the creditor to contact the individual who placed the fraud alert on the account if they’ve provided their phone number. Both conditions apply for 90 days. Of course, nothing prevents identity thieves from waiting until the short-lived alert period expires before taking advantage of stolen information. Congress should extend the default window for credit alerts to a minimum of one year.

Allow individuals to freeze their credit records so that no one can access the records without the individuals’ approval. The current credit system opens credit reports to almost anyone who requests them. Individuals should be able to “freeze” their records and have them opened to others only when the individual contacts a credit agency and requests that it release a report to a specific entity.

Require opt-in rather than opt-out permission before companies can share or sell data. Many businesses currently allow people to decline inclusion in marketing lists, but only if customers actively request it. This system, known as opt-out, inherently favors companies by making it more difficult for consumers to escape abusive data-sharing practices. In many cases, consumers need to wade through confusing instructions, and send a mail-in form in order to be removed from pre-established marketing lists. The United States should follow an opt-in model, where companies would be forced to collect permission from individuals before they can traffic in personal data.

Require companies to notify consumers of any privacy breaches, without preventing states from enacting even tougher local laws. Some 37 states have enacted or are considering legislation requiring businesses to notify consumers of data breaches that affect them. A similar federal measure has also been introduced in the Senate. These are steps in the right direction. But the federal bill has a major flaw: It gives companies an easy out in the case of massive data breaches, where the number of people affected exceeds 500,000, or the cost of notification would exceed $250,000. In those cases, companies would not be required to notify individuals, but could comply simply by posting a notice on their websites. Congress should close these loopholes. In addition, any federal law should be written to ensure that it does not pre-empt state notification laws that take a tougher stance.

As I’ve written previously, this won’t solve identity theft. But it will make it harder and protect the privacy of everyone. These are good recommendations.

Posted on June 29, 2005 at 7:18 AM

Your ISP May Be Spying on You

From News.com:

The U.S. Department of Justice is quietly shopping around the explosive idea of requiring Internet service providers to retain records of their customers’ online activities.

Data retention rules could permit police to obtain records of e-mail chatter, Web browsing or chat-room activity months after Internet providers ordinarily would have deleted the logs—that is, if logs were ever kept in the first place. No U.S. law currently mandates that such logs be kept.

I think the big idea here is that the Internet makes a massive surveillance society so easy. And data storage will only get cheaper.

Posted on June 28, 2005 at 8:16 AM

Interview with Marcus Ranum

There’s some good stuff in this interview.

There’s enough blame for everyone.

Blame the users who don’t secure their systems and applications.

Blame the vendors who write and distribute insecure shovel-ware.

Blame the sleazebags who make their living infecting innocent people with spyware, or sending spam.

Blame Microsoft for producing an operating system that is bloated and has an ineffective permissions model and poor default configurations.

Blame the IT managers who overrule their security practitioners’ advice and put their systems at risk in the interest of convenience. Etc.

Truly, the only people who deserve a complete helping of blame are the hackers. Let’s not forget that they’re the ones doing this to us. They’re the ones who are annoying an entire planet. They’re the ones who are costing us billions of dollars a year to secure our systems against them. They’re the ones who place their desire for fun ahead of everyone on earth’s desire for peace and [the] right to privacy.

Posted on June 27, 2005 at 1:14 PM

Seagate's Full Disk Encryption

Seagate has introduced a hard drive with full-disk encryption.

The 2.5-inch drive offers full encryption of all data directly on the drive through a software key that resides on a portion of the disk nobody but the user can access. Every piece of data that crosses the interface is encrypted without any intervention by the user, said Brian Dexheimer, executive vice president for global sales and marketing at the Scotts Valley, Calif.-based company.
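
As an aside, here is a minimal software sketch of what sector-level encryption generally looks like. This is my illustration of the technique, not Seagate's implementation: the real drive does this in hardware and, per below, uses triple-DES rather than the AES shown here.

```python
# Sketch of sector-level encryption in software (illustrative only;
# the real drive does this in hardware with its own key handling).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512
key = os.urandom(32)  # on the real drive, the key never leaves the disk

def crypt_sector(key: bytes, sector_no: int, data: bytes) -> bytes:
    """Encrypt or decrypt one sector; AES-CTR is its own inverse."""
    # Derive the counter block from the sector number so that identical
    # plaintext in different sectors encrypts to different ciphertext.
    nonce = sector_no.to_bytes(16, "big")
    cipher = Cipher(algorithms.AES(key), modes.CTR(nonce))
    return cipher.encryptor().update(data)

sector = b"\x00" * SECTOR_SIZE
encrypted = crypt_sector(key, 7, sector)
assert crypt_sector(key, 7, encrypted) == sector  # same call decrypts
```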

Here’s the press release, and here’s the product spec sheet. Ignore the “TDEA 192” nonsense. It’s a typo; the product uses triple-DES, and the follow-on product will use AES.

Posted on June 27, 2005 at 7:24 AM

The Adaptability of Iraqi Insurgents

This Newsweek article on the insurgents in Iraq includes an interesting paragraph on how they adapt to American military defenses.

Counterinsurgency experts are alarmed by how fast the other side’s tactics can evolve. A particularly worrisome case is the ongoing arms race over improvised explosive devices. The first IEDs were triggered by wires and batteries; insurgents waited on the roadside and detonated the primitive devices when Americans drove past. After a while, U.S. troops got good at spotting and killing the triggermen when bombs went off. That led the insurgents to replace their wires with radio signals. The Pentagon, at frantic speed and high cost, equipped its forces with jammers to block those signals, accomplishing the task this spring. The insurgents adapted swiftly by sending a continuous radio signal to the IED; when the signal stops or is jammed, the bomb explodes. The solution? Track the signal and make sure it continues. Problem: the signal is encrypted. Now the Americans are grappling with the task of cracking the encryption on the fly and mimicking it—so far, without success. Still, IED casualties have dropped, since U.S. troops can break the signal and trigger the device before a convoy passes. That’s the good news. The bad news is what the new triggering system says about the insurgents’ technical abilities.

The CIA is worried that Iraq is becoming a far more effective breeding ground for terrorists than Afghanistan ever was, because they get real-world experience with urban terrorist-style combat.

Edited to add: Link fixed.

Posted on June 25, 2005 at 7:30 AM

SHA Cryptanalysis Paper Online

In February, I wrote about a group of Chinese researchers who broke the SHA-1 hash function. That posting was based on short notice from the researchers. Since then, many people have written me asking about the research and the actual paper, some questioning the validity of the research because of the lack of documentation.

The paper did exist; I saw a copy. They will present it at the Crypto conference in August. I believe they didn’t post it because Crypto requires that submitted papers not be previously published, and they misunderstood that to mean that it couldn’t be widely distributed in any way.

Now there’s a copy of the paper on the web. You can read “Finding Collisions in the Full SHA-1,” by Xiaoyun Wang, Yiqun Lisa Yin, and Hongbo Yu, here.

Posted on June 24, 2005 at 12:46 PM

Indian Call Center Sells Personal Information

There was yet another incident where a call center staffer was selling personal data. The data consisted of banking details of British customers, and was sold by people at an outsourced call center in India.

I predict a spate of essays warning us of the security risks of offshore outsourcing. That’s stupid; this has almost nothing to do with offshoring. It’s no different than the Lembo case, and that happened in the safe and secure United States.

There are security risks to outsourcing, and there are security risks to offshore outsourcing. But the risk illustrated in this story is the risk of malicious insiders, and that is mostly independent of outsourcing. Lousy wages, lack of ownership, a poor work environment, and so on can all increase the risk of malicious insiders, but that’s true regardless of who owns the call center or what currency the salary is paid in. Yes, it’s harder to prosecute across national boundaries, but the deterrence here is more contractual than criminal.

The problem here is people, not corporate or national boundaries.

Posted on June 24, 2005 at 9:35 AM

Talking to Strangers

In Beyond Fear I wrote: “Many children are taught never to talk to strangers, an extreme precaution with minimal security benefit.”

In talks, I’m even more direct. I think “don’t talk to strangers” is just about the worst possible advice you can give a child. Most people are friendly and helpful, and if a child is in distress, asking the help of a stranger is probably the best possible thing he can do.

This advice would have helped Brennan Hawkins, the 11-year-old boy who was lost in the Utah wilderness for four days.

The parents said Brennan had seen people searching for him on horse and ATV, but avoided them because of what he had been taught.

“He stayed on the trail, he avoided strangers,” Jody Hawkins said. “His biggest fear, he told me, was that someone would steal him.”

They said they hadn’t talked to Brennan and his four siblings about what they should do about strangers if they were lost. “This may have come to a faster conclusion had we discussed that,” Toby Hawkins said.

In a world where good guys are common and bad guys are rare, assuming a random person is a good guy is a smart security strategy. We need to help children develop their natural intuition about risk, and not give them overbroad rules.

Also in Beyond Fear, I wrote:

As both individuals and a society, we can make choices about our security. We can choose more security or less security. We can choose greater impositions on our lives and freedoms, or fewer impositions. We can choose the types of risks and security solutions we’re willing to tolerate and decide that others are unacceptable.

As individuals, we can decide to buy a home alarm system to make ourselves more secure, or we can save the money because we don’t consider the added security to be worth it. We can decide not to travel because we fear terrorism, or we can decide to see the world because the world is wonderful. We can fear strangers because they might be attackers, or we can talk to strangers because they might become friends.

Posted on June 23, 2005 at 2:40 PM

Dell Protects the Homeland

Stupidity is rampant:

I purchased a Dell server today for work, through our account representative at Dell. At the end of the order process, just before confirmation, the Dell representative said: “Federal law requires that we ask what will this server be used for?”

I asked, incredulously, “Why the hell does the federal government care?” to which the Dell representative replied “PATRIOT Act.”

I certainly feel a lot safer knowing that terrorists are on their honor to tell the truth when buying servers from Dell.

I think anyone who says “homework” is obviously lying, and should be turned in to the authorities.

Posted on June 23, 2005 at 12:00 PM

CardSystems Exposes 40 Million Identities

The personal information of over 40 million people has been hacked. The hack occurred at CardSystems Solutions, a company that processes credit card transactions. The details are still unclear. The New York Times reports that “data from roughly 200,000 accounts from MasterCard, Visa and other card issuers are known to have been stolen in the breach,” although 40 million were vulnerable. The theft was an intentional malicious computer hacking activity: the first, I think, in all these recent personal-information breaches. The rest were accidental—backup tapes gone walkabout, for example—or social engineering hacks. Someone was after this data, which implies it’s more likely to result in fraud than those peripatetic backup tapes.

CardSystems says that they found the problem, while MasterCard maintains that they did; the New York Times agrees with MasterCard. Microsoft software may be to blame. And in a weird twist, CardSystems admitted they weren’t supposed to keep the data in the first place.

The official, John M. Perry, chief executive of CardSystems Solutions…said the data was in a file being stored for “research purposes” to determine why certain transactions had registered as unauthorized or uncompleted.

Yeah, right. Research = marketing, I’ll bet.

This is exactly the sort of thing that Visa and MasterCard are trying very hard to prevent. They have imposed their own security requirements on companies—merchants, processors, whoever—that deal with credit card data. Visa has instituted a Cardholder Information Security Program (CISP). MasterCard calls its program Site Data Protection (SDP). These have been combined into a single joint security standard, PCI, which also includes Discover, American Express, JCB, and Diners Club. (More on Visa’s PCI program.)

PCI requirements encompass network security, password management, stored-data encryption, access control, monitoring, testing, policies, etc. And the credit-card companies are backing these requirements up with stiff penalties: cash fines of up to $100,000, increased transaction fees, or termination of the account. For a retailer that does most of its business via credit cards, this is an enormous incentive to comply.

These aren’t laws, they’re contractual business requirements. They’re not imposed by government; the credit card companies are mandating them to protect their brand.

Every credit card company is terrified that people will reduce their credit card usage. They’re worried that all of this press about stolen personal data, as well as actual identity theft and other types of credit card fraud, will scare shoppers off the Internet. They’re worried about how their brands are perceived by the public. And they don’t want some idiot company ruining their reputations by exposing 40 million cardholders to the risk of fraud. (Or, at least, by giving reporters the opportunity to write headlines like “CardSystems Solutions hands over 40M credit cards to hackers.”)

So independent of any laws or government regulations, the credit card companies are forcing companies that process credit card data to increase their security. Companies have to comply with PCI or face serious consequences.

Was CardSystems in compliance? They should have been in compliance with Visa’s CISP by 30 September 2004, and certainly they were in the highest service level. (PCI compliance isn’t required until 30 June 2005—about a week from now.) The reality is more murky.

After the disclosure of the security breach at CardSystems, varying accounts were offered about the company’s compliance with card association standards.

Jessica Antle, a MasterCard spokeswoman, said that CardSystems had never demonstrated compliance with MasterCard’s standards. “They were in violation of our rules,” she said.

It is not clear whether or when MasterCard intervened with the company in the past to insure compliance, but MasterCard said Friday that it had now given CardSystems “a limited amount of time” to do so.

Asked about compliance with Visa’s standards, a Visa spokeswoman, Rosetta Jones, said, “This particular processor was not following Visa’s security requirements when we found out there was a potential data compromise.”

Earlier, Mr. Perry of CardSystems said his company had been audited in December 2003 by an unspecified independent assessor and had received a seal of approval from the Visa payment associations in June 2004.

All of this demonstrates some limitations of any certification system. One, companies can take advantage of interpersonal and intercompany politics to get themselves special treatment with respect to the policies. And two, all audits rely to a great extent on self-assessment and self-disclosure. If a company is willing to lie to an auditor, it’s unlikely that it will get caught.

Unless they get really caught, like this incident.

Self-reporting only works if the punishment exceeds the crime. The reason people accurately declare what they bring into the country on their customs forms, for example, is because the penalties for lying are far more expensive than paying any duty owed.

If the credit card industry wants their PCI requirements taken seriously, they need to make an example out of CardSystems. They need to revoke whatever credit card processing license CardSystems has, to the maximum extent possible by whatever contracts they have in place. Only by making CardSystems a demonstration of what happens to someone who doesn’t comply will everyone else realize that they had better comply.

(CardSystems should also face criminal prosecution, but that’s unlikely in today’s business-friendly political environment.)

I have great hopes for PCI. I like security solutions that involve contracts between companies more than I like government intervention. Often the latter is required, but the former is more effective. Here’s PCI’s chance to demonstrate its effectiveness.

Posted on June 23, 2005 at 8:55 AM

Organized Retail Theft

There are two distinct shoplifting threats: petty shoplifting and Organized Retail Theft.

Organized retail theft (ORT) is a growing problem throughout the United States, affecting a wide range of retail establishments, including supermarkets, chain drug stores, independent pharmacies, mass merchandisers, convenience stores, and discount operations. It has become the most pressing security problem confronting retailers. ORT losses are estimated to run as high as $15 billion annually in the supermarket industry alone, and $34 billion across all retail. ORT crime is separate and distinct from petty shoplifting in that it involves professional theft rings that move quickly from community to community and across state lines to steal large amounts of merchandise that is then repackaged and sold back into the marketplace. Petty shoplifting, as defined, is limited to items stolen for personal use or consumption.

Their list of 50 most shoplifted items consists of small, expensive things with long shelf life: over-the-counter drugs, mostly.

#1 Advil tablet 50 ct

#2 Advil tablet 100 ct

#3 Aleve caplet 100 ct

#4 EPT Pregnancy Test single

#5 Gillette Sensor 10 ct

#6 Kodak 200 24 exp

#7 Similac w/iron powder – case

#8 Similac w/iron powder – single can

#9 Preparation H 12 ct

#10 Primatene tablet 24 ct

Found on BoingBoing.

Posted on June 22, 2005 at 1:06 PM

DNA Identification

Here’s an interesting application of DNA identification. Instead of searching for your DNA at the crime scene, they search for the crime-scene DNA on you.

The system, called Sentry, works by fitting a box containing a powder spray above a doorway which, once primed, goes into alert mode if the door is opened.

It then sprays the powder when there is movement in the doorway again.

The aim is to catch a burglar in the act as stolen items are being removed.

The intruder is covered in the bright red powder, which glows under ultraviolet (UV) light and can only be removed with heavy scrubbing.

However, the harmless synthetic DNA contained in the powder sinks into the skin and takes several days, depending on the person’s metabolism, to work its way out.

Posted on June 22, 2005 at 8:39 AM

Speeding Ticket Avoidance

This is a very popular security-related field, and one that every driver is at least somewhat interested in.

This site is run by an ex-policeman, and feels authoritative. He places a lot of emphasis on education; installing a fancy radar detector isn’t going to do much for you unless you know how to use it correctly.

Here’s a product that seems to counter the threat of aerial license-plate scanners.

This spray claims to make your license plate invisible to cameras. I have no idea if it works.

One final note: the ex-cop is offering a $5,000 reward for the first person who can point him to a passive laser jammer that works.

Posted on June 21, 2005 at 9:15 AM

Write Down Your Password

Microsoft’s Jesper Johansson urged people to write down their passwords.

This is good advice, and I’ve been saying it for years.

Simply, people can no longer remember passwords good enough to reliably defend against dictionary attacks, and are much more secure if they choose a password too complicated to remember and then write it down. We’re all good at securing small pieces of paper. I recommend that people write their passwords down on a small piece of paper, and keep it with their other valuable small pieces of paper: in their wallet.
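
To put rough numbers behind that claim, here is a quick back-of-the-envelope comparison; the wordlist size and character set are my own illustrative assumptions:

```python
import math

# Back-of-the-envelope comparison (illustrative numbers, not measurements):
# an attacker who tries every word in a large dictionary versus one who
# must try every random 12-character string.

dictionary_size = 200_000                      # rough size of a big wordlist
word_entropy = math.log2(dictionary_size)      # ~17.6 bits

alphabet = 94                                  # printable ASCII characters
length = 12
random_entropy = length * math.log2(alphabet)  # ~78.6 bits

print(f"dictionary word:       {word_entropy:5.1f} bits")
print(f"random 12-char string: {random_entropy:5.1f} bits")
```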

Posted on June 17, 2005 at 8:40 AM

Password Safe

Password Safe is a free Windows password-storage utility. These days, anyone who is on the Web regularly needs too many passwords, and it’s impossible to remember them all. I have long advocated writing them all down on a piece of paper and putting it in your wallet.

I designed Password Safe as another solution. It’s a small program that encrypts all of your passwords using one passphrase. The program is easy to use, and isn’t bogged down by lots of unnecessary features. Security through simplicity.
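
For the curious, here is a minimal sketch of that general design: one passphrase-derived key protecting every entry. This is not Password Safe's actual file format or algorithms; it just illustrates the idea, using the Python cryptography package.

```python
# Sketch only: derive a key from the master passphrase, then use it to
# encrypt every stored password. Not Password Safe's real format.
import base64
import os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    kdf = PBKDF2HMAC(algorithm=SHA256(), length=32, salt=salt,
                     iterations=600_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

salt = os.urandom(16)                  # stored alongside the database
vault = Fernet(key_from_passphrase(b"one good passphrase", salt))
entry = vault.encrypt(b"site=example.com user=alice password=hunter2")
print(vault.decrypt(entry))            # only the passphrase unlocks it
```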

Password Safe 2.11 is now available.

Currently, Password Safe is an open source project at SourceForge, and is run by Rony Shapiro. Thank you to him and to all the other programmers who worked on the project.

Note that my Password Safe is not the same as this, this, this, or this PasswordSafe. (I should have picked a more obscure name for the program.)

It is the same as this, for the PocketPC.

Posted on June 15, 2005 at 1:35 PM

White Powder Anthrax Hoaxes

Earlier this month, there was an anthrax scare at the Indonesian embassy in Australia. Someone sent them some white powder in an envelope, which was scary enough. Then it tested positive for bacillus. The building was decontaminated, and the staff was quarantined for twelve hours. By then, tests came back negative for anthrax.

A lot of thought went into this false alarm. The attackers obviously knew that their white powder would be quickly tested for the presence of a bacterium of the bacillus family (of which anthrax is a member), but that the bacillus would have to be cultured for a couple of days before a more exact identification could be made. So even without any anthrax, they managed to cause two days of terror.

At a guess, this incident had something to do with Schapelle Corby (yet another security-related story). Corby was arrested in Bali for smuggling drugs into the country. Her defense, widely believed in Australia, was that she was an unwitting dupe of the real drug smugglers. Supposedly, the smugglers work as airport baggage handlers and slip packages into checked baggage and remove them at the far end before reclaim. In any case, Bali has very strict drug laws and Corby was recently convicted in what Australians consider a miscarriage of justice. There have been news reports saying that there is no connection, but it just seems too obvious.

In an interesting side note, the media have revealed for the first time that 360 “white powder” incidents have taken place since 11 September 2001. This news had been suppressed by the government, which had issued D notices to the media for all such incidents. So there has been one such incident approximately every four days—an astonishing number, given Australia’s otherwise low crime rate.

Posted on June 14, 2005 at 2:41 PM

Defining "Access" in Cyberspace

I’ve been reading a lot of law journal articles. It’s interesting to read legal analyses of some of the computer security problems I’ve been wrestling with.

This is a fascinating paper on the concepts of “access” and “authorized access” in cyberspace. The abstract:

In the last twenty-five years, the federal government and all fifty states have enacted new criminal laws that prohibit unauthorized access to computers. These new laws attempt to draw a line between criminality and free conduct in cyberspace. No one knows what it means to access a computer, however, nor when access becomes unauthorized. The few courts that have construed these terms have offered divergent interpretations, and no scholars have yet addressed the problem. Recent decisions interpreting the federal statute in civil cases suggest that any breach of contract with a computer owner renders use of that computer an unauthorized access. If applied to criminal cases, this approach would broadly criminalize contract law on the Internet, potentially making millions of Americans criminals for the way they write e-mail and surf the Web.

This Article presents a comprehensive inquiry into the meaning of unauthorized access statutes. It begins by explaining why legislatures enacted unauthorized access statutes, and why early beliefs that such statutes solved the problem of computer misuse have proved remarkably naïve. Next, the Article explains how the courts have construed these statutes in an overly broad way that threatens to criminalize a surprising range of innocuous conduct involving computers. In the final section, the Article offers a normative proposal for interpreting access and authorization. This section argues that courts should reject a contract theory of authorization, and should narrow the scope of unauthorized access statutes to circumvention of code-based restrictions on computer privileges. The section justifies this proposal on several grounds. First, the proposal will best mediate the line between securing privacy and protecting the liberty of Internet users. Second, the proposal mirrors criminal law’s traditional treatment of crimes that contain a consent element. Third, the proposed approach is consistent with the basic theories of punishment. Fourth, the proposed interpretation avoids possible constitutional difficulties that may arise under the broader constructions that courts recently have favored.

It’s a long paper, but I recommend reading it if you’re interested in the legal concepts.

Posted on June 14, 2005 at 7:16 AM

Torah Security

According to Jewish law, Torahs must be identical. When you make a copy, you cannot change or add a single character. That means you can’t write “Property of….” You can’t add a serial number. You can’t make any kind of identifying marks.

This turns out to be a problem when Torahs are stolen; it’s impossible to identify that they’re stolen goods.

Now there’s a method of identifying Torahs without violating Jewish law:

Called the Universal Torah Registry, the system works like this: A synagogue mails in a form with their contact information and the number of Torahs they want to place in the system, and the registry sends back a computer-coded template for each scroll. The 3.5- by 8-inch template resembles an IBM punch card, with eight holes arranged so their position relative to one another describes a unique identification number in a proprietary code.

A rabbi uses the template to perforate the coded pattern into the margins of the scroll with a tiny needle. To keep an enterprising thief from swapping the perforated segment with a section from another stolen scroll in some kind of twisted Torah chop shop, the registry recommends applying the code to 10 different segments of the scroll. Pollack says the code contains self-authentication features that keep a thief from invalidating it by just adding an extra hole in an arbitrary location.
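
The registry's code is proprietary, so the following is only a toy illustration of the self-authentication idea: bind a check value to the ID so that an added or moved hole produces a number that fails verification.

```python
# Toy self-authenticating ID (illustrative; the real code is proprietary).
# The encoded number carries its own check value, so a thief who punches
# an extra hole, or moves one, produces a code that no longer verifies.

def encode(ident: int) -> int:
    return ident * 97 + (ident % 97)       # append a mod-97 check value

def decode(code: int):
    ident, check = divmod(code, 97)
    return ident if check == ident % 97 else None  # None means tampered

code = encode(123456)
print(decode(code))      # 123456 -- verifies as authentic
print(decode(code + 1))  # None   -- one altered "hole" breaks the check
```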

Posted on June 13, 2005 at 1:28 PM

Orlando Trusted Traveler Program

I’ve already written about what a bad idea trusted traveler programs are. The basic security intuition is that when you create two paths through security—an easy path and a hard path—you invite the bad guys to take the easy path. So the security of the sort process must make up for the security lost in the sorting. Trusted traveler fails this test; there are so many ways for the terrorists to get trusted traveler cards that the system makes it too easy for them to avoid the hard path through security.
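
Here is the sorting argument as a toy calculation (the numbers are mine, purely illustrative). Let p_h and p_e be the probabilities that the hard and easy paths catch an attacker, and q the probability that an attacker obtains a trusted traveler card:

```latex
\[
  P(\text{caught}) \;=\; q\,p_e + (1-q)\,p_h .
\]
% With p_h = 0.9, p_e = 0.5, and a modest q = 0.25:
\[
  P(\text{caught}) \;=\; 0.25 \cdot 0.5 + 0.75 \cdot 0.9 \;=\; 0.80 \;<\; p_h = 0.9 .
\]
```

Any nonzero chance of attackers getting cards drags the overall catch rate below screening everyone on the hard path, which is why the sorting process has to be strong enough to keep q near zero.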

The trusted traveler programs at various U.S. airports are all run by the TSA. A new program at Orlando Airport is run by the company Verified Identity Pass Inc.

I’ve already written about this company and what it’s doing.

And I’ve already written about the fallacy of confusing identification with security.

Posted on June 12, 2005 at 8:57 AM

Intel Quietly Adds DRM to CPUs

The new Pentium D will contain technology that can be used to support DRM.

Intel is denying it, but it sounds like they’re weaseling:

According to Intel VP Donald Whiteside, it is “an incorrect assertion that Intel has designed-in embedded DRM technologies into the Pentium D processor and the Intel 945 Express Chipset family.” Whiteside insists they are simply working with vendors who use DRM to “design their products to be compatible with the Intel platforms.”

Posted on June 11, 2005 at 7:51 AM

Risks of Pointy Knives

An article in the British Medical Journal recommends that long pointy knives be banned because they’re a stabbing risk.

Of course it’s ridiculous. (I wrote about this kind of thing two days ago, in the context of cell phones on airplanes. Banning something with good uses just because there are also bad uses is rarely a good security trade-off.)

But the researchers actually have a point—so to speak—when they say that there’s no good reason for long knives to be pointy. From the BBC:

The researchers said there was no reason for long pointed knives to be publicly available at all.

They consulted 10 top chefs from around the UK, and found such knives have little practical value in the kitchen.

None of the chefs felt such knives were essential, since the point of a short blade was just as useful when a sharp end was needed.

I do a lot of cooking, and have all my life. I never use a long knife to stab. I never use the point of a chef’s knife, or the point of any other long knife. I rarely stab at all, and when I do, I’m using a small utility knife or a petty knife.

Okay, then. Why are so many large knives pointy? Carving knives aren’t pointy. Bread knives aren’t pointy. I can rock my chef’s knife just as easily on a rounded end.

Anyone know?

Posted on June 10, 2005 at 1:17 PM

Backscatter X-Ray Technology

Backscatter X-ray technology is a method of using X rays to see inside objects. The science is complicated, but the upshot is that you can see people naked:

The application of this new x-ray technology to airport screening uses high-energy x-rays that are more likely to scatter than penetrate materials as compared to lower-energy x-rays used in medical applications. Although this type of x-ray is said to be harmless, it can move through other materials, such as clothing.

A passenger is scanned by rastering or moving a single high energy x-ray beam rapidly over their form. The signal strength of detected backscattered x-rays from a known position then allows a highly realistic image to be reconstructed. Since only Compton scattered x-rays are used, the registered image is mainly that of the surface of the object/person being imaged. In the case of airline passenger screening it is her nude form. The image resolution of the technology is high, so details of the human form of airline passengers present privacy challenges.
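
The Compton-scattering point is the key physics. The standard textbook relation (not from the article) is that a photon scattering off an electron through angle θ shifts in wavelength by

```latex
\[
  \lambda' - \lambda \;=\; \frac{h}{m_e c}\,(1 - \cos\theta),
  \qquad \frac{h}{m_e c} \approx 2.43\times10^{-12}\ \mathrm{m}.
\]
```

Because the detector sits on the same side as the source, it registers photons scattered back toward it, and those come mostly from at or near the surface of the target, which is why the image shows the body's surface rather than its interior.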

EPIC’s “Spotlight on Security” page is an excellent resource on this issue.

The TSA has recently announced a proposal to use these machines to screen airport passengers.

I’m not impressed with this security trade-off. Yes, backscatter X-ray machines might be able to detect things that conventional screening might miss. But I already think we’re spending too much effort screening airplane passengers at the expense of screening luggage and airport employees…to say nothing of the money we should be spending on non-airport security.

On the other side, these machines are expensive and the technology is incredibly intrusive. I don’t think that people should be subjected to strip searches before they board airplanes. And I believe that most people would be appalled by the prospect of security screeners seeing them naked.

I believe that there will be a groundswell of popular opposition to this idea. Aside from the usual list of pro-privacy and pro-liberty groups, I expect fundamentalist Christian groups to be appalled by this technology. I think we can get a bevy of supermodels to speak out against the invasiveness of the search.

News article

Posted on June 9, 2005 at 1:04 PM

Public Disclosure of Personal Data Loss

Citigroup announced that it lost personal data on 3.9 million people. The data was on a set of backup tapes that were sent by UPS (a package delivery service) from point A and never arrived at point B.

This is a huge data loss, and even though it is unlikely that any bad guys got their hands on the data, it will have profound effects on the security of all our personal data.

It might seem that there has been an epidemic of personal-data losses recently, but that’s an illusion. What we’re seeing are the effects of a California law that requires companies to disclose losses or thefts of personal data. It’s always been happening; only now companies have to go public with it.

As a security expert, I like the California law for three reasons. One, data on actual intrusions is useful for research. Two, alerting individuals whose data is lost or stolen is a good idea. And three, increased public scrutiny leads companies to spend more effort protecting personal data.

Think of it as public shaming. Companies will spend money to avoid the PR cost of public shaming. Hence, security improves.

This works, but there’s an attenuation effect going on. As more of these events occur, the press is less likely to report them. When there’s less noise in the press, there’s less public shaming. And when there’s less public shaming, the amount of money companies are willing to spend to avoid it goes down.

This data loss has set a new bar for reporters. Data thefts affecting 50,000 individuals will no longer be news. They won’t be reported.

The notification of individuals also has an attenuation effect. I know people in California who have a dozen notices about the loss of their personal data. When no identity theft follows, people start believing that it isn’t really a problem. (In the large, they’re right. Most data losses don’t result in identity theft. But that doesn’t mean that it’s not a problem.)

Public disclosure is good. But it’s not enough.

Posted on June 8, 2005 at 4:45 PM

Risks of Cell Phones on Airplanes

Everyone—except those who like peace and quiet—thinks it’s a good idea to allow cell phone calls on airplanes, and work is underway on the technical details. But the U.S. government is worried that terrorists might make telephone calls from airplanes.

If the mobile phone ban were lifted, law enforcement authorities worry an attacker could use the device to coordinate with accomplices on the ground, on another flight or seated elsewhere on the same plane.

If mobile phone calls are to be allowed during flights, the law enforcement agencies urged that users be required to register their location on a plane before placing a call and that officials have fast access to call identification data.

“There is a short window of opportunity in which action can be taken to thwart a suicidal terrorist hijacking or remedy other crisis situations on board an aircraft,” the agencies said.

This is beyond idiotic. Again and again, we hear the argument that a particular technology can be used for bad things, so we have to ban or control it. The problem is that when we ban or control a technology, we also deny ourselves some of the good things it can be used for. Security is always a trade-off. Almost all technologies can be used for both good and evil; in Beyond Fear, I call them “dual use” technologies. Most of the time, the good uses far outweigh the evil uses, and we’re much better off as a society embracing the good uses and dealing with the evil uses some other way.

We don’t ban cars because bank robbers can use them to get away faster. We don’t ban cell phones because drug dealers use them to arrange sales. We don’t ban money because kidnappers use it. And finally, we don’t ban cryptography because the bad guys use it to keep their communications secret. In all of these cases, the benefit to society of having the technology is much greater than the benefit to society of controlling, crippling, or banning the technology.

And, of course, security countermeasures that force the attackers to make a minor modification in their tactics aren’t very good trade-offs. Banning cell phones on airplanes only makes sense if the terrorists are planning to use cell phones on airplanes, and will give up and not bother with their attack because they can’t. If their plan doesn’t involve air-to-ground communications, or if it doesn’t involve air travel at all, then the security measure is a waste. And even worse, we denied ourselves all the good uses of the technology in the process.

Security officials are also worried that personal phone use could increase the risk that a remotely controlled bomb will be used to down an airliner. But they acknowledged simple radio-controlled explosive devices have been used in the past on planes and the first line of defence was security checks at airports.

Still, they said that “the departments believe that the new possibilities generated by airborne passenger connectivity must be recognized.”

That last sentence got it right. New possibilities, both good and bad.

Posted on June 8, 2005 at 2:40 PM

TSA Abuse of Power

Woman accidentally leaves a knife in her carry-on luggage, where it’s discovered by screeners.

She says screeners refused to give her paperwork or documentation of her violation, documentation of the pending fine, or a copy of the photograph of the knife.

“They said ‘no’ and they said it’s a national security issue. And I said what about my constitutional rights? And they said ‘not at this point … you don’t have any’.”

Posted on June 7, 2005 at 4:10 PM

U.S. Medical Privacy Law Gutted

In the U.S., medical privacy is largely governed by a 1996 law called HIPAA. Among many other provisions, HIPAA regulates the privacy and security surrounding electronic medical records. HIPAA specifies civil penalties against companies that don’t comply with the regulations, as well as criminal penalties against individuals and corporations who knowingly steal or misuse patient data.

The civil penalties have long been viewed as irrelevant by the health care industry. Now the criminal penalties have been gutted:

An authoritative new ruling by the Justice Department sharply limits the government’s ability to prosecute people for criminal violations of the law that protects the privacy of medical records.

The criminal penalties, the department said, apply to insurers, doctors, hospitals and other providers—but not necessarily their employees or outsiders who steal personal health data.

In short, the department said, people who work for an entity covered by the federal privacy law are not automatically covered by that law and may not be subject to its criminal penalties, which include a $250,000 fine and 10 years in prison for the most serious violations.

This is a complicated issue. Peter Swire worked extensively on this bill as the President’s Chief Counselor for Privacy, and I am going to quote him extensively. First, a story about someone who was convicted under the criminal part of this statute.

In 2004 the U.S. Attorney in Seattle announced that Richard Gibson was being indicted for violating the HIPAA privacy law. Gibson was a phlebotomist (a lab assistant) in a hospital. While at work he accessed the medical records of a person with a terminal cancer condition. Gibson then got credit cards in the patient’s name and ran up over $9,000 in charges, notably for video game purchases. In a statement to the court, the patient said he “lost a year of life both mentally and physically dealing with the stress” of dealing with collection agencies and other results of Gibson’s actions. Gibson signed a plea agreement and was sentenced to 16 months in jail.

According to this Justice Department ruling, Gibson was wrongly convicted. I presume his attorney is working on the matter, and I hope he can be re-tried under our identity theft laws. But because Gibson (or someone else like him) was working in his official capacity, he cannot be prosecuted under HIPAA. And because Gibson (or someone like him) was doing something not authorized by his employer, the hospital cannot be prosecuted under HIPAA.

The healthcare industry has been opposed to HIPAA from the beginning, because it puts constraints on their business in the name of security and privacy. This ruling comes after intense lobbying by the industry at the Department of Health and Human Services and the Justice Department, and is the result of an HHS request for an opinion.

From Swire’s analysis of the Justice Department ruling:

For a law professor who teaches statutory interpretation, the OLC opinion is terribly frustrating to read. The opinion reads like a brief for one side of an argument. Even worse, it reads like a brief that knows it has the losing side but has to come out with a predetermined answer.

I’ve been to my share of HIPAA security conferences. To the extent that big health is following the HIPAA law—and to a large extent, they’re waiting to see how it’s enforced—they are doing so because of the criminal penalties. They know that the civil penalties aren’t that large, and are a cost of doing business. But the criminal penalties were real. Now that they’re gone, the pressure on big health to protect patient privacy is greatly diminished.

Again Swire:

The simplest explanation for the bad OLC opinion is politics. Parts of the health care industry lobbied hard to cancel HIPAA in 2001. When President Bush decided to keep the privacy rule—quite possibly based on his sincere personal views—the industry efforts shifted direction. Industry pressure has stopped HHS from bringing a single civil case out of the 13,000 complaints. Now, after a U.S. Attorney’s office had the initiative to prosecute Mr. Gibson, senior officials in Washington have clamped down on criminal enforcement. The participation of senior political officials in the interpretation of a statute, rather than relying on staff attorneys, makes this political theory even more convincing.

This kind of thing is bigger than the security of the healthcare data of Americans. Our administration is trying to collect more data in its attempt to fight terrorism. Part of that is convincing people—both Americans and foreigners—that this data will be protected. When we gut privacy protections because they might inconvenience business, we’re telling the world that privacy isn’t one of our core concerns.

If the administration doesn’t believe that we need to follow its medical data privacy rules, what makes you think they’re following the FISA rules?

Posted on June 7, 2005 at 12:15 PM

Accuracy of Commercial Data Brokers

PrivacyActivism has released a study of ChoicePoint and Acxiom, two of the U.S.’s largest data brokers. The study looks at accuracy of information and responsiveness to requests for reports.

It doesn’t look good.

From the press release:

100% of the eleven participants in the study discovered errors in background check reports provided by ChoicePoint. The majority of participants found errors in even the most basic biographical information: name, social security number, address and phone number (in 67% of Acxiom reports, 73% of ChoicePoint reports). Moreover, over 40% of participants did not receive their reports from Acxiom—and the ones who did had to wait an average of three months from the time they requested their information until they received it.

I spoke with Deborah Pierce, the Executive Director of PrivacyActivism. She made a couple of interesting points.

First, it was very difficult for them to find a legal way to do this study. There are no mechanisms for any kind of oversight of the industry. They had to find companies who were doing background checks on employees anyway, and who felt that participating in this study with PrivacyActivism was important. Then those companies asked their employees if they wanted to anonymously participate in the study.

Second, they were surprised at just how bad the data is. The most shocking error was that two people out of eleven were listed as corporate directors of companies that they had never heard of. This can’t possibly be statistically meaningful, but it is certainly scary.

Posted on June 7, 2005 at 7:45 AM

Attack Trends: 2004 and 2005

Counterpane Internet Security, Inc., monitors more than 450 networks in 35 countries, in every time zone. In 2004 we saw 523 billion network events, and our analysts investigated 648,000 security “tickets.” What follows is an overview of what’s happening on the Internet right now, and what we expect to happen in the coming months.

In 2004, 41 percent of the attacks we saw were unauthorized activity of some kind, 21 percent were scanning, 26 percent were unauthorized access, 9 percent were DoS (denial of service), and 3 percent were misuse of applications.

Over the past few months, the two attack vectors that we saw in volume were against the Windows DCOM (Distributed Component Object Model) interface of the RPC (remote procedure call) service and against the Windows LSASS (Local Security Authority Subsystem Service). These seem to be the current favorites for virus and worm writers, and we expect this trend to continue.

The virus trend doesn’t look good. In the last six months of 2004, we saw a plethora of attacks based on browser vulnerabilities (such as GDI-JPEG image vulnerability and IFRAME) and an increase in sophisticated worm and virus attacks. More than 1,000 new worms and viruses were discovered in the last six months alone.

In 2005, we expect to see ever-more-complex worms and viruses in the wild, incorporating complex behavior: polymorphic worms, metamorphic worms, and worms that make use of entry-point obscuration. For example, SpyBot.KEG is a sophisticated vulnerability assessment worm that reports discovered vulnerabilities back to the author via IRC channels.

We expect to see more blended threats: exploit code that combines malicious code with vulnerabilities in order to launch an attack. We expect Microsoft’s IIS (Internet Information Services) Web server to continue to be an attractive target. As more and more companies migrate to Windows 2003 and IIS 6, however, we expect attacks against IIS to decrease.

We also expect to see peer-to-peer networking used as a vector to launch viruses.

Targeted worms are another trend we’re starting to see. Recently there have been worms that use third-party information-gathering techniques, such as Google, for advanced reconnaissance. This leads to a more intelligent propagation methodology; instead of propagating scattershot, these worms are focusing on specific targets. By identifying targets through third-party information gathering, the worms reduce the noise they would normally make when randomly selecting targets, thus increasing the window of opportunity between release and first detection.

Another 2004 trend that we expect to continue in 2005 is crime. Hacking has moved from a hobbyist pursuit with a goal of notoriety to a criminal pursuit with a goal of money. Hackers can sell unknown vulnerabilities—“zero-day exploits”—on the black market to criminals who use them to break into computers. Hackers with networks of hacked machines can make money by selling them to spammers or phishers. They can use them to attack networks. We have started seeing criminal extortion over the Internet: hackers with networks of hacked machines threatening to launch DoS attacks against companies. Most of these attacks are against fringe industries—online gambling, online computer gaming, online pornography—and against offshore networks. The more these extortions are successful, the more emboldened the criminals will become.

We expect to see more attacks against financial institutions, as criminals look for new ways to commit fraud. We also expect to see more insider attacks with a criminal profit motive. Already most of the targeted attacks—as opposed to attacks of opportunity—originate from inside the attacked organization’s network.

We also expect to see more politically motivated hacking, whether against countries, companies in “political” industries (petrochemicals, pharmaceuticals, etc.), or political organizations. Although we don’t expect to see terrorism occur over the Internet, we do expect to see more nuisance attacks by hackers who have political motivations.

The Internet is still a dangerous place, but we don’t foresee people or companies abandoning it. The economic and social reasons for using the Internet are still far too compelling.

This essay originally appeared in the June 2005 issue of Queue.

Posted on June 6, 2005 at 1:02 PM

Counterfeiting in the Sudan

It’s an NPR audio story: “Peace Also Brings New Currency to Southern Sudan.”

Sudanese currency is printed on plain paper with very inconsistent color and image quality, and has no security features—not even serial numbers. How does that work?

While [he] concedes the bills are poorly printed, he’s not worried about counterfeiting. This is because anyone who does it will be put in front of a firing squad and shot.

That’s one way to solve the problem.

Posted on June 6, 2005 at 7:46 AM

Attack on the Bluetooth Pairing Process

There’s a new cryptographic result against Bluetooth. Yaniv Shaked and Avishai Wool of Tel Aviv University in Israel have figured out how to recover the PIN by eavesdropping on the pairing process.

Pairing is an important part of Bluetooth. It’s how two devices—a phone and a headset, for example—associate themselves with one another. They generate a shared secret that they use for all future communication. Pairing is why, when on a crowded subway, your Bluetooth devices don’t link up with all the other Bluetooth devices carried by everyone else.

According to the Bluetooth specification, PINs can be 8-128 bits long. Unfortunately, most manufacturers have standardized on a four-decimal-digit PIN. This attack can crack that 4-digit PIN in less than 0.3 seconds on an old Pentium III 450MHz computer, and in 0.06 seconds on a Pentium IV 3GHz HT computer.

At first glance, this attack isn’t a big deal. It only works if you can eavesdrop on the pairing process. Pairing is something that occurs rarely, and generally in the safety of your home or office. But the authors have figured out how to force a pair of Bluetooth devices to repeat the pairing process, allowing them to eavesdrop on it. They pretend to be one of the two devices, and send a message to the other claiming to have forgotten the link key. This prompts the other device to discard the key, and the two then begin a new pairing session.
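
As a rough sketch of the attack's structure: once the pairing exchange has been captured, recovering the PIN is a brute-force loop over all 10,000 four-digit values. The real attack recomputes Bluetooth's SAFER+-based E22/E21/E1 functions at each step; the hash below is only a stand-in for them.

```python
import hashlib

# Sketch of the brute-force structure only. The real Shaked-Wool attack
# recomputes Bluetooth's E22/E21/E1 (SAFER+-based) functions; sha256 here
# is a stand-in, NOT the actual pairing algorithm.

def handshake(pin: str, nonce: bytes) -> bytes:
    return hashlib.sha256(pin.encode() + nonce).digest()

nonce = bytes(16)                       # value an eavesdropper would capture
observed = handshake("4711", nonce)     # victim's (unknown) PIN in action

for n in range(10_000):                 # every four-decimal-digit PIN
    candidate = f"{n:04d}"
    if handshake(candidate, nonce) == observed:
        print("PIN recovered:", candidate)
        break
```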

Taken together, this is an impressive result. I can’t be sure, but I believe it would allow an attacker to take control of someone’s Bluetooth devices. Certainly it allows an attacker to eavesdrop on someone’s Bluetooth network.

News story here.

Posted on June 3, 2005 at 10:19 AM

Billions Wasted on Anti-Terrorism Security

Recently there have been a bunch of news articles about how lousy counterterrorism security is in the United States, how billions of dollars have been wasted on security since 9/11, and how much of what was purchased doesn’t work as advertised.

The first is from the May 8 New York Times (available at the website for pay, but there are copies here and here):

After spending more than $4.5 billion on screening devices to monitor the nation’s ports, borders, airports, mail and air, the federal government is moving to replace or alter much of the antiterrorism equipment, concluding that it is ineffective, unreliable or too expensive to operate.

Many of the monitoring tools—intended to detect guns, explosives, and nuclear and biological weapons—were bought during the blitz in security spending after the attacks of Sept. 11, 2001.

In its effort to create a virtual shield around America, the Department of Homeland Security now plans to spend billions of dollars more. Although some changes are being made because of technology that has emerged in the last couple of years, many of them are planned because devices currently in use have done little to improve the nation’s security, according to a review of agency documents and interviews with federal officials and outside experts.

From another part of the article:

Among the problems:

  • Radiation monitors at ports and borders that cannot differentiate between radiation emitted by a nuclear bomb and naturally occurring radiation from everyday material like cat litter or ceramic tile.
  • Air-monitoring equipment in major cities that is only marginally effective because not enough detectors were deployed and were sometimes not properly calibrated or installed. They also do not produce results for up to 36 hours—long after a biological attack would potentially infect thousands of people.
  • Passenger-screening equipment at airports that auditors have found is no more likely than before federal screeners took over to detect whether someone is trying to carry a weapon or a bomb aboard a plane.
  • Postal Service machines that test only a small percentage of mail and look for anthrax but no other biological agents.

The Washington Post had a series of articles. The first lists some more problems:

  • The contract to hire airport passenger screeners grew to $741 million from $104 million in less than a year. The screeners are failing to detect weapons at roughly the same rate as shortly after the attacks.
  • The contract for airport bomb-detection machines ballooned to at least $1.2 billion from $508 million over 18 months. The machines have been hampered by high false-alarm rates.
  • A contract for a computer network called US-VISIT to screen foreign visitors could cost taxpayers $10 billion. It relies on outdated technology that puts the project at risk.
  • Radiation-detection machines worth a total of a half-billion dollars deployed to screen trucks and cargo containers at ports and borders have trouble distinguishing between highly enriched uranium and common household products. The problem has prompted costly plans to replace the machines.

The second is about border security.

And more recently, a New York Times article on how lousy port security is.

There are a lot of morals here: the problems of believing companies that have something to sell you, the difficulty of making technological security solutions work, the problems with making major security changes quickly, the mismanagement that comes from any large bureaucracy like the DHS, and the wastefulness of defending potential terrorist targets instead of broadly trying to deal with terrorism.

Posted on June 3, 2005 at 8:17 AM • 29 Comments

Deep Throat Tradecraft

The politics is certainly interesting, but I am impressed with Felt’s tradecraft. Read Bob Woodward’s description of how he would arrange secret meetings with Felt.

I tried to call Felt, but he wouldn’t take the call. I tried his home in Virginia and had no better luck. So one night I showed up at his Fairfax home. It was a plain-vanilla, perfectly kept, everything-in-its-place suburban house. His manner made me nervous. He said no more phone calls, no more visits to his home, nothing in the open.

I did not know then that in Felt’s earliest days in the FBI, during World War II, he had been assigned to work on the general desk of the Espionage Section. Felt learned a great deal about German spying in the job, and after the war he spent time keeping suspected Soviet agents under surveillance.

So at his home in Virginia that summer, Felt said that if we were to talk it would have to be face to face where no one could observe us.

I said anything would be fine with me.

We would need a preplanned notification system—a change in the environment that no one else would notice or attach any meaning to. I didn’t know what he was talking about.

If you keep the drapes in your apartment closed, open them and that could signal me, he said. I could check each day or have them checked, and if they were open we could meet that night at a designated place. I liked to let the light in at times, I explained.

We needed another signal, he said, indicating that he could check my apartment regularly. He never explained how he could do this.

Feeling under some pressure, I said that I had a red cloth flag, less than a foot square—the kind used as warnings on long truck loads—that a girlfriend had found on the street. She had stuck it in an empty flowerpot on my apartment balcony.

Felt and I agreed that I would move the flowerpot with the flag, which usually was in the front near the railing, to the rear of the balcony if I urgently needed a meeting. This would have to be important and rare, he said sternly. The signal, he said, would mean we would meet that same night about 2 a.m. on the bottom level of an underground garage just over the Key Bridge in Rosslyn.

Felt said I would have to follow strict countersurveillance techniques. How did I get out of my apartment?

I walked out, down the hall, and took the elevator.

Which takes you to the lobby? he asked.

Yes.

Did I have back stairs to my apartment house?

Yes.

Use them when you are heading for a meeting. Do they open into an alley?

Yes.

Take the alley. Don’t use your own car. Take a taxi to several blocks from a hotel where there are cabs after midnight, get dropped off and then walk to get a second cab to Rosslyn. Don’t get dropped off directly at the parking garage. Walk the last several blocks. If you are being followed, don’t go down to the garage. I’ll understand if you don’t show. All this was like a lecture. The key was taking the necessary time—one to two hours to get there. Be patient, serene. Trust the prearrangements. There was no fallback meeting place or time. If we both didn’t show, there would be no meeting.

Felt said that if he had something for me, he could get me a message. He quizzed me about my daily routine, what came to my apartment, the mailbox, etc. The Post was delivered outside my apartment door. I did have a subscription to the New York Times. A number of people in my apartment building near Dupont Circle got the Times. The copies were left in the lobby with the apartment number. Mine was No. 617, and it was written clearly on the outside of each paper in marker pen. Felt said if there was something important he could get to my New York Times—how, I never knew. Page 20 would be circled, and the hands of a clock in the lower part of the page would be drawn to indicate the time of the meeting that night, probably 2 a.m., in the same Rosslyn parking garage.

The relationship was a compact of trust; nothing about it was to be discussed or shared with anyone, he said.

How he could have made a daily observation of my balcony is still a mystery to me. At the time, before the era of intensive security, the back of the building was not enclosed, so anyone could have driven in the back alley to observe my balcony. In addition, my balcony and the back of the apartment complex faced onto a courtyard or back area that was shared with a number of other apartment or office buildings in the area. My balcony could have been seen from dozens of apartments or offices, as best I can tell.

A number of embassies were located in the area. The Iraqi Embassy was down the street, and I thought it possible that the FBI had surveillance or listening posts nearby. Could Felt have had the counterintelligence agents regularly report on the status of my flag and flowerpot? That seems highly unlikely, if not impossible.

Posted on June 2, 2005 at 4:31 PM • 27 Comments

Stupid People Purchase Fake Concert Tickets

From the Boston Herald:

Instead of rocking with Bono and The Edge, hundreds of U2 fans were forced to “walk away, walk away” from the sold-out FleetCenter show Tuesday night when their scalped tickets proved bogus.

Some heartbroken fans broke down in tears as they were turned away clutching worthless pieces of paper they shelled out as much as $2,000 for.

You might think this was some fancy counterfeiting scheme, but no.

It took Whelan and his staff a while to figure out what was going on, but a pattern soon emerged. The counterfeit tickets mostly were computer printouts bought online from cyberscalpers.

Online tickets are a great convenience. They contain a unique barcode. You can print as many as you like, but the barcode scanners at the concert door will only accept each barcode once.
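
The accept-once check behind this is simple to build. Here is a minimal sketch of first-scan-wins validation; the class name and barcode format are hypothetical, and a real venue would back the redeemed set with a database shared across every gate.

```python
# Sketch of first-scan-wins barcode validation: a barcode admits
# whoever presents it first, and every later copy is rejected.
class TicketScanner:
    def __init__(self, issued_barcodes: set[str]) -> None:
        self.issued = set(issued_barcodes)   # barcodes the box office sold
        self.redeemed: set[str] = set()      # barcodes already scanned tonight

    def admit(self, barcode: str) -> bool:
        if barcode not in self.issued:
            return False                     # never sold: a pure counterfeit
        if barcode in self.redeemed:
            return False                     # a copy got here first
        self.redeemed.add(barcode)
        return True

scanner = TicketScanner({"U2-FLEET-0001", "U2-FLEET-0002"})
print(scanner.admit("U2-FLEET-0001"))  # True: first presentation wins
print(scanner.admit("U2-FLEET-0001"))  # False: the duplicate bounces
```

Whoever scans first gets in; the venue never has to decide which printout is the “original.”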

Only an idiot would buy a printout from a scalper, because there’s no way to verify that he will only sell it once. This is probably obvious to anyone reading this, but it turns out that it’s not obvious to everyone.

“On an average concert night we have zero, zilch, zip problems with counterfeit tickets,” Delaney said. “Apparently, U2 has whipped this city into such a frenzy that people are willing to take a risk.”

I find this fascinating. Online verification of authorization tokens is supposed to make counterfeiting more difficult, because it assumes the physical token can be copied. But the scheme can’t protect buyers who believe that the physical token is unique.

Note: Another write-up of the same story is here.

Posted on June 2, 2005 at 2:10 PM • 24 Comments

Battlefield RFID Listening Rocks

From the Financial Times:

The US military is developing miniature electronic sensors disguised as rocks that can be dropped from an aircraft and used to help detect the sound of approaching enemy combatants.

The devices, which would be no larger than a golf ball, could be ready for use in about 18 months. They use tiny silicon chips and radio frequency identification (RFID) technology that is so sensitive that it can detect the sound of a human footfall at 20ft to 30ft. The project is being carried out by scientists at North Dakota State University, which has licensed nano-technology processes from Alien Technology, a California-based commercial manufacturer of RFID tags for supermarkets.

This kind of thing has been discussed for a while. One of the best discussions is still Martin Libicki’s paper from the mid-1990s, “The Mesh and the Net: Speculations on Armed Conflict in a Time of Free Silicon.” (It’s available as a book, and online.)

Posted on June 2, 2005 at 8:14 AM • 11 Comments

DHS Enforces Copyright

Why is the Department of Homeland Security involved in copyright issues?

Agents shut down a popular Web site that allegedly had been distributing copyrighted music and movies, including versions of Star Wars Episode III: Revenge of the Sith. Homeland Security agents from several divisions served search warrants on 10 people around the country suspected of being involved with the Elite Torrents site, and took over the group’s main server.

Shouldn’t they be spending their resources on matters of national security instead of worrying about who is downloading the new Star Wars movie? Here’s the DHS’s mission statement, in case anyone is unsure what they’re supposed to be doing.

We will lead the unified national effort to secure America. We will prevent and deter terrorist attacks and protect against and respond to threats and hazards to the nation. We will ensure safe and secure borders, welcome lawful immigrants and visitors, and promote the free-flow of commerce.

I simply don’t believe that running down file sharers counts under “promote the free-flow of commerce.” That’s more along the lines of checking incoming shipping for smuggled nuclear bombs without shutting down our seaports.

Edited to add: Steve Wildstrom of Business Week left this comment, which seems to explain matters:

The DHS involvement turns out to be not the least bit mysterious. DHS is a sprawling agglomeration of agencies and the actual unit involved was Immigration and Customs Enforcement, a/k/a the Customs Service. Its involvement arose because the pirated copy of Star Wars apparently originated outside the U.S. and Customs is routinely involved in the interception and seizure of material entering the U.S. in violation of copyright or trademark laws. In Washington, for example, Customs agents regularly bust street vendors selling T-shirts with unlicensed Disney characters and other trademarked and copyrighted stuff.

The Secret Service’s role in computer crime enforcement arose from its anti-counterfeiting activities which extended to electronic crimes against financial institutions and cyber-crime in general. But they aren’t very good at it (anyone remember the Steve Jackson Games fiasco?) and the functions would probably best be turned over to another agency.

Posted on June 1, 2005 at 2:31 PM • 36 Comments

Spelling Errors as a Counterfeiting Defense

This is a weird rumor.

ID cards in Belgium are being printed with intentional misspellings in an attempt to thwart potential fraudsters.

Four circular arcs on the ID cards show the country’s name in different languages—French, Dutch, German and English. According to the article, the German and English arcs will be spelled incorrectly, and misspellings will also appear elsewhere on the cards. The idea is that people making counterfeit cards won’t notice the misspellings on the originals and will print the fraudulent cards with the names spelled properly.

More information is here:

To trick fraudsters, the Home Office has introduced three circular arcs on the card—just beneath the identity photos—where you will find the name of the country in the official languages spoken in Belgium—French, Dutch and German, as well as in English. But instead of ‘Belgien’ in German, the ID card incorrectly uses the name ‘Belgine’ and instead of ‘Belgium’ in English, the card reads ‘Belguim’. Vanneste has promised other errors will be printed on the card to “further confuse fraudsters”. With any luck, these will not be revealed.

I’m not impressed with this as a countermeasure. It’s certainly true that poor counterfeits will have all sorts of noticeable errors—and correct spelling might well be one of them. But the more people who know about the misspellings, the less likely a counterfeiter is to get them wrong. And even when a counterfeiter does get them wrong, a checker who doesn’t know about the deliberate errors won’t notice anything amiss.

I’m all for hard-to-counterfeit features in ID cards. But why base them on spelling?

Posted on June 1, 2005 at 7:58 AM • 45 Comments
