Blog: February 2006 Archives

Face Recognition Comes to Bars

BioBouncer is a face recognition system intended for bars:

Its camera snaps customers entering clubs and bars, and facial recognition software compares them with stored images of previously identified troublemakers. The technology alerts club security to image matches, while innocent images are automatically flushed at the end of each night, Dussich said. Various clubs can share databases through a virtual private network, so belligerent drunks might find themselves unwelcome in all their neighborhood bars.

Anyone want to guess how long that "automatically flushed at the end of each night" will last? This data has enormous value. Insurance companies will want to know if someone was in a bar before a car accident. Employers will want to know if their employees were drinking before work -- think airplane pilots. Private investigators will want to know who walked into a bar with whom. The police will want to know all sorts of things. Lots of people will want this data -- and they'll all be willing to pay for it.

And the data will be owned by the bars that collect it. They can choose to erase it, or they can choose to sell it to data aggregators like Acxiom.

It's rarely the initial application that's the problem. It's the follow-on applications. It's the function creep. Before you know it, we'll all be identified the moment we walk into a commercial building. We will all lose privacy, and liberty, and freedom as a result.

Posted on February 28, 2006 at 3:47 PM | 52 Comments

Quantum Computing Just Got More Bizarre

You don't even have to turn it on:

With the right set-up, the theory suggested, the computer would sometimes give an answer even though the program did not run. And now researchers from the University of Illinois at Urbana-Champaign have improved on the original design and built a non-running quantum computer that really works.

So now, even turning the machine off won't necessarily prevent hackers from stealing passwords.

And as long as we're on the topic of quantum computing, here's a piece of quantum snake oil:

A University of Toronto professor says he can now use a photon of light to smash through the most sophisticated computer theft schemes that hackers can devise.

EDITED TO ADD (3/1): More information about the University of Illinois result is here.

Posted on February 28, 2006 at 1:14 PM | 27 Comments

DNA Surveillance in the UK

Wholesale surveillance from the UK:

About 4,000 men working and living in South Croydon are being asked to voluntarily give their DNA as part of the hunt for a teenage model's killer.

Well, sort of voluntarily:

"It is an entirely voluntary process. None of those DNA samples or finger prints will be used to check out any other unsolved crimes.

"Obviously if someone does refuse then each case will be reviewed on its own merits.

Did the detective chief inspector just threaten those 4,000 men? Sure seems that way to me.

Posted on February 28, 2006 at 7:31 AM | 61 Comments

Kent Robbery

Something like 50 million pounds was stolen from a banknote storage depot in the UK. BBC has a good chronology of the theft.

The Times writes:

Large-scale cash robbery was once a technical challenge: drilling through walls, short-circuiting alarms, gagging guards and stationing the get-away car. Today, the weak points in the banks' defences are not grilles and vaults, but human beings. Stealing money is now partly a matter of psychology. The success of the Tonbridge robbers depended on terrifying Mr Dixon into opening the doors. They had studied their victim. They knew the route he took home, and how he would respond when his wife and child were in mortal danger. It did not take gelignite to blow open the vaults; it took fear, in the hostage technique known as "tiger kidnapping", so called because of the predatory stalking that precedes it. Tiger kidnapping is the point where old-fashioned crime meets modern terrorism.

Posted on February 27, 2006 at 12:26 PM | 46 Comments

More on Port Security

From Defective Yeti:

Sark Defends Port Deal

Sark today sought to quell the growing controversy over his decision to grant the MCP control of several major ports throughout the region.

"I believe that this arrangement with the Master Control Program should go forward," Sark told reporters aboard Solar Sailer One. He emphasized that security would continue to be handled by Tank and Recognizer programs, with the MCP only being in charge of port operations.

But Dumont, guardian of the I/O towers, voiced skepticism. "I could understand ceding authority over ports 21 and 80," said Dumont. "But port 443? That's supposed to be secure!"

The public's reaction to the plan has also been overwhelmingly negative. "No no no," said a bit upon hearing the news. "No no no no." Others were more blunt. "Sark should be de-rezzed for even proposing this," said Ram, a financial program.

Sark, who has repeatedly denied having ties to the MCP, has insisted that the hand-over go through, and says that he will vigorously resist any effort to block it. But programs such as Yori are equally adamant that the deal be scuttled. "My User," she said, "have we already forgotten the lessons of 1000222846?"

Posted on February 27, 2006 at 6:12 AM | 20 Comments

Friday Squid Blogging: Semi-Truck of Squid Overturns

Last year in California:

An 18-wheel semi-truck overturned east of Murphy Crossing Road on Riverside Drive on Wednesday morning, spilling 38,500 pounds of frozen squid and taking down a power pole, cutting electricity to about 1,100 people in the Aromas area.


The $22,600 load of squid, caught in Ventura, was packaged and frozen at Del Mar Seafoods at 331 Ford St., where it was loaded into Ramirez's truck.

Posted on February 24, 2006 at 4:10 PM | 14 Comments

Distributed Enigma Cryptanalysis

And you can help:

The M4 Project is an effort to break 3 original Enigma messages with the help of distributed computing. The signals were intercepted in the North Atlantic in 1942 and are believed to be unbroken.

EDITED TO ADD (3/8): One message has been broken.
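A search like the M4 Project's works by carving the enormous Enigma key space into independent work units that volunteer machines claim and exhaust. The sketch below shows only that chunking idea: the rotor names match the four-rotor naval Enigma, but the chunk size and the work-unit scheme are illustrative assumptions, not the M4 Project's actual protocol.

```python
from itertools import permutations

# Hypothetical sketch of splitting an Enigma key search into work
# units, as a distributed effort like the M4 Project must do.  Rotor
# names match the four-rotor naval machine (M4); the chunking scheme
# itself is an illustrative assumption.

ROTORS = ["I", "II", "III", "IV", "V", "VI", "VII", "VIII"]
GREEK_ROTORS = ["Beta", "Gamma"]  # fourth (thin) rotor on the M4

def wheel_orders():
    """Yield every (greek, left, middle, right) rotor order."""
    for greek in GREEK_ROTORS:
        for order in permutations(ROTORS, 3):
            yield (greek,) + order

def work_units(chunk_size=16):
    """Group rotor orders into fixed-size chunks that independent
    clients can claim, search exhaustively, and report back on."""
    batch = []
    for order in wheel_orders():
        batch.append(order)
        if len(batch) == chunk_size:
            yield batch
            batch = []
    if batch:
        yield batch

units = list(work_units())
orders = sum(len(u) for u in units)
print(orders)  # 2 * (8*7*6) = 672 rotor orders in total
```

Each rotor order then multiplies out against ring settings, start positions, and plugboard guesses, which is why the search is worth distributing at all.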

Posted on February 24, 2006 at 1:38 PM | 25 Comments

Do-it-Yourself Keyboard Logger

Here's how to make your own hardware key logger for PS/2 keyboards.

Not that buying one is very expensive. (And there are software versions available.)

Anyone have any experience in using any of these products?

Posted on February 24, 2006 at 8:14 AM | 45 Comments

Police Cameras in Your Home

This is so nutty that I wasn't even going to blog it. But too many of you are e-mailing the article to me.

Houston's police chief on Wednesday proposed placing surveillance cameras in apartment complexes, downtown streets, shopping malls and even private homes to fight crime during a shortage of police officers.

"I know a lot of people are concerned about Big Brother, but my response to that is, if you are not doing anything wrong, why should you worry about it?" Chief Harold Hurtt told reporters Wednesday at a regular briefing.

One of the problems we have in the privacy community is that we don't have a crisp answer to that question. Any suggestions?

Posted on February 23, 2006 at 1:12 PM | 254 Comments

U.S. Port Security and Proxies

My twelfth essay is about U.S. port security, and more generally about trust and proxies:

Pull aside the rhetoric, and this is everyone's point. There are those who don't trust the Bush administration and believe its motivations are political. There are those who don't trust the UAE because of its terrorist ties -- two of the 9/11 terrorists and some of the funding for the attack came out of that country -- and those who don't trust it because of racial prejudices. There are those who don't trust security at our nation's ports generally and see this as just another example of the problem.

The solution is openness. The Bush administration needs to better explain how port security works, and the decision process by which the sale of P&O was approved. If this deal doesn't compromise security, voters -- at least the particular lawmakers we trust -- need to understand that.

Regardless of the outcome of the Dubai deal, we need more transparency in how our government approaches counter-terrorism in general. Secrecy simply isn't serving our nation well in this case. It's not making us safer, and it's properly reducing faith in our government.

Proxies are a natural outgrowth of society, an inevitable byproduct of specialization. But our proxies are not us and they have different motivations -- they simply won't make the same security decisions as we would. Whether a king is hiring mercenaries, an organization is hiring a network security company or a person is asking some guy to watch his bags while he gets a drink of water, successful security proxies are based on trust. And when it comes to government, trust comes through transparency and openness.

Posted on February 23, 2006 at 7:07 AM | 47 Comments

Photographing Airports

Patrick Smith, a former pilot, writes about his experiences -- involving the police -- taking pictures in airports:

He makes sure to remind me, just as his colleague in New Hampshire had done, that next time I'd benefit from advance permission, and that "we live in a different world now." Not to put undue weight on the cheap prose of patriotic convenience, but few things are more repellant than that oft-repeated catchphrase. There's something so pathetically submissive about it -- a sound bite of such defeat and capitulation. It's also untrue; indeed we find ourselves in an altered way of life, though not for the reasons our protectors would have us think. We weren't forced into this by terrorists, we've chosen it. When it comes to flying, we tend to hold the events of Sept. 11 as the be-all and end-all of air crimes, conveniently purging our memories of several decades' worth of bombings and hijackings. The threats and challenges faced by airports aren't terribly different from what they've always been. What's different, or "too bad," to quote the New Hampshire deputy, is our paranoid, overzealous reaction to those threats, and our amped-up obeisance to authority.

Posted on February 22, 2006 at 2:09 PM | 26 Comments

Impressive Phishing Attack

Read about it here, or here in even more detail.

I find this phishing attack impressive for several reasons. One, it's a very sophisticated attack and demonstrates how clever identity thieves are becoming. Two, it narrowly targets a particular credit union, and sneakily uses the fact that credit cards issued by an institution share the same initial digits. Three, it exploits an authentication problem with SSL certificates. And four, it is yet another proof point that "user education" isn't how we're going to solve this kind of risk.
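The second point is worth spelling out. All cards issued by one institution share an issuer identification number (IIN): the first several digits of the card number. A phishing page that pre-prints those digits looks as if it already knows the victim's account. The sketch below illustrates the property with a hypothetical six-digit prefix; it is not the actual credit union's IIN.

```python
# Illustrative sketch of the property the phishers exploited: every
# card issued by one institution shares the same issuer
# identification number (IIN), the leading digits of the card
# number.  The prefix here is a hypothetical placeholder.

TARGET_IIN = "123456"  # hypothetical credit-union prefix

def issued_by_target(card_number: str) -> bool:
    """True if the card number starts with the institution's IIN."""
    digits = "".join(ch for ch in card_number if ch.isdigit())
    return digits.startswith(TARGET_IIN)

# A phishing form can pre-fill those six digits and ask the victim
# for only the remaining ones -- a cheap way to look legitimate.
print(issued_by_target("1234 5678 9012 3456"))  # True
print(issued_by_target("9999 5678 9012 3456"))  # False
```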

Posted on February 22, 2006 at 7:41 AM | 38 Comments

Unfortunate Court Ruling Regarding Gramm-Leach-Bliley

"A Federal Court Rules That A Financial Institution Has No Duty To Encrypt A Customer Database":

In a legal decision that could have broad implications for financial institutions, a court has ruled recently that a student loan company was not negligent and did not have a duty under the Gramm-Leach-Bliley statute to encrypt a customer database on a laptop computer that fell into the wrong hands.

Basically, an employee of Brazos Higher Education Service Corporation, Inc., had customer information on a laptop computer he was using at home. The computer was stolen, and a customer sued Brazos.

The judge dismissed the lawsuit. And then he went further:

Significantly, while recognizing that Gramm-Leach-Bliley does require financial institutions to protect against unauthorized access to customer records, Judge Kyle held that the statute "does not prohibit someone from working with sensitive data on a laptop computer in a home office," and does not require that "any nonpublic personal information stored on a laptop computer should be encrypted."

I know nothing of the legal merits of the case, nor do I have an opinion about whether Gramm-Leach-Bliley does or does not require financial companies to encrypt personal data in its purview. But I do know that we as a society need to force companies to encrypt personal data about us. Companies won't do it on their own -- the market just doesn't encourage this behavior -- so legislation or liability are the only available mechanisms. If this law doesn't do it, we need another one.

EDITED TO ADD (2/22): Some commentary here.

Posted on February 21, 2006 at 1:34 PM | 29 Comments

School Bus Drivers to Foil Terrorist Plots

This is a great example of a movie-plot threat:

Already mindful of motorists with road rage and kids with weapons, bus drivers are being warned of far more grisly scenarios. Like this one: Terrorists monitor a punctual driver for weeks, then hijack a bus and load the friendly yellow vehicle with enough explosives to take down a building.

It's so bizarre it's comical.

But don't worry:

An alert school bus driver could foil that plan, security expert Jeffrey Beatty recently told a class of 250 drivers in Norfolk, Va.

So we're funding counterterrorism training for school bus drivers:

Financed by the Homeland Security Department, school bus drivers are being trained to watch for potential terrorists, people who may be casing their routes or plotting to blow up their buses.


The new effort is part of Highway Watch, an industry safety program run by the American Trucking Associations and financed since 2003 with $50 million in homeland security money.

So far, tens of thousands of bus operators have been trained in places large and small, from Dallas and New York City to Kure Beach, N.C., Hopewell, Va., and Mount Pleasant, Texas.

The commentary borders on the surreal:

Kenneth Trump, a school safety consultant who tracks security trends, said being prepared is not being alarmist. "Denying and downplaying schools and school buses as potential terror targets here in the U.S.," Trump said, "would be foolish."

This is certainly a complete waste of money. Possibly it's even bad for security, as bus drivers have to divide their attention between real threats -- automobile accidents involving children -- and movie-plot terrorist threats. And there's the ever-creeping surveillance society:

"Today it's bus drivers, tomorrow it could be postal officials, and the next day, it could be, 'Why don't we have this program in place for the people who deliver the newspaper to the door?' " Rollins said. "We could quickly get into a society where we're all spying on each other. It may be well intentioned, but there is a concern of going a bit too far."

What should we do with this money instead? We should fund things that actually help defend against terrorism: intelligence, investigation, emergency response. Trying to correctly guess what the terrorists are planning is generally a waste of resources; investing in security countermeasures that will help regardless of what the terrorists are planning is much smarter.

Posted on February 21, 2006 at 9:07 AM | 104 Comments

Proof that Employees Don't Care About Security

Does anyone think that this experiment would turn out any differently?

An experiment carried out within London's Square Mile has revealed that employees in some of the City's best known financial services companies don't care about basic security policy.

CDs were handed out to commuters as they entered the City by employees of IT skills specialist The Training Camp, and recipients were told the disks contained a special Valentine's Day promotion.

However, the CDs contained nothing more than code which informed The Training Camp how many of the recipients had tried to open the CD. Among those who were duped were employees of a major retail bank and two global insurers.

The CD packaging even contained a clear warning about installing third-party software and acting in breach of company acceptable-use policies -- but that didn't deter many individuals who showed little regard for the security of their PC and their company.

This was a benign stunt, but it could have been much more serious. A CD-ROM carried into the office and run on a computer bypasses the company's network security systems. You could easily imagine a criminal ring using this technique to deliver a malicious program into a corporate network -- and it would work.

But concluding that employees don't care about security is a bit naive. Employees care about security; they just don't understand it. Computer and network security is complicated and confusing, and unless you're technologically inclined, you're just not going to have an intuitive feel for what's appropriate and what's a security risk. Even worse, technology changes quickly, and any security intuition an employee has is likely to be out of date within a short time.

Education is one way to deal with this, but education has its limitations. I'm sure these banks had security awareness campaigns; they just didn't stick. Punishment is another form of education, and my guess is it would be more effective. If the banks fired everyone who fell for the CD-ROM-on-the-street trick, you can be sure that no one would ever do that again. (At least, until everyone forgot.) That won't ever happen, though, because the morale effects would be huge.

Rather than blaming this kind of behavior on the users, we would be better served by focusing on the technology. Why does the average computer user at a bank need the ability to install software from a CD-ROM? Why doesn't the computer block that action, or at least inform the IT department? Computers need to be secure regardless of who's sitting in front of them, irrespective of what they do.

If I go downstairs and try to repair the heating system in my home, I'm likely to break all sorts of safety rules -- and probably the system and myself in the process. I have no experience in that sort of thing, and honestly, there's no point trying to educate me. But my home heating system works fine without my having to learn anything about it. I know how to set my thermostat, and to call a professional if something goes wrong.

Computers need to work more like that.

Posted on February 20, 2006 at 8:11 AM | 90 Comments

Friday Squid Blogging: Giant Squid Sex Life

News from a cephalopod conference:

The bizarre sex life of the giant squid is one of the topics at an international cephalopod conference in Hobart this week.

Marine biologists are continuing to unlock the secrets of the giant squid, saying the deep-sea monster may not be a cannibal as previously thought.

It was thought the species was cannibalistic when parts of a fellow giant squid were found in the stomach of a specimen caught off Tasmania's west coast in 1999.

But New Zealand based marine biologist Steve O'Shea believes that was the result of some bizarre mating methods.

He says the creatures do not mean to eat each other but the females accidentally bite bits off of the males during mating.

Posted on February 17, 2006 at 4:04 PM | 26 Comments

"Lessons from the Sony CD DRM Episode"

"Lessons from the Sony CD DRM Episode" is an interesting paper by J. Alex Halderman and Edward W. Felten.

Abstract: In the fall of 2005, problems discovered in two Sony-BMG compact disc copy protection systems, XCP and MediaMax, triggered a public uproar that ultimately led to class-action litigation and the recall of millions of discs. We present an in-depth analysis of these technologies, including their design, implementation, and deployment. The systems are surprisingly complex and suffer from a diverse array of flaws that weaken their content protection and expose users to serious security and privacy risks. Their complexity, and their failure, makes them an interesting case study of digital rights management that carries valuable lessons for content companies, DRM vendors, policymakers, end users, and the security community.

Posted on February 17, 2006 at 2:11 PM | 20 Comments

Database Error Causes Unbalanced Budget

This story of a database error cascading into a major failure has some interesting security morals:

A house erroneously valued at $400 million is being blamed for budget shortfalls and possible layoffs in municipalities and school districts in northwest Indiana.


County Treasurer Jim Murphy said the home usually carried about $1,500 in property taxes; this year, it was billed $8 million.

Most local officials did not learn about the mistake until Tuesday, when 18 government taxing units were asked to return a total of $3.1 million of tax money. The city of Valparaiso and the Valparaiso Community School Corp. were asked to return $2.7 million. As a result, the school system has a $200,000 budget shortfall, and the city loses $900,000.

User error is being blamed for the problem:

An outside user of Porter County's computer system may have triggered the mess by accidentally changing the value of the Valparaiso house, said Sharon Lippens, director of the county's information technologies and service department.


Lippens said the outside user changed the property value, most likely while trying to access another program while using the county's enhanced access system, which charges users a fee for access to public records that are not otherwise available on the Internet.

Lippens said the user probably tried to access a real estate record display by pressing R-E-D, but accidentally typed R-E-R, which brought up an assessment program written in 1995. The program is no longer in use, and technology officials did not know it could be accessed.

Three things immediately spring to mind:

One, the system did not fail safely. This one error seems to have cascaded into multiple errors, as the new tax total immediately changed budgets of "18 government taxing units."

Two, there were no sanity checks on the system. "The city of Valparaiso and the Valparaiso Community School Corp. were asked to return $2.7 million." Didn't the city wonder where all that extra money came from in the first place?

Three, the access-control mechanisms on the computer system were too broad. When a user is authenticated to use the "R-E-D" program, he shouldn't automatically have permission to use the "R-E-R" program as well. Authentication isn't all or nothing; it should be granular to the operation.
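The second and third points are simple to implement, which is what makes their absence notable. Below is a minimal sketch of both safeguards: a sanity check that flags implausible value changes before they propagate, and a per-program permission table so that access to one program implies nothing about another. The threshold and program names are illustrative assumptions, not details of Porter County's actual system.

```python
# A minimal sketch of the two missing safeguards.  The 10x threshold
# and the user/program names are illustrative assumptions.

MAX_INCREASE_FACTOR = 10  # flag any jump of more than 10x for review

def sane_change(old_value: float, new_value: float) -> bool:
    """Reject wildly implausible assessment changes instead of
    letting them cascade into 18 downstream budgets."""
    if old_value > 0 and new_value / old_value > MAX_INCREASE_FACTOR:
        return False
    return True

# Per-operation authorization: being allowed to run the record
# display ("RED") grants nothing for the old assessment program
# ("RER").
PERMISSIONS = {"outside-user": {"RED"}}

def authorized(user: str, program: str) -> bool:
    return program in PERMISSIONS.get(user, set())

print(sane_change(120_000, 400_000_000))  # False: held for review
print(authorized("outside-user", "RER"))  # False: not granted
```

Either check alone would have stopped the cascade: the value change would have been held for review, or the obsolete program would have been unreachable in the first place.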

Posted on February 17, 2006 at 7:29 AM | 24 Comments

Security, Economics, and Lost Conference Badges

Conference badges are an interesting security token. They can be very valuable -- a full conference registration at the RSA Conference this week in San Jose, for example, costs $1,985 -- but their value decays rapidly with time. By tomorrow afternoon, they'll be worthless.

Counterfeiting badges is one security concern, but an even bigger concern is people losing their badge or having their badge stolen. It's way cheaper to find or steal someone else's badge than it is to buy your own. People could do this sort of thing on purpose, pretending to lose their badge and giving it to someone else.

A few years ago, the RSA Conference charged people $100 for a replacement badge, which is far cheaper than a second membership. So the fraud remained. (At least, I assume it did. I don't know anything about how prevalent this kind of fraud was at RSA.)

Last year, the RSA Conference tried to further limit these types of fraud by putting people's photographs on their badges. Clever idea, but difficult to implement.

For this to work, though, guards need to match photographs with faces. This means that either 1) you need a lot more guards at entrance points, or 2) the lines will move a lot slower. Actually, far more likely is 3) no one will check the photographs.

And it was an expensive solution for the RSA Conference. They needed the equipment to put the photos on the badges. Registration was much slower. And pro-privacy people objected to the conference keeping their photographs on file.

This year, the RSA Conference solved the problem through economics:

If you lose your badge and/or badge holder, you will be required to purchase a new one for a fee of $1,895.00.

Look how clever this is. Instead of trying to solve this particular badge fraud problem through security, they simply moved the problem from the conference to the attendee. The badges still have that $1,895 value, but now if it's stolen and used by someone else, it's the attendee who's out the money. As far as the RSA Conference is concerned, the security risk is an externality.

Note that from an outside perspective, this isn't the most efficient way to deal with the security problem. It's likely that the cost to the RSA Conference for centralized security is less than the aggregate cost of all the individual security measures. But the RSA Conference gets to make the trade-off, so they chose a solution that was cheaper for them.

Of course, it would have been nice if the conference provided a slightly more secure attachment point for the badge holder than a thin strip of plastic. But why should they? It's not their problem anymore.

Posted on February 16, 2006 at 7:16 AM | 81 Comments

Real Fake ID Cards

Or maybe they're fake real ID cards. This website sells ID cards. They're not ID cards for anything in particular, but they look official. If you need to fool someone who really doesn't know what an ID card is supposed to look like, these are likely to work.

Posted on February 15, 2006 at 1:19 PM | 45 Comments

Gary Marx on Surveillance

Gary T. Marx is a sociology professor at MIT, and a frequent writer on privacy issues. I find him both clear and insightful, as well as interesting and entertaining.

This new paper is worth reading: "Soft Surveillance: The Growth of Mandatory Volunteerism in Collecting Personal Information -- 'Hey Buddy Can You Spare a DNA?'"

You can read a whole bunch of his other articles here.

Posted on February 15, 2006 at 12:21 PM | 10 Comments

Security in the Cloud

One of the basic philosophies of security is defense in depth: overlapping systems designed to provide security even if one of them fails. An example is a firewall coupled with an intrusion-detection system (IDS). Defense in depth provides security, because there's no single point of failure and no assumed single vector for attacks.

It is for this reason that a choice between implementing network security in the middle of the network -- in the cloud -- or at the endpoints is a false dichotomy. No single security system is a panacea, and it's far better to do both.

This kind of layered security is precisely what we're seeing develop. Traditionally, security was implemented at the endpoints, because that's what the user controlled. An organization had no choice but to put its firewalls, IDSs, and anti-virus software inside its network. Today, with the rise of managed security services and other outsourced network services, additional security can be provided inside the cloud.

I'm all in favor of security in the cloud. If we could build a new Internet today from scratch, we would embed a lot of security functionality in the cloud. But even that wouldn't substitute for security at the endpoints. Defense in depth beats a single point of failure, and security in the cloud is only part of a layered approach.

For example, consider the various network-based e-mail filtering services available. They do a great job of filtering out spam and viruses, but it would be folly to consider them a substitute for anti-virus security on the desktop. Many e-mails are internal only, never entering the cloud at all. Worse, an attacker might open up a message gateway inside the enterprise's infrastructure. Smart organizations build defense in depth: e-mail filtering inside the cloud plus anti-virus on the desktop.
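The gap is easy to see in miniature. In the toy model below, a cloud layer only ever sees mail that transits the Internet, so an infected message sent between two internal users sails past it; a desktop layer sees everything that reaches the endpoint. The message fields and the "malicious" flag are illustrative, not any particular product's data model.

```python
# Toy model of why cloud filtering alone leaves a gap: internal mail
# never crosses the cloud layer.  Fields are illustrative.

def cloud_filter(messages):
    """Scans only mail that transits the Internet; internal mail
    passes through untouched."""
    return [m for m in messages
            if m["internal"] or not m["malicious"]]

def desktop_filter(messages):
    """Scans everything that actually reaches the endpoint."""
    return [m for m in messages if not m["malicious"]]

mail = [
    {"sender": "outside",  "internal": False, "malicious": True},
    {"sender": "coworker", "internal": True,  "malicious": True},
    {"sender": "friend",   "internal": False, "malicious": False},
]

after_cloud = cloud_filter(mail)
print(len(after_cloud))                  # 2: the internal worm got through
print(len(desktop_filter(after_cloud)))  # 1: the desktop layer caught it
```

Neither layer alone catches everything; together they do, which is the whole argument for defense in depth.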

The same reasoning applies to network-based firewalls and intrusion-prevention systems (IPS). Security would be vastly improved if the major carriers implemented cloud-based solutions, but they're no substitute for traditional firewalls, IDSs, and IPSs.

This should not be an either/or decision. At Counterpane, for example, we offer cloud services and more traditional network and desktop services. The real trick is making everything work together.

Security is about technology, people, and processes. Regardless of where your security systems are, they're not going to work unless human experts are paying attention. Real-time monitoring and response is what's most important; where the equipment goes is secondary.

Security is always a trade-off. Budgets are limited and economic considerations regularly trump security concerns. Traditional security products and services are centered on the internal network, because that's the target of attack. Compliance focuses on that for the same reason. Security in the cloud is a good addition, but it's not a replacement for more traditional network and desktop security.

This was published as a "Face-Off" in Network World.

The opposing view is here.

Posted on February 15, 2006 at 8:18 AM | 9 Comments

WiFi Tracking

"...a few hundred meters away...."

Forget RFID. Well, don't, but National Scientific Corporation has a prototype of a WiFi tagging system that, like RFID, lets you track things in real-time and space. The advantage that the WiFi Tracker system has over passive RFID tracking is that you can keep tabs on objects with WiFi Tracker tags (which can hold up to 256K of data) from as far as a few hundred meters away (the range of passive RFID taggers is just a few meters). While you can do something similar with active RFID tags, with WiFi Tracker companies can use their pre-existing WiFi network to track things rather than having to build a whole new RFID system.

In other news, Apple is adding WiFi to the iPod.

And, of course, you can be tracked from your cellphone:

But the FBI and the U.S. Department of Justice have seized on the ability to locate a cellular customer and are using it to track Americans' whereabouts surreptitiously--even when there's no evidence of wrongdoing.

A pair of court decisions in the last few weeks shows that judges are split on whether this is legal. One federal magistrate judge in Wisconsin on Jan. 17 ruled it was unlawful, but another nine days later in Louisiana decided that it was perfectly OK.

This is an unfortunate outcome, not least because it shows that some judges are reluctant to hold federal agents and prosecutors to the letter of the law.

It's also unfortunate because it demonstrates that the FBI swore never to use a 1994 surveillance law to track cellular phones -- but then, secretly, went ahead and did it, anyway.

Posted on February 14, 2006 at 1:29 PM | 39 Comments

Valentine's Day Security

Last Friday, the Wall Street Journal ran an article (unfortunately, the link is only for paid subscribers) about how Valentine's Day is the day when cheating spouses are most likely to trip up:

Valentine's Day is the biggest single 24-hour period for florists, a huge event for greeting-card companies and a boon for candy makers. But it's also a major crisis day for anyone who is having an affair. After all, Valentine's Day is the one holiday when everyone is expected to do something romantic for their spouse or lover -- and if someone has both, it's a serious problem.

So, of course, private detectives work overtime.

"If anything is going on, it will be happening on that day," says Irene Smith, who says business at her Discreet Investigations detective agency in Golden, Colo., as much as doubles -- to as many as 12 cases some years -- on Valentine's Day.

Private detectives are expensive -- about $100 per hour, according to the article -- and might not be worth it.

The article suggests some surveillance tools you can buy at home: a real-time GPS tracking system you can hide in your spouse's car, a Home Evidence Collection Kit you can use to analyze stains on "clothing, car seats or elsewhere," Internet spying software, a telephone recorder, and a really cool buttonhole camera.

But even that stuff may be overkill:

Ruth Houston, author of a book called Is He Cheating on You? -- 829 Telltale Signs, says she generally recommends against spending money on private detectives to catch cheaters because the indications are so easy to read. (Sign No. 3 under "Gifts": He tries to convince you he bought expensive chocolates for himself.)

I hope I don't need to remind you that cheaters should also be reading that book, familiarizing themselves with the 829 telltale signs they should avoid making.

The article has several interesting personal stories, and warns that "planning a 'business trip' that falls over Valentine's Day is a typical mistake cheaters make."

So now I'm wondering why the RSA Conference is being held over Valentine's Day.

EDITED TO ADD (2/14): Today's Washington Post has a similar story.

Posted on February 14, 2006 at 8:35 AM | 26 Comments

Windows Access Control

I just found an interesting paper: "Windows Access Control Demystified," by Sudhakar Govindavajhala and Andrew W. Appel. Basically, they show that companies like Adobe, Macromedia, and others have made mistakes in their access-control programming that open security holes in Windows XP.


In the Secure Internet Programming laboratory at Princeton University, we have been investigating network security management by using logic programming. We developed a rule-based framework -- Multihost, Multistage Vulnerability Analysis (MulVAL) -- to perform end-to-end, automatic analysis of multi-host, multi-stage attacks on a large network where hosts run different operating systems. The tool finds attack paths where the adversary will have to use one or more weaknesses (such as buffer overflows) in multiple pieces of software to attack the network. The MulVAL framework has been demonstrated to be modular, flexible, scalable and efficient [20]. We applied these techniques to perform security analysis of a single host with commonly used software.

We have constructed a logical model of Windows XP access control, in a declarative but executable (Datalog) format. We have built a scanner that reads access-control configuration information from the Windows registry, file system, and service control manager database, and feeds raw configuration data to the model. Therefore we can reason about such things as the existence of privilege-escalation attacks, and indeed we have found several user-to-administrator vulnerabilities caused by misconfigurations of the access-control lists of commercial software from several major vendors. We propose tools such as ours as a vehicle for software developers and system administrators to model and debug the complex interactions of access control on installations under Windows.
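The paper's approach can be illustrated with a toy version of the same idea: load access-control facts scanned from a host, then apply a Datalog-style rule to derive privilege-escalation paths. Everything below -- the service names, file paths, and the single rule -- is invented for illustration and is far simpler than the actual MulVAL model.

```python
# Toy facts, standing in for what a real scanner would read from the
# registry, file system, and service control manager. All values are
# hypothetical.
service_runs_as = {"AcmeUpdater": "SYSTEM", "PrintSpooler": "SYSTEM"}
service_binary = {"AcmeUpdater": "C:\\Acme\\updater.exe",
                  "PrintSpooler": "C:\\Windows\\spooler.exe"}
writable_by = {"C:\\Acme\\updater.exe": {"Users"},          # misconfigured ACL
               "C:\\Windows\\spooler.exe": {"Administrators"}}

def escalation_paths(group="Users"):
    """One Datalog-style rule: if a group can rewrite the binary of a
    service running as a more privileged principal, members of that
    group can escalate to that principal."""
    paths = []
    for svc, principal in service_runs_as.items():
        binary = service_binary[svc]
        if group in writable_by.get(binary, set()) and principal != group:
            paths.append((group, binary, svc, principal))
    return paths

for grp, binary, svc, principal in escalation_paths():
    print(f"{grp} can overwrite {binary}, which {svc} runs as {principal}")
```

A real analysis encodes many such rules and lets the Datalog engine chain them, which is what finds the multi-step attacks the paper describes.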

EDITED TO ADD (2/13): Ed Felten has some good commentary about the paper on his blog.

Posted on February 13, 2006 at 12:11 PM | 19 Comments

Secure Flight Suspended

The TSA has announced that Secure Flight, its comprehensive program to match airline passengers against terrorist watch lists, has been suspended:

And because of security concerns, the government is going back to the drawing board with the program called Secure Flight after spending nearly four years and $150 million on it, the Senate Commerce Committee was told.

I have written about this program extensively, most recently here. It's an absolute mess in every way, and doesn't make us safer.

But don't think this is the end. Under Section 4012 of the Intelligence Reform and Terrorism Prevention Act, Congress mandated the TSA put in place a program to screen every domestic passenger against the watch list. Until Congress repeals that mandate, these postponements and suspensions are the best we can hope for. Expect it all to come back under a different name -- and a clean record in the eyes of those not paying close attention -- soon.

EDITED TO ADD (2/15): Ed Felten has some good commentary:

Instead of sticking to this more modest plan, Secure Flight became a vehicle for pie-in-the-sky plans about data mining and automatic identification of terrorists from consumer databases. As the program’s goals grew more ambitious and collided with practical design and deployment challenges, the program lost focus and seemed to have a different rationale and plan from one month to the next.

Posted on February 13, 2006 at 6:09 AM | 16 Comments

Friday Squid Blogging: Cephalopod Conference

There's a Cephalopod Conference going on right now in Hobart, Australia. It's the Cephalopod International Advisory Council International Symposium. (I'll bet you didn't even know that cephalopods had an international advisory council. Or what they need advice about.) In the coming Fridays I hope to present some of the more interesting papers from this conference.

Posted on February 10, 2006 at 4:04 PM | 12 Comments

RSA Conference

Next week is the RSA Conference in San Jose, CA. I will speak on "The Economics of Security" at 4:30 PM on the 14th, and again on "Why Security Has So Little to Do with Security" at 2:00 PM on the 15th. I will also participate in a main-stage panel on ID cards at 8:00 AM on the 16th.

Also, my wife and I have written a 110-page restaurant guidebook for the downtown San Jose area. It's a fun read, even if you aren't looking for a San Jose restaurant. (Do people know that I write restaurant reviews for the Minneapolis Star Tribune?)

The restaurant guide will be available at the conference -- and of course you can download it -- but I have a few hundred to give away here. I'll send a copy to anyone who wants one, in exchange for postage. (It's not about the money, but I need some sort of gating function so that only those actually interested get a copy.)

Cost is $2.50 if you live in the U.S., $3.00 for Canada/Mexico, and $6.00 elsewhere. I'll accept PayPal to my e-mail address -- -- or a check to Bruce Schneier, Counterpane Internet Security, Inc., 1090A La Avenida, Mountain View, CA 94043. Sorry, but I can't accept credit cards directly.

Posted on February 10, 2006 at 12:30 PM | 28 Comments

The New Internet Explorer

I'm just starting to read about the new security features in Internet Explorer 7. So far, I like what I am reading.

IE 7 requires that all browser windows display an address bar. This helps foil attackers who operate by popping up new windows masquerading as pages on a legitimate site, when in fact the site is fraudulent. Because an address bar is always present, users will immediately see the true URL of the displayed page, making these types of attacks more obvious. If the address bar shows a different URL from the site you think you're looking at, you ought to be suspicious.
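The manual check a visible address bar enables can be sketched in a few lines. This is just an illustration of the hostname comparison a suspicious user might do, using made-up domains:

```python
from urllib.parse import urlparse

def looks_like(url: str, expected_domain: str) -> bool:
    """True if the URL's host is the expected domain or a subdomain of
    it -- the check a visible address bar lets a user perform."""
    host = (urlparse(url).hostname or "").lower()
    expected = expected_domain.lower()
    return host == expected or host.endswith("." + expected)

print(looks_like("https://www.example.com/login", "example.com"))        # True
print(looks_like("https://example.com.evil.test/login", "example.com"))  # False
```

Note the second case: a phisher can put the expected name at the *front* of the hostname, which is why the comparison has to anchor on the registered domain at the end.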

I use Opera, and have long used the address bar to "check" on URLs. This is an excellent idea. So is this:

In early November, a bunch of Web browser developers got together and started fleshing out standards for address bar coloring, which can cue users to secured connections. Under the proposal laid out by IE 7 team member Rob Franco, even sites that use a standard SSL certificate will display a standard white address bar. Sites that use a stronger, as yet undetermined level of protection will use a green bar.

I like easy visual indications about what's going on. And I really like that SSL is generic white, because it really doesn't prove that you're communicating with the site you think you're communicating with. This feature helps with that, though:

Franco also said that when navigating to an SSL-protected site, the IE 7 address bar will display the business name and certification authority's name in the address bar.

Some of the security measures in IE7 weaken the integration between the browser and the operating system:

People using Windows Vista beta 2 will find a new feature called Protected Mode, which renders IE 7 unable to modify system files and settings. This essentially breaks down part of the integration between IE and Windows itself.

Think of it as a wall between IE and the rest of the operating system. No, the code won't be perfect, and yes, there'll be ways found to circumvent this security, but this is an important and long-overdue feature.

The majority of IE's notorious security flaws stem from its pervasive integration with Windows. That is a feature no other Web browser offers -- and an ability that Vista's Protected Mode intends to mitigate. IE 7 obviously won't remove all of that tight integration. Lacking deep architectural changes, the effort has focused instead on hardening or eliminating potential vulnerabilities. Unfortunately, this approach requires Microsoft to anticipate everything that could go wrong and block it in advance -- hardly a surefire way to secure a browser.

That last sentence is about the general Internet attitude of allowing everything that is not explicitly denied, rather than denying everything that is not explicitly allowed.

Also, you'll have to wait until Vista to use it:

...this capability will not be available in Windows XP because it's woven directly into Windows Vista itself.

There are also some good changes under the hood:

IE 7 does eliminate a great deal of legacy code that dates back to the IE 4 days, which is a welcome development.


Microsoft has rewritten a good bit of IE 7's core code to help combat attacks that rely on malformed URLs (that typically cause a buffer overflow). It now funnels all URL processing through a single function (thus reducing the amount of code that "looks" at URLs).

All good stuff, but I agree with this conclusion:

IE 7 offers several new security features, but it's hardly a given that the situation will improve. There has already been a set of security updates for IE 7 beta 1 released for both Windows Vista and Windows XP computers. Security vulnerabilities in a beta product shouldn't be alarming (IE 7 is hardly what you'd consider "finished" at this point), but it may be a sign that the product's architecture and design still have fundamental security issues.

I'm not switching from Opera yet, and my second choice is still Firefox. But the masses still use IE, and our security depends in part on those masses keeping their computers worm-free and bot-free.

NOTE: Here's some info on how to get your own copy of Internet Explorer 7 beta 2.

Posted on February 9, 2006 at 3:37 PM | 50 Comments

The Militarization of Police Work

This was originally published in The Washington Post:

During the past 15 years, The Post and other media outlets have reported on the unsettling "militarization" of police departments across the country. Armed with free surplus military gear from the Pentagon, SWAT teams have multiplied at a furious pace. Tactics once reserved for rare, volatile situations such as hostage takings, bank robberies and terrorist incidents increasingly are being used for routine police work.

Eastern Kentucky University's Peter Kraska -- a widely cited expert on police militarization -- estimates that SWAT teams are called out about 40,000 times a year in the United States; in the 1980s, that figure was 3,000 times a year. Most "call-outs" were to serve warrants on nonviolent drug offenders.

Posted on February 9, 2006 at 12:25 PM | 32 Comments

Multi-Use ID Cards

My eleventh column is about ID cards, and why you don't -- and won't -- have a single card in your wallet for everything. It has nothing to do with security.

My airline wants a card with its logo on it in my wallet. So does my rental car company, my supermarket and everyone else I do business with. My credit card company wants me to open up my wallet and notice its card; I'm far more likely to use a physical card than a virtual one that I have to remember is attached to my driver's license number. And I'm more likely to feel important if I have a card, especially a card that recognizes me as a frequent flier or a preferred customer.

Some years ago, when credit cards with embedded chips were new, the card manufacturers designed a secure, multi-application operating system for these smartcards. The idea was that a single physical card could be used for everything: multiple credit card accounts, airline affinity memberships, public-transportation payment cards, etc. Nobody bought into the system: not because of security concerns, but because of branding concerns. Whose logo would get to be on the card? When the manufacturers envisioned a card with multiple small logos, one for each application, everyone wanted to know: Whose logo would be first? On top? In color?

The companies give you their own card partly because they want complete control of the rules around their own system, but mostly because they want you to carry around a small piece of advertising in your wallet. An American Express Gold Card is supposed to make you feel powerful and everyone else feel green. They want you to wave it around.

Posted on February 9, 2006 at 6:39 AM | 29 Comments

Identity Theft in the UK

Recently there was some serious tax credit fraud in the UK. Basically, there is a tax-credit system that allows taxpayers to get a refund for some of their taxes if they meet certain criteria. Politically, this was a major objective of the Labour Party. So the Inland Revenue (the UK version of the IRS) made it as easy as possible to apply for this refund. One of the ways taxpayers could apply was via a Web portal.

Unfortunately, the only details necessary when applying were the applicant's National Insurance number (the UK version of the Social Security number) and mother's maiden name. The refund was then paid directly into any bank account specified on the application form. Anyone who knows anything about security can guess what happened. Estimates are that fifteen million pounds has been stolen by criminal syndicates.

The press has been treating this as an issue of identity theft, talking about how criminals went Dumpster diving to get National Insurance numbers and so forth. I have seen very little about how the authentication scheme failed. The system tried -- using semi-secret information like NI number and mother's maiden name -- to authenticate the person. Instead, the system should have tried to authenticate the transaction. Even a simple verification step -- does the name on the account match the name of the person who should receive the refund? -- would have gone a long way toward preventing this type of fraud.
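A sketch of that transaction-level check: before paying out, compare the name on the destination account with the taxpayer the refund is owed to. The function and names below are hypothetical stand-ins for illustration, not the Inland Revenue's actual systems:

```python
def normalize(name: str) -> str:
    """Collapse case and whitespace so trivial variations still match."""
    return " ".join(name.lower().split())

def authorize_refund(taxpayer_name: str, account_holder_name: str) -> bool:
    """Authenticate the transaction, not just the person: the refund
    only goes out if the destination account belongs to the claimed
    taxpayer."""
    return normalize(taxpayer_name) == normalize(account_holder_name)

print(authorize_refund("Jane Doe", "Jane  Doe"))    # True: same person
print(authorize_refund("Jane Doe", "J. Fraudster")) # False: payment blocked
```

A real check would consult the bank's account-holder records rather than trust names supplied on the form, but even this weak version stops the attack described above, where criminals redirected refunds into their own accounts.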

Posted on February 8, 2006 at 3:42 PM | 20 Comments

Petname Systems

Interesting paper:

Zooko's Triangle argues that names cannot be global, secure, and memorable, all at the same time. Domain names are an example: they are global, and memorable, but as the rapid rise of phishing demonstrates, they are not secure.

Though no single name can have all three properties, the petname system does indeed embody all three properties. Informal experiments with petname-like systems suggest that petnames can be both intuitive and effective. Experimental implementations already exist for simple extensions to existing browsers that could alleviate (possibly dramatically) the problems with phishing. As phishers gain sophistication, it seems compelling to experiment with petname systems as part of the solution.
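As a rough illustration of the idea (not the paper's actual design), a petname table maps a global, secure identifier -- say, a site's certificate fingerprint -- to a memorable name the user chose. An impostor site presents a different fingerprint, so no petname appears, which is itself a visible warning. The fingerprints below are invented:

```python
# User's personal petname table: secure global name -> memorable name.
petnames = {
    "a1:b2:c3:d4": "my bank",
    "e5:f6:a7:b8": "work webmail",
}

def label_for(fingerprint: str) -> str:
    """What the browser chrome would display for this site."""
    return petnames.get(fingerprint, "<no petname: unrecognized site>")

print(label_for("a1:b2:c3:d4"))  # the user's own name for the site
print(label_for("99:99:99:99"))  # unrecognized -- possible phishing
```

The three properties split cleanly: the fingerprint is global and secure but not memorable, the petname is memorable and secure (only the user can set it) but not global, and the pair together gives the user all three.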

Posted on February 8, 2006 at 11:25 AM | 43 Comments

Check Washing

Check washing is a form of fraud. The criminal uses various solvents to remove data from a signed check -- the "pay to" name, the amount -- and replace it with data more beneficial to the criminal: his own name, a larger amount.

This webpage -- I know nothing about who these people are, but they seem a bit amateurish -- talks about check fraud, and then gives this advice to check writers:


If you are a ballpoint pen lover, switch to black ink when security is important. Among water-based inks, remember that gels are the most impervious. But when you're writing checks to pay the monthly bills, only one type of ink, the kind in gel pens, has been found to be counterfeit proof to acetone or any other chemical used in "check washing." Most ballpoint and marker inks are dye based, meaning that the pigments are dissolved in the ink.

Based on recent ink security studies, we highly recommend that you use a gel pen, like the Uniball 207 that uses gel ink that contains tiny particles of color that are trapped into the paper, making check washing a lot more difficult. The pen sells for about $2. Personally I sign all my checks and important documents with one. But if you don't want to switch, do not hesitate to use your favorite fountain pen. Just fill it with ink in one of the more durable colors and enjoy!

I just wish they footnoted this statistic, obviously designed to scare people:

Check washing takes place to the tune of $815 million every year in the U.S. And it is increasing at an alarming rate.

Posted on February 8, 2006 at 7:57 AM | 57 Comments

More on Kish's Classical Security Scheme

Here's an interesting rebuttal of Laszlo Kish's theoretically secure classical communications scheme.

EDITED TO ADD (2/18): Kish's response.

Posted on February 7, 2006 at 4:18 PM | 50 Comments

Passlogix Misquotes Me in Their PR Material

I recently received a PR e-mail from a company called Passlogix:

Password security is still a very prevalent threat, 2005 had security gurus like Bruce Schneier publicly suggest that you actually write them down on sticky-notes. A recent survey stated 78% of employees use passwords as their primary forms of security, 52% use the same password for their accounts -- yet 77% struggle to remember their passwords.

Actually, I don't. I recommend writing your passwords down and keeping them in your wallet.

I know nothing about this company, but I am unhappy at their misrepresentation of what I said.

Posted on February 7, 2006 at 7:23 AM | 30 Comments

A Model Regime of Privacy Protection

Last year I blogged about an article by Daniel J. Solove and Chris Hoofnagle titled "A Model Regime of Privacy Protection."

The paper has been revised a few times based on comments -- some of them from readers of this blog and Crypto-Gram -- and the final version has been published.

Abstract: A series of major security breaches at companies with sensitive personal information has sparked significant attention to the problems with privacy protection in the United States. Currently, the privacy protections in the United States are riddled with gaps and weak spots. Although most industrialized nations have comprehensive data protection laws, the United States has maintained a sectoral approach where certain industries are covered and others are not. In particular, emerging companies known as "commercial data brokers" have frequently slipped through the cracks of U.S. privacy law. In this article, the authors propose a Model Privacy Regime to address the problems in the privacy protection in the United States, with a particular focus on commercial data brokers. Since the United States is unlikely to shift radically from its sectoral approach to a comprehensive data protection regime, the Model Regime aims to patch up the holes in existing privacy regulation and improve and extend it. In other words, the goal of the Model Regime is to build upon the existing foundation of U.S. privacy law, not to propose an alternative foundation. The authors believe that the sectoral approach in the United States can be improved by applying the Fair Information Practices -- principles that require the entities that collect personal data to extend certain rights to data subjects. The Fair Information Practices are very general principles, and they are often spoken about in a rather abstract manner. In contrast, the Model Regime demonstrates specific ways that they can be incorporated into privacy regulation in the United States.

Definitely worth reading.

Posted on February 6, 2006 at 12:21 PM | 5 Comments

The Topology of Covert Conflict

Interesting research paper by Shishir Nagaraja and Ross Anderson. Implications for warfare, terrorism, and peer-to-peer file sharing:


Often an attacker tries to disconnect a network by destroying nodes or edges, while the defender counters using various resilience mechanisms. Examples include a music industry body attempting to close down a peer-to-peer file-sharing network; medics attempting to halt the spread of an infectious disease by selective vaccination; and a police agency trying to decapitate a terrorist organisation. Albert, Jeong and Barabási famously analysed the static case, and showed that vertex-order attacks are effective against scale-free networks. We extend this work to the dynamic case by developing a framework based on evolutionary game theory to explore the interaction of attack and defence strategies. We show, first, that naive defences don’t work against vertex-order attack; second, that defences based on simple redundancy don’t work much better, but that defences based on cliques work well; third, that attacks based on centrality work better against clique defences than vertex-order attacks do; and fourth, that defences based on complex strategies such as delegation plus clique resist centrality attacks better than simple clique defences. Our models thus build a bridge between network analysis and evolutionary game theory, and provide a framework for analysing defence and attack in networks where topology matters. They suggest definitions of efficiency of attack and defence, and may even explain the evolution of insurgent organisations from networks of cells to a more virtual leadership that facilitates operations rather than directing them. Finally, we draw some conclusions and present possible directions for future research.

Posted on February 6, 2006 at 7:03 AM | 6 Comments

Phone Tapping in Greece

Unknowns tapped the mobile phones of about 100 Greek politicians and offices, including the U.S. embassy in Athens and the Greek prime minister.

Details are sketchy, but it seems that Ericsson technicians discovered a piece of malicious code in Vodafone's mobile phone software. The code tapped into the conference-call system: it "conference called" the tapped calls to 14 prepaid mobile phones, where they were recorded.

Some details are here. See also this news article, and -- if you can read Greek -- this one.

Posted on February 3, 2006 at 11:27 AM | 54 Comments

Security Problems with Controlled Access Systems

There was an interesting security tidbit in this article on last week's post office shooting:

The shooter's pass to access the facility had been expired, officials said, but she apparently used her knowledge of how security at the facility worked to gain entrance, following another vehicle in through the outer gate and getting other employees to open security doors.

This is a failure of both technology and procedure. The gate was configured to allow multiple vehicles to enter on only one person's authorization -- that's a technology failure. And people are programmed to be polite -- to hold the door for others.

SIDE NOTE: There is a common myth that workplace homicides are prevalent in the United States Postal Service. (Note the phrase "going postal.") But not counting this event, there has been fewer than one shooting fatality per year at Postal Service facilities over the last 20 years. Since the USPS has more than 700,000 employees, that is a lower rate than at the average workplace.
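The side note's arithmetic is easy to check: treating one fatality per year as an upper bound across more than 700,000 employees gives a very low rate, which can then be compared with published workplace-homicide statistics:

```python
# Back-of-the-envelope check of the side note's claim.
employees = 700_000
fatalities_per_year = 1  # an upper bound, per the post

rate_per_100k = fatalities_per_year / employees * 100_000
print(f"{rate_per_100k:.2f} fatalities per 100,000 employees per year")
# roughly 0.14 per 100,000 per year
```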

Posted on February 3, 2006 at 6:19 AM | 40 Comments

Voting Problems in Congress

This is bizarre:

House Republicans are taking a mulligan on the first ballot for Majority Leader. The first count showed more votes cast than Republicans present at the Conference meeting.

I can't find anything about the procedures, the technology, anything.

Posted on February 2, 2006 at 3:13 PM | 27 Comments

What Can the NSA Do?

Interesting white paper from the ACLU: "Eavesdropping 101: What Can The NSA Do?"

See also this map.

EDITED TO ADD (2/4): Barry Steinhardt of the ACLU responds to some criticism.

Posted on February 2, 2006 at 2:21 PM | 43 Comments

Big Brother Prison

This Dutch prison is the future of surveillance.

At a high-tech prison opening this week inmates wear electronic wristbands that track their every movement and guards monitor cells using emotion-recognition software.

Remember, new surveillance technologies are first used on populations with limited rights: inmates, children, the mentally ill, military personnel.

Posted on February 2, 2006 at 11:23 AM | 29 Comments

For-Profit Botnet

Interesting article about someone convicted for running a for-profit botnet:

November's 52-page indictment, along with papers filed last week, offer an unusually detailed glimpse into a shadowy world where hackers, often not old enough to vote, brag in online chat groups about their prowess in taking over vast numbers of computers and herding them into large armies of junk mail robots and arsenals for so-called denial of service attacks on Web sites.

Ancheta one-upped his hacking peers by advertising his network of "bots," short for robots, on Internet chat channels.

A Web site Ancheta maintained included a schedule of prices he charged people who wanted to rent out the machines, along with guidelines on how many bots were required to bring down a particular type of Web site.

In July 2004, he told one chat partner he had more than 40,000 machines available, "more than I can handle," according to the indictment. A month later, Ancheta told another person he controlled at least 100,000 bots, and that his network had added another 10,000 machines in a week and a half.

In a three-month span starting in June 2004, Ancheta rented out or sold bots to at least 10 "different nefarious computer users," according to the plea agreement. He pocketed $3,000 in the process by accepting payments through the online PayPal service, prosecutors said.

Starting in August 2004, Ancheta turned to a new, more lucrative method to profit from his botnets, prosecutors said. Working with a juvenile in Boca Raton, Fla., whom prosecutors identified by his Internet nickname "SoBe," Ancheta infected more than 400,000 computers.

Ancheta and SoBe signed up as affiliates in programs maintained by online advertising companies that pay people each time they get a computer user to install software that displays ads and collects information about the sites a user visits.

Posted on February 2, 2006 at 6:06 AM | 13 Comments

The NSA on How to Redact

Interesting paper.

Both the Microsoft Word document format (MS Word) and Adobe Portable Document (PDF) are complex, sophisticated computer data formats. They can contain many kinds of information such as text, graphics, tables, images, meta-data, and more, all mixed together. The complexity makes them potential vehicles for exposing information unintentionally, especially when downgrading or sanitizing classified materials. Although the focus is on MS Word, the general guidance applies to other word processors and office tools, such as WordPerfect, PowerPoint, Excel, Star Office, etc.

This document does not address all the issues that can arise when distributing or downgrading original document formats such as MS Word or MS PowerPoint. Using original source formats, such as MS Word, for downgrading can entail exceptional risks; the lengthy and complicated procedures for mitigating such risks are outside the scope of this note.

EDITED TO ADD (2/1): The NSA page for the redaction document, and other "Security Configuration Guides," is here.

Posted on February 1, 2006 at 1:09 PM | 25 Comments

Risks of Losing Portable Devices

Last July I blogged about the risks of storing ever-larger amounts of data in ever-smaller devices.

Last week I wrote my tenth column on the topic:

The point is that it's now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I'd never know it.

This problem isn't going away anytime soon.

There are two solutions that make sense. The first is to protect the data. Hard-disk encryption programs like PGP Disk allow you to encrypt individual files, folders or entire disk partitions. Several manufacturers market USB thumb drives with built-in encryption. Some PDA manufacturers are starting to add password protection -- not as good as encryption, but at least it's something -- to their devices, and there are some aftermarket PDA encryption programs.

The second solution is to remotely delete the data if the device is lost. This is still a new idea, but I believe it will gain traction in the corporate market. If you give an employee a BlackBerry for business use, you want to be able to wipe the device's memory if he loses it. And since the device is online all the time, it's a pretty easy feature to add.

But until these two solutions become ubiquitous, the best option is to pay attention and erase data. Delete old e-mails from your BlackBerry, SMSs from your cell phone and old data from your address books -- regularly. Find that call log and purge it once in a while. Don't store everything on your laptop, only the files you might actually need.
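The "purge it once in a while" advice is easy to automate. A minimal sketch with a hypothetical record layout, keeping only entries inside a retention window:

```python
from datetime import datetime, timedelta

def purge_old(records, days=90, now=None):
    """Drop (timestamp, payload) records older than the retention window."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=days)
    return [(ts, data) for ts, data in records if ts >= cutoff]

# Example: a call log with one stale and one recent entry.
now = datetime(2006, 2, 1)
call_log = [
    (datetime(2005, 6, 1), "old call"),
    (datetime(2006, 1, 15), "recent call"),
]
print(purge_old(call_log, days=90, now=now))  # keeps only the recent entry
```

Note that deleting a record this way removes it from the log but doesn't overwrite the underlying storage; for the threat model above (a stolen device), it still removes the data from casual reach.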

EDITED TO ADD (2/2): A Dutch army officer lost a memory stick with details of an Afghan mission.

Posted on February 1, 2006 at 10:32 AM | 42 Comments

Privatizing Registered Traveler

Last week the TSA announced details of its Registered Traveler program. Basically, you pay money for a background check and get a biometric ID -- a fingerprint -- that gets you through airline security faster. (See also this and this AP story.)

I've already written about why this is a bad idea for security:

What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.

The Trusted Traveler program is based on the dangerous myth that terrorists match a particular profile and that we can somehow pick terrorists out of a crowd if we only can identify everyone. That's simply not true. Most of the 9/11 terrorists were unknown and not on any watch list. Timothy McVeigh was an upstanding US citizen before he blew up the Oklahoma City Federal Building. Palestinian suicide bombers in Israel are normal, nondescript people. Intelligence reports indicate that Al Qaeda is recruiting non-Arab terrorists for US operations.

But what the TSA is actually doing is even more bizarre. The TSA is privatizing this system. They want the companies that sell for-profit, Registered Traveler passes to do the background checks. They want the companies to use error-filled commercial databases to do this. What incentive do these companies have to not sell someone a pass? Who is liable for mistakes?

I thought airline security was important.

This essay is an excellent discussion of the problems here.

Welcome to the brave new world of "market-driven" airport security, where different private security firms run and operate different lanes at different checkpoints, offering varied levels of accelerated screening depending on how much a user paid and how deep of a background check he or she submitted to. Thus the speed at which you move through a checkpoint will theoretically depend on a multiplicity of factors, only two of which are under your control (the depth of your background check and the firm(s) with which you've contracted). Other factors affecting your screening time, like which private security firm is manning a checkpoint and what resources that particular firm has invested in a particular checkpoint (e.g. extra personnel, more screening equipment, and so on) at a particular time of day, are entirely out of your control.

This is certainly a good point:

What's worse than having identity thieves impersonate you to Chase Bank? Having terrorists impersonate you to the TSA.

Posted on February 1, 2006 at 6:11 AM | 28 Comments
