Schneier on Security
A blog covering security and security technology.
February 2006 Archives
BioBouncer is a face recognition system intended for bars:
Its camera snaps customers entering clubs and bars, and facial recognition software compares them with stored images of previously identified troublemakers. The technology alerts club security to image matches, while innocent images are automatically flushed at the end of each night, Dussich said. Various clubs can share databases through a virtual private network, so belligerent drunks might find themselves unwelcome in all their neighborhood bars.
Anyone want to guess how long that "automatically flushed at the end of each night" will last? This data has enormous value. Insurance companies will want to know if someone was in a bar before a car accident. Employers will want to know if their employees were drinking before work -- think airplane pilots. Private investigators will want to know who walked into a bar with whom. The police will want to know all sorts of things. Lots of people will want this data -- and they'll all be willing to pay for it.
And the data will be owned by the bars that collect it. They can choose to erase it, or they can choose to sell it to data aggregators like Acxiom.
It's rarely the initial application that's the problem. It's the follow-on applications. It's the function creep. Before you know it, everyone will know that they are identified the moment they walk into a commercial building. We will all lose privacy, and liberty, and freedom as a result.
You don't even have to turn it on:
With the right set-up, the theory suggested, the user would sometimes get an answer out of the computer even though the program did not run. And now researchers from the University of Illinois at Urbana-Champaign have improved on the original design and built a non-running quantum computer that really works.
So now, even turning the machine off won't necessarily prevent hackers from stealing passwords.
And as long as we're on the topic of quantum computing, here's a piece of quantum snake oil:
A University of Toronto professor says he can now use a photon of light to smash through the most sophisticated computer theft schemes that hackers can devise.
EDITED TO ADD (3/1): More information about the University of Illinois result is here.
Wholesale surveillance from the UK:
About 4,000 men working and living in South Croydon are being asked to voluntarily give their DNA as part of the hunt for a teenage model's killer.
Well, sort of voluntarily:
"It is an entirely voluntary process. None of those DNA samples or finger prints will be used to check out any other unsolved crimes."
Did the detective chief inspector just threaten those 4,000 men? Sure seems that way to me.
Something like 50 million pounds was stolen from a banknote storage depot in the UK. BBC has a good chronology of the theft.
The Times writes:
Large-scale cash robbery was once a technical challenge: drilling through walls, short-circuiting alarms, gagging guards and stationing the get-away car. Today, the weak points in the banks' defences are not grilles and vaults, but human beings. Stealing money is now partly a matter of psychology. The success of the Tonbridge robbers depended on terrifying Mr Dixon into opening the doors. They had studied their victim. They knew the route he took home, and how he would respond when his wife and child were in mortal danger. It did not take gelignite to blow open the vaults; it took fear, in the hostage technique known as "tiger kidnapping", so called because of the predatory stalking that precedes it. Tiger kidnapping is the point where old-fashioned crime meets modern terrorism.
From Defective Yeti:
Sark Defends Port Deal
Last year in California:
An 18-wheel semi-truck overturned east of Murphy Crossing Road on Riverside Drive on Wednesday morning, spilling 38,500 pounds of frozen squid and taking down a power pole, cutting electricity to about 1,100 people in the Aromas area.
And you can help:
The M4 Project is an effort to break 3 original Enigma messages with the help of distributed computing. The signals were intercepted in the North Atlantic in 1942 and are believed to be unbroken.
EDITED TO ADD (3/8): One message has been broken.
Here's how to make your own hardware key logger for PS/2 keyboards.
Anyone have any experience in using any of these products?
This is so nutty that I wasn't even going to blog it. But too many of you are e-mailing the article to me.
Houston's police chief on Wednesday proposed placing surveillance cameras in apartment complexes, downtown streets, shopping malls and even private homes to fight crime during a shortage of police officers.
One of the problems we have in the privacy community is that we don't have a crisp answer to that question. Any suggestions?
My twelfth essay for Wired.com is about U.S. port security, and more generally about trust and proxies:
Pull aside the rhetoric, and this is everyone's point. There are those who don't trust the Bush administration and believe its motivations are political. There are those who don't trust the UAE because of its terrorist ties -- two of the 9/11 terrorists and some of the funding for the attack came out of that country -- and those who don't trust it because of racial prejudices. There are those who don't trust security at our nation's ports generally and see this as just another example of the problem.
Patrick Smith, a former pilot, writes about his experiences -- involving the police -- taking pictures in airports:
He makes sure to remind me, just as his colleague in New Hampshire had done, that next time I'd benefit from advance permission, and that "we live in a different world now." Not to put undue weight on the cheap prose of patriotic convenience, but few things are more repellant than that oft-repeated catchphrase. There's something so pathetically submissive about it -- a sound bite of such defeat and capitulation. It's also untrue; indeed we find ourselves in an altered way of life, though not for the reasons our protectors would have us think. We weren't forced into this by terrorists, we've chosen it. When it comes to flying, we tend to hold the events of Sept. 11 as the be-all and end-all of air crimes, conveniently purging our memories of several decades' worth of bombings and hijackings. The threats and challenges faced by airports aren't terribly different from what they've always been. What's different, or "too bad," to quote the New Hampshire deputy, is our paranoid, overzealous reaction to those threats, and our amped-up obeisance to authority.
I find this phishing attack impressive for several reasons. One, it's a very sophisticated attack and demonstrates how clever identity thieves are becoming. Two, it narrowly targets a particular credit union, and sneakily uses the fact that credit cards issued by an institution share the same initial digits. Three, it exploits an authentication problem with SSL certificates. And four, it is yet another proof point that "user education" isn't how we're going to solve this kind of risk.
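To see why shared initial digits help a targeted attack, consider that the rest of a card number only has to satisfy the standard Luhn mod-10 checksum. A minimal sketch (the prefix below is invented, and the well-known Visa test number is used only as a checksum example):

```python
# Toy illustration of why a shared issuer prefix matters to a phisher:
# given a known prefix, producing plausible-looking card numbers is just
# a matter of satisfying the public Luhn checksum.

def luhn_valid(number: str) -> bool:
    """Standard Luhn mod-10 check used by payment card numbers."""
    total = 0
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:          # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

def complete_with_check_digit(partial: str) -> str:
    """Append the single digit that makes the Luhn checksum pass."""
    for check in "0123456789":
        if luhn_valid(partial + check):
            return partial + check

# The well-known Visa test number passes the checksum:
assert luhn_valid("4111111111111111")
# Any 15-digit stem -- say, a targeted issuer's prefix plus filler --
# can be completed into a checksum-valid number:
assert luhn_valid(complete_with_check_digit("412345678901234"))
```

The point isn't that the checksum is a secret -- it never was -- but that an attacker who knows a credit union's prefix can make a phishing message look convincingly specific to that institution's customers.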
In a legal decision that could have broad implications for financial institutions, a court has ruled recently that a student loan company was not negligent and did not have a duty under the Gramm-Leach-Bliley statute to encrypt a customer database on a laptop computer that fell into the wrong hands.
Basically, an employee of Brazos Higher Education Service Corporation, Inc., had customer information on a laptop computer he was using at home. The computer was stolen, and a customer sued Brazos.
The judge dismissed the lawsuit. And then he went further:
Significantly, while recognizing that Gramm-Leach-Bliley does require financial institutions to protect against unauthorized access to customer records, Judge Kyle held that the statute "does not prohibit someone from working with sensitive data on a laptop computer in a home office," and does not require that "any nonpublic personal information stored on a laptop computer should be encrypted."
I know nothing of the legal merits of the case, nor do I have an opinion about whether Gramm-Leach-Bliley does or does not require financial companies to encrypt personal data in its purview. But I do know that we as a society need to force companies to encrypt personal data about us. Companies won't do it on their own -- the market just doesn't encourage this behavior -- so legislation or liability are the only available mechanisms. If this law doesn't do it, we need another one.
EDITED TO ADD (2/22): Some commentary here.
This is a great example of a movie-plot threat:
Already mindful of motorists with road rage and kids with weapons, bus drivers are being warned of far more grisly scenarios. Like this one: Terrorists monitor a punctual driver for weeks, then hijack a bus and load the friendly yellow vehicle with enough explosives to take down a building.
It's so bizarre it's comical.
But don't worry:
An alert school bus driver could foil that plan, security expert Jeffrey Beatty recently told a class of 250 drivers in Norfolk, Va.
So we're funding counterterrorism training for school bus drivers:
Financed by the Homeland Security Department, school bus drivers are being trained to watch for potential terrorists, people who may be casing their routes or plotting to blow up their buses.
The commentary borders on the surreal:
Kenneth Trump, a school safety consultant who tracks security trends, said being prepared is not being alarmist. "Denying and downplaying schools and school buses as potential terror targets here in the U.S.," Trump said, "would be foolish."
This is certainly a complete waste of money. Possibly it's even bad for security, as bus drivers have to divide their attention between real threats -- automobile accidents involving children -- and movie-plot terrorist threats. And there's the ever-creeping surveillance society:
"Today it's bus drivers, tomorrow it could be postal officials, and the next day, it could be, 'Why don't we have this program in place for the people who deliver the newspaper to the door?' " Rollins said. "We could quickly get into a society where we're all spying on each other. It may be well intentioned, but there is a concern of going a bit too far."
What should we do with this money instead? We should fund things that actually help defend against terrorism: intelligence, investigation, emergency response. Trying to correctly guess what the terrorists are planning is generally a waste of resources; investing in security countermeasures that will help regardless of what the terrorists are planning is much smarter.
Does anyone think that this experiment would turn out any differently?
An experiment carried out within London's square mile has revealed that employees in some of the City's best known financial services companies don't care about basic security policy.
This was a benign stunt, but it could have been much more serious. A CD-ROM carried into the office and run on a computer bypasses the company's network security systems. You could easily imagine a criminal ring using this technique to deliver a malicious program into a corporate network -- and it would work.
But concluding that employees don't care about security is a bit naive. Employees care about security; they just don't understand it. Computer and network security is complicated and confusing, and unless you're technologically inclined, you're just not going to have an intuitive feel for what's appropriate and what's a security risk. Even worse, technology changes quickly, and any security intuition an employee has is likely to be out of date within a short time.
Education is one way to deal with this, but education has its limitations. I'm sure these banks had security awareness campaigns; they just didn't stick. Punishment is another form of education, and my guess is it would be more effective. If the banks fired everyone who fell for the CD-ROM-on-the-street trick, you can be sure that no one would ever do that again. (At least, until everyone forgot.) That won't ever happen, though, because the morale effects would be huge.
Rather than blaming this kind of behavior on the users, we would be better served by focusing on the technology. Why does the average computer user at a bank need the ability to install software from a CD-ROM? Why doesn't the computer block that action, or at least inform the IT department? Computers need to be secure regardless of who's sitting in front of them, irrespective of what they do.
If I go downstairs and try to repair the heating system in my home, I'm likely to break all sorts of safety rules -- and probably the system and myself in the process. I have no experience in that sort of thing, and honestly, there's no point trying to educate me. But my home heating system works fine without my having to learn anything about it. I know how to set my thermostat, and to call a professional if something goes wrong.
Computers need to work more like that.
News from a cephalopod conference:
The bizarre sex life of the giant squid is one of the topics at an international cephalopod conference in Hobart this week.
"Lessons from the Sony CD DRM Episode" is an interesting paper by J. Alex Halderman and Edward W. Felten.
Abstract: In the fall of 2005, problems discovered in two Sony-BMG compact disc copy protection systems, XCP and MediaMax, triggered a public uproar that ultimately led to class-action litigation and the recall of millions of discs. We present an in-depth analysis of these technologies, including their design, implementation, and deployment. The systems are surprisingly complex and suffer from a diverse array of flaws that weaken their content protection and expose users to serious security and privacy risks. Their complexity, and their failure, makes them an interesting case study of digital rights management that carries valuable lessons for content companies, DRM vendors, policymakers, end users, and the security community.
This story of a database error cascading into a major failure has some interesting security morals:
A house erroneously valued at $400 million is being blamed for budget shortfalls and possible layoffs in municipalities and school districts in northwest Indiana.
User error is being blamed for the problem:
An outside user of Porter County's computer system may have triggered the mess by accidentally changing the value of the Valparaiso house, said Sharon Lippens, director of the county's information technologies and service department.
Three things immediately spring to mind:
One, the system did not fail safely. This one error seems to have cascaded into multiple errors, as the new tax total immediately changed budgets of "18 government taxing units."
Two, there were no sanity checks on the system. "The city of Valparaiso and the Valparaiso Community School Corp. were asked to return $2.7 million." Didn't the city wonder where all that extra money came from in the first place?
Three, the access-control mechanisms on the computer system were too broad. When a user is authenticated to use the "R-E-D" program, he shouldn't automatically have permission to use the "R-E-R" program as well. Authentication isn't all or nothing; it should be granular to the operation.
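Both of the last two points reduce to a few lines of defensive code. A minimal sketch -- the thresholds, role names, and permission model here are invented for illustration; only the program names "R-E-D" and "R-E-R" come from the article:

```python
# Point two: a sanity check that refuses implausible valuation updates
# before they can cascade into downstream budgets. Limits are assumptions.
MAX_PLAUSIBLE_HOUSE_VALUE = 10_000_000  # ceiling for a single residence
MAX_CHANGE_RATIO = 5                    # flag updates >5x the old value

def sanity_check(old_value, new_value):
    """Return False for valuation changes that should require human review."""
    if new_value > MAX_PLAUSIBLE_HOUSE_VALUE:
        return False
    if old_value > 0 and new_value / old_value > MAX_CHANGE_RATIO:
        return False
    return True

# Point three: per-operation authorization rather than all-or-nothing.
PERMISSIONS = {
    "outside-user": {"R-E-D"},           # may run R-E-D only
    "assessor":     {"R-E-D", "R-E-R"},  # may run both programs
}

def authorized(role, program):
    """Grant access only to operations explicitly listed for this role."""
    return program in PERMISSIONS.get(role, set())

# A modest house "corrected" to $400 million should never commit silently:
assert sanity_check(120_000, 400_000_000) is False
# Authenticating as an outside user should not imply access to R-E-R:
assert authorized("outside-user", "R-E-R") is False
```

Neither check is sophisticated; that's the point. A single comparison at the right place would have contained the error.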
Conference badges are an interesting security token. They can be very valuable -- a full conference registration at the RSA Conference this week in San Jose, for example, costs $1,985 -- but their value decays rapidly with time. By tomorrow afternoon, they'll be worthless.
Counterfeiting badges is one security concern, but an even bigger concern is people losing their badge or having their badge stolen. It's way cheaper to find or steal someone else's badge than it is to buy your own. People could do this sort of thing on purpose, pretending to lose their badge and giving it to someone else.
A few years ago, the RSA Conference charged people $100 for a replacement badge, which is far cheaper than a second membership. So the fraud remained. (At least, I assume it did. I don't know anything about how prevalent this kind of fraud was at RSA.)
Last year, the RSA Conference tried to further limit these types of fraud by putting people's photographs on their badges. Clever idea, but difficult to implement.
For this to work, though, guards need to match photographs with faces. This means that either 1) you need a lot more guards at entrance points, or 2) the lines will move a lot slower. Actually, far more likely is 3) no one will check the photographs.
And it was an expensive solution for the RSA Conference. They needed the equipment to put the photos on the badges. Registration was much slower. And pro-privacy people objected to the conference keeping their photographs on file.
This year, the RSA Conference solved the problem through economics:
If you lose your badge and/or badge holder, you will be required to purchase a new one for a fee of $1,895.00.
Look how clever this is. Instead of trying to solve this particular badge fraud problem through security, they simply moved the problem from the conference to the attendee. The badges still have that $1,895 value, but now if it's stolen and used by someone else, it's the attendee who's out the money. As far as the RSA Conference is concerned, the security risk is an externality.
Note that from an outside perspective, this isn't the most efficient way to deal with the security problem. It's likely that the cost to the RSA Conference for centralized security is less than the aggregate cost of all the individual security measures. But the RSA Conference gets to make the trade-off, so they chose a solution that was cheaper for them.
Of course, it would have been nice if the conference provided a slightly more secure attachment point for the badge holder than a thin strip of plastic. But why should they? It's not their problem anymore.
Or maybe they're fake real ID cards. This website sells ID cards. They're not ID cards for anything in particular, but they look official. If you need to fool someone who really doesn't know what an ID card is supposed to look like, these are likely to work.
Gary T. Marx is a sociology professor at MIT, and a frequent writer on privacy issues. I find him both clear and insightful, as well as interesting and entertaining.
This new paper is worth reading: "Soft Surveillance: The Growth of Mandatory Volunteerism in Collecting Personal Information -- 'Hey Buddy Can You Spare a DNA?'"
You can read a whole bunch of his other articles here.
One of the basic philosophies of security is defense in depth: overlapping systems designed to provide security even if one of them fails. An example is a firewall coupled with an intrusion-detection system (IDS). Defense in depth provides security, because there's no single point of failure and no assumed single vector for attacks.
It is for this reason that a choice between implementing network security in the middle of the network -- in the cloud -- or at the endpoints is a false dichotomy. No single security system is a panacea, and it's far better to do both.
This kind of layered security is precisely what we're seeing develop. Traditionally, security was implemented at the endpoints, because that's what the user controlled. An organization had no choice but to put its firewalls, IDSs, and anti-virus software inside its network. Today, with the rise of managed security services and other outsourced network services, additional security can be provided inside the cloud.
I'm all in favor of security in the cloud. If we could build a new Internet today from scratch, we would embed a lot of security functionality in the cloud. But even that wouldn't substitute for security at the endpoints. Defense in depth beats a single point of failure, and security in the cloud is only part of a layered approach.
For example, consider the various network-based e-mail filtering services available. They do a great job of filtering out spam and viruses, but it would be folly to consider them a substitute for anti-virus security on the desktop. Many e-mails are internal only, never entering the cloud at all. Worse, an attacker might open up a message gateway inside the enterprise's infrastructure. Smart organizations build defense in depth: e-mail filtering inside the cloud plus anti-virus on the desktop.
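The layering argument can be sketched in a few lines. The "filters" below are deliberately crude stand-ins, not real products, but the structure shows why the endpoint layer isn't optional:

```python
# Defense in depth for mail: two independent filters, one at the network
# edge ("the cloud") and one on the desktop. Filter rules are toy
# assumptions; the point is which mail each layer can even see.

def cloud_filter(message):
    """Perimeter filter: only sees mail that crosses the network edge."""
    return "viagra" in message["body"].lower()  # crude spam signature

def desktop_scanner(message):
    """Endpoint filter: sees everything, including internal-only mail."""
    return message.get("attachment", "").endswith(".exe")  # crude AV rule

def is_blocked(message, external=True):
    # Internal mail never reaches the cloud filter -- which is exactly
    # why cloud filtering alone cannot substitute for the endpoint layer.
    if external and cloud_filter(message):
        return True
    return desktop_scanner(message)

# A worm mailed between two internal desktops bypasses the cloud entirely,
# but the endpoint layer still catches it:
internal_worm = {"body": "quarterly report", "attachment": "payload.exe"}
assert is_blocked(internal_worm, external=False)
```

Each layer has a blind spot the other covers; removing either one reopens an attack vector.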
The same reasoning applies to network-based firewalls and intrusion-prevention systems (IPS). Security would be vastly improved if the major carriers implemented cloud-based solutions, but they're no substitute for traditional firewalls, IDSs, and IPSs.
This should not be an either/or decision. At Counterpane, for example, we offer cloud services and more traditional network and desktop services. The real trick is making everything work together.
Security is about technology, people, and processes. Regardless of where your security systems are, they're not going to work unless human experts are paying attention. Real-time monitoring and response is what's most important; where the equipment goes is secondary.
Security is always a trade-off. Budgets are limited and economic considerations regularly trump security concerns. Traditional security products and services are centered on the internal network, because that's the target of attack. Compliance focuses on that for the same reason. Security in the cloud is a good addition, but it's not a replacement for more traditional network and desktop security.
This was published as a "Face-Off" in Network World.
The opposing view is here.
Forget RFID. Well, don't, but National Scientific Corporation has a prototype of a WiFi tagging system that, like RFID, lets you track things in real-time and space. The advantage that the WiFi Tracker system has over passive RFID tracking is that you can keep tabs on objects with WiFi Tracker tags (which can hold up to 256K of data) from as far as a few hundred meters away (the range of passive RFID taggers is just a few meters). While you can do something similar with active RFID tags, with WiFi Tracker companies can use their pre-existing WiFi network to track things rather than having to build a whole new RFID system.
In other news, Apple is adding WiFi to the iPod.
And, of course, you can be tracked from your cellphone:
But the FBI and the U.S. Department of Justice have seized on the ability to locate a cellular customer and are using it to track Americans' whereabouts surreptitiously--even when there's no evidence of wrongdoing.
Last Friday, the Wall Street Journal ran an article (unfortunately, the link is only for paid subscribers) about how Valentine's Day is the day when cheating spouses are most likely to trip up:
Valentine's Day is the biggest single 24-hour period for florists, a huge event for greeting-card companies and a boon for candy makers. But it's also a major crisis day for anyone who is having an affair. After all, Valentine's Day is the one holiday when everyone is expected to do something romantic for their spouse or lover -- and if someone has both, it's a serious problem.
So, of course, private detectives work overtime.
"If anything is going on, it will be happening on that day," says Irene Smith, who says business at her Discreet Investigations detective agency in Golden, Colo., as much as doubles -- to as many as 12 cases some years -- on Valentine's Day.
Private detectives are expensive -- about $100 per hour, according to the article -- and might not be worth it.
The article suggests some surveillance tools you can buy at home: a real-time GPS tracking system you can hide in your spouse's car, a Home Evidence Collection Kit you can use to analyze stains on "clothing, car seats or elsewhere," Internet spying software, a telephone recorder, and a really cool buttonhole camera.
But even that stuff may be overkill:
Ruth Houston, author of a book called Is He Cheating on You? -- 829 Telltale Signs, says she generally recommends against spending money on private detectives to catch cheaters because the indications are so easy to read. (Sign No. 3 under "Gifts": He tries to convince you he bought expensive chocolates for himself.)
I hope I don't need to remind you that cheaters should also be reading that book, familiarizing themselves with the 829 telltale signs they should avoid making.
The article has several interesting personal stories, and warns that "planning a 'business trip' that falls over Valentine's Day is a typical mistake cheaters make."
So now I'm wondering why the RSA Conference is being held over Valentine's Day.
EDITED TO ADD (2/14): Today's Washington Post has a similar story.
I just found an interesting paper: "Windows Access Control Demystified," by Sudhakar Govindavajhala and Andrew W. Appel. Basically, they show that companies like Adobe, Macromedia, etc., have mistakes in their access-control programming that open security holes in Windows XP.
EDITED TO ADD (2/13): Ed Felten has some good commentary about the paper on his blog.
The TSA has announced that Secure Flight, its comprehensive program to match airline passengers against terrorist watch lists, has been suspended:
And because of security concerns, the government is going back to the drawing board with the program called Secure Flight after spending nearly four years and $150 million on it, the Senate Commerce Committee was told.
I have written about this program extensively, most recently here. It's an absolute mess in every way, and doesn't make us safer.
But don't think this is the end. Under Section 4012 of the Intelligence Reform and Terrorism Prevention Act, Congress mandated the TSA put in place a program to screen every domestic passenger against the watch list. Until Congress repeals that mandate, these postponements and suspensions are the best we can hope for. Expect it all to come back under a different name -- and a clean record in the eyes of those not paying close attention -- soon.
EDITED TO ADD (2/15): Ed Felten has some good commentary:
Instead of sticking to this more modest plan, Secure Flight became a vehicle for pie-in-the-sky plans about data mining and automatic identification of terrorists from consumer databases. As the program’s goals grew more ambitious and collided with practical design and deployment challenges, the program lost focus and seemed to have a different rationale and plan from one month to the next.
There's a Cephalopod Conference going on right now in Hobart, Australia. It's the Cephalopod International Advisory Council International Symposium. (I'll bet you didn't even know that cephalopods had an international advisory council. Or what they need advice about.) In the coming Fridays I hope to present some of the more interesting papers from this conference.
Next week is the RSA Conference in San Jose, CA. I will speak on "The Economics of Security" at 4:30 PM on the 14th, and again on "Why Security Has So Little to Do with Security" at 2:00 PM on the 15th. I will also participate in a main-stage panel on ID cards at 8:00 AM on the 16th.
Also, my wife and I have written a 110-page restaurant guidebook for the downtown San Jose area. It's a fun read, even if you aren't looking for a San Jose restaurant. (Do people know that I write restaurant reviews for the Minneapolis Star Tribune?)
The restaurant guide will be available at the conference -- and of course you can download it -- but I have a few hundred to give away here. I'll send a copy to anyone who wants one, in exchange for postage. (It's not about the money, but I need some sort of gating function so that only those actually interested get a copy.)
Cost is $2.50 if you live in the U.S., $3.00 for Canada/Mexico, and $6.00 elsewhere. I'll accept PayPal to my e-mail address -- firstname.lastname@example.org -- or a check to Bruce Schneier, Counterpane Internet Security, Inc., 1090A La Avenida, Mountain View, CA 94043. Sorry, but I can't accept credit cards directly.
Nice paper that dispels the myth that worms won't be able to propagate under IPv6 because the address space is too sparse.
I'm just starting to read about the new security features in Internet Explorer 7. So far, I like what I am reading.
IE 7 requires that all browser windows display an address bar. This helps foil attackers that operate by popping up new windows masquerading as pages on a legitimate site, when in fact the site is fraudulent. By requiring an address bar, users will immediately see the true URL of the displayed page, making these types of attacks more obvious. If you think you're looking at www.microsoft.com, but the browser address bar says www.illhackyou.net, you ought to be suspicious.
I use Opera, and have long used the address bar to "check" on URLs. This is an excellent idea. So is this:
In early November, a bunch of Web browser developers got together and started fleshing out standards for address bar coloring, which can cue users to secured connections. Under the proposal laid out by IE 7 team member Rob Franco, even sites that use a standard SSL certificate will display a standard white address bar. Sites that use a stronger, as yet undetermined level of protection will use a green bar.
I like easy visual indications about what's going on. And I really like that SSL is generic white, because it really doesn't prove that you're communicating with the site you think you're communicating with. This feature helps with that, though:
Franco also said that when navigating to an SSL-protected site, the IE 7 address bar will display the business name and certification authority's name in the address bar.
Some of the security measures in IE7 weaken the integration between the browser and the operating system:
People using Windows Vista beta 2 will find a new feature called Protected Mode, which renders IE 7 unable to modify system files and settings. This essentially breaks down part of the integration between IE and Windows itself.
Think of it as a wall between IE and the rest of the operating system. No, the code won't be perfect, and yes, ways will be found to circumvent this security, but this is an important and long-overdue feature.
The majority of IE's notorious security flaws stem from its pervasive integration with Windows. That is a feature no other Web browser offers -- and an ability that Vista's Protected Mode intends to mitigate. IE 7 obviously won't remove all of that tight integration. Lacking deep architectural changes, the effort has focused instead on hardening or eliminating potential vulnerabilities. Unfortunately, this approach requires Microsoft to anticipate everything that could go wrong and block it in advance -- hardly a surefire way to secure a browser.
That last sentence is about the general Internet attitude to allow everything that is not explicitly denied, rather than deny everything that is not explicitly allowed.
Also, you'll have to wait until Vista to use it:
...this capability will not be available in Windows XP because it's woven directly into Windows Vista itself.
There are also some good changes under the hood:
IE 7 does eliminate a great deal of legacy code that dates back to the IE 4 days, which is a welcome development.
Microsoft has rewritten a good bit of IE 7's core code to help combat attacks that rely on malformed URLs (that typically cause a buffer overflow). It now funnels all URL processing through a single function (thus reducing the amount of code that "looks" at URLs).
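The single-chokepoint idea is worth a sketch: route every URL through one validation function, so there is exactly one place to audit, fuzz, and harden. The limits and rules below are illustrative assumptions, not IE 7's actual checks:

```python
# Sketch of a single URL-processing chokepoint. Every caller funnels URLs
# through check_url(); nothing else in the program parses raw URL strings.

from urllib.parse import urlsplit

MAX_URL_LEN = 2048  # reject oversized URLs before anything else touches them

def check_url(url):
    """The one function all URL handling funnels through.
    Returns parsed parts on success, None on rejection."""
    if len(url) > MAX_URL_LEN:
        return None
    parts = urlsplit(url)
    if parts.scheme not in ("http", "https") or not parts.hostname:
        return None
    return parts  # callers only ever see pre-validated parts

assert check_url("https://www.microsoft.com/ie7") is not None
assert check_url("javascript:alert(1)") is None       # scheme rejected
assert check_url("http://" + "A" * 5000) is None      # oversized rejected
```

A malformed-URL bug fixed here is fixed everywhere, which is exactly the "reducing the amount of code that looks at URLs" point.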
All good stuff, but I agree with this conclusion:
IE 7 offers several new security features, but it's hardly a given that the situation will improve. There has already been a set of security updates for IE 7 beta 1 released for both Windows Vista and Windows XP computers. Security vulnerabilities in a beta product shouldn't be alarming (IE 7 is hardly what you'd consider "finished" at this point), but it may be a sign that the product's architecture and design still have fundamental security issues.
I'm not switching from Opera yet, and my second choice is still Firefox. But the masses still use IE, and our security depends in part on those masses keeping their computers worm-free and bot-free.
NOTE: Here's some info on how to get your own copy of Internet Explorer 7 beta 2.
This was originally published in The Washington Post:
During the past 15 years, The Post and other media outlets have reported on the unsettling "militarization" of police departments across the country. Armed with free surplus military gear from the Pentagon, SWAT teams have multiplied at a furious pace. Tactics once reserved for rare, volatile situations such as hostage takings, bank robberies and terrorist incidents increasingly are being used for routine police work.
My eleventh column for Wired.com is about ID cards, and why you don't -- and won't -- have a single card in your wallet for everything. It has nothing to do with security.
My airline wants a card with its logo on it in my wallet. So does my rental car company, my supermarket and everyone else I do business with. My credit card company wants me to open up my wallet and notice its card; I'm far more likely to use a physical card than a virtual one that I have to remember is attached to my driver's license number. And I'm more likely to feel important if I have a card, especially a card that recognizes me as a frequent flier or a preferred customer.
Recently there was some serious tax credit fraud in the UK. Basically, there is a tax-credit system that allows taxpayers to get a refund for some of their taxes if they meet certain criteria. Politically, this was a major objective of the Labour Party. So the Inland Revenue (the UK version of the IRS) made it as easy as possible to apply for this refund. One of the ways taxpayers could apply was via a Web portal.
Unfortunately, the only details necessary when applying were the applicant's National Insurance number (the UK version of the Social Security number) and mother's maiden name. The refund was then paid directly into any bank account specified on the application form. Anyone who knows anything about security can guess what happened. Estimates are that fifteen million pounds have been stolen by criminal syndicates.
The press has been treating this as an issue of identity theft, talking about how criminals went Dumpster diving to get National Insurance numbers and so forth. I have seen very little about how the authentication scheme failed. The system tried -- using semi-secret information like NI number and mother's maiden name -- to authenticate the person. Instead, the system should have tried to authenticate the transaction. Even a simple verification step -- does the name on the account match the name of the person who should receive the refund -- would have gone a long way to preventing this type of fraud.
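The verification step I have in mind is trivial to express in code. This is a minimal sketch of authenticating the transaction rather than the person -- the record layout is hypothetical, not the Inland Revenue's:

```python
def normalize(name):
    """Collapse case and whitespace before comparing names."""
    return " ".join(name.lower().split())

def transaction_plausible(claimant_name, account_holder_name):
    """Does the destination bank account actually belong to the person
    who is supposed to receive the refund? Semi-secret identifiers like
    NI number can be stolen; this check looks at the transaction itself."""
    return normalize(claimant_name) == normalize(account_holder_name)
```

It wouldn't stop all fraud -- a criminal could open an account in the victim's name -- but it raises the cost of the attack far above "know an NI number and a maiden name."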
Zooko's Triangle argues that names cannot be global, secure, and memorable, all at the same time. Domain names are an example: they are global, and memorable, but as the rapid rise of phishing demonstrates, they are not secure.
Check washing is a form of fraud. The criminal uses various solvents to remove data from a signed check -- the "pay to" name, the amount -- and replace it with data more beneficial to the criminal: his own name, a larger amount.
This webpage -- I know nothing about who these people are, but they seem a bit amateurish -- talks about check fraud, and then gives this advice to check writers:
WHAT TYPE OF PEN TO USE WHEN WRITING A CHECK:
I just wish they footnoted this statistic, obviously designed to scare people:
Check washing takes place to the tune of $815 million every year in the U.S. And it is increasing at an alarming rate.
EDITED TO ADD (2/18): Kish's response.
This article by Malcolm Gladwell on profiling and generalizations is excellent.
I recently received a PR e-mail from a company called Passlogix:
Password security is still a very prevalent threat, 2005 had security gurus like Bruce Schneier publicly suggest that you actually write them down on sticky-notes. A recent survey stated 78% of employees use passwords as their primary forms of security, 52% use the same password for their accounts -- yet 77% struggle to remember their passwords.
Actually, I don't. I recommend writing your passwords down and keeping them in your wallet.
I know nothing about this company, but I am unhappy at their misrepresentation of what I said.
This Barcelona club requires an embedded RFID chip for VIP status. (Note that the article is from October 2004.)
Last year I blogged about an article by Daniel J. Solove and Chris Hoofnagle titled "A Model Regime of Privacy Protection."
The paper has been revised a few times based on comments -- some of them from readers of this blog and Crypto-Gram -- and the final version has been published.
Abstract: A series of major security breaches at companies with sensitive personal information has sparked significant attention to the problems with privacy protection in the United States. Currently, the privacy protections in the United States are riddled with gaps and weak spots. Although most industrialized nations have comprehensive data protection laws, the United States has maintained a sectoral approach where certain industries are covered and others are not. In particular, emerging companies known as "commercial data brokers" have frequently slipped through the cracks of U.S. privacy law. In this article, the authors propose a Model Privacy Regime to address the problems in the privacy protection in the United States, with a particular focus on commercial data brokers. Since the United States is unlikely to shift radically from its sectoral approach to a comprehensive data protection regime, the Model Regime aims to patch up the holes in existing privacy regulation and improve and extend it. In other words, the goal of the Model Regime is to build upon the existing foundation of U.S. privacy law, not to propose an alternative foundation. The authors believe that the sectoral approach in the United States can be improved by applying the Fair Information Practices -- principles that require the entities that collect personal data to extend certain rights to data subjects. The Fair Information Practices are very general principles, and they are often spoken about in a rather abstract manner. In contrast, the Model Regime demonstrates specific ways that they can be incorporated into privacy regulation in the United States.
Definitely worth reading.
Interesting research paper by Shishir Nagaraja and Ross Anderson. Implications for warfare, terrorism, and peer-to-peer file sharing:
User Friendly on the topic.
It's not a squid this time, but an octopus.
Rare video footage shows a giant octopus attacking a small submarine off the west coast of Vancouver Island.
Here are the side-by-side search results for "tiananmen" on google.com and google.cn.
Unknown parties tapped the mobile phones of about 100 Greek politicians and offices, including the U.S. embassy in Athens and the Greek prime minister.
Details are sketchy, but it seems that a piece of malicious code was discovered by Ericsson technicians in Vodafone's mobile phone software. The code tapped into the conference call system. It "conference called" phone calls to 14 prepaid mobile phones where the calls were recorded.
There was an interesting security tidbit in this article on last week's post office shooting:
The shooter's pass to access the facility had been expired, officials said, but she apparently used her knowledge of how security at the facility worked to gain entrance, following another vehicle in through the outer gate and getting other employees to open security doors.
This is a failure of both technology and procedure. The gate was configured to allow multiple vehicles to enter on only one person's authorization -- that's a technology failure. And people are programmed to be polite -- to hold the door for others.
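The technology fix is what the physical-security industry calls anti-passback or anti-tailgating: one valid authorization admits exactly one vehicle. A sketch of the rule the gate lacked (the API here is hypothetical):

```python
class Gate:
    """One valid swipe arms the gate for exactly one entry."""

    def __init__(self, valid_passes):
        self.valid_passes = valid_passes  # set of non-expired pass IDs
        self.authorized = False           # at most one pending entry

    def swipe(self, pass_id):
        """Expired or unknown passes are rejected outright."""
        if pass_id in self.valid_passes:
            self.authorized = True
            return True
        return False

    def vehicle_enters(self):
        """Each entry consumes the authorization, so a second vehicle
        following the first through the gate is refused."""
        if self.authorized:
            self.authorized = False
            return True
        return False
```

The procedural failure -- politely holding the security door open -- is harder to patch, because it requires training people to do something that feels rude.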
SIDE NOTE: There is a common myth that workplace homicides are prevalent in the United States Postal Service. (Note the phrase "going postal.") But not counting this event, there has been less than one shooting fatality per year at Postal Service facilities over the past 20 years. Since the USPS has more than 700,000 employees, that's a lower rate than in the average workplace.
This is bizarre:
House Republicans are taking a mulligan on the first ballot for Majority Leader. The first count showed more votes cast than Republicans present at the Conference meeting.
I can't find anything about the procedures, the technology, anything.
Interesting white paper from the ACLU: "Eavesdropping 101: What Can The NSA Do?"
See also this map.
EDITED TO ADD (2/4): Barry Steinhardt of the ACLU responds to some criticism.
This Dutch prison is the future of surveillance.
At a high-tech prison opening this week, inmates wear electronic wristbands that track their every movement, and guards monitor cells using emotion-recognition software.
Remember, new surveillance technologies are first used on populations with limited rights: inmates, children, the mentally ill, military personnel.
Interesting article about someone convicted for running a for-profit botnet:
November's 52-page indictment, along with papers filed last week, offer an unusually detailed glimpse into a shadowy world where hackers, often not old enough to vote, brag in online chat groups about their prowess in taking over vast numbers of computers and herding them into large armies of junk mail robots and arsenals for so-called denial of service attacks on Web sites.
Both the Microsoft Word document format (MS Word) and the Adobe Portable Document Format (PDF) are complex, sophisticated computer data formats. They can contain many kinds of information -- text, graphics, tables, images, metadata, and more -- all mixed together. The complexity makes them potential vehicles for exposing information unintentionally, especially when downgrading or sanitizing classified materials. Although the focus is on MS Word, the general guidance applies to other word processors and office tools, such as WordPerfect, PowerPoint, Excel, Star Office, etc.
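The binary Word format is opaque, but the point is easy to demonstrate with its modern successor: a .docx file is just a ZIP archive, and author names, revision dates, and other metadata live in a separate XML part that survives even when the visible text looks clean. A hedged sketch (the XML content below is a stand-in, not a real document):

```python
import io
import zipfile

def extract_core_metadata(docx_bytes):
    """Return the raw core-properties XML (author, dates, revision
    info) that travels inside every .docx archive, independent of
    the visible document text in word/document.xml."""
    with zipfile.ZipFile(io.BytesIO(docx_bytes)) as z:
        return z.read("docProps/core.xml").decode("utf-8")
```

Redacting the visible text does nothing to this part of the file -- which is exactly the kind of unintentional exposure the NSA guidance is about.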
EDITED TO ADD (2/1): The NSA page for the redaction document, and other "Security Configuration Guides," is here.
Last July I blogged about the risks of storing ever-larger amounts of data in ever-smaller devices.
Last week I wrote my tenth Wired.com column on the topic:
The point is that it's now amazingly easy to lose an enormous amount of information. Twenty years ago, someone could break into my office and copy every customer file, every piece of correspondence, everything about my professional life. Today, all he has to do is steal my computer. Or my portable backup drive. Or my small stack of DVD backups. Furthermore, he could sneak into my office and copy all this data, and I'd never know it.
EDITED TO ADD (2/2): A Dutch army officer lost a memory stick with details of an Afghan mission.
Last week the TSA announced details of its Registered Traveler program. Basically, you pay money for a background check and get a biometric ID -- a fingerprint -- that gets you through airline security faster. (See also this and this AP story.)
I've already written about why this is a bad idea for security:
What the Trusted Traveler program does is create two different access paths into the airport: high security and low security. The intent is that only good guys will take the low-security path, and the bad guys will be forced to take the high-security path, but it rarely works out that way. You have to assume that the bad guys will find a way to take the low-security path.
But what the TSA is actually doing is even more bizarre. The TSA is privatizing this system. They want the companies that sell for-profit, Registered Traveler passes to do the background checks. They want the companies to use error-filled commercial databases to do this. What incentive do these companies have to not sell someone a pass? Who is liable for mistakes?
I thought airline security was important.
This essay is an excellent discussion of the problems here.
Welcome to the brave new world of "market-driven" airport security, where different private security firms run and operate different lanes at different checkpoints, offering varied levels of accelerated screening depending on how much a user paid and how deep of a background check he or she submitted to. Thus the speed at which you move through a checkpoint will theoretically depend on a multiplicity of factors, only two of which are under your control (the depth of your background check and the firm(s) with which you've contracted). Other factors affecting your screening time, like which private security firm is manning a checkpoint and what resources that particular firm has invested in a particular checkpoint (e.g. extra personnel, more screening equipment, and so on) at a particular time of day, are entirely out of your control.
This is certainly a good point:
What's worse than having identity thieves impersonate you to Chase Bank? Having terrorists impersonate you to the TSA.