Schneier on Security
A blog covering security and security technology.
March 2006 Archives
The San Francisco Bay Guardian is reporting on a new crime: people who grab laptops out of their owners' hands and then run away. It's called "iJacking," and there seems to be a wave of this type of crime at Internet cafes in San Francisco:
In 2004 the SFPD Robbery Division recorded 17 strong-arm laptop robberies citywide. This increased to 30 cases in 2005, a total that doesn't even include thefts that fall under the category of "burglary," when a victim isn't present. (SFPD could not provide statistics on the number of laptop burglaries.)
Maloney was absorbed in his work when suddenly a hooded person yanked the laptop from Maloney's hands and ran out the door. Maloney tried to grab his computer, but he stumbled across a few chairs and landed on the floor as the perpetrator dashed to a vehicle waiting a quarter block away.
It's obvious why these thefts are occurring. Laptops are valuable, easy to steal, and easy to fence. If we want to "solve" this problem, we need to modify at least one of those characteristics. Some Internet cafes are providing locking cables for their patrons, in an attempt to make them harder to steal. But that will only mean that the muggers will follow their victims out of the cafes. Laptops will become less valuable over time, but that really isn't a good solution. The only thing left is to make them harder to fence.
This isn't an easy problem. There are a bunch of companies that make solutions that help people recover stolen laptops. There are programs that "phone home" if a laptop is stolen. There are programs that hide a serial number on the hard drive somewhere. There are non-removable tags users can affix to their computers with ID information. But until this kind of thing becomes common, the crimes will continue.
Reminds me of the problem of bicycle thefts.
The British security service MI5 is warning business leaders that their offices are probably badly designed against terrorist bombs. The common modern office consists of large rooms without internal walls, which puts employees at greater risk in the event of terrorist bombs.
From The Scotsman:
The trend towards open-plan offices without internal walls could put employees at increased risk in the event of a terrorist bomb, MI5 has warned business leaders. The advice comes as the Security Service steps up its advice to companies on how to prepare for an attack. MI5 has produced a 40-page leaflet, "Protecting Against Terrorism", which will be distributed to large businesses and public-sector bodies across Britain. Among the guidance in the pamphlet is that bosses should consider the security implications of getting rid of internal walls.
Interesting paper: "Passenger Profiling, Imperfect Screening, and Airport Security," by Nicola Persico and Petra E. Todd. The authors use game theory to investigate the optimal screening policy, in a scenario where there are different social groups (separated by felon status, race, religion, etc.) with different preferences for crime and/or terrorism.
Monolith is an open-source program that can XOR two files together to create a third file, and -- of course -- can XOR that third file with one of the original two to create the other original file.
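The mechanics are a one-line operation. A minimal sketch (the function name is mine, not Monolith's, and it assumes equal-length inputs, which the real program handles more generally):

```python
def xor_files(a: bytes, b: bytes) -> bytes:
    """XOR two equal-length byte strings together."""
    return bytes(x ^ y for x, y in zip(a, b))

file_a = b"copyrighted file A"
file_b = b"copyrighted file B"

# "Munging" produces a third file that resembles neither input.
mono = xor_files(file_a, file_b)

# XORing the Mono file with either original recovers the other,
# because (A ^ B) ^ B == A.
assert xor_files(mono, file_b) == file_a
assert xor_files(mono, file_a) == file_b
```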
The website wonders about the copyright implications of all of this:
Things get interesting when you apply Monolith to copyrighted files. For example, munging two copyrighted files will produce a completely new file that, in most cases, contains no information from either file. In other words, the resulting Mono file is not "owned" by the original copyright holders (if owned at all, it would be owned by the person who did the munging). Given that the Mono file can be combined with either of the original, copyrighted files to reconstruct the other copyrighted file, this lack of Mono ownership may seem hard to believe.
The website then postulates this as a mechanism to get around copyright law:
What does this mean? This means that Mono files can be freely distributed.
Clever, but it won't hold up in court. In general, technical hair splitting is not an effective way to get around the law. My guess is that anyone who distributes that third file -- they call it a "Mono" file -- along with instructions on how to recover the copyrighted file is going to be found guilty of copyright violation.
The correct way to solve this problem is through law, not technology.
This story is about the remote town of Dillingham, Alaska, which is probably the most watched town in the country. There are 80 surveillance cameras for the 2,400 people, which translates to one camera for every 30 people.
The cameras were bought, I assume, because the town couldn't think of anything else to do with the $202,000 Homeland Security grant they received. (One of the problems of giving this money out based on political agenda, rather than by where the actual threats are.)
But they got the money, and they spent it. And now they have to justify the expense. Here's the movie-plot threat the Dillingham Police Chief uses to explain why the expense was worthwhile:
"Russia is about 800 miles that way," he says, arm extending right.
The first problem with the movie plot is that it's just plain silly. But the second problem, which you might have to look back to notice, is that those 80 cameras will do nothing to stop his imagined attack.
We are all security consumers. We spend money, and we expect security in return. This expenditure was a waste of money, and as a U.S. taxpayer, I am pissed that I'm getting such a lousy deal.
You can't detect them, because they look normal:
One type is the exact size and shape of a credit card, except that two of the edges are lethally sharp. It's made of G10 laminate, an ultra-hard material normally employed for circuit boards. You need a diamond file to get an edge on it.
The FBI's extensive Guide to Concealable Weapons has 89 pages of weapons intended to get through security. These are generally variations of a knifeblade concealed in a pen, comb or a cross -- and most of them are pretty obvious on X-ray.
Detectives used profiles posted on the MySpace social networking Web site to identify six suspects in a rape and robbery....
For almost two years, intelligence services around the world tried to uncover the identity of an Internet hacker who had become a key conduit for al-Qaeda. The savvy, English-speaking, presumably young webmaster taunted his pursuers, calling himself Irhabi -- Terrorist -- 007. He hacked into American university computers, propagandized for the Iraq insurgents led by Abu Musab al-Zarqawi and taught other online jihadists how to wield their computers for the cause.
Assuming the British authorities are to be believed, he definitely was a terrorist:
Suddenly last fall, Irhabi 007 disappeared from the message boards. The postings ended after Scotland Yard arrested a 22-year-old West Londoner, Younis Tsouli, suspected of participating in an alleged bomb plot. In November, British authorities brought a range of charges against him related to that plot. Only later, according to our sources familiar with the British probe, was Tsouli's other suspected identity revealed. British investigators eventually confirmed to us that they believe he is Irhabi 007.
Okay. So he was a terrorist. And he used the Internet, both as a communication tool and to break into networks. But this does not make him a cyberterrorist.
Interesting article, though.
Here's the Slashdot thread on the topic.
The National Institute of Information and Communications Technology is trying to patent a system of encryption using electromagnetic waves from quasars.
I can see the story on the home page of Nikkei.net Interactive, but can't get at the story without a login.
Does anyone have the faintest clue what they're talking about here? If I had to guess, it's just another random-number generator. It definitely doesn't sound like two telescopes pointing at the same piece of sky can construct the same key -- now that would be cool.
Creative Home Engineering can make secret doors and hidden passageways for your home.
Pull a favorite book from your library shelf and watch a cabinet section recess to reveal a hidden passageway.
Who cares about the security properties? I want one.
A couple -- living together, I assume -- and engaged to be married, shared a computer. He used Firefox to visit a bunch of dating sites, being smart enough not to have the browser save his password. But Firefox did save the names of the sites it was told never to save the password for. She happened to stumble on this list. The details are left to the imagination, but they broke up.
Most bug reports aren't this colorful.
I don't know what this is, but it sure looks like a working model of an Enigma. And it's beautiful.
If Friday cat blogging involves cute pictures of cats, shouldn't Friday squid blogging include cute pictures of squid?
Since when did The New Scientist hire novelists to write science stories?
A truck pulls up in front of New York City's Grand Central Station, one of the most densely crowded spots in the world. It is a typical weekday afternoon, with over half a million people in the immediate area, working, shopping or just passing through. A few moments later the driver makes his delivery: a 10-kiloton atomic explosion.
EDITED TO ADD (3/24): Here's the full article.
Who needs terrorists? We can cause terror all by ourselves:
A worker at a Downtown building who was using a pellet gun with a scope to scare pigeons prompted a massive police response that led to the shutdown of several blocks this afternoon.
Rare outbreak of security common sense in London:
London Underground is likely to reject the use of passenger scanners designed to detect weapons or explosives as they are "not practical", a security chief for the capital's transport authority said on 14 March 2006.
It seems like every time someone tests airport security, airport security fails. In tests between November 2001 and February 2002, screeners missed 70 percent of knives, 30 percent of guns and 60 percent of (fake) bombs. And recently (see also this), testers were able to smuggle bomb-making parts through airport security in 21 of 21 attempts. It makes you wonder why we're all putting our laptops in a separate bin and taking off our shoes. (Although we should all be glad that Richard Reid wasn't the "underwear bomber.")
The failure to detect bomb-making parts is easier to understand. Break up something into small enough parts, and it's going to slip past the screeners pretty easily. The explosive material won't show up on the metal detector, and the associated electronics can look benign when disassembled. This isn't even a new problem. It's widely believed that the Chechen women who blew up the two Russian planes in August 2004 probably smuggled their bombs aboard the planes in pieces.
But guns and knives? That surprises most people.
Airport screeners have a difficult job, primarily because the human brain isn't naturally adapted to the task. We're wired for visual pattern matching, and are great at picking out something we know to look for -- for example, a lion in a sea of tall grass.
But we're much less adept at detecting random exceptions in uniform data. Faced with an endless stream of identical objects, the brain quickly concludes that everything is identical and there's no point in paying attention. By the time the exception comes around, the brain simply doesn't notice it. This psychological phenomenon isn't just a problem in airport screening: It's been identified in inspections of all kinds, and is why casinos move their dealers around so often. The tasks are simply mind-numbing.
To make matters worse, the smuggler can try to exploit the system. He can position the weapons in his baggage just so. He can try to disguise them by adding other metal items to distract the screeners. He can disassemble bomb parts so they look nothing like bombs. Against a bored screener, he has the upper hand.
And, as has been pointed out again and again in essays on the ludicrousness of post-9/11 airport security, improvised weapons are a huge problem. A rock, a battery for a laptop, a belt, the extension handle off a wheeled suitcase, fishing line, the bare hands of someone who knows karate ... the list goes on and on.
Technology can help. X-ray machines already randomly insert "test" bags into the stream -- keeping screeners more alert. Computer-enhanced displays are making it easier for screeners to find contraband items in luggage, and eventually the computers will be able to do most of the work. It makes sense: Computers excel at boring repetitive tasks. They should do the quick sort, and let the screeners deal with the exceptions.
Sure, there'll be a lot of false alarms, and some bad things will still get through. But it's better than the alternative.
And it's likely good enough. Remember the point of passenger screening. We're not trying to catch the clever, organized, well-funded terrorists. We're trying to catch the amateurs and the incompetent. We're trying to catch the unstable. We're trying to catch the copycats. These are all legitimate threats, and we're smart to defend against them. Against the professionals, we're just trying to add enough uncertainty into the system that they'll choose other targets instead.
The terrorists' goals have nothing to do with airplanes; their goals are to cause terror. Blowing up an airplane is just a particular attack designed to achieve that goal. Airplanes deserve some additional security because they have catastrophic failure properties: If there's even a small explosion, everyone on the plane dies. But there's a diminishing return on investments in airplane security. If the terrorists switch targets from airplanes to shopping malls, we haven't really solved the problem.
What that means is that a basic cursory screening is good enough. If I were investing in security, I would fund significant research into computer-assisted screening equipment for both checked and carry-on bags, but wouldn't spend a lot of money on invasive screening procedures and secondary screening. I would much rather have well-trained security personnel wandering around the airport, both in and out of uniform, looking for suspicious actions.
When I travel in Europe, I never have to take my laptop out of its case or my shoes off my feet. Those governments have had far more experience with terrorism than the U.S. government, and they know when passenger screening has reached the point of diminishing returns. (They also implemented checked-baggage security measures decades before the United States did -- again recognizing the real threat.)
And if I were investing in security, I would invest in intelligence and investigation. The best time to combat terrorism is before the terrorist tries to get on an airplane. The best countermeasures have value regardless of the nature of the terrorist plot or the particular terrorist target.
In some ways, if we're relying on airport screeners to prevent terrorism, it's already too late. After all, we can't keep weapons out of prisons. How can we ever hope to keep them out of airports?
A version of this essay originally appeared on Wired.com.
I really wish this article had more details about the crime. Basically, a criminal ring used an authentication failure with fax transmissions to steal (unsuccessfully, as it turned out) 150 million Australian dollars.
There's a new kind of door lock from the Israeli company E-Lock. It responds to sound. Instead of carrying a key, you carry a small device that makes a series of quick knocking sounds. Just touching it to the door causes the door to open; there's no keyhole. The device, called a "KnocKey," has a keypad and can be programmed to require a PIN before operation -- for even greater security.
Clever idea, but there's the usual security hyperbole:
Since there is no keyhole or contact point on the door, this unique mechanism offers a significantly higher level of security than existing technology.
More accurate would be to say that the security vulnerabilities are different from those of existing technology. We know a lot about the vulnerabilities of conventional locks, but we know very little about the security of this system. But don't confuse this lack of knowledge with increased security.
Last year, the Department of Homeland Security finally got around to appointing its DHS Data Privacy and Integrity Advisory Committee. It was mostly made up of industry insiders instead of anyone with any real privacy experience. (Lance Hoffman from George Washington University was the most notable exception.)
And now, we have something from that committee. On March 7th they published their "Framework for Privacy Analysis of Programs, Technologies, and Applications."
This document sets forth a recommended framework for analyzing programs, technologies, and applications in light of their effects on privacy and related interests. It is intended as guidance for the Data Privacy and Integrity Advisory Committee (the Committee) to the U.S. Department of Homeland Security (DHS). It may also be useful to the DHS Privacy Office, other DHS components, and other governmental entities that are seeking to reconcile personal data-intensive programs and activities with important social and human values.
It's surprisingly good.
I like that it is a series of questions a program manager has to answer: about the legal basis for the program, its efficacy against the threat, and its effects on privacy. I am particularly pleased that their questions on pages 3-4 are very similar to the "five steps" I wrote about in Beyond Fear. I am thrilled that the document takes a "trade-off" approach; the last question asks: "Should the program proceed? Do the benefits of the program...justify the costs to privacy interests....?"
I think this is a good starting place for any technology or program with respect to security and privacy. And I hope the DHS actually follows the recommendations in this report.
Really interesting article by Robert X. Cringely on the lack of federal funding for security technologies.
After the 9-11 terrorist attacks, the United States threw its considerable fortune into the War on Terror, of which a large component was Homeland Security. We conducted a couple wars abroad, both of which still seem to be going on, and took a vast domestic security bureaucracy and turned it into a different and even more vast domestic security bureaucracy. We could argue all day about whether or not America is more secure as a result of these changes, but we'd all agree that a lot of money has been spent. In fact, from a pragmatic point of view, ALL the money has been spent, and that's the point of this particular column. For a variety of reasons, there is no money left to spend on homeland security -- none, nada, zilch. We're busted.
I think his assessment is spot on.
They're deliberately fake, made in Germany for a promotion. But they're being passed as real:
Cologne newsagent Bernd Friedhelm, 33, accepted one of the fake 600 euro notes from an unknown customer who bought two cartons of cigarettes and walked off with 534 euros in change.
This is why security is so hard: people.
Last summer, the surprising news came out that Japanese nuclear secrets leaked out, after a contractor was allowed to connect his personal virus-infested computer to the network at a nuclear power plant. The contractor had a file sharing app on his laptop as well, and suddenly nuclear secrets were available to plenty of kids just trying to download the latest hit single. It's only taken about nine months for the government to come up with its suggestion on how to prevent future leaks of this nature: begging all Japanese citizens not to use file sharing systems -- so that the next time this happens, there won't be anyone on the network to download such documents.
Even if their begging works, it solves the wrong problem. Sad.
EDITED TO ADD (3/22): Another article.
Really good article by a reporter who has been covering improvised explosive devices in Iraq:
Last summer, a U.S. Colonel in Baghdad told me that I was America's enemy, or very close to it. For months, I had been covering the U.S. military's efforts to deal with the threat of IEDs, improvised explosive devices. And my writing, he told me, was going too far -- especially this January 2005 Wired News story, in which I described some of the Pentagon's more exotic attempts to counter these bombs.
She had airplane blueprints. Oh, and she was a woman -- which cast immediate suspicion on her story.
Like most Nigerians, you're probably finding that it's increasingly difficult to earn a decent living from email. That's why you need to attend the 3rd Annual Nigerian EMail Conference.
Squid fishing turns into an international incident back in February 2005:
A Taiwanese flagged jigger allegedly poaching in the South Atlantic was arrested by the Argentine Coast Guard after intimidating fire. This is the second incident in a week.
This is great work by Yossi Oren and Adi Shamir:
My guess of the industry's response: downplay the results and pretend it's not a problem.
If I were going to commit armed robbery, I'd probably want to bring a cell phone jammer with me.
EDITED TO ADD (3/25): Another article.
Of course RFID chips can carry viruses. They're just little computers.
More info here. The coverage is more than a tad sensationalist, though.
EDITED TO ADD (3/16): I thought the attack vector was interesting: a Trojan RFID attacks the central database, rather than attacking other RFID chips directly. Metaphorically, it's a lot closer to biological viruses, because it actually requires the more powerful host being subverted, and there's no way an infected tag could propagate directly to another tag.
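The original research demonstrated this with a tag whose contents carried a SQL injection payload aimed at the back-end database. A hypothetical sketch of the vulnerable middleware pattern (table and column names are my invention, not the researchers'):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE inventory (tag_id TEXT)")

# A malicious tag's contents are just data -- until middleware
# interpolates them into a SQL statement.
tag_payload = "x'); DROP TABLE inventory; --"

# Vulnerable pattern: building the query by string formatting.
# On a backend that allows stacked statements, reading this tag
# would execute the embedded DROP TABLE.
query = "INSERT INTO inventory (tag_id) VALUES ('%s')" % tag_payload

# Safe pattern: parameterized queries treat the payload as inert data.
conn.execute("INSERT INTO inventory (tag_id) VALUES (?)", (tag_payload,))
row = conn.execute("SELECT tag_id FROM inventory").fetchone()
assert row[0] == tag_payload  # stored verbatim, never interpreted as SQL
```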
Long, and interesting, article on bioterrorism.
When you read this, don't concentrate too much on what's possible right now. If the techniques discussed in the article are beyond the reach of government laboratories now, they won't be in five or ten years. And then they'll become cheaper and easier. Attackers look for leverage, and technology gives attackers leverage.
It's easier than you think to create your own police department in the United States.
Yosef Maiwandi formed the San Gabriel Valley Transit Authority -- a tiny, privately run nonprofit organization that provides bus rides to disabled people and senior citizens. It operates out of an auto repair shop. Then, because the law seems to allow transit companies to form their own police departments, he formed the San Gabriel Valley Transit Authority Police Department. As a thank you, he made Stefan Eriksson a deputy police commissioner of the San Gabriel Transit Authority Police's anti-terrorism division, and gave him business cards.
Police departments like this don't have much legal authority, but they don't really need to. My guess is that the name alone is impressive enough.
In the computer security world, privilege escalation means using some legitimately granted authority to secure extra authority that was not intended. This is a real-world counterpart. Even though transit police departments are meant to police their vehicles only, the title -- and the ostensible authority that comes along with it -- is useful elsewhere. Someone with criminal intent could easily use this authority to evade scrutiny or commit fraud.
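The analogy maps directly onto code: the bug is an authority check that tests whether a credential exists but ignores the scope it was granted for. A hypothetical sketch:

```python
# Illustrative grant table; the agency name and scopes are my invention.
GRANTS = {"san_gabriel_transit_pd": {"scope": "transit vehicles"}}

def is_police(agency: str) -> bool:
    """Checks only that some police credential exists."""
    return agency in GRANTS

def may_enter_crime_scene(agency: str) -> bool:
    # Buggy: any police credential is accepted, regardless of scope.
    return is_police(agency)

def may_enter_crime_scene_fixed(agency: str) -> bool:
    # Fixed: the credential must have been granted for this purpose.
    grant = GRANTS.get(agency)
    return grant is not None and grant["scope"] == "crime scenes"

assert may_enter_crime_scene("san_gabriel_transit_pd")        # escalation
assert not may_enter_crime_scene_fixed("san_gabriel_transit_pd")
```

The transit-only credential passes the unscoped check, which is exactly the real-world escalation described above.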
Deal said that his agency has discovered that several railroad agencies around California have created police departments -- even though the companies have no rail lines in California to patrol. The police certification agency is seeking to decertify those agencies because it sees no reason for them to exist in California.
The real problem is that we're too deferential to police power. We don't know the limits of police authority, whether it be an airport policeman or someone with a business card from the "San Gabriel Valley Transit Authority Police Department."
At LaGuardia, a man successfully walked through the metal detector, but screeners wanted to check his shoes. (Some reports say that his shoes set off an alarm.) But he didn't wait, and disappeared into the crowd.
The entire Delta Airlines terminal had to be evacuated, and between 2,500 and 3,000 people had to be rescreened. I'm sure the resultant flight delays rippled through the entire system.
Security systems can fail in two ways. They can fail to defend against an attack. And they can fail when there is no attack to defend against. The latter failure is often more important, because false alarms are more common than real attacks.
Aside from the obvious security failure -- how did this person manage to disappear into the crowd, anyway? -- it's painfully obvious that the overall security system did not fail well. Well-designed security systems fail gracefully, without affecting the entire airport terminal. That the only thing the TSA could do after the failure was evacuate the entire terminal and rescreen everyone is a testament to how badly designed the security system is.
On March 4, University of California Berkeley (Cal) played a basketball game against the University of Southern California (USC). With Cal in contention for the PAC-10 title and the NCAA tournament at stake, the game was a must-win.
Victoria was a hoax UCLA co-ed, created by Cal's Rally Committee. For the previous week, "she" had been chatting with Gabe Pruitt, USC's starting guard, over AOL Instant Messenger. It got serious. Pruitt and several of his teammates made plans to go to Westwood after the game so that they could party with Victoria and her friends.
On Saturday, at the game, when Pruitt was introduced in the starting lineup, the chants began: "Victoria, Victoria." One of the fans held up a sign with her phone number.
The look on Pruitt's face when he turned to the bench after the first Victoria chant was priceless. The expression was unlike anything ever seen in collegiate or pro sports. Never did a chant by the opposing crowd have such an impact on a visiting player. Pruitt was in total shock. (This is the only picture I could find.)
The chant "Victoria" lasted all night. To add to his embarrassment, transcripts of their IM conversations were handed out to the bench before the game: "You look like you have a very fit body." "Now I want to c u so bad."
Pruitt ended up a miserable 3-for-13 from the field.
Security morals? First, this is the cleverest social engineering attack I've read about in a long time. Second, authentication is hard in little text windows -- but it's no less important. (Although even if this were a real co-ed recruited for the ruse, authentication wouldn't have helped.) And third, you can hoodwink college basketball players if you get them thinking with their hormones.
I don't worry about it now any more than I worried about it then:
In terms of security, this is no big deal; the photo-ID requirement doesn't provide much security. Identification of passengers doesn't increase security very much. All of the 9/11 terrorists presented photo-IDs, many in their real names. Others had legitimate driver's licenses in fake names that they bought from unscrupulous people working in motor vehicle offices.
This has been making the rounds on the Internet. Basically, a guy tears up a credit card application, tapes it back together, fills it out with someone else's address and a different phone number, and sends it in. He still gets a credit card.
Imagine that some fraudster is rummaging through your trash and finds a torn-up credit card application. That's why this is bad.
To understand why it's happening, you need to understand the trade-offs and the agenda. From the point of view of the credit card company, the benefit of giving someone a credit card is that he'll use it and generate revenue. The risk is that it's a fraudster who will cost the company revenue. The credit card industry has dealt with the risk in two ways: they've pushed a lot of the risk onto the merchants, and they've implemented fraud detection systems to limit the damage.
All other costs and problems of identity theft are borne by the consumer; they're an externality to the credit card company. They don't enter into the trade-off decision at all.
We can laugh at this kind of thing all day, but it's actually in the best interests of the credit card industry to mail cards in response to torn-up and taped-together applications without doing much checking of the address or phone number. If we want that to change, we need to fix the externality.
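That trade-off can be made concrete with a back-of-the-envelope expected-value calculation (all numbers below are illustrative assumptions, not industry figures):

```python
# Illustrative assumptions, not real industry data.
p_fraud = 0.02               # chance a sloppy application is fraudulent
revenue_per_customer = 120   # issuer's profit from a legitimate cardholder
fraud_cost_to_issuer = 300   # chargebacks etc. the issuer actually eats
fraud_cost_to_victim = 10_000  # identity-theft cleanup borne by the consumer

def expected_value(externalized: bool) -> float:
    """Issuer's expected value of approving one sloppy application."""
    cost = fraud_cost_to_issuer
    if not externalized:
        cost += fraud_cost_to_victim  # internalize the consumer's loss
    return (1 - p_fraud) * revenue_per_customer - p_fraud * cost

print(expected_value(externalized=True))   # positive: approving pays
print(expected_value(externalized=False))  # negative: checking pays
```

Under these assumptions, approving everything is profitable exactly because the consumer's losses don't appear in the issuer's equation; move those costs onto the issuer and the sign flips.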
It's easy to blow the cover of CIA agents using the Internet:
The CIA asked the Tribune not to publish her name because she is a covert operative, and the newspaper agreed. But unbeknown to the CIA, her affiliation and those of hundreds of men and women like her have somehow become a matter of public record, thanks to the Internet.
Seems to be serious:
Not all of the 2,653 employees whose names were produced by the Tribune search are supposed to be working under cover. More than 160 are intelligence analysts, an occupation that is not considered a covert position, and senior CIA executives such as Tenet are included on the list.
GPG is an open-source version of the PGP e-mail encryption protocol. Recently, a very serious vulnerability was discovered in the software: given a signed e-mail message, you can modify the message -- specifically, you can prepend or append arbitrary data -- without disturbing the signature verification.
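The bug class is easy to illustrate: a verifier that checks only the framed, signed block and silently accepts surrounding bytes. Here's a simplified sketch -- it uses an HMAC as a stand-in for a real public-key signature and is not GPG's actual code or format:

```python
import hashlib
import hmac

KEY = b"signing key"  # stand-in for a real signing key

def sign(message: bytes) -> bytes:
    """Wrap a message in BEGIN/END markers with a MAC over the body."""
    tag = hmac.new(KEY, message, hashlib.sha256).hexdigest().encode()
    return (b"-----BEGIN-----\n" + message +
            b"\n-----SIG " + tag + b"-----END-----")

def naive_verify(blob: bytes) -> bool:
    """Buggy verifier: checks only the framed block, ignoring any
    bytes an attacker prepends or appends around it."""
    start = blob.index(b"-----BEGIN-----\n") + len(b"-----BEGIN-----\n")
    end = blob.index(b"\n-----SIG ")
    body = blob[start:end]
    tag = blob[end + len(b"\n-----SIG "):blob.index(b"-----END-----")]
    expected = hmac.new(KEY, body, hashlib.sha256).hexdigest().encode()
    return hmac.compare_digest(tag, expected)

signed = sign(b"Pay Alice $10")
tampered = b"Pay Mallory $1000\n" + signed + b"\nP.S. ignore the above"
assert naive_verify(signed)
assert naive_verify(tampered)  # still "valid" despite the injected data
```

The fix is for the verifier to reject, or at least flag, any data outside the signed block rather than passing it through as if it were covered by the signature.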
It appears this bug has existed for years without anybody finding it.
Moral: Open source does not necessarily mean "fewer bugs." I wrote about this back in 1999.
EDITED TO ADD (3/13): This bug is fixed in Version 1.4.2.2. Users should upgrade immediately.
According to juicy folklore and loose legend, for centuries, the inky waters of our deepest oceans have been home to that most mysterious of marine creatures -- the giant squid. Well, as we speak, visitors to Melbourne's aquarium can take a gander at the real thing, a 7m-long squid, caught in New Zealand and frozen in a block of ice.
Watch the video here.
In the Netherlands, criminals are stealing money from ATMs by blowing them up (article in Dutch). First, they drill a hole in an ATM and fill it with some sort of gas. Then, they ignite the gas -- from a safe distance -- and clean up the money that flies all over the place after the ATM explodes.
Sounds crazy, but apparently there has been an increase in this type of attack recently. The banks' countermeasure is to install air vents so that gas can't build up inside the ATMs.
According to the TSA, in the 9th Circuit Case of John Gilmore, you are allowed to fly without showing ID -- you'll just have to submit yourself to secondary screening.
The Identity Project wants you to try it out. If you have time, try to fly without showing ID.
Mr. Gilmore recommends that every traveler who is concerned with privacy or anonymity should opt to become a "selectee" rather than show an ID. We are very likely to lose the right to travel anonymously, if citizens do not exercise it. TSA and the airlines will attempt to make it inconvenient for you, by wasting your time and hassling you, but they can't do much in that regard without compromising their avowed missions, which are to transport paying passengers, and to keep weapons off planes. If you never served in the armed services, this is a much easier way to spend some time keeping your society free. (Bring a copy of the court decision with you and point out some of the numerous places it says you can fly as a selectee rather than show ID. Paper tickets are also helpful, though not required.)
I'm curious what the results are.
EDITED TO ADD (11/25): Here's someone who tried, and failed.
With consumers around the country reporting mysterious fraudulent account withdrawals, and multiple banks announcing problems with stolen account information, it appears thieves have unleashed a powerful new way to steal money from cash machines.
Read the whole article. Details are emerging slowly, but there's still a lot we don't know.
Criminals are breaking into stores and pretending to ransack them, as a cover for installing ATM skimming hardware, complete with a transmitter.
Note the last paragraph of the story -- it's in Danish, sorry -- where the company admits that this is the fourth attempt they know of criminals installing reader equipment inside ATM terminals for the purpose of skimming numbers and PINs.
In the post 9/11 world, there's much focus on connecting the dots. Many believe that data mining is the crystal ball that will enable us to uncover future terrorist plots. But even in the most wildly optimistic projections, data mining isn't tenable for that purpose. We're not trading privacy for security; we're giving up privacy and getting no security in return.
Most people first learned about data mining in November 2002, when news broke about a massive government data mining program called Total Information Awareness. The basic idea was as audacious as it was repellent: suck up as much data as possible about everyone, sift through it with massive computers, and investigate patterns that might indicate terrorist plots. Americans across the political spectrum denounced the program, and in September 2003, Congress eliminated its funding and closed its offices.
But TIA didn't die. According to The National Journal, it just changed its name and moved inside the Defense Department.
This shouldn't be a surprise. In May 2004, the General Accounting Office published a report that listed 122 different federal government data mining programs that used people's personal information. This list didn't include classified programs, like the NSA's eavesdropping effort, or state-run programs like MATRIX.
The promise of data mining is compelling, and convinces many. But it's wrong. We're not going to find terrorist plots through systems like this, and we're going to waste valuable resources chasing down false alarms. To understand why, we have to look at the economics of the system.
Security is always a trade-off, and for a system to be worthwhile, the advantages have to be greater than the disadvantages. A national security data mining program is going to find some percentage of real attacks, and some percentage of false alarms. If the benefits of finding and stopping those attacks outweigh the cost -- in money, liberties, etc. -- then the system is a good one. If not, then you'd be better off spending that cost elsewhere.
Data mining works best when there's a well-defined profile you're searching for, a reasonable number of attacks per year, and a low cost of false alarms. Credit card fraud is one of data mining's success stories: all credit card companies data mine their transaction databases, looking for spending patterns that indicate a stolen card. Many credit card thieves share a pattern -- purchase expensive luxury goods, purchase things that can be easily fenced, etc. -- and data mining systems can minimize the losses in many cases by shutting down the card. In addition, the cost of false alarms is only a phone call to the cardholder asking him to verify a couple of purchases. The cardholders don't even resent these phone calls -- as long as they're infrequent -- so the cost is just a few minutes of operator time.
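The credit-card case is easy to sketch. Here's a toy version of that kind of profile matching; the features, weights, and threshold are all invented for illustration and bear no relation to any real issuer's system:

```python
# Toy fraud score: flag transactions that match a known theft profile.
# Every feature, weight, and threshold here is invented for illustration.
def fraud_score(txn):
    score = 0
    if txn["amount"] > 1000:                           # expensive luxury purchase
        score += 2
    if txn["category"] in ("jewelry", "electronics"):  # easily fenced goods
        score += 2
    if txn["far_from_home"]:                           # unusual location
        score += 1
    return score

txn = {"amount": 2500, "category": "jewelry", "far_from_home": True}
flagged = fraud_score(txn) >= 4   # cheap response: a verification phone call
```

The key property is the one the essay identifies: when the profile is well defined and the response to a false positive is a phone call, even a crude rule like this pays for itself.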
Terrorist plots are different. There is no well-defined profile, and attacks are very rare. Taken together, these facts mean that data mining systems won't uncover any terrorist plots until they are very accurate, and that even very accurate systems will be so flooded with false alarms that they will be useless.
All data mining systems fail in two different ways: false positives and false negatives. A false positive is when the system identifies a terrorist plot that really isn't one. A false negative is when the system misses an actual terrorist plot. Depending on how you "tune" your detection algorithms, you can err on one side or the other: you can increase the number of false positives to ensure that you are less likely to miss an actual terrorist plot, or you can reduce the number of false positives at the expense of missing terrorist plots.
To reduce both those numbers, you need a well-defined profile. And that's a problem when it comes to terrorism. In hindsight, it was really easy to connect the 9/11 dots and point to the warning signs, but it's much harder before the fact. Certainly, there are common warning signs that many terrorist plots share, but each is unique, as well. The better you can define what you're looking for, the better your results will be. Data mining for terrorist plots is going to be sloppy, and it's going to be hard to find anything useful.
Data mining is like searching for a needle in a haystack. There are 900 million credit cards in circulation in the United States. According to the FTC September 2003 Identity Theft Survey Report, about 1% (10 million) of those cards are stolen and fraudulently used each year. Terrorism is different. There are trillions of connections between people and events -- things that the data mining system will have to "look at" -- and very few plots. This rarity makes even accurate identification systems useless.
Let's look at some numbers. We'll be optimistic. We'll assume the system has a 1 in 100 false positive rate (99% accurate), and a 1 in 1,000 false negative rate (99.9% accurate).
Assume one trillion possible indicators to sift through: that's about ten events -- e-mails, phone calls, purchases, web surfings, whatever -- per person in the U.S. per day. Also assume that 10 of them are actually terrorists plotting.
This unrealistically-accurate system will generate one billion false alarms for every real terrorist plot it uncovers. Every day of every year, the police will have to investigate 27 million potential plots in order to find the one real terrorist plot per month. Raise that false-positive accuracy to an absurd 99.9999% and you're still chasing 2,750 false alarms per day -- but that will inevitably raise your false negatives, and you're going to miss some of those ten real plots.
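The arithmetic behind those figures is worth checking for yourself. A few lines of Python reproduce the essay's numbers, using the event counts and error rates assumed above:

```python
# Reproduce the essay's base-rate arithmetic.
events = 10**12          # one trillion indicators per year
plots = 10               # real terrorist plots hidden among them
fp_rate = 0.01           # 1-in-100 false positive rate (99% "accurate")

false_alarms = events * fp_rate          # ~10 billion false alarms per year
per_plot = false_alarms / plots          # ~1 billion false alarms per real plot
per_day = false_alarms / 365             # ~27 million investigations per day

# Even at an absurd one-in-a-million false positive rate (99.9999%):
per_day_absurd = events * 1e-6 / 365     # still ~2,700 false alarms per day
```

That last figure matches the essay's point: no plausible accuracy improvement rescues the system, because the base rate of real plots is so low.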
This isn't anything new. In statistics, it's called the "base rate fallacy," and it applies in other domains as well. For example, even highly accurate medical tests are useless as diagnostic tools if the incidence of the disease is rare in the general population. Terrorist attacks are also rare, so any "test" is going to result in an endless stream of false alarms.
This is exactly the sort of thing we saw with the NSA's eavesdropping program: the New York Times reported that the computers spat out thousands of tips per month. Every one of them turned out to be a false alarm.
And the cost was enormous: not just the cost of the FBI agents running around chasing dead-end leads instead of doing things that might actually make us safer, but also the cost in civil liberties. The fundamental freedoms that make our country the envy of the world are valuable, and not something that we should throw away lightly.
Data mining can work. It helps Visa keep the costs of fraud down, just as it helps Amazon.com show me books that I might want to buy, and Google show me advertising I'm more likely to be interested in. But these are all instances where the cost of false positives is low -- a phone call from a Visa operator, or an uninteresting ad -- and in systems that have value even if there is a high number of false negatives.
Finding terrorism plots is not a problem that lends itself to data mining. It's a needle-in-a-haystack problem, and throwing more hay on the pile doesn't make that problem any easier. We'd be far better off putting people in charge of investigating potential plots and letting them direct the computers, instead of putting the computers in charge and letting them decide who should be investigated.
Nice essay on the human dimension of the problem of securing information. "Analog hole" is a good name for it.
Along the same lines, here's a story about the security risks of talking loudly:
About four seats away is a gentleman (on this occasion pronounced 'fool') with a BlackBerry mobile device and a very loud voice. He is obviously intent on selling a customer something and is briefing his team. It seems he is the leader as he defines the strategy and assigns each of his unseen team with specific tasks and roles.
I like this idea:
I had to sign a tedious business contract the other day. They wanted my corporation number -- fair enough -- plus my Social Security number -- well, if you insist -- and also my driver's license number -- hang on, what's the deal with that?
This is a pretty good commentary on the issue.
(I've said in the past that the real security problem here is the transparency of the process.)
It's all social engineering:
If you want to crash the glitziest party of all, the Oscars, here's a tip from a professional: Show up at the theater, dressed as a chef carrying a live lobster, looking really concerned.
EDITED TO ADD (3/7): Another news article.
From Jake Appelbaum: "The one unanswered question in all of this seems to be: Why is the new card going to have any issues in any of the affected countries? No one from Citibank was able to provide me with a promise my new card wouldn't be locked yet again. Pretty amazing. I guess when I get my new card, I'll find out."
EDITED TO ADD (3/8): Some more news.
This article shows how badly terrorist profiling can go wrong:
They paid down some debt. The balance on their JCPenney Platinum MasterCard had gotten to an unhealthy level. So they sent in a large payment, a check for $6,522.
The article goes on to blame something called the Bank Privacy Act, but that's not correct. The culprit here is the amendments made to the Bank Secrecy Act by the USA Patriot Act, Sections 351 and 352. There's a general discussion here, and the Federal Register here.
There has been some rumbling on the net that this story is badly garbled -- or even a hoax -- but certainly this kind of thing is what financial institutions are required to report under the Patriot Act.
Remember, all the time spent chasing down silly false alarms is time wasted. Finding terrorist plots is a signal-to-noise problem, and stuff like this substantially decreases that ratio: it adds a lot of noise without adding enough signal. It makes us less safe, because it makes terrorist plots harder to find.
Over the past 20 years, there's been a sea change in the battle for personal privacy.
The pervasiveness of computers has resulted in the almost constant surveillance of everyone, with profound implications for our society and our freedoms. Corporations and the police are both using this new trove of surveillance data. We as a society need to understand the technological trends and discuss their implications. If we ignore the problem and leave it to the "market," we'll all find that we have almost no privacy left.
Most people think of surveillance in terms of police procedure: Follow that car, watch that person, listen in on his phone conversations. This kind of surveillance still occurs. But today's surveillance is more like the NSA's model, recently turned against Americans: Eavesdrop on every phone call, listening for certain keywords. It's still surveillance, but it's wholesale surveillance.
Wholesale surveillance is a whole new world. It's not "follow that car," it's "follow every car." The National Security Agency can eavesdrop on every phone call, looking for patterns of communication or keywords that might indicate a conversation between terrorists. Many airports collect the license plates of every car in their parking lots, and can use that database to locate suspicious or abandoned cars. Several cities have stationary or car-mounted license-plate scanners that keep records of every car that passes, and save that data for later analysis.
More and more, we leave a trail of electronic footprints as we go through our daily lives. We used to walk into a bookstore, browse, and buy a book with cash. Now we visit Amazon, and all of our browsing and purchases are recorded. We used to throw a quarter in a toll booth; now EZ Pass records the date and time our car passed through the booth. Data about us are collected when we make a phone call, send an e-mail message, make a purchase with our credit card, or visit a website.
Much has been written about RFID chips and how they can be used to track people. People can also be tracked by their cell phones, their Bluetooth devices, and their WiFi-enabled computers. In some cities, video cameras capture our image hundreds of times a day.
The common thread here is computers. Computers are involved more and more in our transactions, and data are byproducts of these transactions. As computer memory becomes cheaper, more and more of these electronic footprints are being saved. And as processing becomes cheaper, more and more of it is being cross-indexed and correlated, and then used for secondary purposes.
Information about us has value. It has value to the police, but it also has value to corporations. The Justice Department wants details of Google searches, so they can look for patterns that might help find child pornographers. Google uses that same data so it can deliver context-sensitive advertising messages. The city of Baltimore uses aerial photography to surveil every house, looking for building permit violations. A national lawn-care company uses the same data to better market its services. The phone company keeps detailed call records for billing purposes; the police use them to catch bad guys.
In the dot-com bust, the customer database was often the only salable asset a company had. Companies like Experian and Acxiom are in the business of buying and reselling this sort of data, and their customers are both corporate and government.
Computers are getting smaller and cheaper every year, and these trends will continue. Here's just one example of the digital footprints we leave:
It would take about 100 megabytes of storage to record everything the fastest typist inputs to his computer in a year. That's a single flash memory chip today, and one could imagine computer manufacturers offering this as a reliability feature. Recording everything the average user does on the Internet requires more memory: 4 to 8 gigabytes a year. That's a lot, but "record everything" is Gmail's model, and it's probably only a few years before ISPs offer this service.
The typical person uses 500 cell phone minutes a month; that translates to 5 gigabytes a year to save it all. My iPod can store 12 times that data. A "life recorder" you can wear on your lapel that constantly records is still a few generations off: 200 gigabytes/year for audio and 700 gigabytes/year for video. It'll be sold as a security device, so that no one can attack you without being recorded. When that happens, will not wearing a life recorder be used as evidence that someone is up to no good, just as prosecutors today use the fact that someone left his cell phone at home as evidence that he didn't want to be tracked?
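Those storage figures are back-of-the-envelope estimates, and it's easy to rerun them. The typing speed, bytes-per-word, and audio bitrate below are my own illustrative assumptions, not the essay's; note that the 5-gigabyte phone figure implies a somewhat higher bitrate than the 8 KB/s used here:

```python
# Back-of-the-envelope storage estimates. All input figures are assumptions
# chosen for illustration; only the orders of magnitude matter.
KB, MB, GB = 10**3, 10**6, 10**9

# Fast typist: ~100 words/min, ~6 bytes/word, 8 hours/day, 250 days/year
typing = 100 * 6 * 60 * 8 * 250        # 72 MB/year -- order of 100 MB

# Phone calls: 500 minutes/month at ~8 KB/s of compressed audio
calls = 500 * 60 * 8 * KB * 12         # ~2.9 GB/year -- order of 5 GB
```

Either way, the conclusion holds: a year of a person's keystrokes or phone calls fits comfortably on commodity storage.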
In a sense, we're living in a unique time in history. Identification checks are common, but they still require us to whip out our ID. Soon it'll happen automatically, either through an RFID chip in our wallet or face-recognition from cameras. And those cameras, now visible, will shrink to the point where we won't even see them.
We're never going to stop the march of technology, but we can enact legislation to protect our privacy: comprehensive laws regulating what can be done with personal information about us, and more privacy protection from the police. Today, personal information about you is not yours; it's owned by the collector. There are laws protecting specific pieces of personal data -- videotape rental records, health care information -- but nothing like the broad privacy protection laws you find in European countries. That's really the only solution; leaving the market to sort this out will result in even more invasive wholesale surveillance.
Most of us are happy to give out personal information in exchange for specific services. What we object to is the surreptitious collection of personal information, and the secondary use of information once it's collected: the buying and selling of our information behind our back.
In some ways, this tidal wave of data is the pollution problem of the information age. All information processes produce it. If we ignore the problem, it will stay around forever. And the only way to successfully deal with it is to pass laws regulating its generation, use and eventual disposal.
This essay was originally published in the Minneapolis Star-Tribune.
There's a 28-foot (8.62-meter) giant squid on display at the Natural History Museum in London:
It took several months to prepare the squid for display.
This whole article is worth reading, but I found this tidbit particularly interesting:
He was alluding to databases maintained at an AT&T data center in Kansas, which now contain electronic records of 1.92 trillion telephone calls, going back decades. The Electronic Frontier Foundation, a digital-rights advocacy group, has asserted in a lawsuit that the AT&T Daytona system, a giant storehouse of calling records and Internet message routing information, was the foundation of the N.S.A.'s effort to mine telephone records without a warrant.
What's worse than a bad authentication system? A bad authentication system that people have learned to trust. According to the Associated Press:
In the last few years, Caller ID spoofing has become much easier. Millions of people have Internet telephone equipment that can be set to make any number appear on a Caller ID system. And several Web sites have sprung up to provide Caller ID spoofing services, eliminating the need for any special hardware.
Near as anyone can tell, this is perfectly legal. (Although the FCC is investigating.)
The applications for Caller ID spoofing are not limited to fooling people. There's real fraud that can be committed:
Lance James, chief scientist at security company Secure Science Corp., said Caller ID spoofing Web sites are used by people who buy stolen credit card numbers. They will call a service such as Western Union, setting Caller ID to appear to originate from the card holder's home, and use the credit card number to order cash transfers that they then pick up.
And, of course, harmful pranks:
In one case, SWAT teams surrounded a building in New Brunswick, N.J., last year after police received a call from a woman who said she was being held hostage in an apartment. Caller ID was spoofed to appear to come from the apartment.
I have never been a fan of Caller ID. My phone number is configured to block Caller ID on outgoing calls. The number of phone numbers that refuse to accept my calls is growing, however.
Nothing too surprising in this study of password generation practices:
The majority of participants in the current study most commonly reported password generation practices that are simplistic and hence very insecure. Particular practices reported include using lowercase letters, numbers or digits, personally meaningful words and numbers (e.g., dates). It is widely known that users typically use birthdates, anniversary dates, telephone numbers, license plate numbers, social security numbers, street addresses, apartment numbers, etc. Likewise, personally meaningful words are typically derived from predictable areas and interests in the person's life and could be guessed through basic knowledge of his or her interests.
This site goes into detail about how the FedEx Kinko's ExpressPay stored value card has been hacked. There's nothing particularly amazing about the hack; the most remarkable thing is how badly the system was designed in the first place. The only security on the cards is a three-byte code that lets you read and write to the card. I'd be amazed if no one has hacked this before.
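To see just how little protection a three-byte code provides, consider the size of the keyspace. The trial rate below is an invented assumption -- nothing in the writeup says how fast the cards accept attempts -- but the point stands at almost any plausible speed:

```python
# A three-byte code gives only 2**24 possible values -- a keyspace a
# computer can exhaust almost instantly. The 10,000 trials/second rate
# is an illustrative assumption, not a measured figure.
keyspace = 2 ** 24              # 16,777,216 possible codes

seconds = keyspace / 10_000     # ~1,678 seconds: under half an hour
```

Even at a hundred trials per second, an exhaustive search finishes in about two days. Three bytes isn't a secret; it's a speed bump.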
EDITED TO ADD (3/2): News article.
Identity thieves trick the unwary into revealing their personal details by telling them they've failed to report for jury duty and warrants for their arrest are being issued.
Earlier this month I blogged about a wiretapping scandal in Greece.
Unknowns tapped the mobile phones of about 100 Greek politicians and offices, including the U.S. embassy in Athens and the Greek prime minister.
More details are emerging. It turns out that the "malicious code" was actually code designed into the system. It's eavesdropping code put into the system for the police.
The attackers managed to bypass the authorization mechanisms of the eavesdropping system, and activate the "lawful interception" module in the mobile network. They then redirected about 100 numbers to 14 shadow numbers they controlled. (Here are translations of some of the press conferences with technical details. And here are details of the system used.)
There is an important security lesson here. I have long argued that when you build surveillance mechanisms into communication systems, you invite the bad guys to use those mechanisms for their own purposes. That's exactly what happened here.
UPDATED TO ADD (3/2): From a reader: "I have an update. There is some news from the 'Hellenic Authority for the Information and Communication Security and Privacy' with a few facts, and I heard a rumor that there is a root backdoor in the telnetd of Ericsson's AXE. (No, I can't confirm the rumor.)"