Schneier on Security
A blog covering security and security technology.
April 2006 Archives
Attackers are adaptable.
At least the Southern Dumpling Squid is:
Squid have personalities that appear to be passed down from parent to offspring, but those traits can be modified by environment, an Australian researcher says.
John Dvorak makes an interesting argument that Internet Explorer was Microsoft's greatest mistake ever. Certainly its decision to tightly integrate IE with the operating system -- done as an anti-competitive maneuver against Netscape during the Browser Wars -- has resulted in some enormous security problems that Microsoft has still not recovered from. Not even with the introduction of IE7.
Technology Review has an interesting article discussing some of the technologies used by the NSA in its warrantless wiretapping program, some of them from the killed Total Information Awareness (TIA) program.
Washington's lawmakers ostensibly killed the TIA project in Section 8131 of the Department of Defense Appropriations Act for fiscal 2004. But legislators wrote a classified annex to that document which preserved funding for TIA's component technologies, if they were transferred to other government agencies, say sources who have seen the document, according to reports first published in The National Journal. Congress did stipulate that those technologies should only be used for military or foreign intelligence purposes against non-U.S. citizens. Still, while those component projects' names were changed, their funding remained intact, sometimes under the same contracts.
You can find it by searching for the characters in italic and boldface scattered throughout the ruling. The first characters spell out "SMITHY CODE": that's the name of the judge who wrote the ruling. The rest remains unsolved.
According to The Times, the remaining letters are: J, a, e, i, e, x, t, o, s, t, p, s, a, c, g, r, e, a, m, q, w, f, k, a, d, p, m, q, z.
According to The Register, the remaining letters are: j a e i e x t o s t g p s a c g r e a m q w f k a d p m q z v.
According to one of my readers, who says he "may have missed some letters," it's: SMITHYCODEJAEIEXTOSTGPSACGREAMQWFKADPMQZV.
I think a bunch of us need to check for ourselves, and then compare notes.
And then we have to start working on solving the thing.
From the BBC:
Although he would not be drawn on his code and its meaning, Mr Justice Smith said he would probably confirm it if someone cracked it, which was "not a difficult thing to do".
As an aside, I am mentioned in Da Vinci Code. No, really. Page 199 of the American hardcover edition. "Da Vinci had been a cryptography pioneer, Sophie knew, although he was seldom given credit. Sophie's university instructors, while presenting computer encryption methods for securing data, praised modern cryptologists like Zimmermann and Schneier but failed to mention that it was Leonardo who had invented one of the first rudimentary forms of public key encryption centuries ago."
That's right. I am a realistic background detail.
Among Justice Smith’s hints, he told decoders to look at page 255 in the British paperback edition of "The Da Vinci Code," where the protagonists discuss the Fibonacci Sequence, a famous numerical series in which each number is the sum of the two preceding ones. Omitting the zero, as Dan Brown, "The Da Vinci Code" author, does, the series begins 1, 1, 2, 3, 5, 8, 13, 21.
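For reference, the series as Brown uses it (zero omitted) takes only a few lines to generate:

```python
def fibonacci(n):
    # first n terms of the Fibonacci series, omitting the leading zero
    # as Brown does: 1, 1, 2, 3, 5, 8, 13, 21, ...
    series, a, b = [], 1, 1
    for _ in range(n):
        series.append(a)
        a, b = b, a + b
    return series

print(fibonacci(8))   # [1, 1, 2, 3, 5, 8, 13, 21]
```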
The message reads: "Jackie Fisher who are you Dreadnought."
I'm disappointed, actually. That was a whopper of a hint, and I would have preferred the judge to keep quiet.
EDITED TO ADD (5/8): Commentary on my name being in The Da Vinci Code.
Good essay by Gene Spafford.
Larry Beinhart makes an interesting case for the elimination of most government secrecy.
He has a good argument, although I think the issue is a bit more complicated.
Kaspersky Labs reports on extortion scams using malware:
We've reported more than once on cases where remote malicious users have moved away from the stealth use of infected computers (stealing data from them, using them as part of zombie networks etc) to direct blackmail, demanding payment from victims. At the moment, this method is used in two main ways: encrypting user data and corrupting system information.
Among other worms, the article discusses the GpCode.ac worm, which encrypts data using 56-bit RSA (no, that's not a typo). The whole article is interesting reading.
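To see just how weak a 56-bit RSA modulus is, consider that even naive trial division recovers the private key. This sketch uses toy parameters for illustration (the worm's actual key is not reproduced here), but the principle scales:

```python
from math import isqrt

def factor(n):
    # trial division -- hopeless against real RSA moduli, but entirely
    # practical against tiny ones like the worm's
    for p in range(2, isqrt(n) + 1):
        if n % p == 0:
            return p, n // p
    raise ValueError("no factor found")

def recover_private_key(n, e):
    # with the factors in hand, computing the private exponent is trivial
    p, q = factor(n)
    phi = (p - 1) * (q - 1)
    return pow(e, -1, phi)       # d = e^-1 mod phi(n)

n, e = 65537 * 65539, 17         # both primes, so n is a valid toy modulus
d = recover_private_key(n, e)
m = 42
assert pow(pow(m, e, n), d, n) == m   # encrypt, then decrypt with the stolen key
```

Any victim of such a scheme could, in principle, recover the key the same way rather than pay the extortionist.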
Police officers are permitted to bypass airport security at the Dublin Airport. They flash their ID, and walk around the checkpoints.
A female member of the airport search unit is undergoing re-training after the incident in which a Department of Transport inspector passed unchecked through security screening.
There are two ways this failure could have happened. One, the security person could have thought that Department of Transport officials have the same privileges as police officers. And two, the security person could have thought she was being shown a police ID.
This could have just as easily been a bad guy showing a fake police ID. My guess is that the security people don't check them all that carefully.
The meta-point is that exceptions to security are themselves security vulnerabilities. As soon as you create a system by which some people can bypass airport security checkpoints, you invite the bad guys to try and use that system. There are reasons why you might want to create those alternate paths through security, of course, but the trade-offs should be well thought out.
Fridrich's technique is rooted in the discovery by her research group of this simple fact: Every original digital picture is overlaid by a weak noise-like pattern of pixel-to-pixel non-uniformity.
There's one important aspect of this fingerprint that the article did not talk about: how easy is it to forge? Can someone analyze 100 images from a given camera, and then doctor a pre-existing picture so that it appeared to come from that camera?
My guess is that it can be done relatively easily.
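Fridrich's actual technique uses wavelet denoising and sophisticated correlation detectors, but the matching idea -- and the forging worry -- can be sketched crudely. Everything below is synthetic and invented for illustration (a local-mean "denoiser," fake cameras, made-up noise levels):

```python
import numpy as np

rng = np.random.default_rng(1)

def residual(img):
    # crude high-pass filter: pixel minus its 3x3 local mean, a toy
    # stand-in for the wavelet denoising the real method uses
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    local_mean = sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9
    return img - local_mean

def shoot(fingerprint):
    # a "photo": flat scene content plus the sensor's fixed noise pattern
    scene = rng.uniform(0, 255) * np.ones((64, 64))
    return scene + fingerprint + rng.normal(0, 0.5, (64, 64))

camera_a = rng.normal(0, 2, (64, 64))   # per-pixel non-uniformity, camera A
camera_b = rng.normal(0, 2, (64, 64))   # a different camera

# Estimate camera A's fingerprint by averaging residuals of 100 of its images
estimate = np.mean([residual(shoot(camera_a)) for _ in range(100)], axis=0)

def match(img):
    # correlate an image's residual against the estimated fingerprint
    return np.corrcoef(residual(img).ravel(), estimate.ravel())[0, 1]

print(match(shoot(camera_a)))               # high: same camera
print(match(shoot(camera_b)))               # near zero: different camera
print(match(shoot(camera_b) + estimate))    # high again: doctored attribution
```

The last line is the forgery: anyone who can estimate the fingerprint from sample images can add it to an unrelated picture, which is why the technique's robustness against deliberate doctoring matters.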
Recent articles about a proposed US-Canada and US-Mexico travel document (kind of like a passport, but less useful), with an embedded RFID chip that can be read up to 25 feet away, have once again made RFID security newsworthy.
My views have not changed. The most secure solution is a smart card that only works in contact with a reader; RFID is much more risky. But if we're stuck with RFID, the combination of shielding for the chip, basic access control security measures, and some positive action by the user to get the chip to operate is a good one. The devil is in the details, of course, but those are good starting points.
And when you start proposing chips with a 25-foot read range, you need to worry about man-in-the-middle attacks. An attacker could potentially impersonate the card of a nearby person to an official reader, just by relaying messages to and from that nearby person's card.
Here's how the attack would work. In this scenario, customs Agent Alice has the official card reader. Bob is the innocent traveler, in line at some border crossing. Mallory is the malicious attacker, ahead of Bob in line at the same border crossing, who is going to impersonate Bob to Alice. Mallory's equipment includes an RFID reader and transmitter.
Assume that the card has to be activated in some way. Maybe the cover has to be opened, or the card taken out of a sleeve. Maybe the card has a button to push in order to activate it. Also assume the card has some challenge-reply security protocol and an encrypted key exchange protocol of some sort.
Defending against this attack is hard. (I talk more about the attack in Applied Cryptography, Second Edition, page 109.) Time stamps don't help. Encryption doesn't help. It works because Mallory is simply acting as an amplifier. Mallory might not be able to read the messages. He might not even know who Bob is. But he doesn't care. All he knows is that Alice thinks he's Bob.
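For concreteness, here's a minimal sketch of the relay. The protocol is hypothetical -- HMAC over a shared key stands in for whatever the real card would run -- but the point is that the cryptography never breaks; Mallory never learns the key:

```python
import os, hmac, hashlib

class Card:
    """Bob's card: a hypothetical HMAC challenge-response protocol."""
    def __init__(self, secret):
        self.secret = secret
    def respond(self, challenge):
        return hmac.new(self.secret, challenge, hashlib.sha256).digest()

class Reader:
    """Alice's official reader."""
    def __init__(self, secret):
        self.secret = secret
    def authenticate(self, card):
        challenge = os.urandom(16)
        expected = hmac.new(self.secret, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(card.respond(challenge), expected)

class Relay:
    """Mallory: forwards the challenge to a nearby victim's card and returns
    whatever comes back. He reads nothing and holds no keys."""
    def __init__(self, victim):
        self.victim = victim
    def respond(self, challenge):
        return self.victim.respond(challenge)   # pure amplifier

key = os.urandom(32)
bob, alice = Card(key), Reader(key)

assert alice.authenticate(bob)           # legitimate authentication
assert alice.authenticate(Relay(bob))    # the relay succeeds anyway
```

The only observable difference between Bob and the relay is the extra round-trip delay Mallory's forwarding introduces.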
Precise timing can catch this attack, because of the extra delay that Mallory's relay introduces. But I don't think this is part of the spec.
The attack can be easily countered if Alice looks at Mallory's card and compares the information printed on it with what she's receiving over the RFID link. But near as I can tell, the point of the 25-foot read distance is so cards can be authenticated in bulk, from a distance.
From the News.com article:
Homeland Security has said, in a government procurement notice posted in September, that "read ranges shall extend to a minimum of 25 feet" in RFID-equipped identification cards used for border crossings. For people crossing on a bus, the proposal says, "the solution must sense up to 55 tokens."
If Mallory is on that bus, he can impersonate any nearby Bob who activates his RFID card early. And at a crowded border crossing, the odds of some Bob doing that are pretty good.
More detail here:
If that were done, the PASS system would automatically screen the cardbearers against criminal watch lists and put the information on the border guard's screen by the time the vehicle got to the station, Williams said.
And would predispose the guard to think that everything's okay, even if it isn't.
I don't think people are thinking this one through.
Paul Thurrott has posted an excellent essay on the problems with Windows Vista. Most interesting to me is how they implement UAP (User Account Protection):
Modern operating systems like Linux and Mac OS X operate under a security model where even administrative users don't get full access to certain features unless they provide an in-place logon before performing any task that might harm the system. This type of security model protects users from themselves, and it is something that Microsoft should have added to Windows years and years ago.
The problem with lots of warning dialog boxes is that they don't provide security. Users stop reading them. They think of them as annoyances, as an extra click required to get a feature to work. Clicking through gets embedded into muscle memory, and when it actually matters the user won't even realize it.
The problem with the Security Through Endless Warning Dialogs school of thought is that it doesn't work. All those earnest warning dialogs eventually blend together into a giant "click here to get work done" button that nobody bothers to read any more. The operating system cries wolf so much that when a real wolf -- in the form of a virus or malware -- rolls around, you'll mindlessly allow it access to whatever it wants, just out of habit.
Then there are the security dialogs. Ah yes, now we're making progress: Ask users on EVERY program you launch that isn't signed whether they want to elevate permissions. Uh huh, this is going to work REAL WELL. We know how well that worked with unsigned ActiveX controls in Internet Explorer -- so well that even Microsoft isn't signing most of its own ActiveX controls. Give too many warnings that are not quite reasonable and people will never read the dialogs and just click them anyway… I know I started doing that in the short use I've had on Vista.
These dialog boxes are not security for the user, they're CYA security from the user. When some piece of malware trashes your system, Microsoft can say: "You gave the program permission to do that; it's not our fault."
Warning dialog boxes are only effective if the user has the ability to make intelligent decisions about the warnings. If the user cannot do that, they're just annoyances. And they're annoyances that don't improve security.
EDITED TO ADD (5/8): Commentary.
At least one coded note, published in the Web site's biography, has a strong resemblance to what's known as a Caesar cipher, an encryption scheme used by Julius Caesar to protect important military messages.
I got a nice quote:
"Looks like kindergarten cryptography to me. It will keep your kid sister out, but it won't keep the police out. But what do you expect from someone who is computer illiterate?" security guru Bruce Schneier, author of several books on cryptography, told Discovery News.
On the first of this month, I announced my (possibly First) Movie-Plot Threat Contest.
Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with.
As of this morning, the blog post has 580 comments. I expected a lot of submissions, but the response has blown me away.
Looking over the different terrorist plots, they seem to fall into several broad categories. The first category consists of attacks against our infrastructure: the food supply, the water supply, the power infrastructure, the telephone system, etc. The idea is to cripple the country by targeting one of the basic systems that make it work.
The second category consists of big-ticket plots. Either they have very public targets -- blowing up the Super Bowl, the Oscars, etc. -- or they have high-tech components: nuclear waste, anthrax, chlorine gas, a full oil tanker, etc. And they are often complex and hard to pull off. This is the 9/11 idea: a single huge event that affects the entire nation.
The third category consists of low-tech attacks that go on and on. Several people imagined a version of the DC sniper scenario, but with multiple teams. The teams would slowly move around the country, perhaps each team starting up after the previous one was captured or killed. Other people suggested a variant of this with small bombs in random public locations around the country.
(There's a fourth category: actual movie plots. Some entries are comical, unrealistic, have science fiction premises, etc. I'm not even considering those.)
The better ideas tap directly into public fears. In my book, Beyond Fear, I discuss five different tendencies people have to exaggerate risks: to believe that something is more risky than it actually is.
The best plot ideas leverage one or more of those tendencies. Big-ticket attacks leverage the first. Infrastructure and low-tech attacks leverage the fourth. And every attack tries to leverage the fifth, especially those attacks that go on and on. I'm willing to bet that when I find a winner, it will be the plot that leverages the greatest number of those tendencies to the best possible advantage.
I also got a bunch of e-mails from people with ideas they thought too terrifying to post publicly. Some of them wouldn't even tell them to me. I also received e-mails from people accusing me of helping the terrorists by giving them ideas.
But if there's one thing this contest demonstrates, it's that good terrorist ideas are a dime a dozen. Anyone can figure out how to cause terror. The hard part is execution.
Some of the submitted plots require minimal skill and equipment. Twenty guys with cars and guns -- that sort of thing. Reading through them, you have to wonder why there have been no terrorist attacks in the U.S. since 9/11. I don't believe the "flypaper theory," that the terrorists are all in Iraq instead of in the U.S. And despite all the ineffectual security we've put in place since 9/11, I'm sure we have had some successes in intelligence and investigation -- and have made it harder for terrorists to operate both in the U.S. and abroad.
But mostly, I think terrorist attacks are much harder than most of us think. It's harder to find willing recruits than we think. It's harder to coordinate plans. It's harder to execute those plans. Terrorism is rare, and for all we've heard about 9/11 changing the world, it's still rare.
The submission deadline is the end of this month, so there's still time to submit your entry. And please read through some of the others and comment on them; I'm curious as to what other people think are the most interesting, compelling, realistic, or effective scenarios.
EDITED TO ADD (4/23): The contest made The New York Times.
Cool pictures of the glowing firefly squid.
Last month I wrote about airport passenger screening, and mentioned that the X-ray equipment inserts "test" bags into the stream in order to keep screeners more alert. That system failed pretty badly earlier this week at Atlanta's Hartsfield-Jackson Airport, when a false alarm resulted in a two-hour evacuation of the entire airport.
The screening system injects test images onto the screen. Normally the software flashes the words "This is a test" on the screen after a brief delay, but this time the software failed to indicate that. The screener noticed the image (of a "suspicious device," according to CNN) and, per procedure, screeners manually checked the bags on the conveyor belt for it. They couldn't find it, of course, so they evacuated the airport and spent two hours vainly searching for it.
Hartsfield-Jackson is the country's busiest passenger airport. It's Delta's hub city. The delays were felt across the country for the rest of the day.
Okay, so what went wrong here? Clearly the software failed. Just as clearly the screener procedures didn't fail -- everyone did what they were supposed to do.
What is less obvious is that the system failed. It failed, because it was not designed to fail well. A small failure -- in this case, a software glitch in a single X-ray machine -- cascaded in such a way as to shut down the entire airport. This kind of failure magnification is common in poorly designed security systems. Better would be for there to be individual X-ray machines at the gates -- I've seen this design at several European airports -- so that when there's a problem the effects are restricted to that gate.
Of course, this distributed security solution would be more expensive. But I'm willing to bet it would be cheaper overall, taking into account the cost of occasionally clearing out an airport.
The Kryptos Sculpture is located in the center of the CIA Headquarters in Langley, VA. It was designed in 1990, and contains a four-part encrypted puzzle. The first three parts have been solved, but now we've learned that the second-part solution was wrong and here's the corrected solution.
The fourth part remains unsolved. Wired wrote:
Sanborn has said that clues to the last section, which has only 97 letters, are contained in previously deciphered parts. Therefore getting those first three sections correct has been crucial.
From the Pittsburgh Post-Gazette:
My son and I woke up Sunday morning and drove a rented truck to New York City to move his worldly goods into an apartment there. As we made it to the Holland Tunnel, after traveling the Tony Soprano portion of the Jersey Turnpike with a blue moon in our eyes, the woman in the toll booth informed us that, since 9/11, trucks were not allowed in the tunnel; we'd have to use the Lincoln Tunnel, she said. So if you are a terrorist trying to get into New York from Jersey, be advised that you're going to have to use the Lincoln Tunnel.
California was the first state to pass a law requiring companies that keep personal data to disclose when that data is lost or stolen. Since then, many states have followed suit. Now Congress is debating federal legislation that would do the same thing nationwide.
Except that it won't do the same thing: The federal bill has become so watered down that it won't be very effective. I would still be in favor of it -- a poor federal law is better than none -- if it didn't also pre-empt more-effective state laws, which makes it a net loss.
Identity theft is the fastest-growing area of crime. It's badly named -- your identity is the one thing that cannot be stolen -- and is better thought of as fraud by impersonation. A criminal collects enough personal information about you to be able to impersonate you to banks, credit card companies, brokerage houses, etc. Posing as you, he steals your money, or takes a destructive joyride on your good credit.
Many companies keep large databases of personal data that is useful to these fraudsters. But because the companies don't shoulder the cost of the fraud, they're not economically motivated to secure those databases very well. In fact, if your personal data is stolen from their databases, they would much rather not even tell you: Why deal with the bad publicity?
Disclosure laws force companies to make these security breaches public. This is a good idea for three reasons. One, it is good security practice to notify potential identity theft victims that their personal information has been lost or stolen. Two, statistics on actual data thefts are valuable for research purposes. And three, the potential cost of the notification and the associated bad publicity naturally leads companies to spend more money on protecting personal information -- or to refrain from collecting it in the first place.
Think of it as public shaming. Companies will spend money to avoid the PR costs of this shaming, and security will improve. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.
This public shaming needs the cooperation of the press and, unfortunately, there's an attenuation effect going on. The first major breach after California passed its disclosure law -- SB1386 -- was in February 2005, when ChoicePoint sold personal data on 145,000 people to criminals. The event was all over the news, and ChoicePoint was shamed into improving its security.
Then LexisNexis exposed personal data on 300,000 individuals. And Citigroup lost data on 3.9 million individuals. SB1386 worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. After a while, it was no longer news. And when the press stopped reporting, the "cost" of these breaches to the companies declined.
Today, the only real cost that remains is the cost of notifying customers and issuing replacement cards. It costs banks about $10 to issue a new card, and that's money they would much rather not have to spend. This is the agenda they brought to the federal bill, cleverly titled the Data Accountability and Trust Act, or DATA.
Lobbyists attacked the legislation in two ways. First, they went after the definition of personal information. Only the exposure of very specific information requires disclosure. For example, the theft of a database that contained people's first initial, middle name, last name, Social Security number, bank account number, address, phone number, date of birth, mother's maiden name and password would not have to be disclosed, because "personal information" is defined as "an individual's first and last name in combination with ..." certain other personal data.
Second, lobbyists went after the definition of "breach of security." The latest version of the bill reads: "The term 'breach of security' means the unauthorized acquisition of data in electronic form containing personal information that establishes a reasonable basis to conclude that there is a significant risk of identity theft to the individuals to whom the personal information relates."
Get that? If a company loses a backup tape containing millions of individuals' personal information, it doesn't have to disclose if it believes there is no "significant risk of identity theft." If it leaves a database exposed, and has absolutely no audit logs of who accessed that database, it could claim it has no "reasonable basis" to conclude there is a significant risk. Actually, the company could point to a study that showed the probability of fraud to someone who has been the victim of this kind of data loss to be less than 1 in 1,000 -- which is not a "significant risk" -- and then not disclose the data breach at all.
Even worse, this federal law pre-empts the 23 existing state laws -- and others being considered -- many of which contain stronger individual protections. So while DATA might look like a law protecting consumers nationwide, it is actually a law protecting companies with large databases from state laws protecting consumers.
So in its current form, this legislation would make things worse, not better.
Of course, things are in flux. They're always in flux. The language of the bill has changed regularly over the past year, as various committees got their hands on it. There's also another bill, HR3997, which is even worse. And even if something passes, it has to be reconciled with whatever the Senate passes, and then voted on again. So no one really knows what the final language will look like.
But the devil is in the details, and the only way to protect us from lobbyists tinkering with the details is to ensure that the federal bill does not pre-empt any state bills: that the federal law is a minimum, but that states can require more.
That said, disclosure is important, but it's not going to solve identity theft. As I've written previously, the reason theft of personal information is so common is that the data is so valuable. The way to mitigate the risk of fraud due to impersonation is not to make personal information harder to steal, it's to make it harder to use.
Disclosure laws only deal with the economic externality of data brokers protecting your personal information. What we really need are laws prohibiting credit card companies and other financial institutions from granting credit to someone using your name with only a minimum of authentication.
But until that happens, we can at least hope that Congress will refrain from passing bad bills that override good state laws -- and helping criminals in the process.
This essay originally appeared on Wired.com.
EDITED TO ADD (4/20): Here's a comparison of state disclosure laws.
Excellent blog post. Well worth reading.
The Department of Homeland Security has released a Request for Proposal -- that's the document asking industry if anyone can do what it wants -- for the Secure Border Initiative. Washington Technology has the story:
The long-awaited request for proposals for Secure Border Initiative-Net was released today by the Homeland Security Department, which is calling the project the "most comprehensive effort in the nation's history" to gain control of the borders.
Here's a video of a bunch of graffiti artists breaching security at Andrews Air Force Base, and tagging an Air Force One plane.
I know there are multiple planes -- four, I think -- and that they are in different states of active service at any one time. And, presumably, the different planes have different security levels depending on their status. Still, part of me thinks this is a hoax.
One, this is the sort of stunt that can get you shot at. And two, posting a video of this can get you arrested.
Anyone know anything about this?
EDITED TO ADD (4/21): It's a hoax.
Some years ago I did some design work on something I called a Deniable File System. The basic idea was that the existence of ciphertext can in itself be incriminating, regardless of whether or not anyone can decrypt it. I wanted to create a file system that was deniable: where encrypted files looked like random noise, and where it was impossible to prove either the existence or non-existence of encrypted files.
This turns out to be a very hard problem for a whole lot of reasons, and I never pursued the project. But I just discovered a file system that seems to meet all of my design criteria -- Rubberhose:
Rubberhose transparently and deniably encrypts disk data, minimising the effectiveness of warrants, coercive interrogations and other compulsive mechanisms, such as U.K. RIP legislation. Rubberhose differs from conventional disk encryption systems in that it has an advanced modular architecture, self-test suite, is more secure, portable, utilises information hiding (steganography / deniable cryptography), works with any file system and has source freely available.
The devil really is in the details with something like this, and I would hesitate to use this in places where it really matters without some extensive review. But I'm pleased to see that someone is working on this problem.
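To make the design goal concrete, here's a deliberately naive sketch of the idea -- not Rubberhose's actual design, and the hash-based "cipher" is a toy, not something to deploy. The container starts as pure random fill, and an encrypted aspect written into it is indistinguishable from that fill without the passphrase:

```python
import os, hashlib

CONTAINER_SIZE = 4096

def keystream(passphrase, length):
    # toy stream cipher via iterated hashing -- NOT for real use
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(passphrase.encode() + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def offset_for(passphrase, length):
    # the aspect's location within the container is derived from the passphrase
    h = hashlib.sha256(passphrase.encode()).digest()
    return int.from_bytes(h[:2], "big") % (CONTAINER_SIZE - length)

def write_aspect(container, passphrase, plaintext):
    off = offset_for(passphrase, len(plaintext))
    ks = keystream(passphrase, len(plaintext))
    for i, (p, k) in enumerate(zip(plaintext, ks)):
        container[off + i] = p ^ k

def read_aspect(container, passphrase, length):
    off = offset_for(passphrase, length)
    ks = keystream(passphrase, length)
    return bytes(c ^ k for c, k in zip(container[off:off + length], ks))

container = bytearray(os.urandom(CONTAINER_SIZE))   # random fill: nothing to prove
write_aspect(container, "open sesame", b"the secret files")
assert read_aspect(container, "open sesame", 16) == b"the secret files"
print(read_aspect(container, "wrong guess", 16))    # random-looking junk
```

An examiner holding the container cannot show how many aspects it holds, or whether it holds any at all -- which is exactly the deniability property, and exactly why the details (overwrite collisions, traffic analysis, and so on) need careful review.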
Next request: A deniable file system that fits on a USB token, and leaves no trace on the machine it's plugged into.
Someone filed change-of-address forms with the post office to divert other people's mail to himself. 170 times.
Postal Service spokeswoman Patricia Licata said a credit card is required for security reasons. "We have systems in place to prevent this type of occurrence," she said, but declined further comment on the specific case until officials have time to analyze what happened.
Sounds like those systems don't work very well.
It's a provocative headline: "Triple DES Upgrades May Introduce New ATM Vulnerabilities." Basically, at the same time banks are upgrading their ATM encryption to triple-DES, they're also moving the communications links from dedicated lines to the Internet. And while the protocol encrypts PINs, it doesn't encrypt any of the other information, such as card numbers and expiration dates.
So it's the move from dedicated lines to the Internet that's adding the insecurities.
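The gap is easy to picture. This sketch is purely illustrative -- the field names and the XOR "cipher" are invented stand-ins, not the actual ATM protocol -- but it shows the shape of the problem: the PIN block is protected, while everything else crosses the link in the clear:

```python
import os, json, hashlib

def toy_encrypt(key, data):
    # XOR with a hash-derived keystream: a stand-in for the link's
    # (triple-)DES PIN-block encryption, not the real algorithm
    ks = hashlib.sha256(key + b"keystream").digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

key = os.urandom(16)
message = {
    "pan": "4111111111111111",                       # card number: in the clear
    "expiry": "0709",                                # also in the clear
    "pin_block": toy_encrypt(key, b"1234").hex(),    # only the PIN is protected
}
wire = json.dumps(message).encode()   # what now travels over the Internet

# A passive eavesdropper on the link learns everything except the PIN:
snooped = json.loads(wire)
print(snooped["pan"], snooped["expiry"])   # 4111111111111111 0709
```

On a dedicated line, eavesdropping required physical access; on the Internet, it doesn't, which is what makes the unencrypted fields newly dangerous.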
A Humboldt squid, native to Mexico, was found off the coast of Washington.
Interesting details emerging from EFF's lawsuit:
According to a statement released by Klein's attorney, an NSA agent showed up at the San Francisco switching center in 2002 to interview a management-level technician for a special job. In January 2003, Klein observed a new room being built adjacent to the room housing AT&T's #4ESS switching equipment, which is responsible for routing long distance and international calls.
More about what the Narus box can do.
EDITED TO ADD (4/14): More about Narus.
Really nice social engineering example. Note his repeated efforts to ensure that if he's stopped again, he can rely on the cop to vouch for him.
Smooth-talking escapee evades police
More frightening than my experience is the possibility that the company might do this to an existing customer. What good is a security product if the vendor refuses to sell you service on it? Without updates, most of these products are barely useful as doorstops.
The article demonstrates that a vendor might refuse to sell you a product, for reasons you can't understand. And that you might not get any warning of that fact. The moral is that you're not only buying a security product, you're buying a security company.
In our tests, we look at products, not companies. Things such as training, finances and corporate style don't come into it. But when it comes to buying products, our tests aren't enough. It's important to investigate all those peripheral aspects of the vendor before you sign a purchase order. I was reminded of that the hard way.
Stolen goods are being sold in the markets, including hard drives filled with classified data.
A reporter recently obtained several drives at the bazaar that contained documents marked "Secret." The contents included documents that were potentially embarrassing to Pakistan, a U.S. ally, presentations that named suspected militants targeted for "kill or capture" and discussions of U.S. efforts to "remove" or "marginalize" Afghan government officials whom the military considered "problem makers."
EDITED TO ADD (4/12): NPR story.
Last week the San Francisco Chronicle broke the story that Air Force One's defenses were exposed on a public Internet site:
Thus, the Air Force reacted with alarm last week after The Chronicle told the Secret Service that a government document containing specific information about the anti-missile defenses on Air Force One and detailed interior maps of the two planes -- including the location of Secret Service agents within the planes -- was posted on the Web site of an Air Force base.
And a few days later:
Air Force and Pentagon officials scrambled Monday to remove highly sensitive security details about the two Air Force One jetliners after The Chronicle reported that the information had been posted on a public Web site.
Turns out that this story involves a whole lot more hype than actual security.
The document Caffera found is part of the Air Force’s Technical Order 00-105E-9 - Aerospace Emergency Rescue and Mishap Response Information (Emergency Services) Revision 11. It resided, until recently, on the web site of the Air Logistics Center at Warner Robins Air Force Base. The purpose is pretty straight-ahead: "Recent technological advances in aviation have caused concern for the modern firefighter." So the document gives "aircraft hazards, cabin configurations, airframe materials, and any other information that would be helpful in fighting fires."
Another news report.
Some blogs criticized the San Francisco Chronicle for publishing this, because it gives the terrorists more information. I think they should be criticized for publishing this, because there's no story here.
EDITED TO ADD (4/11): Much of the document is here.
Sometimes I wonder about "security experts." Here's one who thinks Google Earth is a terrorism risk because it allows people to learn the GPS coordinates of soccer stadiums. (English blog entry on the topic here.)
Basically, Klaus Dieter Matschke is worried because Google Earth provides the location of buildings within 20 meters, whereas before coordinates had an error range of one kilometer. He's worried that this information will provide terrorists with the exact target coordinates for missile attacks.
I have no idea how anyone could print this drivel. Anyone can attend a football game with a GPS receiver in his pocket and get the coordinates down to one meter. Or buy a map.
Google Earth is not the problem; the problem is the availability of short-range missiles on the black market.
You've all seen CAPTCHAs: those distorted pictures of letters and numbers you sometimes see on web forms. The idea is that it's hard for computers to identify the characters, but easy for people to do. The goal of CAPTCHAs is to authenticate that there's a person sitting in front of the computer.
KittenAuth works with images. The system shows you nine pictures of cute little animals, and the person authenticates himself by clicking on the three kittens. A computer clicking at random has only a 1 in 84 chance of guessing correctly.
Of course you could increase the security by adding more images or requiring the person to choose more images. Another worry -- which I didn't see mentioned -- is that the computer could brute-force a static database. If there are only a small fixed number of actual kittens, the computer could be told -- by a person -- that they're kittens. Then, the computer would know that whenever it sees that image it's a kitten.
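The 1-in-84 figure, and the effect of adding more images, both follow from simple combinatorics: there are C(9, 3) = 84 equally likely ways to pick three of the nine images, and only one of those picks is all kittens. A minimal sketch (the grid sizes beyond 9-choose-3 are hypothetical, not part of KittenAuth):

```python
from math import comb

def random_guess_odds(total_images: int, picks: int) -> int:
    """Number of equally likely ways to pick the images; exactly
    one of these picks is the all-kittens answer, so a random
    clicker succeeds with probability 1 in this number."""
    return comb(total_images, picks)

print(random_guess_odds(9, 3))    # 84, the figure in the post
print(random_guess_odds(16, 4))   # a hypothetical larger grid: 1820
```

Even a modestly larger grid pushes random guessing from 1-in-84 to 1-in-1820, which is why adding images or picks raises the bar so quickly.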
Still, it's an interesting idea that warrants more research.
You've all heard of the "No Fly List." Did you know that there's a "No-Buy List" as well?
The so-called "Bad Guy List" is hardly a secret. The U.S. Treasury's Office of Foreign Assets Control maintains its "Specially Designated Nationals and Blocked Persons List" to be easily accessible on its public Web site.
I was in New York yesterday, and I saw a sign at the entrance to the Midtown Tunnel that said: "See something? Say something." The problem with a nation of amateur spies is that it produces these sorts of results: "I know he's a terrorist because he's dressing funny and he always has white wires hanging out of his pocket." "They all talk in a funny language, and their cooking smells bad."
Amateur spies perform amateur spying. If everybody does it, the false alarms will overwhelm the police.
Good information from EPIC on the security of tax data in the IRS.
This is a great idea:
Lawmakers in Iowa are proposing a special "passport" meant to protect victims of identity theft against false criminal action and credit charges.
I wrote about something similar in Beyond Fear:
In Singapore, some names are so common that the police issue He's-not-the-guy-we're-looking-for documents exonerating innocent people with the same names as wanted criminals.
EDITED TO ADD (4/7): Of course it will be forged; all documents are forged. And yes, I've recently written that documents are hard to verify. This is still a good idea, even though it's not perfect.
There are basically four ways to eavesdrop on a telephone call.
One, you can listen in on another phone extension. This is the method preferred by siblings everywhere. If you have the right access, it's the easiest. While it doesn't work for cell phones, cordless phones are vulnerable to a variant of this attack: A radio receiver set to the right frequency can act as another extension.
Two, you can attach some eavesdropping equipment to the wire with a pair of alligator clips. It takes some expertise, but you can do it anywhere along the phone line's path -- even outside the home. This used to be the way the police eavesdropped on your phone line. These days it's probably most often used by criminals. This method doesn't work for cell phones, either.
Three, you can eavesdrop at the telephone switch. Modern phone equipment includes the ability for someone to listen in this way. Currently, this is the preferred police method. It works for both land lines and cell phones. You need the right access, but if you can get it, this is probably the most comfortable way to eavesdrop on a particular person.
Four, you can tap the main trunk lines, eavesdrop on the microwave or satellite phone links, etc. It's hard to eavesdrop on one particular person this way, but it's easy to listen in on a large chunk of telephone calls. This is the sort of big-budget surveillance that organizations like the National Security Agency do best. They've even been known to use submarines to tap undersea phone cables.
That's basically the entire threat model for traditional phone calls. And when most people think about IP telephony -- voice over internet protocol, or VOIP -- that's the threat model they probably have in their heads.
Unfortunately, phone calls from your computer are fundamentally different from phone calls from your telephone. Internet telephony's threat model is much closer to the threat model for IP-networked computers than the threat model for telephony.
And we already know the threat model for IP. Data packets can be eavesdropped on anywhere along the transmission path. Data packets can be intercepted in the corporate network, by the internet service provider and along the backbone. They can be eavesdropped on by the people or organizations that own those computers, and they can be eavesdropped on by anyone who has successfully hacked into those computers. They can be vacuumed up by nosy hackers, criminals, competitors and governments.
It's comparable to threat No. 3 above, but with the scope vastly expanded.
My greatest worry is the criminal attacks. We already have seen how clever criminals have become over the past several years at stealing account information and personal data. I can imagine them eavesdropping on attorneys, looking for information with which to blackmail people. I can imagine them eavesdropping on bankers, looking for inside information with which to make stock purchases. I can imagine them stealing account information, hijacking telephone calls, committing identity theft. On the business side, I can see them engaging in industrial espionage and stealing trade secrets. In short, I can imagine them doing all the things they could never have done with the traditional telephone network.
This is why encryption for VOIP is so important. VOIP calls are vulnerable to a variety of threats that traditional telephone calls are not. Encryption is one of the essential security technologies for computer data, and it will go a long way toward securing VOIP.
The last time this sort of thing came up, the U.S. government tried to sell us something called "key escrow." Basically, the government likes the idea of everyone using encryption, as long as it has a copy of the key. This is an amazingly insecure idea for a number of reasons, mostly boiling down to the fact that when you provide a means of access into a security system, you greatly weaken its security.
A recent case in Greece demonstrated that perfectly: Criminals used a cell-phone eavesdropping mechanism already in place, designed for the police to listen in on phone calls. Had the call system been designed to be secure in the first place, there never would have been a backdoor for the criminals to exploit.
Fortunately, there are many VOIP-encryption products available. Skype has built-in encryption. Phil Zimmermann is releasing Zfone, an easy-to-use open-source product. There's even a VOIP Security Alliance.
Encryption for IP telephony is important, but it's not a panacea. Basically, it takes care of threats No. 2 through No. 4, but not threat No. 1. Unfortunately, that's the biggest threat: eavesdropping at the end points. No amount of IP telephony encryption can prevent a Trojan or worm on your computer -- or just a hacker who managed to get access to your machine -- from eavesdropping on your phone calls, just as no amount of SSL or e-mail encryption can prevent a Trojan on your computer from eavesdropping -- or even modifying -- your data.
So, as always, it boils down to this: We need secure computers and secure operating systems even more than we need secure transmission.
This essay originally appeared on Wired.com.
I simply don't have the science to evaluate this claim:
Since conventional sound waves disperse when traveling through a medium, the possibility of focusing sound waves could have applications in several areas. In cryptography, for example, when sending a secret message, the sender could ensure that only one location would receive the message. Interceptors at other locations would only pick up noise due to unfocused waves. Other potential uses include antisubmarine warfare and underwater communications that benefit from targeted signaling.
According to The New York Times:
Undercover Congressional investigators successfully smuggled into the United States enough radioactive material to make two dirty bombs, even after it set off alarms on radiation detectors installed at border checkpoints, a new report says.
The reason is interesting:
The alarms went off in both locations, and the investigators were pulled aside for questioning. In both cases, they showed the agents from the Customs and Border Protection agency forged import licenses from the Nuclear Regulatory Commission, based on an image of the real document they found on the Internet.
I've written about this problem before, and it's one I think will get worse in the future. Verification systems are often the weakest link of authentication. Improving authentication tokens won't improve security unless the verification systems improve as well.
Here's an article on the paper.
There's a helicopter shuttle that runs from Lower Manhattan to Kennedy Airport. It's basically a luxury item: for $139 you can avoid the drive to the airport. But, of course, security screeners are required for passengers, and that's causing some concern:
At the request of U.S. Helicopter's executives, the federal Transportation Security Administration set up a checkpoint, with X-ray and bomb-detection machines, to screen passengers and their luggage at the heliport.
This is not a security problem; it's an economics problem. And it's a good illustration of the concept of "externalities." An externality is an effect of a decision not borne by the decision-maker. In this example, U.S. Helicopter made a business decision to offer this service at a certain price. And customers will make a decision about whether or not the service is worth the money. But there is more to the cost than the $139. The cost of that checkpoint is an externality to both U.S. Helicopter and its customers, because the $560,000 spent on the security checkpoint is paid for by taxpayers. Taxpayers are effectively subsidizing the true cost of the helicopter trip.
The only way to solve this is for the government to bill the airline passengers for the cost of security screening. It wouldn't be much per ticket, maybe $15. And it would be much less at major airports, because the economies of scale are so much greater.
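The per-ticket figure is just the checkpoint cost amortized across the passengers who use it. A back-of-the-envelope sketch (the $560,000 is from the article; the annual passenger count is my assumption, chosen to make the arithmetic concrete):

```python
def per_ticket_surcharge(checkpoint_cost: float, annual_passengers: int) -> float:
    """Amortize a fixed annual screening cost across every screened passenger."""
    return checkpoint_cost / annual_passengers

# $560,000 per year spread over a hypothetical ~37,000 passengers
# works out to roughly $15 a ticket.
print(round(per_ticket_surcharge(560_000, 37_000), 2))
```

The same arithmetic shows why the surcharge would be far smaller at a major airport: the fixed cost of a checkpoint is divided across millions of passengers instead of tens of thousands.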
The article even points out that customers would gladly pay the extra $15 because of another externality: the people who decide whether or not to take the helicopter trip are not the people actually paying for it.
Bobby Weiss, a self-employed stock trader and real estate broker who was U.S. Helicopter's first paying customer yesterday, said he would pay $300 for a round trip to Kennedy, and he expected most corporate executives would, too.
What Mr. Weiss is saying is that the costs -- both the direct cost and the cost of the security checkpoint -- are externalities to him, so he really doesn't care. Exactly.
It's a really clever idea: bolts and latches that fasten and unfasten in response to remote computer commands.
What Rudduck developed are fasteners analogous to locks in doors, only in this case messages are sent electronically to engage the parts to lock or unlock. A quick electrical charge triggered remotely by a device or computer may move the part to lock, while another jolt disengages the unit.
Pretty clever, actually. The whole article is interesting.
But this part scares me:
A potential security breach threat apparently doesn't exist.
Clearly this Harrison guy knows nothing about computer security.
EDITED TO ADD: Slashdot has a thread on the topic.
Last week the Government Accountability Office released three new reports on homeland security.
I don't know if this is an April Fool's Day joke, but it's funny all the same.
NOTE: If you have a blog, please spread the word.
For a while now, I have been writing about our penchant for "movie-plot threats": terrorist fears based on very specific attack scenarios. Terrorists with crop dusters, terrorists exploding baby carriages in subways, terrorists filling school buses with explosives -- these are all movie-plot threats. They're good for scaring people, but it's just silly to build national security policy around them.
But if we're going to worry about unlikely attacks, why can't they be exciting and innovative ones? If Americans are going to be scared, shouldn't they be scared of things that are really scary? "Blowing up the Super Bowl" is a movie plot to be sure, but it's not a very good movie. Let's kick this up a notch.
It is in this spirit I announce the (possibly First) Movie-Plot Threat Contest. Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with.
Your goal: cause terror. Make the American people notice. Inflict lasting damage on the U.S. economy. Change the political landscape, or the culture. The more grandiose the goal, the better.
Assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.
Post your movie plots here on this blog.
Judging will be by me, swayed by popular acclaim in the blog comments section. The prize will be an autographed copy of Beyond Fear. And if I can swing it, a phone call with a real live movie producer.
Entries close at the end of the month -- April 30 -- so Crypto-Gram readers can also play.
This is not an April Fool's joke, although it's in the spirit of the season. The purpose of this contest is absurd humor, but I hope it also makes a point. Terrorism is a real threat, but we're not any safer through security measures that require us to correctly guess what the terrorists are going to do next.
EDITED TO ADD (4/4): There are hundreds of ideas here.
EDITED TO ADD (4/22): Update here.