Schneier on Security
A blog covering security and security technology.
June 2007 Archives
"The only thing we have to fear is the 'culture of fear' itself," by Frank Furedi:
Fear plays a key role in twenty-first century consciousness. Increasingly, we seem to engage with various issues through a narrative of fear. You could see this trend emerging and taking hold in the last century, which was frequently described as an 'Age of Anxiety'. But in recent decades, it has become more and better defined, as specific fears have been cultivated.
At the beach, sand is more deadly than sharks:
Since 1985, at least 20 children and young adults in the United States have died in beach or backyard sand submersions.
And this is important enough to become someone's crusade?
We learned the news in March: Contrary to decades of denials, the U.S. Census Bureau used individual records to round up Japanese-Americans during World War II.
The Census Bureau normally is prohibited by law from revealing data that could be linked to specific individuals; the law exists to encourage people to answer census questions accurately and without fear. And while the Second War Powers Act of 1942 temporarily suspended that protection in order to locate Japanese-Americans, the Census Bureau had maintained that it only provided general information about neighborhoods.
The whole incident serves as a poignant illustration of one of the thorniest problems of the information age: data collected for one purpose and then used for another, or "data reuse."
When we think about our personal data, what bothers us most is generally not the initial collection and use, but the secondary uses. I personally appreciate it when Amazon.com suggests books that might interest me, based on books I have already bought. I like it that my airline knows what type of seat and meal I prefer, and my hotel chain keeps records of my room preferences. I don't mind that my automatic road-toll collection tag is tied to my credit card, and that I get billed automatically. I even like the detailed summary of my purchases that my credit card company sends me at the end of every year. What I don't want, though, is any of these companies selling that data to brokers, or for law enforcement to be allowed to paw through those records without a warrant.
There are two bothersome issues about data reuse. First, we lose control of our data. In all of the examples above, there is an implied agreement between the data collector and me: It gets the data in order to provide me with some sort of service. Once the data collector sells it to a broker, though, it's out of my hands. It might show up on some telemarketer's screen, or in a detailed report to a potential employer, or as part of a data-mining system to evaluate my personal terrorism risk. It becomes part of my data shadow, which always follows me around but I can never see.
This, of course, affects our willingness to give up personal data in the first place. The reason U.S. census data was declared off-limits for other uses was to placate Americans' fears and assure them that they could answer questions truthfully. How accurate would you be in filling out your census forms if you knew the FBI would be mining the data, looking for terrorists? How would it affect your supermarket purchases if you knew people were examining them and making judgments about your lifestyle? I know many people who engage in data poisoning: deliberately lying on forms in order to propagate erroneous data. I'm sure many of them would stop that practice if they could be sure that the data was only used for the purpose for which it was collected.
The second issue about data reuse is error rates. All data has errors, and different uses can tolerate different amounts of error. The sorts of marketing databases you can buy on the web, for example, are notoriously error-filled. That's OK; if the database of ultra-affluent Americans of a particular ethnicity you just bought has a 10 percent error rate, you can factor that cost into your marketing campaign. But that same database, with that same error rate, might be useless for law enforcement purposes.
Understanding error rates and how they propagate is vital when evaluating any system that reuses data, especially for law enforcement purposes. A few years ago, the Transportation Security Administration's follow-on watch list system, Secure Flight, was going to use commercial data to give people a terrorism risk score and determine how much they were going to be questioned or searched at the airport. People rightly rebelled against the thought of being judged in secret, but there was much less discussion about whether the commercial data from credit bureaus was accurate enough for this application.
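To make the propagation problem concrete, here's a toy sketch. All of the numbers are invented for illustration; the point is only the arithmetic of chaining error-prone sources together:

```python
# A toy sketch (hypothetical numbers) of how errors compound when data
# from several sources is combined, as in a commercial risk-scoring system.

def combined_error_rate(per_source_error: float, n_sources: int) -> float:
    """A record is clean only if every source got it right (assuming
    independent errors); everything else counts as a combined error."""
    return 1 - (1 - per_source_error) ** n_sources

# One marketing database with a 10 percent error rate:
print(f"1 source:  {combined_error_rate(0.10, 1):.0%} of records wrong")

# Cross-referencing three such databases:
print(f"3 sources: {combined_error_rate(0.10, 3):.0%} of records wrong")

# Applied to 100,000 travelers, that's tens of thousands of bad scores:
travelers = 100_000
print(f"Wrongly scored travelers: {travelers * combined_error_rate(0.10, 3):,.0f}")
```

Real-world errors aren't independent, but the direction of the effect is the point: every additional source adds more ways for a record to be wrong, and an error rate that's tolerable for marketing becomes intolerable when each error is a person.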
An even more egregious example of error-rate problems occurred in 2000, when the Florida Division of Elections contracted with Database Technologies (since merged with ChoicePoint) to remove convicted felons from the voting rolls. The databases used were filled with errors and the matching procedures were sloppy, which resulted in thousands of disenfranchised voters -- mostly black -- and almost certainly changed a presidential election result.
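Here's a toy sketch of how sloppy matching disenfranchises voters. The names and the matching rule are invented for illustration; the actual Florida purge used similarly loose criteria, such as partial name matches without exact birth-date agreement:

```python
# A toy sketch of sloppy database matching: one felon record flags
# several innocent voters. All names and the rule itself are invented.

felons = [("WILLIAMS", "JOHN", 1960)]

voters = [
    ("WILLIAMS",   "JOHN",    1960),  # the actual felon
    ("WILLIAMS",   "JOHNNY",  1962),  # innocent, similar first name
    ("WILLIAMS",   "JOHNSON", 1958),  # innocent, similar first name
    ("WILLIAMSON", "JOHN",    1960),  # innocent, similar surname
]

def sloppy_match(voter, felon):
    """Flag a voter if surnames share a prefix, first names share a
    prefix, and birth years are within a few years of each other."""
    v_last, v_first, v_year = voter
    f_last, f_first, f_year = felon
    return (v_last.startswith(f_last[:6]) and
            v_first.startswith(f_first[:4]) and
            abs(v_year - f_year) <= 4)

flagged = [v for v in voters if any(sloppy_match(v, f) for f in felons)]
print(f"{len(flagged)} of {len(voters)} voters flagged; 1 is a felon, "
      f"{len(flagged) - 1} are false positives")
```

Loosening the match criteria catches more felons who changed a name or a birthday, but every notch looser multiplies the innocent people caught alongside them.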
Of course, there are beneficial uses of secondary data. Take, for example, personal medical data. It's personal and intimate, yet valuable to society in aggregate. Think of what we could do with a database of everyone's health information: massive studies examining the long-term effects of different drugs and treatment options, different environmental factors, different lifestyle choices. There's an enormous amount of important research potential hidden in that data, and it's worth figuring out how to get at it without compromising individual privacy.
This is largely a matter of legislation. Technology alone can never protect our rights. There are just too many reasons not to trust it, and too many ways to subvert it. Data privacy ultimately stems from our laws, and strong legal protections are fundamental to protecting our information against abuse. But at the same time, technology is still vital.
Both the Japanese internment and the Florida voting-roll purge demonstrate that laws can change ... and sometimes change quickly. We need to build systems with privacy-enhancing technologies that limit data collection wherever possible. Data that is never collected cannot be reused. Data that is collected anonymously, or deleted immediately after it is used, is much harder to reuse. It's easy to build systems that collect data on everything -- it's what computers naturally do -- but it's far better to take the time to understand what data is needed and why, and only collect that.
History will record what we, here in the early decades of the information age, did to foster freedom, liberty and democracy. Did we build information technologies that protected people's freedoms even during times when society tried to subvert them? Or did we build technologies that could easily be modified to watch and control? It's bad civic hygiene to build an infrastructure that can be used to facilitate a police state.
This article originally appeared on Wired.com
If someone wants to buy your vote, he'd like some proof that you've delivered the goods. Camera phones are one way for you to prove to your buyer that you voted the way he wants. Belgian voting machines have been designed to minimize that risk.
Once you have confirmed your vote, the next screen doesn't display how you voted. So if one is coerced and has to deliver proof, one just has to take a picture of the vote one was coerced into, and then back out from the screen and change one's vote. The only workaround I see is for the coercer to demand a video of the complete voting process, instead of a picture of the ballot.

The author is wrong that this is an advantage electronic ballots have over paper ballots. Paper voting systems can be designed with the same security features.
Really good Washington Post article on secrecy:
But the notion that information is more credible because it's secret is increasingly unfounded. In fact, secret information is often more suspect because it hasn't been subjected to open debate. Those with their own agendas can game the system, over-classifying or stove-piping self-serving intelligence to shield it from scrutiny. Those who cherry-picked intelligence in the run-up to the Iraq war could ignore anything that contradicted it. Even now, some members of Congress tell me that they avoid reading classified reports for fear that if they do, the edicts of secrecy will bar them from discussing vital public issues.
Back in 2002 I wrote about the relationship between secrecy and security.
Here's an interesting phenomenon: rising gas costs have pushed up a lot of legitimate transactions to the "anti-fraud" ceiling.
Security is a trade-off, and now the ceiling is annoying more and more legitimate gas purchasers. But to me the real question is: does this ceiling have any actual security purpose?
In general, credit card fraudsters like making gas purchases because the system is automated: no signature is required, and there's no need to interact with any other person. In fact, buying gas is the most common way a fraudster tests that a recently stolen card is valid. The anti-fraud ceiling doesn't actually prevent any of this, but limits the amount of money at risk.
But so what? How many perps are actually trying to get more gas than is permitted? Are credit-card-stealing miscreants also swiping cars with enormous gas tanks, or merely filling up the passenger cars they regularly drive? I'd love to know how many times, prior to the run-up in gas prices, a triggered cutoff actually coincided with a subsequent report of a stolen card. And what's the effect of a ceiling, apart from a gas shut-off? Surely the smart criminals know about smurfing, if they need more gas than the ceiling will allow.
The Visa spokesperson said, "We get more calls, questions, when gas prices increase." The spokesperson didn't say: "We make more calls to see if fraud is occurring." So the inquiries may be happening only in cases where fraud isn't occurring.
From Technology Review:
A camera developed by computer scientists at the University of California, Berkeley, would obscure, with an oval, the faces of people who appear on surveillance videos. These so-called respectful cameras, which are still in the research phase, could be used for day-to-day surveillance applications and would allow for the privacy oval to be removed from a given set of footage in the event of an investigation.
An interesting privacy-enhancing technology.
This is a great piece of news in the U.S. For the first time, e-mail has been granted the same constitutional protections as telephone calls and personal papers: the police need a warrant to get at it. Now it's only a circuit court decision -- the Sixth U.S. Circuit Court of Appeals in Ohio -- it's pretty narrowly defined based on the attributes of the e-mail system, and it has a good chance of being overturned by the Supreme Court...but it's still great news.
The way to think of the warrant system is as a security device. The police still have the ability to get access to e-mail in order to investigate a crime. But in order to prevent abuse, they have to convince a neutral third party -- a judge -- that accessing someone's e-mail is necessary to investigate that crime. That judge, at least in theory, protects our interests.
Clearly e-mail deserves the same protection as our other personal papers, but -- like phone calls -- it might take the courts decades to figure that out. But we'll get there eventually.
Does this seem real to anyone?
Somehow, the callers have gained control of the family cell phones, Price and Kuykendall say. Messages received by the sisters include snatches of conversation overheard on cell-phone mikes, replayed and transmitted via voice mail. Phone records show many of the messages coming from Courtney's phone, even when she's not using it -- even when it's turned off.
Here's another report.
There's something going on here, but I just don't believe it's entirely cell phone hacking. Something else is going on.
They're protective covers that go over your drink and "protect" against someone trying to slip a Mickey Finn (or whatever they're called these days):
The concept behind the cocktail cover is fairly simple. About the size of a coaster, it can be used to cap a drink that goes unattended. When a person returns to a beverage, there is a layer that can be pulled back, leaving a thin sheath protecting the cocktail. That can be punctured with a straw or pulled off entirely -- either way the drinker will know that the cocktail has not been tampered with.
I'm sure there are many ways to defeat this security device if you're so inclined: a syringe, affixing a new cover after you tamper with the drink, and so on. And this is exactly the sort of rare risk we're likely to overreact to. But to me, the most interesting aspect of this story is the agenda. If these things become common, it won't be because of security. It will be because of advertising:
Barry said that companies could advertise on the cocktail covers, likely covering the cost of production. Each cover, he said, costs less than 10 cents to make.
"We remain wholly committed to the destruction of America, the Great Satan," al-Sharif said. "But now is not a good time for us. The season finale of Lost was such a cliff- hanger that we have to at least catch the first episode of the new season. After that, though, death to the infidels."
It's a 21-foot-long giant squid.
Does this make sense to anyone?
TSA said Boeing would use its Monte Carlo simulation model "to identify U.S. commercial aviation system vulnerabilities against a wide variety of attack scenarios."
I can't imagine how random simulations are going to be all that useful in evaluating airplane threats, as the adversary we're worried about isn't particularly random -- and, in fact, is motivated to target his attacks directly at the weak points in any security measures.
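The gap is easy to sketch. Here's a toy model -- the vulnerability numbers are invented -- of why sampling attack scenarios at random measures the *average* weakness of a system, while an intelligent adversary goes straight for the *weakest* point:

```python
# A toy sketch: Monte Carlo sampling over random attack scenarios vs.
# an adaptive adversary. All vulnerability numbers are invented.

import random

random.seed(0)

# Hypothetical probability that an attack on each of ten targets succeeds:
# nine well-defended targets and one glaring hole.
vulnerability = [0.01] * 9 + [0.90]

# Monte Carlo estimate: attacks land on targets uniformly at random.
trials = 100_000
successes = sum(random.random() < random.choice(vulnerability)
                for _ in range(trials))
print(f"Monte Carlo (random attacker): ~{successes / trials:.0%} success rate")

# The adaptive adversary simply picks the weakest target.
print(f"Adaptive adversary:            {max(vulnerability):.0%} success rate")
```

The simulation reports a system that fails about one attack in ten; the adversary facing the same system succeeds nine times in ten. The difference between those two numbers is exactly what a simulation averaged over random scenarios can't see.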
Maybe "chatter" has tipped the TSA off to a Muta al-Stochastic.
Wired.com has the story:
Congress asked Homeland Security's chief information officer, Scott Charbo, who has a Masters in plant science, to account for more than 800 self-reported vulnerabilities over the last two years and for recently uncovered systemic security problems in US-VISIT, the massive computer network intended to screen and collect the fingerprints and photos of visitors to the United States.
The French government wants to ban BlackBerry e-mail devices, because of worries of eavesdropping by U.S. intelligence.
Someone claims to have hacked the Bloomsbury Publishing network, and has posted what he says is the ending to the last Harry Potter book.
I don't believe it, actually. Sure, it's possible -- probably even easy. But the posting just doesn't read right to me.
The attack strategy was the easiest one. The usual milw0rm downloaded exploit delivered by email/click-on-the-link/open-browser/click-on-this-animated-icon/back-connect to some employee of Bloomsbury Publishing, the company that's behind the Harry crap.
And I would expect someone who really got their hands on a copy of the manuscript to post the choice bits of text, not just a plot summary. It's easier, and it's more proof.
Sorry; I don't buy it.
EDITED TO ADD (7/25): I was right; none of his "predictions" were correct.
Ask anybody who's made money robbing houses, and they'll tell you straight up: you can get away with a lot of loot in the 10 minutes before the cops come.
The website appears not to be a joke.
EDITED TO ADD (6/23): In the comments, a lot of people have taken me to task for calling this security silly. I stand by my statement: not because it's not effective, but because it's not a good trade-off. I can certainly imagine scenarios where filling your house with vision-impairing fog is just the thing to foil a would-be burglar, but it seems awfully specific a countermeasure to me.
Home security -- like all security, really -- is a combination of protection, detection, and response. Locks and bars are the protection system, and the alarm is the detection/response system. Fogshield is a protection system: after the locks and bars have failed, Fogshield 1) makes it harder for the burglar to navigate around the house, and 2) potentially delays him until the response system (police or whomever) arrives.
But it has problems as a protection system. For one, false alarms are way worse than before. It's one thing to have a loud bell annoy the neighbors until you turn it off; it's quite another to fill your house with fog in less than 15 seconds (plus the cost of replacing the canister).
This whole thing feels real "movie-plot threat" to me: great special effect in a movie, but not really a good security trade-off for home use. An alarm system is going to make an average burglar go to the house next door instead, and a dedicated burglar isn't going to be deterred by this.
Read this essay by Randy Farmer, a pioneer of virtual online worlds, explaining something called Disney's ToonTown.
Designers of online worlds for children wanted to severely restrict the communication that users could have with each other, lest somebody say something that's inappropriate for children to hear.
Randy discusses various approaches to this problem that were tried over the years. The ToonTown solution was to restrict users to something called "Speedchat," a menu of pre-constructed sentences, all innocuous. They also gave users the ability to conduct unrestricted conversations with each other, provided they both knew a secret code string. The designers presumed the code strings would be passed only to people a user knew in real life, perhaps on a school playground or among neighbors.
Users found ways to pass code strings to strangers anyway. This page describes several protocols, using gestures, canned sentences, or movement of objects in the game.
After you read the ways above to make secret friends, look here. Another way to make secret friends with toons you don't know is to form letters/numbers with the picture frames in your house. Around you may see toons who have alot of picture frames at their toon estates, they are usually looking for secret friends. This is how to do it! So, lets say you wanted to make secret friends with a toon named Lily. Your "pretend" secret friend code is 4yt 56s.
Randy writes: "By hook, or by crook, customers will always find a way to connect with each other."
In 1748, the painter William Hogarth was arrested as a spy for sketching fortifications at Calais.
A $100K National Science Foundation grant to Geosemble Technologies, Inc.
SBIR Phase I: Exploiting High-Resolution Imagery, Geospatial Data, and Online Sources to Automatically Identify Direct Marketing Leads
It seems like "We want to protect children" really means, We want to give the appearance that we've made an effort to protect children. If they really wanted to protect children, they wouldn't use the honor system as the sole safeguard standing between previews filled with sex and violence and Internet-savvy kids who can, in a matter of seconds, beat the impotent little system.
Another National Science Foundation grant, this one for $150K to a company called Bridger Photonics:
This Small Business Technology Transfer (STTR) Phase I research project addresses the need for sensitive, portable, low-cost, laser-based remote sensing devices to detect chemical effluents of illicit methamphetamine (meth) production from a distance. The proposed project will develop an innovative correlated-mode laser source for high-resolution mid-infrared differential absorption lidar. To accomplish this the research team will base the research on a compact, monolithic, passively Q-switched laser/optical parametric oscillator design that has proven incredibly effective for ranging purposes (no spectroscopy) in demanding environments. This source, in its present state, is unsuitable for high-resolution mid-infrared spectroscopy. The team will therefore advance the laser by targeting the desired effluent mid-IR wavelengths, significantly improving the spectral, spatial, and temporal emission characteristics, and incorporating dual mode operation. Realization of the laser source will enable real-time remote detection of meth labs in widely varying environments, locations, and circumstances with quantum-limited detection sensitivity, spectral selectivity for the desired molecules in a spectral region that is difficult to access, and differential measurement capabilities for effective self calibration.
This story is pretty disgusting:
"I demanded to speak to a TSA [Transportation Security Administration] supervisor who asked me if the water in the sippy cup was 'nursery water or other bottled water.' I explained that the sippy cup water was filtered tap water. The sippy cup was seized as my son was pointing and crying for his cup. I asked if I could drink the water to get the cup back, and was advised that I would have to leave security and come back through with an empty cup in order to retain the cup. As I was escorted out of security by TSA and a police officer, I unscrewed the cup to drink the water, which accidentally spilled because I was so upset with the situation.
This story portrays the TSA as jack-booted thugs. The story hit the Internet last Thursday, and quickly made the rounds. I saw it on BoingBoing. But, as it turns out, it's not entirely true.
The TSA has a webpage up, with both the incident report and video.
TSO [REDACTED] took the female to the exit lane with the stroller and her bag. When she got past the exit lane podium she opened the child's drink container and held her arm out and poured the contents (approx. 6 to 8 ounces) on the floor. MWAA Officer [REDACTED] was manning the exit lane at the time and observed the entire scene and approached the female passenger after observing this and stopped her when she tried to re-enter the sterile area after trying to come back through after spilling the fluids on the floor. The female passenger flashed her badge and credentials and told the MWAA officer "Do you know who I am?" An argument then ensued between the officer and the passenger of whether the spilling of the fluid was intentional or accidental. Officer [REDACTED] asked the passenger to clean up the spill and she did.
Watch the second video. TSO [REDACTED] is partially blocking the scene, but at 2:01:00 PM it's pretty clear that Monica Emmerson -- that's the female passenger -- spills the liquid on the floor on purpose, as a deliberate act of defiance. What happens next is more complicated; you can watch it for yourself, or you can read BoingBoing's somewhat sarcastic summary.
In this instance, the TSA is clearly in the right.
But there's a larger lesson here. Remember the Princeton professor who was put on the watch list for criticizing Bush? That was also untrue. Why is it that we all -- myself included -- believe these stories? Why are we so quick to assume that the TSA is a bunch of jack-booted thugs, officious and arbitrary and drunk with power?
It's because everything seems so arbitrary, because there's no accountability or transparency in the DHS. Rules and regulations change all the time, without any explanation or justification. Of course this kind of thing induces paranoia. It’s the sort of thing you read about in history books about East Germany and other police states. It's not what we expect out of 21st century America.
The problem is larger than the TSA, but the TSA is the part of "homeland security" that the public comes into contact with most often -- at least the part of the public that writes about these things most. They're the public face of the problem, so of course they're going to get the lion's share of the finger pointing.
It was smart public relations on the TSA's part to get the video of the incident on the Internet quickly, but it would be even smarter for the government to restore basic constitutional liberties to our nation's counterterrorism policy. Accountability and transparency are basic building blocks of any democracy; and the more we lose sight of them, the more we lose our way as a nation.
Just in time for Father's Day.
At the kickoff reception for the IT Security Summit in Johannesburg, there was a bit of industrial theater about identity theft. Someone tried to pretend he was me; it was pretty funny, really. Someone captured the discussion afterward on video.
On April 1, I announced the Second Annual Movie-Plot Threat Contest:
Your goal: invent a terrorist plot to hijack or blow up an airplane with a commonly carried item as a key component. The component should be so critical to the plot that the TSA will have no choice but to ban the item once the plot is uncovered. I want to see a plot horrific and ridiculous, but just plausible enough to take seriously.
On June 5, I posted three semi-finalists out of the 334 comments:
Well, we have a winner. I can't divulge the exact formula -- because you'll all hack the system next year -- but it was a combination of my opinion, popular acclaim in blog comments, and the opinion of Tom Grant (the previous year's winner).
I present to you: Butterflies and Beverages, posted by Ron:
It must have been a pretty meadow, Wilkes thought, just a day before. He tried to picture how it looked then: without the long, wide wound in the earth, without the charred and broken fuselage of the jet that gouged it out, before the rolling ground was strewn with papers and cushions and random bits of plastic and fabric and all the things inside the plane that lay like the confetti from a brief, fiery parade.
Ron gets signed copies of my books, a $50 Amazon gift certificate contributed by a reader, and -- if I can find one -- an interview with a real-live movie director. (Does anyone know one?) We hope that one of his prizes isn't a visit by the FBI.
EDITED TO ADD (6/27): There's an article on Slate about the contest.
They build an alternate reality where every cryptographic algorithm has been broken, and the only thing left is their own system. "The weakening of public crypto systems commenced in 1997. First it was the 40-bit key, a few months later the 48-bit key, followed by the 56-bit key, and later the 512 bit has been broken..." What are they talking about? Would you trust a cryptographer who didn't know the difference between symmetric and public-key cryptography? "Our technology... is the only unbreakable encryption commercially available." The company's founder quoted in a news article: "All other encryption methods have been compromised in the last five to six years." Maybe in their alternate reality, but not in the one we live in.
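The confusion in that quoted list is worth spelling out. The 40-, 48-, and 56-bit keys are *symmetric* keys, broken by brute-force search; a 512-bit key is an *RSA modulus*, broken by factoring. The bit counts simply aren't comparable, as this sketch illustrates (the equivalence table reflects commonly cited NIST-style guidance, quoted here from memory):

```python
# A toy sketch of why symmetric and public-key bit lengths can't be
# compared directly. Symmetric keys fall to brute force, which doubles
# in cost with every added bit:
for bits in (40, 48, 56, 128):
    print(f"{bits:3d}-bit symmetric key: {2**bits:.1e} trial decryptions")

# A 512-bit RSA modulus fell to the Number Field Sieve in 1999 -- far
# less work than 2**512 operations, because factoring is vastly easier
# than brute-forcing a 512-bit symmetric key would be.

# Commonly cited rough equivalences (symmetric bits -> RSA modulus bits):
equivalences = {80: 1024, 112: 2048, 128: 3072}
for sym, rsa in equivalences.items():
    print(f"~{sym}-bit symmetric strength ~= {rsa}-bit RSA modulus")
```

Anyone selling cryptography who treats "56-bit" and "512-bit" as points on the same scale is telling you they don't understand what they're selling.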
Read the whole thing; it's pretty funny.
They're still around, and they're still touting their snake-oil "virtual matrix encryption." (The patent is finally public, and if someone can reverse-engineer the combination of patentese and gobbledygook into an algorithm, we can finally see how awful it really is.) The tech on their website is better than it was in 2003, but it's still pretty hokey.
Back in 2005, they got their product FIPS 140-1 certified (#505 on this page). The certification was for their AES implementation, but they're sneakily implying that VME was certified. From their website: "The Strength of a Megabit Encryption (VME). The Assurance of a 256 Bit Standard (AES). Both Technologies Combined in One Certified Module! FIPS 140-2 CERTIFICATE # 505."
Just goes to show that with a bit of sleight-of-hand you can get anything FIPS 140 certified.
The recently publicized terrorist plot to blow up John F. Kennedy International Airport, like so many of the terrorist plots over the past few years, is a study in alarmism and incompetence: on the part of the terrorists, our government and the press.
Terrorism is a real threat, and one that needs to be addressed by appropriate means. But allowing ourselves to be terrorized by wannabe terrorists and unrealistic plots -- and worse, allowing our essential freedoms to be lost by using them as an excuse -- is wrong.
The alleged plan, to blow up JFK's fuel tanks and a small segment of the 40-mile petroleum pipeline that supplies the airport, was ridiculous. The fuel tanks are thick-walled, making them hard to damage. The airport tanks are separated from the pipelines by cutoff valves, so even if a fire broke out at the tanks, it would not back up into the pipelines. And the pipeline couldn't blow up in any case, since there's no oxygen to aid combustion. Not that the terrorists ever got to the stage -- or demonstrated that they could get there -- where they actually obtained explosives. Or even a current map of the airport's infrastructure.
But read what Russell Defreitas, the lead terrorist, had to say: "Anytime you hit Kennedy, it is the most hurtful thing to the United States. To hit John F. Kennedy, wow.... They love JFK -- he's like the man. If you hit that, the whole country will be in mourning. It's like you can kill the man twice."
If these are the terrorists we're fighting, we've got a pretty incompetent enemy.
You couldn't tell that from the press reports, though. "The devastation that would be caused had this plot succeeded is just unthinkable," U.S. Attorney Roslynn R. Mauskopf said at a news conference, calling it "one of the most chilling plots imaginable." Sen. Arlen Specter (R-Pennsylvania) added, "It had the potential to be another 9/11."
These people are just as deluded as Defreitas.
The only voice of reason out there seemed to be New York's Mayor Michael Bloomberg, who said: "There are lots of threats to you in the world. There's the threat of a heart attack for genetic reasons. You can't sit there and worry about everything. Get a life.... You have a much greater danger of being hit by lightning than being struck by a terrorist."
This isn't the first time a bunch of incompetent terrorists with an infeasible plot have been painted by the media as poised to do all sorts of damage to America. In May we learned about a six-man plan to stage an attack on Fort Dix by getting in disguised as pizza deliverymen and shooting as many soldiers and Humvees as they could, then retreating without losses to fight again another day. Their plan, such as it was, went awry when they took a videotape of themselves at weapons practice to a store for duplication and transfer to DVD. The store clerk contacted the police, who in turn contacted the FBI. (Thank you to the video store clerk for not overreacting, and to the FBI agent for infiltrating the group.)
I don't think these nut jobs, with their movie-plot threats, even deserve the moniker "terrorist." But in this country, while you have to be competent to pull off a terrorist attack, you don't have to be competent to cause terror. All you need to do is start plotting an attack and -- regardless of whether or not you have a viable plan, weapons or even the faintest clue -- the media will aid you in terrorizing the entire population.
The most ridiculous JFK Airport-related story goes to the New York Daily News, with its interview with a waitress who served Defreitas salmon; the front-page headline blared, "Evil Ate at Table Eight."
Following one of these abortive terror misadventures, the administration invariably jumps on the news to trumpet whatever ineffective "security" measure they're trying to push, whether it be national ID cards, wholesale National Security Agency eavesdropping or massive data mining. Never mind that in all these cases, what caught the bad guys was old-fashioned police work -- the kind of thing you'd see in decades-old spy movies.
The administration repeatedly credited the apprehension of Iyman Faris, the would-be Brooklyn Bridge bomber, to the NSA's warrantless eavesdropping programs, even though it's just not true. The 9/11 terrorists were no different; they succeeded partly because the FBI and CIA didn't follow up on the leads they had before the attacks.
Even the London liquid bombers were caught through traditional investigation and intelligence, but this doesn't stop Secretary of Homeland Security Michael Chertoff from using them to justify (.pdf) access to airline passenger data.
Of course, even incompetent terrorists can cause damage. This has been repeatedly proven in Israel, and if shoe-bomber Richard Reid had been just a little less stupid and ignited his shoes in the lavatory, he might have taken out an airplane.
So these people should be locked up ... assuming they are actually guilty, that is. Despite the initial press frenzies, the actual details of the cases frequently turn out to be far less damning. Too often it's unclear whether the defendants are actually guilty, or if the police created a crime where none existed before.
The JFK Airport plotters seem to have been egged on by an informant, a twice-convicted drug dealer. An FBI informant almost certainly pushed the Fort Dix plotters to do things they wouldn't have ordinarily done. The Miami gang's Sears Tower plot was suggested by an FBI undercover agent who infiltrated the group. And in 2003, it took an elaborate sting operation involving three countries to arrest an arms dealer for selling a surface-to-air missile to an ostensible Muslim extremist. Entrapment is a very real possibility in all of these cases.
The rest of them stink of exaggeration. Jose Padilla was not actually prepared to detonate a dirty bomb in the United States, despite histrionic administration claims to the contrary. Now that the trial is proceeding, the best the government can charge him with is conspiracy to murder, kidnap and maim, and it seems unlikely that the charges will stick. An alleged ringleader of the U.K. liquid bombers, Rashid Rauf, had charges of terrorism dropped for lack of evidence (of the 25 arrested, only 16 were charged). And now it seems like the JFK mastermind was more talk than action, too.
Remember the "Lackawanna Six," those terrorists from upstate New York who pleaded guilty in 2003 to "providing support or resources to a foreign terrorist organization"? They entered their plea because they were threatened with being removed from the legal system altogether. We have no idea if they were actually guilty, or of what.
Even under the best of circumstances, these are difficult prosecutions. Arresting people before they've carried out their plans means trying to prove intent, which rapidly slips into the province of thought crime. Regularly the prosecution uses obscure religious literature found in the defendants' homes to prove what they believe, and this can result in courtroom debates on Islamic theology. And then there's the issue of demonstrating a connection between a book on a shelf and an idea in the defendant's head, as if your reading of this article -- or purchasing of my book -- proves that you agree with everything I say. (The Atlantic recently published a fascinating article on this.)
I'll be the first to admit that I don't have all the facts in any of these cases. None of us do. So let's have some healthy skepticism. Skepticism when we read about these terrorist masterminds who were poised to kill thousands of people and do incalculable damage. Skepticism when we're told that their arrest proves that we need to give away our own freedoms and liberties. And skepticism that those arrested are even guilty in the first place.
There is a real threat of terrorism. And while I'm all in favor of the terrorists' continuing incompetence, I know that some will prove more capable. We need real security that doesn't require us to guess the tactic or the target: intelligence and investigation -- the very things that caught all these terrorist wannabes -- and emergency response. But the "war on terror" rhetoric is more politics than rationality. We shouldn't let the politics of fear make us less safe.
This essay originally appeared on Wired.com.
EDITED TO ADD (6/14): Another essay on the topic.
According to the website:
Stand alone GPS equipment is not permitted on property.
It's okay if they're embedded in your phone or computer, though.
Whoa, is this dorky.
Over two years ago, George Ledin wrote an essay in Communications of the ACM, where he advocated teaching worms and viruses to computer science majors:
Computer science students should learn to recognize, analyze, disable, and remove malware. To do so, they must study currently circulating viruses and worms, and program their own. Programming is to computer science what field training is to police work and clinical experience is to surgery. Reading a book is not enough. Why does industry hire convicted hackers as security consultants? Because we have failed to educate our majors.
No one wrote a virus for a class project. No new malware got into the wild. No new breed of supervillain graduated.
Teaching this stuff is just plain smart.
Watch this video very carefully; it's President Bush working the crowds in Albania. At 0:50 into the clip, Bush has a watch. At 1:04, the watch is gone.
The U.S. is denying that his watch was stolen:
Photographs showed Bush, surrounded by five bodyguards, putting his hands behind his back so one of the bodyguards could remove his watch.
I simply don't see that in the video. Bush's arm is out in front of him during the entire fourteen seconds between those stills.
An Albanian bodyguard who accompanied Bush in the town told The Associated Press he had seen one of his U.S. colleagues close to Bush bend down and pick up the watch.
That's certainly possible; it may have fallen off.
But possibly the pickpocket of the century. (Although would anyone actually be stupid enough to try? There must be a zillion easier-to-steal watches in that crowd, many of them nicer than Bush's.)
EDITED TO ADD (6/12): This article says that he wears a $50 Timex. It also has some more odd denials.
EDITED TO ADD (6/13): In this video, from another angle, it seems clear that Bush removes the watch himself.
Good paper: "Data Mining and the Security-Liberty Debate," by Daniel J. Solove.
Abstract: In this essay, written for a symposium on surveillance for the University of Chicago Law Review, I examine some common difficulties in the way that liberty is balanced against security in the context of data mining. Countless discussions about the trade-offs between security and liberty begin by taking a security proposal and then weighing it against what it would cost our civil liberties. Often, the liberty interests are cast as individual rights and balanced against the security interests, which are cast in terms of the safety of society as a whole. Courts and commentators defer to the government's assertions about the effectiveness of the security interest. In the context of data mining, the liberty interest is limited by narrow understandings of privacy that neglect to account for many privacy problems. As a result, the balancing concludes with a victory in favor of the security interest. But as I argue, important dimensions of data mining's security benefits require more scrutiny, and the privacy concerns are significantly greater than currently acknowledged. These problems have undermined the balancing process and skewed the results toward the security side of the scale.
My only complaint: it's not a liberty vs. security debate. Liberty is security. It's a liberty vs. control debate.
It's a growing problem in the UK:
"There are different levels of cloning. There is the simple cloning, just stealing a plate to drive into say the Congestion Charge zone or evade a speed camera.
Back in 2005, I wrote about Laszlo Kish's encryption scheme, which promises the security of quantum encryption using thermal noise. I found, and continue to find, the research fascinating -- although I don't have the electrical engineering expertise to know whether or not it's secure.
There have been developments. Kish has a new paper that not only describes a physical demonstration of the scheme, but also addresses many of the criticisms of his earlier work. And Feng Hao has a new paper that claims the scheme is totally insecure.
Again, I don't have the EE background to know who's right. But this is exactly the sort of back-and-forth I want to see.
Friday Squid Blogging: "Invisibility Cloak Materials Made from Reflective Self-Assembling Squid Proteins"
A new study into the biophysical properties of a highly reflective and self-organizing squid protein called reflectin will inform researchers about the process of "bottom-up" synthesis of nanoscale structures and could lead to the development of thin-film coatings for microstructured materials, bringing scientists one step closer to the development of an "invisibility cloak."
New developments in malware:
Finjan reports an increasing trend for "evasive" web attacks, which keep track of visitors' IP addresses. Attack toolkits restrict access to a single-page view from each unique IP address. The second time an IP address tries to access the malicious page, a benign page is displayed in its place.
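The mechanism Finjan describes is simple enough to sketch. Assuming a generic request handler (all names here are illustrative, not from the report), the toolkit only has to remember which IP addresses have already fetched the page:

```python
# Sketch of the "evasive" serving logic: the attack toolkit remembers
# which IPs have already fetched the page, so a repeat visit from the
# same address -- say, an analyst re-fetching a reported URL -- sees
# only a benign page. Names are hypothetical.
seen_ips = set()

def serve_page(client_ip):
    """Return the exploit page on a client's first visit, a benign page after."""
    if client_ip in seen_ips:
        return "benign.html"   # repeat visit: crawler or analyst sees nothing
    seen_ips.add(client_ip)
    return "exploit.html"      # first visit: the real payload
```

Which is why a security scanner that re-fetches a reported URL from the same address never sees the malicious version, and why analysts respond by rotating source IP addresses.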
Just another step in the neverending arms race of network security.
It's not cryptography -- despite the name -- but it's interesting:
DNA-based watermarks using the DNA-Crypt algorithm
The DHS wants universities to inventory a long list of chemicals:
Unusual paranoia over chemical attack in the US takes many forms. It can be seen in a recent piece of trouble from the Department of Homeland Security, a long list of "chemicals of interest" it wishes to require all university settings to inventory.
Interesting stuff about specific chemicals in the article.
Somehow, I don't see either becoming a mass-market consumer item, although I can certainly imagine military facilities installing the latter.
Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department's Blackberries or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision. This issue's articles on managing organizational security make this point clear.
Below is a diagram of a security decision. At its core are assets, which a security system protects. Security can fail in two ways: either attackers can successfully bypass it, or it can mistakenly block legitimate users. There are, of course, more users than attackers, so the second kind of failure is often more important. There's also a feedback mechanism with respect to security countermeasures: both users and attackers learn about the security and its failings. Sometimes they learn how to bypass security, and sometimes they learn not to bother with the asset at all.
Threats are complicated: attackers have certain goals, and they implement specific attacks to achieve them. Attackers can be legitimate users of assets, as well (imagine a terrorist who needs to travel by air, but eventually wants to blow up a plane). And a perfectly reasonable outcome of defense is attack diversion: the attacker goes after someone else's asset instead.
Asset owners control the security system, but not directly. They implement security through some sort of policy -- either formal or informal -- that some combination of trusted people and trusted systems carries out. Owners are affected by risks ... but really, only by perceived risks. They're also affected by a host of other considerations, including those legitimate users mentioned previously, and the trusted people needed to implement the security policy.
Looking over the diagram, it's obvious that the effectiveness of security is only a minor consideration in an asset owner's security decision. And that's how it should be.
Whether a security countermeasure repels or allows attacks (green and red arrows, respectively) is just a small consideration when making a security trade-off.
This essay originally appeared in IEEE Security and Privacy.
Great article on perceived vs actual risks to children:
The risk of abduction remains tiny. In Britain, there are now half as many children killed every year in road accidents as there were in 1922 -- despite a more than 25-fold increase in traffic.
EDITED TO ADD (6/9): More commentary.
The Data Privacy and Integrity Advisory Committee of the Department of Homeland Security has issued an excellent report on REAL ID:
The REAL ID Act is one of the largest identity management undertakings in history. It would bring more than 200 million people from a large, diverse, and mobile country within a uniformly defined identity system, jointly operated by state governments. This has never been done before in the USA, and it raises numerous policy, privacy, and data security issues that have had only brief scrutiny, particularly given the scope and scale of the undertaking.
I've written about REAL ID here.
Interesting use of the technology, although I'm sure it has more value on the battlefield than to detect poachers.
The system consists of a network of foot-long metal detectors similar to those used in airports. When moving metal objects such as a machete or a rifle trip the sensor, it sends a radio signal to a wireless Internet gateway camouflaged in the tree canopy as far as a kilometer away. This signal is transmitted via satellite to the Internet, where the incident is logged and messages revealing the poachers' position and direction are sent instantly to park headquarters, where patrols can then be dispatched.
Lots of good stuff. The nine research areas:
And this implies they've accepted the problem:
Cyber attacks are increasing in frequency and impact. Even though these attacks have not yet had a significant impact on our Nation's critical infrastructures, they have demonstrated that extensive vulnerabilities exist in information systems and networks, with the potential for serious damage. The effects of a successful cyber attack might include: serious consequences for major economic and industrial sectors, threats to infrastructure elements such as electric power, and disruption of the response and communications capabilities of first responders.
It's good to see research money going to this stuff.
The majority of terrorist attacks result in no fatalities, with just 1 percent of such attacks causing the deaths of 25 or more people.
A lot of this depends on your definition of "terrorism," but it's interesting stuff.
The database was developed by the National Consortium for the Study of Terrorism and Responses to Terrorism (START) based at the University of Maryland, with funding from the U.S. Department of Homeland Security. It includes unclassified information about 80,000 terror incidents that occurred from 1970 through 2004.
The database is here:
The Global Terrorism Database (GTD) is an open-source database including information on terrorist events around the world since 1970 (currently updated through 2004). Unlike many other event databases, the GTD includes systematic data on international as well as domestic terrorist incidents that have occurred during this time period and now includes almost 80,000 cases. For each GTD incident, information is available on the date and location of the incident, the weapons used and nature of the target, the number of casualties, and -- when identifiable -- the identity of the perpetrator.
On April 1, I announced the Second Annual Movie-Plot Threat Contest:
Your goal: invent a terrorist plot to hijack or blow up an airplane with a commonly carried item as a key component. The component should be so critical to the plot that the TSA will have no choice but to ban the item once the plot is uncovered. I want to see a plot horrific and ridiculous, but just plausible enough to take seriously.
Well, the submissions are in; the blog entry has 334 comments. I've read them all, and here are the semi-finalists:
Cast your vote; I'll announce the winner on the 15th.
U.S. courts are weighing in with opinions:
When Ray Andrus' 91-year-old father gave federal agents permission to search his son's password-protected computer files and they found child pornography, the case turned a spotlight on how appellate courts grapple with third-party consents to search computers.
Excellent commentary from Jennifer Granick:
The Fourth Amendment generally prohibits warrantless searches of an individual's home or possessions. There is an exception to the warrant requirement when someone consents to the search. Consent can be given by the person under investigation, or by a third party with control over or mutual access to the property being searched. Because the Fourth Amendment only prohibits "unreasonable searches and seizures," permission given by a third party who lacks the authority to consent will nevertheless legitimize a warrantless search if the consenter has "apparent authority," meaning that the police reasonably believed that the person had actual authority to control or use the property.
...despite the use of encryption, a passive eavesdropper can still learn private information about what someone is watching via their Slingbox Pro.
More details in the paper.
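The underlying idea is that variable-bitrate video encoding makes the encrypted stream's throughput over time track the content, so a passive observer can match a captured trace against throughput "fingerprints" of known movies. A minimal sketch of that matching (my own illustration, not the paper's exact method; all names are hypothetical):

```python
# Sketch: bin captured (timestamp, byte_count) packets into a
# bytes-per-second throughput trace, then find the closest known
# fingerprint. Real traces need alignment and noise handling; this
# only illustrates the principle.
def throughput_trace(packets, window=1.0):
    """Bin (timestamp, byte_count) pairs into bytes-per-window buckets."""
    if not packets:
        return []
    start, end = packets[0][0], packets[-1][0]
    trace = [0] * (int((end - start) // window) + 1)
    for t, size in packets:
        trace[int((t - start) // window)] += size
    return trace

def best_match(trace, fingerprints):
    """Return the name of the fingerprint closest to the observed trace."""
    def dist(a, b):
        m = min(len(a), len(b))
        return sum((a[i] - b[i]) ** 2 for i in range(m))
    return min(fingerprints, key=lambda name: dist(trace, fingerprints[name]))
```

The point is that encryption hides the bytes but not their timing and volume, which is exactly the side channel the paper exploits.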
I haven't posted anything about the cyberwar between Russia and Estonia because, well, because I didn't think there was anything new to say. We know that this kind of thing is possible. We don't have any definitive proof that Russia was behind it. But it would be foolish to think that the world's various militaries don't have capabilities like this.
And anyway, I wrote about cyberwar back in January 2005.
But it seems that the essay never made it into the blog. So here it is again.
The first problem with any discussion about cyberwar is definitional. I've been reading about cyberwar for years now, and there seem to be as many definitions of the term as there are people who write about the topic. Some people try to limit cyberwar to military actions taken during wartime, while others are so inclusive that they include the script kiddies who deface websites for fun.
I think the restrictive definition is more useful, and would like to define four different terms as follows:
Cyberwar -- Warfare in cyberspace. This includes warfare attacks against a nation's military -- forcing critical communications channels to fail, for example -- and attacks against the civilian population.
Cyberterrorism -- The use of cyberspace to commit terrorist acts. An example might be hacking into a computer system to cause a nuclear power plant to melt down, a dam to open, or two airplanes to collide. In a previous Crypto-Gram essay, I discussed how realistic the cyberterrorism threat is.
Cybercrime -- Crime in cyberspace. This includes much of what we've already experienced: theft of intellectual property, extortion based on the threat of DDOS attacks, fraud based on identity theft, and so on.
Cybervandalism -- The script kiddies who deface websites for fun are technically criminals, but I think of them more as vandals or hooligans. They're like the kids who spray paint buses: in it more for the thrill than anything else.
At first glance, there's nothing new about these terms except the "cyber" prefix. War, terrorism, crime, even vandalism are old concepts. That's correct; the only thing new is the domain. It's the same old stuff occurring in a new arena. But because the arena of cyberspace is different from other arenas, there are differences worth considering.
One thing that hasn't changed is that the terms overlap: although the goals are different, many of the tactics used by armies, terrorists, and criminals are the same. Just as all three groups use guns and bombs, all three groups can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime, or even -- if it's done by some fourteen-year-old who doesn't really understand what he's doing -- cybervandalism. Which it is will depend on the motivations of the attacker and the circumstances surrounding the attack...just as in the real world.
For it to be cyberwar, it must first be war. And in the 21st century, war will inevitably include cyberwar. For just as war moved into the air with the development of kites and balloons and then aircraft, and war moved into space with the development of satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.
The Waging of Cyberwar
There should be no doubt that the smarter and better-funded militaries of the world are planning for cyberwar, both attack and defense. It would be foolish for a military to ignore the threat of a cyberattack and not invest in defensive capabilities, or to disregard the strategic or tactical possibility of launching an offensive cyberattack against an enemy during wartime. And while history has taught us that many militaries are indeed foolish and ignore the march of progress, cyberwar has been discussed too much in military circles to be ignored.
This implies that at least some of our world's militaries have Internet attack tools that they're saving in case of wartime. They could be denial-of-service tools. They could be exploits that would allow military intelligence to penetrate military systems. They could be viruses and worms similar to what we're seeing now, but perhaps country- or network-specific. They could be Trojans that eavesdrop on networks, disrupt network operations, or allow an attacker to penetrate still other networks.
Script kiddies are attackers who run exploit code written by others, but don't really understand the intricacies of what they're doing. By contrast, professional attackers spend an enormous amount of time developing exploits: finding vulnerabilities, writing code to exploit them, figuring out how to cover their tracks. The real professionals don't release their code to the script kiddies; the stuff is much more valuable if it remains secret until it is needed. I believe that militaries have collections of vulnerabilities in common operating systems, generic applications, or even custom military software that their potential enemies are using, and code to exploit those vulnerabilities. I believe that these militaries are keeping these vulnerabilities secret, and that they are saving them in case of wartime or other hostilities. It would be irresponsible for them not to.
The most obvious cyberattack is the disabling of large parts of the Internet, at least for a while. Certainly some militaries have the capability to do this, but in the absence of global war I doubt that they would do so; the Internet is far too useful an asset and far too large a part of the world economy. More interesting is whether they would try to disable national pieces of it. If Country A went to war with Country B, would Country A want to disable Country B's portion of the Internet, or remove connections between Country B's Internet and the rest of the world? Depending on the country, a low-tech solution might be the easiest: disable whatever undersea cables they're using as access. Could Country A's military turn its own Internet into a domestic-only network if they wanted?
For a more surgical approach, we can also imagine cyberattacks designed to destroy particular organizations' networks, such as the denial-of-service attack against the Al Jazeera website during the recent Iraqi war, allegedly by pro-American hackers but possibly by the government. We can imagine a cyberattack against the computer networks at a nation's military headquarters, or the computer networks that handle logistical information.
One important thing to remember is that destruction is the last thing a military wants to do with a communications network. A military only wants to shut an enemy's network down if they aren't getting useful information from it. The best thing to do is to infiltrate the enemy's computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, the next best is to perform traffic analysis: analyze who is talking to whom and the characteristics of that communication. Only if a military can't do any of that do they consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh all of the advantages.
Properties of Cyberwar
Because attackers and defenders use the same network hardware and software, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the "equities issue," and it can be summarized as follows. When a military discovers a vulnerability in a common product, they can either alert the manufacturer and fix the vulnerability, or not tell anyone. It's not an easy decision. Fixing the vulnerability gives both the good guys and the bad guys a more secure system. Keeping the vulnerability secret means that the good guys can exploit the vulnerability to attack the bad guys, but it also means that the good guys are vulnerable. As long as everyone uses the same microprocessors, operating systems, network protocols, applications software, etc., the equities issue will always be a consideration when planning cyberwar.
Cyberwar can take on aspects of espionage, and does not necessarily involve open warfare. (In military talk, cyberwar is not necessarily "hot.") Since much of cyberwar will be about seizing control of a network and eavesdropping on it, there may not be any obvious damage from cyberwar operations. This means that the same tactics might be used in peacetime by national intelligence agencies. There's considerable risk here. Just as U.S. U-2 flights over the Soviet Union could have been viewed as an act of war, the deliberate penetration of a country's computer networks might be as well.
Cyberattacks target infrastructure. In this way they are no different than conventional military attacks against other networks: power, transportation, communications, etc. All of these networks are used by both civilians and the military during wartime, and attacks against them inconvenience both groups of people. For example, when the Allies bombed German railroad bridges during World War II, that affected both civilian and military transport. And when the United States bombed Iraqi communications links in both the First and Second Iraqi Wars, that affected both civilian and military communications. Cyberattacks, even attacks targeted as precisely as today's smart bombs, are likely to have collateral effects.
Cyberattacks can be used to wage information war. Information war is another topic that's received considerable media attention of late, although it is not new. Dropping leaflets on enemy soldiers to persuade them to surrender is information war. Broadcasting radio programs to enemy troops is information war. As people get more and more of their information over cyberspace, cyberspace will increasingly become a theater for information war. It's not hard to imagine cyberattacks designed to co-opt the enemy's communications channels and use them as a vehicle for information war.
Because cyberwar targets information infrastructure, the waging of it can be more damaging to countries that have significant computer-network infrastructure. The idea is that a technologically poor country might decide that a cyberattack that affects the entire world would disproportionately affect its enemies, because rich nations rely on the Internet much more than poor ones. In some ways this is the dark side of the digital divide, and one of the reasons countries like the United States are so worried about cyberdefense.
Cyberwar is asymmetric, and can be a guerrilla attack. Unlike conventional military offensives involving divisions of men and supplies, cyberattacks are carried out by a few trained operatives. In this way, cyberattacks can be part of a guerrilla warfare campaign.
Cyberattacks also make effective surprise attacks. For years we've heard dire warnings of an "electronic Pearl Harbor." These are largely hyperbole today. I discuss this more in that previous Crypto-Gram essay on cyberterrorism, but right now the infrastructure just isn't sufficiently vulnerable in that way.
Cyberattacks do not necessarily have an obvious origin. Unlike in other forms of warfare, misdirection is likely to be a deliberate feature of a cyberattack. It's possible to have damage being done, but not know where it's coming from. This is a significant difference; there's something terrifying about not knowing your opponent -- or knowing it, and then being wrong. Imagine if, after Pearl Harbor, we hadn't known who attacked us.
Cyberwar is a moving target. In the previous paragraph, I said that today the risks of an electronic Pearl Harbor are unfounded. That's true; but this, like all other aspects of cyberspace, is continually changing. Technological improvements affect everyone, including cyberattack mechanisms. And the Internet is becoming critical to more of our infrastructure, making cyberattacks more attractive. There will be a time in the future, perhaps not too far into the future, when a surprise cyberattack becomes a realistic threat.
And finally, cyberwar is a multifaceted concept. It's part of a larger military campaign, and attacks are likely to have both real-world and cyber components. A military might target the enemy's communications infrastructure through both physical attack -- bombings of selected communications facilities and transmission cables -- and virtual attack. An information warfare campaign might include dropping of leaflets, usurpation of a television channel, and mass sending of e-mail. And many cyberattacks still have easier non-cyber equivalents: A country wanting to isolate another country's Internet might find a low-tech solution, involving the acquiescence of backbone companies like Cable & Wireless, easier than a targeted worm or virus. Cyberwar doesn't replace war; it's just another arena in which the larger war is fought.
People overplay the risks of cyberwar and cyberterrorism. It's sexy, and it gets media attention. And at the same time, people underplay the risks of cybercrime. Today crime is big business on the Internet, and it's getting bigger all the time. But luckily, the defenses are the same. The countermeasures aimed at preventing both cyberwar and cyberterrorist attacks will also defend against cybercrime and cybervandalism. So even if organizations secure their networks for the wrong reasons, they'll do the right thing.
Here's my previous essay on cyberterrorism.
For the third time in ten years, massive numbers of Humboldt squid have been flourishing in the waters of Southern California.
Newfoundland naturalist Moses Harvey collected the first complete specimen of a giant squid in December 1873.
More from camera-happy England.