Blog: June 2007 Archives

Essay on Fear

“The only thing we have to fear is the ‘culture of fear’ itself,” by Frank Furedi:

Fear plays a key role in twenty-first century consciousness. Increasingly, we seem to engage with various issues through a narrative of fear. You could see this trend emerging and taking hold in the last century, which was frequently described as an ‘Age of Anxiety’. But in recent decades, it has become more and better defined, as specific fears have been cultivated.

Posted on June 29, 2007 at 6:38 AM • 30 Comments

Risks of Data Reuse

We learned the news in March: Contrary to decades of denials, the U.S. Census Bureau used individual records to round up Japanese-Americans during World War II.

The Census Bureau normally is prohibited by law from revealing data that could be linked to specific individuals; the law exists to encourage people to answer census questions accurately and without fear. And while the Second War Powers Act of 1942 temporarily suspended that protection in order to locate Japanese-Americans, the Census Bureau had maintained that it only provided general information about neighborhoods.

New research proves they were lying.

The whole incident serves as a poignant illustration of one of the thorniest problems of the information age: data collected for one purpose and then used for another, or “data reuse.”

When we think about our personal data, what bothers us most is generally not the initial collection and use, but the secondary uses. I personally appreciate it when Amazon.com suggests books that might interest me, based on books I have already bought. I like it that my airline knows what type of seat and meal I prefer, and my hotel chain keeps records of my room preferences. I don’t mind that my automatic road-toll collection tag is tied to my credit card, and that I get billed automatically. I even like the detailed summary of my purchases that my credit card company sends me at the end of every year. What I don’t want, though, is any of these companies selling that data to brokers, or for law enforcement to be allowed to paw through those records without a warrant.

There are two bothersome issues about data reuse. First, we lose control of our data. In all of the examples above, there is an implied agreement between the data collector and me: It gets the data in order to provide me with some sort of service. Once the data collector sells it to a broker, though, it’s out of my hands. It might show up on some telemarketer’s screen, or in a detailed report to a potential employer, or as part of a data-mining system to evaluate my personal terrorism risk. It becomes part of my data shadow, which always follows me around but I can never see.

This, of course, affects our willingness to give up personal data in the first place. The reason U.S. census data was declared off-limits for other uses was to placate Americans’ fears and assure them that they could answer questions truthfully. How accurate would you be in filling out your census forms if you knew the FBI would be mining the data, looking for terrorists? How would it affect your supermarket purchases if you knew people were examining them and making judgments about your lifestyle? I know many people who engage in data poisoning: deliberately lying on forms in order to propagate erroneous data. I’m sure many of them would stop that practice if they could be sure that the data was only used for the purpose for which it was collected.

The second issue about data reuse is error rates. All data has errors, and different uses can tolerate different amounts of error. The sorts of marketing databases you can buy on the web, for example, are notoriously error-filled. That’s OK; if the database of ultra-affluent Americans of a particular ethnicity you just bought has a 10 percent error rate, you can factor that cost into your marketing campaign. But that same database, with that same error rate, might be useless for law enforcement purposes.
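
To make the arithmetic concrete, here is a back-of-the-envelope sketch (Python; all of the numbers are illustrative assumptions, not figures from any real database):

    records = 100_000        # entries in the purchased marketing database
    error_rate = 0.10        # the 10 percent error rate from above

    # Marketing use: a bad entry wastes one mailing.
    cost_per_mailing = 0.50  # dollars, assumed
    wasted = records * error_rate * cost_per_mailing
    print(f"Marketing: ~${wasted:,.0f} in wasted mailings, a line item.")

    # Law-enforcement use: a bad entry is an innocent person flagged.
    false_hits = int(records * error_rate)
    print(f"Law enforcement: ~{false_hits:,} people wrongly flagged.")

The same number that is a rounding error in a marketing budget is ten thousand wrongly flagged people in a law-enforcement database.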

Understanding error rates and how they propagate is vital when evaluating any system that reuses data, especially for law enforcement purposes. A few years ago, the Transportation Security Administration’s follow-on watch list system, Secure Flight, was going to use commercial data to give people a terrorism risk score and determine how much they were going to be questioned or searched at the airport. People rightly rebelled against the thought of being judged in secret, but there was much less discussion about whether the commercial data from credit bureaus was accurate enough for this application.

An even more egregious example of error-rate problems occurred in 2000, when the Florida Division of Elections contracted with Database Technologies (since merged with ChoicePoint) to remove convicted felons from the voting rolls. The databases used were filled with errors and the matching procedures were sloppy, which resulted in thousands of disenfranchised voters—mostly black—and almost certainly changed a presidential election result.
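
That kind of sloppy matching is easy to reproduce. Here is a minimal sketch (Python, using the standard library’s difflib; the names and the similarity threshold are invented for illustration, not Database Technologies’ actual procedure):

    from difflib import SequenceMatcher

    def similarity(a: str, b: str) -> float:
        """Crude string similarity in [0, 1]."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    felon_list = ["John Michaels", "Willie Washington Jr"]
    voter_roll = ["John Michels", "Willie Washington Sr", "Joan Michaelson"]

    THRESHOLD = 0.80  # a loose cutoff like this sweeps in near-misses

    for voter in voter_roll:
        for felon in felon_list:
            score = similarity(voter, felon)
            if score >= THRESHOLD:
                # Every "match" here is a voter at risk of being purged.
                print(f"purge candidate: {voter!r} ~ {felon!r} ({score:.2f})")

All three voters “match” at this threshold, including a “Sr” matched to a “Jr”; loosen the cutoff and the false positives multiply.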

Of course, there are beneficial uses of secondary data. Take, for example, personal medical data. It’s personal and intimate, yet valuable to society in aggregate. Think of what we could do with a database of everyone’s health information: massive studies examining the long-term effects of different drugs and treatment options, different environmental factors, different lifestyle choices. There’s an enormous amount of important research potential hidden in that data, and it’s worth figuring out how to get at it without compromising individual privacy.

This is largely a matter of legislation. Technology alone can never protect our rights. There are just too many reasons not to trust it, and too many ways to subvert it. Data privacy ultimately stems from our laws, and strong legal protections are fundamental to protecting our information against abuse. But at the same time, technology is still vital.

Both the Japanese internment and the Florida voting-roll purge demonstrate that laws can change … and sometimes change quickly. We need to build systems with privacy-enhancing technologies that limit data collection wherever possible. Data that is never collected cannot be reused. Data that is collected anonymously, or deleted immediately after it is used, is much harder to reuse. It’s easy to build systems that collect data on everything—it’s what computers naturally do—but it’s far better to take the time to understand what data is needed and why, and only collect that.
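
Here is what that principle looks like in practice, as a minimal sketch of collect-less/delete-early (Python; the field names, the keyed-hash key, and the 30-day retention window are all assumptions for illustration):

    import hashlib
    import hmac
    import time

    RETENTION_SECONDS = 30 * 24 * 3600   # assumed 30-day retention policy
    SECRET_KEY = b"rotate-me-regularly"  # placeholder key for the keyed hash

    def minimize(raw_event: dict) -> dict:
        """Keep only what the stated purpose (billing) needs."""
        return {
            "amount": raw_event["amount"],
            "timestamp": raw_event["timestamp"],
            # Pseudonymize the identifier with a keyed hash; a bare hash
            # of a low-entropy ID could be reversed by brute force.
            "account": hmac.new(SECRET_KEY,
                                raw_event["account"].encode(),
                                hashlib.sha256).hexdigest(),
            # Location, items purchased, etc. are simply never stored.
        }

    def purge_expired(records: list) -> list:
        """Data deleted on schedule cannot be reused later."""
        cutoff = time.time() - RETENTION_SECONDS
        return [r for r in records if r["timestamp"] >= cutoff]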

History will record what we, here in the early decades of the information age, did to foster freedom, liberty and democracy. Did we build information technologies that protected people’s freedoms even during times when society tried to subvert them? Or did we build technologies that could easily be modified to watch and control? It’s bad civic hygiene to build an infrastructure that can be used to facilitate a police state.

This article originally appeared on Wired.com

Posted on June 28, 2007 at 8:34 AM • 46 Comments

Designing Voting Machines to Minimize Coercion

If someone wants to buy your vote, he’d like some proof that you’ve delivered the goods. Camera phones are one way for you to prove to your buyer that you voted the way he wants. Belgian voting machines have been designed to minimize that risk.

Once you have confirmed your vote, the next screen doesn’t display how you voted. So if one is coerced and has to deliver proof, one just has to take a picture of the vote one was coerced into, and then back out from the screen and change one’s vote. The only workaround I see is for the coercer to demand a video of the complete voting process, instead of a picture of the ballot.

The author is wrong that this is an advantage electronic ballots have over paper ballots. Paper voting systems can be designed with the same security features.
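
The flow is simple enough to sketch as a toy state machine (Python; this is my illustration of the behavior described above, not the actual Belgian software):

    class VotingMachine:
        """Sketch of the anti-coercion flow described above."""

        def __init__(self):
            self._choice = None

        def select(self, candidate: str) -> str:
            self._choice = candidate
            # The selection screen shows the choice; this is what a
            # coerced voter photographs as "proof."
            return f"You selected: {candidate}. [Back] [Confirm]"

        def back(self) -> str:
            # ...then the voter backs out and quietly changes the vote.
            self._choice = None
            return "Select a candidate."

        def confirm(self) -> str:
            # Crucially, the post-confirmation screen never echoes the
            # vote, so nothing photographed after this point proves how
            # the ballot was actually cast.
            return "Your vote has been recorded. Thank you."

The coerced voter calls select() on the demanded candidate, photographs the screen, calls back(), selects the real choice, and only then confirms.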

Posted on June 27, 2007 at 12:09 PM • 32 Comments

America's Newfound Love of Secrecy

Really good Washington Post article on secrecy:

But the notion that information is more credible because it’s secret is increasingly unfounded. In fact, secret information is often more suspect because it hasn’t been subjected to open debate. Those with their own agendas can game the system, over-classifying or stove-piping self-serving intelligence to shield it from scrutiny. Those who cherry-picked intelligence in the run-up to the Iraq war could ignore anything that contradicted it. Even now, some members of Congress tell me that they avoid reading classified reports for fear that if they do, the edicts of secrecy will bar them from discussing vital public issues.

Real secrets—blueprints for nuclear weapons, specific troop movements, the identities of covert operatives in the field—deserve to be safeguarded. But when secrecy is abused, the result is a dangerous disdain that leads to officials exploiting secrecy for short-term advantage (think of the Valerie Plame affair or the White House leaking selected portions of National Intelligence Estimates to bolster flagging support for the Iraq war). Then disregard for the real need for secrecy spreads to the public. WhosaRat.com reveals the names of government witnesses in criminal cases. Other Web sites seek to out covert operatives or to post sensitive security documents online.

Back in 2002 I wrote about the relationship between secrecy and security.

Posted on June 27, 2007 at 6:58 AM • 25 Comments

Credit Card Gas Limits

Here’s an interesting phenomenon: rising gas costs have pushed up a lot of legitimate transactions to the “anti-fraud” ceiling.

Security is a trade-off, and now the ceiling is annoying more and more legitimate gas purchasers. But to me the real question is: does this ceiling have any actual security purpose?

In general, credit card fraudsters like making gas purchases because the system is automated: no signature is required, and there’s no need to interact with any other person. In fact, buying gas is the most common way a fraudster tests that a recently stolen card is valid. The anti-fraud ceiling doesn’t actually prevent any of this, but limits the amount of money at risk.
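
In other words, the ceiling is a cap on exposure, not a fraud detector. A sketch of the logic (Python; the $75 figure is an assumption for illustration):

    PUMP_CEILING = 75.00  # assumed per-transaction cap, in dollars

    def authorize(requested: float) -> float:
        """The ceiling doesn't decide whether the card is stolen; it only
        bounds how much any one transaction, fraudulent or not, can pump
        before the dispenser shuts off."""
        # Note what's missing: no fraud scoring, no velocity check, no
        # call to the cardholder. Legitimate fill-ups hit the same cap.
        return min(requested, PUMP_CEILING)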

But so what? How many perps are actually trying to get more gas than is permitted? Are credit-card-stealing miscreants also swiping cars with enormous gas tanks, or merely filling up the passenger cars they regularly drive? I’d love to know how many times, prior to the run-up in gas prices, a triggered cutoff actually coincided with a subsequent report of a stolen card. And what’s the effect of a ceiling, apart from a gas shut-off? Surely the smart criminals know about smurfing, if they need more gas than the ceiling will allow.

The Visa spokesperson said, “We get more calls, questions, when gas prices increase.” He/she didn’t say: “We make more calls to see if fraud is occurring.” So the only inquiries made may be in the cases where fraud isn’t occurring.

Posted on June 26, 2007 at 1:21 PM • 60 Comments

Surveillance Cameras that Obscure Faces

From Technology Review:

A camera developed by computer scientists at the University of California, Berkeley, would obscure, with an oval, the faces of people who appear on surveillance videos. These so-called respectful cameras, which are still in the research phase, could be used for day-to-day surveillance applications and would allow for the privacy oval to be removed from a given set of footage in the event of an investigation.

An interesting privacy-enhancing technology.
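
Here is one way such a camera might plausibly work. This is my sketch using stock OpenCV face detection, not the Berkeley system, and the encrypt-and-escrow step is my assumption about how the oval could be made removable for investigations:

    import cv2
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()  # held by whoever authorizes investigations
    vault = Fernet(key)

    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def respectful_frame(frame):
        """Obscure each detected face with a filled oval, keeping the
        original pixels encrypted so a court order can undo the masking."""
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        sealed = []
        for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
            face = frame[y:y + h, x:x + w].copy()
            sealed.append(((x, y, w, h), vault.encrypt(face.tobytes())))
            cv2.ellipse(frame, (x + w // 2, y + h // 2), (w // 2, h // 2),
                        0, 0, 360, (0, 0, 0), -1)  # the "privacy oval"
        return frame, sealed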

Posted on June 26, 2007 at 7:41 AM • 55 Comments

4th Amendment Rights Extended to E-Mail

This is a great piece of news in the U.S. For the first time, e-mail has been granted the same constitutional protections as telephone calls and personal papers: the police need a warrant to get at it. Now it’s only a circuit court decision—the Sixth U.S. Circuit Court of Appeals in Ohio—it’s pretty narrowly defined based on the attributes of the e-mail system, and it has a good chance of being overturned by the Supreme Court…but it’s still great news.

The way to think of the warrant system is as a security device. The police still have the ability to get access to e-mail in order to investigate a crime. But in order to prevent abuse, they have to convince a neutral third party—a judge—that accessing someone’s e-mail is necessary to investigate that crime. That judge, at least in theory, protects our interests.

Clearly e-mail deserves the same protection as our other personal papers, but—like phone calls—it might take the courts decades to figure that out. But we’ll get there eventually.

Posted on June 25, 2007 at 4:13 PM • 27 Comments

Cell Phone Stalking

Does this seem real to anyone?

Somehow, the callers have gained control of the family cell phones, Price and Kuykendall say. Messages received by the sisters include snatches of conversation overheard on cell-phone mikes, replayed and transmitted via voice mail. Phone records show many of the messages coming from Courtney’s phone, even when she’s not using it—even when it’s turned off.

Price and Kuykendall say the stalkers knew when they visited Fircrest police and sent a voice-mail message that included a portion of their conversation with a detective.

The harassment seems to center on Courtney, but it extends to her parents, her aunt Darcy and Courtney’s friends, including Taylor McKay, who lives across the street in Fircrest. Her mother, Andrea McKay, has received messages similar to those left at the Kuykendall household and cell phone bills approaching $1,000 for one month. She described one recent call: She was slicing limes in the kitchen. The stalkers left a message, saying they preferred lemons.

“Taylor and Courtney seem to be the hub of the harassment, and different people have branched off from there,” Andrea McKay said. “I don’t know how they’re doing it. They were able to get Taylor’s phone number through Courtney’s phone, and every contact was exposed.”

McKay, a teacher in the Peninsula School District, said she and Taylor recently explained the threats to the principal at Gig Harbor High School, which Taylor attends. A Gig Harbor police officer sat in on the conversation, she said.

While the four people talked, Taylor’s and Andrea’s phones, which were switched off, sat on a table. While mother and daughter spoke, Taylor’s phone switched on and sent a text message to her mother’s phone, Andrea said.

Here’s another report.

There’s something going on here, but I just don’t believe it’s entirely cell phone hacking. Something else is going on.

Posted on June 25, 2007 at 1:13 PM

Cocktail Condoms

They’re protective covers that go over your drink and “protect” against someone trying to slip a Mickey Finn (or whatever they’re called these days):

The concept behind the cocktail cover is fairly simple. About the size of a coaster, it can be used to cap a drink that goes unattended. When a person returns to a beverage, there is a layer that can be pulled back, leaving a thin sheath protecting the cocktail. That can be punctured with a straw or pulled off entirely—either way the drinker will know that the cocktail has not been tampered with.

I’m sure there are many ways to defeat this security device if you’re so inclined: using a syringe, affixing a new cover after you tamper with the drink, and so on. And this is exactly the sort of rare risk we’re likely to overreact to. But to me, the most interesting aspect of this story is the agenda. If these things become common, it won’t be because of security. It will be because of advertising:

Barry said that companies could advertise on the cocktail covers, likely covering the cost of production. Each cover, he said, costs less than 10 cents to make.

Posted on June 25, 2007 at 6:25 AM • 42 Comments

The Onion on Terrorist Cell Apathy

Funny:

“We remain wholly committed to the destruction of America, the Great Satan,” al-Sharif said. “But now is not a good time for us. The season finale of Lost was such a cliff-hanger that we have to at least catch the first episode of the new season. After that, though, death to the infidels.”

“Probably,” added al-Sharif, who noted that his nearly $6,000 in credit-card debt from recent purchases of a 52-inch HDTV and a backyard gas grill prevents him from buying needed materials for the attack.

Though the members of the cell said that they “live only to spill the blood of crusaders who oppress Muslims,” they cited additional reasons for the delay, including an unexpired free Netflix trial and nagging lower-back pain.

“I think I’m entitled to a little time to fully enjoy the in-dash MP3 adapter and heads-up display that Allah, in His infinite wisdom, has seen fit to provide me with,” munitions expert Mohammed Akram said of the 2006 Mercury Mariner that is intended to be used as a car bomb during the attack. “Also, I have nine months left on the lease. But after that, I am more than willing to load it with explosives and go to my glory in its all-leather interior and heated seats.”

Posted on June 23, 2007 at 11:57 AM • 27 Comments

TSA Uses Monte Carlo Simulations to Weigh Airplane Risks

Does this make sense to anyone?

TSA said Boeing would use its Monte Carlo simulation model “to identify U.S. commercial aviation system vulnerabilities against a wide variety of attack scenarios.”

The Monte Carlo method refers to several ways of using randomly generated numbers fed into a computer simulation many times to estimate the likelihood of an event, specialists in the field say.

The Monte Carlo method plays an important role in many statistical techniques used to characterize risks, such as the probabilistic risk analysis approach used to evaluate possible problems at a nuclear power plant and their consequences.

Boeing engineers have pushed the mathematical usefulness of the Monte Carlo method forward largely by applying the technique to evaluating the risks and consequences of aircraft component failures.

A DHS source said the work of the U.S. Commercial Aviation Partnership, a group of government and industry organizations, had made TSA officials aware of the potential applicability of the Monte Carlo method to building an RMAT for the air travel system.

A paper by four Boeing technologists and a TSA official describing the RMAT model appeared recently in Interfaces, a scholarly journal covering operations research.

I can’t imagine how random simulations are going to be all that useful in evaluating airplane threats, as the adversary we’re worried about isn’t particularly random—and, in fact, is motivated to target his attacks directly at the weak points in any security measures.
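
To be fair to the method itself, here is what a Monte Carlo estimate looks like in miniature (Python; the layer probabilities are invented, which is rather the point):

    import random

    def attack_succeeds() -> bool:
        """One random trial through a toy model of layered airport
        defenses, each catching the attack with an assumed probability."""
        for p_catch in (0.40, 0.30, 0.20):  # checkpoint, behavioral, gate
            if random.random() < p_catch:
                return False
        return True

    trials = 1_000_000
    hits = sum(attack_succeeds() for _ in range(trials))
    print(f"estimated success probability: {hits / trials:.3f}")
    # Analytically: 0.6 * 0.7 * 0.8 = 0.336, and the simulation converges
    # on it, but only because this attacker rolls dice instead of
    # steering for the weakest layer.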

Maybe “chatter” has tipped the TSA off to a Muta al-Stochastic.

Posted on June 22, 2007 at 12:58 PM • 47 Comments

Vulnerabilities in the DHS Networks

Wired.com has the story:

Congress asked Homeland Security’s chief information officer, Scott Charbo, who has a master’s degree in plant science, to account for more than 800 self-reported vulnerabilities over the last two years and for recently uncovered systemic security problems in US-VISIT, the massive computer network intended to screen and collect the fingerprints and photos of visitors to the United States.

Charbo’s main tactic before the House Homeland Security subcommittee Wednesday was to downplay the seriousness of the threats and to characterize the security investigation of US-VISIT as simultaneously old news and news so new he hasn’t had time to meet with the investigators.

“Key systems operated by Customs and Border Patrol were riddled by control weaknesses,” the Government Accountability Office’s director of information security issues, Gregory Wilshusen, told the committee. Poor security practices and the lack of an authoritative internal map of how various systems interconnect increase the risk that contractors, employees or would-be hackers can or have penetrated and disrupted key DHS computer systems, Wilshusen and Keith Rhodes, director of the GAO’s Center for Technology and Engineering, told the committee.

Posted on June 22, 2007 at 10:37 AM • 16 Comments

Seventh Harry Potter Hacked?

Someone claims to have hacked the Bloomsbury Publishing network, and has posted what he says is the ending to the last Harry Potter book.

I don’t believe it, actually. Sure, it’s possible—probably even easy. But the posting just doesn’t read right to me.

The attack strategy was the easiest one. The usual milw0rm downloaded exploit delivered by email/click-on-the-link/open-browser/click-on-this-animated-icon/back-connect to some employee of Bloomsbury Publishing, the company that’s behind the Harry crap.

And I would expect someone who really got their hands on a copy of the manuscript to post the choice bits of text, not just a plot summary. It’s easier, and it’s more proof.

Sorry; I don’t buy it.

EDITED TO ADD (7/25): I was right; none of his “predictions” were correct.

Posted on June 21, 2007 at 2:30 PM

Silly Home Security

Fogshield:

Ask anybody who’s made money robbing houses, and they’ll tell you straight up: you can get away with a lot of loot in the 10 minutes before the cops come.

But the crooks won’t find their way out of the foyer if you hit ’em with the FogSHIELD—an add-on to your home security system that releases a blinding blanket of fog to stop thieves in their tracks. When an intruder triggers the alarm, water mixes in the FogSHIELD’s glycol canister to generate enough dry, non-toxic fog to cover 2,000 square feet in less than 15 seconds. It dissipates 45 minutes later, leaving your furniture unsullied and your electronics intact.

The website appears not to be a joke.

EDITED TO ADD (6/23): In the comments, a lot of people have taken me to task for calling this security silly. I stand by my statement: not because it’s not effective, but because it’s not a good trade-off. I can certainly imagine scenarios where filling your house with vision-impairing fog is just the thing to foil a would-be burglar, but it seems like an awfully specific countermeasure to me.

Home security—like all security, really—is a combination of protection, detection, and response. Locks and bars are the protection system, and the alarm is the detection/response system. Fogshield is a protection system: after the locks and bars have failed, Fogshield 1) makes it harder for the burglar to navigate around the house, and 2) potentially delays him until the response system (police or whoever) arrives.

But it has problems as a protection system. For one, false alarms are way worse than before. It’s one thing to have a loud bell annoy the neighbors until you turn it off; it’s another to fill your house with fog in less than 15 seconds (plus the cost to replace the canister).

This whole thing feels very “movie-plot threat” to me: a great special effect in a movie, but not really a good security trade-off for home use. An alarm system is going to make an average burglar go to the house next door instead, and a dedicated burglar isn’t going to be deterred by this.

Posted on June 21, 2007 at 6:55 AM • 116 Comments

Ubiquity of Communication

Read this essay by Randy Farmer, a pioneer of virtual online worlds, explaining something called Disney’s ToonTown.

Designers of online worlds for children wanted to severely restrict the communication that users could have with each other, lest somebody say something that’s inappropriate for children to hear.

Randy discusses various approaches to this problem that were tried over the years. The ToonTown solution was to restrict users to something called “Speedchat,” a menu of pre-constructed sentences, all innocuous. They also gave users the ability to conduct unrestricted conversations with each other, provided they both knew a secret code string. The designers presumed the code strings would be passed only to people a user knew in real life, perhaps on a school playground or among neighbors.

Users found ways to pass code strings to strangers anyway. This page describes several protocols, using gestures, canned sentences, or movement of objects in the game.

After you read the ways above to make secret friends, look here. Another way to make secret friends with toons you don’t know is to form letters/numbers with the picture frames in your house. Around, you may see toons who have a lot of picture frames at their toon estates; they are usually looking for secret friends. This is how to do it! So, let’s say you wanted to make secret friends with a toon named Lily. Your “pretend” secret friend code is 4yt 56s.

  • You: *Move frames around in house to form a 4.* “Okay.”
  • Her: “Okay.” She has now written the first letter down on a piece of paper.
  • You: *Move Frames around to form a y.* “Okay.”
  • Her: “Okay.” She has now written the second number down on paper.
  • You: *Move Frames around in house to form a t* “Okay.”
  • Her: “Okay.” She has now written the third letter down on paper. “Okay.”
  • You: *Do nothing* “Okay” This shows that you have made a space.
  • Repeat process
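
It’s a textbook covert channel: the game supplies an innocuous alphabet (frame arrangements, plus the canned word “Okay”), and the kids layered a character-at-a-time transfer protocol on top of it. A minimal simulation of the same idea (Python; purely illustrative):

    def send(code: str):
        """Transmit a secret-friend code one character at a time, using
        only moves the game allows: arranging picture frames into a
        shape, or doing nothing for a space, then saying 'Okay.'"""
        for ch in code:
            if ch == " ":
                yield ("do nothing", "Okay")
            else:
                yield (f"arrange frames as '{ch}'", "Okay")

    def receive(moves) -> str:
        """The other toon writes down one character per 'Okay.'"""
        return "".join(" " if action == "do nothing" else action[-2]
                       for action, _ in moves)

    assert receive(send("4yt 56s")) == "4yt 56s"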

Randy writes: “By hook, or by crook, customers will always find a way to connect with each other.”

Posted on June 20, 2007 at 12:48 PM • 16 Comments

Direct Marketing Meets Wholesale Surveillance

A $100K National Science Foundation grant to Geosemble Technologies, Inc.

SBIR Phase I: Exploiting High-Resolution Imagery, Geospatial Data, and Online Sources to Automatically Identify Direct Marketing Leads

Abstract: This Small Business Innovation Research Phase I project will conduct a feasibility study to demonstrate that by combining currently available high-resolution imagery, geospatial data (e.g., parcel data or structure data), and other related online data sources (e.g., property tax data or census data), it is possible to automatically generate highly targeted direct marketing leads for a variety of markets. The plan is to approach this problem by (1) aligning existing geospatial sources with the high-resolution imagery in order to determine the exact location and determine the address of the parcels seen in the imagery, (2) extracting the relevant features from the imagery to provide appropriate leads, such as determining the presence or absence of a swimming pool, the type of roofing materials used, or what types of cars are parked in the driveway, and (3) bringing in other sources of data, such as property tax assessment data to provide additional context.

The primary focus of the phase I project will be to demonstrate the use of machine learning technology for identifying features in high-resolution imagery that can be used for direct marketing. High-resolution aerial imagery is now being widely collected and is available for low cost or in some cases is even free. The challenges are, first, to align parcel data with the high-resolution imagery to identify the exact address and boundaries of a property, and second to develop feature extraction techniques that can exploit the contextual information to accurately identify novel features, such as roofs, cars, pools, landscaping, etc., that can be used for direct marketing. The ability to accurately identify features in imagery and then relate them to specific properties as well as related sources of information will allow a targeted direct marketing product to be built. The end users of this product will be companies seeking to market products directly to residential consumers. This includes products and services relating to home improvement, both exterior and interior, as well as products relating to residents of the home that can be gleaned from imagery available for the parcel in question. This is a large market and includes everyone from home improvement stores to roofing companies, construction companies, automobile dealers, tree trimmers, landscapers, and pool construction companies. Beyond direct marketing, the technology can also be used for other applications that combine imagery, geospatial data, and structured information. For example, it could be used for mosquito abatement, which is important to stop the spread of West Nile Virus, by identifying large pools of stagnant water, associating those hazards with the appropriate address, and then mailing abatement notifications to the residents.
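
The “presence or absence of a swimming pool” example gives a flavor of the feature extraction involved. A deliberately crude sketch (Python with NumPy; the color rule is a hand-tuned guess, where the actual project proposes trained machine-learning classifiers):

    import numpy as np

    def has_pool(parcel_rgb: np.ndarray) -> bool:
        """Toy detector: does this parcel's aerial image contain a blob
        of pool-ish blue? parcel_rgb is an (H, W, 3) uint8 array cropped
        to one parcel by the imagery/parcel alignment step."""
        r = parcel_rgb[..., 0].astype(int)
        g = parcel_rgb[..., 1].astype(int)
        b = parcel_rgb[..., 2].astype(int)
        poolish = (b > 140) & (g > 100) & (r < 120)   # hand-tuned guess
        return poolish.mean() > 0.005  # >0.5% of the parcel looks like water

    # The marketing pipeline then just joins this bit with the parcel's
    # mailing address:
    #   leads = [addr for addr, img in parcels if has_pool(img)]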

Posted on June 19, 2007 at 3:52 PM • 45 Comments

Age Verification for Movie Trailers

Completely ridiculous:

It seems like “We want to protect children” really means, We want to give the appearance that we’ve made an effort to protect children. If they really wanted to protect children, they wouldn’t use the honor system as the sole safeguard standing between previews filled with sex and violence and Internet-savvy kids who can, in a matter of seconds, beat the impotent little system.

Posted on June 19, 2007 at 6:12 AM • 38 Comments

Remote Sensing of Meth Labs

Another National Science Foundation grant, this one for $150K to a company called Bridger Photonics:

This Small Business Technology Transfer (STTR) Phase I research project addresses the need for sensitive, portable, low-cost, laser-based remote sensing devices to detect chemical effluents of illicit methamphetamine (meth) production from a distance. The proposed project will develop an innovative correlated-mode laser source for high-resolution mid-infrared differential absorption lidar. To accomplish this the research team will base the research on a compact, monolithic, passively Q-switched laser/optical parametric oscillator design that has proven incredibly effective for ranging purposes (no spectroscopy) in demanding environments. This source, in its present state, is unsuitable for high-resolution mid-infrared spectroscopy. The team will therefore advance the laser by targeting the desired effluent mid-IR wavelengths, significantly improving the spectral, spatial, and temporal emission characteristics, and incorporating dual mode operation. Realization of the laser source will enable real-time remote detection of meth labs in widely varying environments, locations, and circumstances with quantum-limited detection sensitivity, spectral selectivity for the desired molecules in a spectral region that is difficult to access, and differential measurement capabilities for effective self calibration.
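
For context, differential absorption lidar (DIAL) compares returns at two nearby wavelengths, one tuned onto an absorption line of the target effluent and one just off it. The standard textbook DIAL equation (from the lidar literature, not from the grant abstract) recovers the average number density N of the molecule between ranges R1 and R2:

    N = \frac{1}{2(\sigma_{\mathrm{on}} - \sigma_{\mathrm{off}})(R_2 - R_1)}
        \ln\!\left[\frac{P_{\mathrm{off}}(R_2)\,P_{\mathrm{on}}(R_1)}
                        {P_{\mathrm{on}}(R_2)\,P_{\mathrm{off}}(R_1)}\right]

where the sigmas are the molecule’s absorption cross-sections at the two wavelengths and P(R) is the received power from range R. Because the result is a ratio of returns, most instrument and atmospheric terms cancel, which is the “differential measurement capabilities for effective self calibration” the abstract refers to.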

Posted on June 18, 2007 at 12:52 PM • 34 Comments

TSA and the Sippy Cup Incident

This story is pretty disgusting:

“I demanded to speak to a TSA [Transportation Security Administration] supervisor who asked me if the water in the sippy cup was ‘nursery water or other bottled water.’ I explained that the sippy cup water was filtered tap water. The sippy cup was seized as my son was pointing and crying for his cup. I asked if I could drink the water to get the cup back, and was advised that I would have to leave security and come back through with an empty cup in order to retain the cup. As I was escorted out of security by TSA and a police officer, I unscrewed the cup to drink the water, which accidentally spilled because I was so upset with the situation.

“At this point, I was detained against my will by the police officer and threatened to be arrested for endangering other passengers with the spilled 3 to 4 ounces of water. I was ordered to clean the water, so I got on my hands and knees while my son sat in his stroller with no shoes on since they were also screened and I had no time to put them back on his feet. I asked to call back my fiancé, who I could still see from afar, waiting for us to clear security, to watch my son while I was being detained, and the officer threatened to arrest me if I moved. So I yelled past security to get the attention of my fiancé.

“I was ordered to apologize for the spilled water, and again threatened arrest. I was threatened several times with arrest while detained, and while three other police officers were called to the scene of the mother with the 19 month old. A total of four police officers and three TSA officers reported to the scene where I was being held against my will. I was also told that I should not disrespect the officer and could be arrested for this too. I apologized to the officer and she continued to detain me despite me telling her that I would miss my flight. The officer advised me that I should have thought about this before I ‘intentionally spilled the water!'”

This story portrays the TSA as jack-booted thugs. The story hit the Internet last Thursday, and quickly made the rounds. I saw it on BoingBoing. But, as it turns out, it’s not entirely true.

The TSA has a webpage up, with both the incident report and video.

TSO [REDACTED] took the female to the exit lane with the stroller and her bag. When she got past the exit lane podium she opened the child’s drink container and held her arm out and poured the contents (approx. 6 to 8 ounces) on the floor. MWAA Officer [REDACTED] was manning the exit lane at the time and observed the entire scene and approached the female passenger after observing this and stopped her when she tried to re-enter the sterile area after trying to come back through after spilling the fluids on the floor. The female passenger flashed her badge and credentials and told the MWAA officer “Do you know who I am?” An argument then ensued between the officer and the passenger of whether the spilling of the fluid was intentional or accidental. Officer [REDACTED] asked the passenger to clean up the spill and she did.

Watch the second video. TSO [REDACTED] is partially blocking the scene, but at 2:01:00 PM it’s pretty clear that Monica Emmerson—that’s the female passenger—spills the liquid on the floor on purpose, as a deliberate act of defiance. What happens next is more complicated; you can watch it for yourself, or you can read BoingBoing’s somewhat sarcastic summary.

In this instance, the TSA is clearly in the right.

But there’s a larger lesson here. Remember the Princeton professor who was put on the watch list for criticizing Bush? That was also untrue. Why is it that we all—myself included—believe these stories? Why are we so quick to assume that the TSA is a bunch of jack-booted thugs, officious and arbitrary and drunk with power?

It’s because everything seems so arbitrary, because there’s no accountability or transparency in the DHS. Rules and regulations change all the time, without any explanation or justification. Of course this kind of thing induces paranoia. It’s the sort of thing you read about in history books about East Germany and other police states. It’s not what we expect out of 21st century America.

The problem is larger than the TSA, but the TSA is the part of “homeland security” that the public comes into contact with most often—at least the part of the public that writes about these things most. They’re the public face of the problem, so of course they’re going to get the lion’s share of the finger pointing.

It was smart public relations on the TSA’s part to get the video of the incident on the Internet quickly, but it would be even smarter for the government to restore basic constitutional liberties to our nation’s counterterrorism policy. Accountability and transparency are basic building blocks of any democracy; and the more we lose sight of them, the more we lose our way as a nation.

Posted on June 18, 2007 at 6:01 AM • 146 Comments

Second Movie-Plot Threat Contest Winner

On April 1, I announced the Second Annual Movie-Plot Threat Contest:

Your goal: invent a terrorist plot to hijack or blow up an airplane with a commonly carried item as a key component. The component should be so critical to the plot that the TSA will have no choice but to ban the item once the plot is uncovered. I want to see a plot horrific and ridiculous, but just plausible enough to take seriously.

Make the TSA ban wristwatches. Or laptop computers. Or polyester. Or zippers over three inches long. You get the idea.

Your entry will be judged on the common item that the TSA has no choice but to ban, as well as the cleverness of the plot. It has to be realistic; no science fiction, please. And the write-up is critical; last year the best entries were the most entertaining to read.

On June 5, I posted three semi-finalists out of the 334 comments.

Well, we have a winner. I can’t divulge the exact formula—because you’ll all hack the system next year—but it was a combination of my opinion, popular acclaim in blog comments, and the opinion of Tom Grant (the previous year’s winner).

I present to you: Butterflies and Beverages, posted by Ron:

It must have been a pretty meadow, Wilkes thought, just a day before. He tried to picture how it looked then: without the long, wide wound in the earth, without the charred and broken fuselage of the jet that gouged it out, before the rolling ground was strewn with papers and cushions and random bits of plastic and fabric and all the things inside the plane that lay like the confetti from a brief, fiery parade.

Yes, a nice little spot, just far enough from the airport’s runways to be not too noisy, but close enough to watch the planes going in and out, fortunately just a bit too close to have been developed. When the plane rolled over and angled downward, not even a mile past the end of the runway, at least the only people at risk were the ones on the plane. For them, it was mercifully quick, the impact breaking their necks before the breaking wing tanks ignited in sheets of flame, the charred bodies still in their seats.

He spotted the NTSB guy, standing by the forward half of the fuselage, easy to spot among the FAA and local airport people—they were always the only suits in the crowd. Heading over, Wilkes saw this one wasn’t going to be too hard: when planes came down intact like this, breaking into just a few pieces on impact, the cause was always easier to find. This one looked to be no exception.

He muttered to the suit, “Wilkes,” gesturing at the badge clipped to his shirt. No need to get too friendly, they’d file separate reports anyway. As long as they were remotely on the same page, there wasn’t much need to actually talk to the guy. “What’s this little gem?” he wondered aloud, looking at the hole in the side of the downed jet.

“Explosion,” drawled the NTSB guy; he had that Chuck Yeager slow-play sound, Wilkes thought, like someone who could sound calm describing Armageddon. “Looks like it was from the inside, something just big enough to rip a few square feet out of the side. Enough to throw it on its side.”

“And if the plane is low enough, still taking off, with the engines near full thrust, it rolls over and down too fast…” he trailed off, picturing the result.

“Yep, all in a couple of seconds. Too quick for the flight crew to have time to get it back.” The NTSB guy shook his head, the ID clipped to his suit jacket swaying back and forth with the motion. “Always the best time if you’re going to take a bird down: takeoff or landing. Guess whoever did this one wanted to get it over with sooner rather than later.” He snorted in derision. “Somebody snuck in an explosive; must have been a screener havin’ an off day.”

“Maybe,” said Wilkes, not ready to write it off as just a screener’s error. The NTSB guys were always quick to find a bad decision, one human error, and explain the whole thing away. But Wilkes’ job was to find the flaws in the systems, the procedures, the way to come up with prophylactic precautions. Maybe there was nothing more than a screener who didn’t spot a grenade or a stick of dynamite, something so obvious that there was nothing to do but chalk up a hundred and eighty-three lives to one madman and one very bad TSA employee.

But maybe not. That’s when Wilkes spotted the first two of the butterflies. Bright yellow against the charred black of the burned wreckage, they seemed like the most incongruous things—and as he thought this, another appeared.

As they took photos and made measurements, more showed up—by ones and twos, a few flying away, but gradually building up to dozens over the course of the morning. Odd, the NTSB rep agreed, but nothing that tells us anything about the terrorist who brought down that plane.

Wilkes wasn’t so sure. Nature was handing out a big fat clue here, he was sure of that. What he wasn’t sure of was what in the hell it could possibly mean.

He leaned in close with the camera on his phone, getting some good close images of the colorful insects, emailing back to the office with a request to reach out to an expert. He needed a phone consult, someone who knew the behavior of this particular butterfly, someone who could put him on the right track.

Within minutes, his phone was buzzing, with a conference call already set up with a professor of entomology, and, even better, one local to the area; a local might know this bug better than an academic from a more prestigious but distant university.

He was half-listening during the introductions; Wilkes wasn’t interested in this guy’s particulars, and the regional team would have that all available if he needed it later. He just wanted answers.

“Pieridae,” the professor offered, “and all males, I’d bet.”

“Okay,” Wilkes answered, wondering if this really would tell him anything. “Why are they all over my bomb hole?”

“I can’t be sure, but it must be something attracting them. These are commonly called ‘sulfur butterflies.’ Could there be sulfur on your wreckage?”

Yeah, Wilkes thought, this is looking like a wild goose chase. “No sulfur, we already did a quick chem test for it. Anything else these little fellas like?”

“Sure, but not something you’d be likely to find in a bomb—just sodium. They package it up with their sperm and deliver it to the female as an extra little bonus—sort of the flowers and candy of the butterfly world.”

“Okay, that’s…wow, the things I learn in this job. Sorry to bother you, sir, I guess it’s just…yeah, thanks.”

Butterfly sperm—now this might set a new record for useless trivia learned in a crash investigation. Unbelievable.

The NTSB guy wandered over, seeing Wilkes was off the phone. “Get anything from your expert?” he queried, trying and failing to suppress a grin. Wilkes suspected there would soon be a story going around the NTSB office about the FAA “butterfly guy”; ah well, better to be infamous than anonymous.

“Nah, not much. The little guys like sulfur,” Wilkes offered, seeing his counterpart give a cynical chuckle at that, “and sodium. Unless there was a whole lot of salt packed around the perp’s explosive, our little yellow friends are just a mystery.”

The NTSB rep got a funny look on his face, a faraway look. “Sodium. An explosive that leaves behind sodium. Well, that could be…”

They looked at each other, both heading to the same conclusion, both reluctant to get there. Wilkes said it first: “Sodium metal. Cheap, easy to get, it would have to be: sodium metal.”

“And easy,” the NTSB rep drawled, “to sneak on the plane. The stuff is soft, but you could fashion it into any number of simple things: eyeglass frames, belt buckles, buttons, simple things the screeners would never be lookin’ at.”

“Wouldn’t take much,” Wilkes offered, an old college chemistry-class prank coming to mind. “A couple of ounces would be enough to blow out the side of a plane, enough for what we’re seeing here.”

“With the easiest trigger in the world,” the NTSB man added, putting words to the picture forming in Wilkes’ mind. A cup of water would be enough: just drop the sodium metal into it, and the chemical reaction would quickly release hydrogen gas, with enough heat generated as a byproduct to ignite the gas. In just a second or two, you’d have an explosion strong enough to knock the side out of a plane.

“Sounds like a problem for you FAA boys,” his counterpart teased. “What ya gonna do, ban passengers from carrying more than a few grams of anything made of metal?”

“No,” Wilkes shot back, “we can’t ban everything that could be made of sodium metal. Or all the other water-reactives,” he mused aloud, thinking of all the carbides, anhydrides, and alkali metals that would cover. “Too many ways to hide them, too many types to test for them all. No, it isn’t the metals we’ll have to ban.”

“Naw, you don’t mean…” The NTSB man stared in disbelief, his eyes growing wide. “You couldn’t. I mean, it’s the only other way, but it’s ridiculous.”

“No, it’s not so ridiculous, it’s really the only way. We’re going to have to ban water, and anything containing a significant amount of water, from all passenger flights. It’s the only way, otherwise we could have planes dropping out of the sky every time someone is served a beverage.”

Ron gets signed copies of my books, a $50 Amazon gift certificate contributed by a reader, and—if I can find one—an interview with a real-live movie director. (Does anyone know one?) We hope that one of his prizes isn’t a visit by the FBI.

EDITED TO ADD (6/27): There’s an article on Slate about the contest.

Posted on June 15, 2007 at 6:43 AM • 58 Comments

Perpetual Doghouse: Meganet

I first wrote about Meganet in 1999, in a larger article on cryptographic snake-oil, and formally put them in the doghouse in 2003:

They build an alternate reality where every cryptographic algorithm has been broken, and the only thing left is their own system. “The weakening of public crypto systems commenced in 1997. First it was the 40-bit key, a few months later the 48-bit key, followed by the 56-bit key, and later the 512 bit has been broken…” What are they talking about? Would you trust a cryptographer who didn’t know the difference between symmetric and public-key cryptography? “Our technology… is the only unbreakable encryption commercially available.” The company’s founder quoted in a news article: “All other encryption methods have been compromised in the last five to six years.” Maybe in their alternate reality, but not in the one we live in.

Their solution is to not encrypt data at all. “We believe there is one very simple rule in encryption: if someone can encrypt data, someone else will be able to decrypt it. The idea behind VME is that the data is not being encrypted nor transferred. And if it’s not encrypted and not transferred, there is nothing to break. And if there’s nothing to break, it’s unbreakable.” Ha ha; that’s a joke. They really do encrypt data, but they call it something else.

Read the whole thing; it’s pretty funny.

They’re still around, and they’re still touting their snake-oil “virtual matrix encryption.” (The patent is finally public, and if someone can reverse-engineer the combination of patentese and gobbledygook into an algorithm, we can finally see how awful it really is.) The tech on their website is better than it was in 2003, but it’s still pretty hokey.

Back in 2005, they got their product FIPS 140-1 certified (#505 on this page). The certification was for their AES implementation, but they’re sneakily implying that VME was certified. From their website: “The Strength of a Megabit Encryption (VME). The Assurance of a 256 Bit Standard (AES). Both Technologies Combined in One Certified Module! FIPS 140-2 CERTIFICATE # 505.”

Just goes to show that with a bit of sleight-of-hand you can get anything FIPS 140 certified.

Posted on June 14, 2007 at 1:05 PM • 29 Comments

Portrait of the Modern Terrorist as an Idiot

The recently publicized terrorist plot to blow up John F. Kennedy International Airport, like so many of the terrorist plots over the past few years, is a study in alarmism and incompetence: on the part of the terrorists, our government and the press.

Terrorism is a real threat, and one that needs to be addressed by appropriate means. But allowing ourselves to be terrorized by wannabe terrorists and unrealistic plots—and worse, allowing our essential freedoms to be lost by using them as an excuse—is wrong.

The alleged plan, to blow up JFK’s fuel tanks and a small segment of the 40-mile petroleum pipeline that supplies the airport, was ridiculous. The fuel tanks are thick-walled, making them hard to damage. The airport tanks are separated from the pipelines by cutoff valves, so even if a fire broke out at the tanks, it would not back up into the pipelines. And the pipeline couldn’t blow up in any case, since there’s no oxygen to aid combustion. Not that the terrorists ever got to the stage—or demonstrated that they could get there—where they actually obtained explosives. Or even a current map of the airport’s infrastructure.

But read what Russell Defreitas, the lead terrorist, had to say: “Anytime you hit Kennedy, it is the most hurtful thing to the United States. To hit John F. Kennedy, wow…. They love JFK—he’s like the man. If you hit that, the whole country will be in mourning. It’s like you can kill the man twice.”

If these are the terrorists we’re fighting, we’ve got a pretty incompetent enemy.

You couldn’t tell that from the press reports, though. “The devastation that would be caused had this plot succeeded is just unthinkable,” U.S. Attorney Roslynn R. Mauskopf said at a news conference, calling it “one of the most chilling plots imaginable.” Sen. Arlen Specter (R-Pennsylvania) added, “It had the potential to be another 9/11.”

These people are just as deluded as Defreitas.

The only voice of reason out there seemed to be New York’s Mayor Michael Bloomberg, who said: “There are lots of threats to you in the world. There’s the threat of a heart attack for genetic reasons. You can’t sit there and worry about everything. Get a life…. You have a much greater danger of being hit by lightning than being struck by a terrorist.”

And he was widely excoriated for it.

This isn’t the first time a bunch of incompetent terrorists with an infeasible plot have been painted by the media as poised to do all sorts of damage to America. In May we learned about a six-man plan to stage an attack on Fort Dix by getting in disguised as pizza deliverymen and shooting as many soldiers and Humvees as they could, then retreating without losses to fight again another day. Their plan, such as it was, went awry when they took a videotape of themselves at weapons practice to a store for duplication and transfer to DVD. The store clerk contacted the police, who in turn contacted the FBI. (Thank you to the video store clerk for not overreacting, and to the FBI agent for infiltrating the group.)

The “Miami 7,” caught last year for plotting—among other things—to blow up the Sears Tower, were another incompetent group: no weapons, no bombs, no expertise, no money and no operational skill. And don’t forget Iyman Faris, the Ohio trucker who was convicted in 2003 for the laughable plot to take out the Brooklyn Bridge with a blowtorch. At least he eventually decided that the plan was unlikely to succeed.

I don’t think these nut jobs, with their movie-plot threats, even deserve the moniker “terrorist.” But in this country, while you have to be competent to pull off a terrorist attack, you don’t have to be competent to cause terror. All you need to do is start plotting an attack and—regardless of whether or not you have a viable plan, weapons or even the faintest clue—the media will aid you in terrorizing the entire population.

The most ridiculous JFK Airport-related story goes to the New York Daily News, with its interview with a waitress who served Defreitas salmon; the front-page headline blared, “Evil Ate at Table Eight.”

Following one of these abortive terror misadventures, the administration invariably jumps on the news to trumpet whatever ineffective “security” measure they’re trying to push, whether it be national ID cards, wholesale National Security Agency eavesdropping or massive data mining. Never mind that in all these cases, what caught the bad guys was old-fashioned police work—the kind of thing you’d see in decades-old spy movies.

The administration repeatedly credited the apprehension of Faris to the NSA’s warrantless eavesdropping programs, even though it’s just not true. The 9/11 terrorists were no different; they succeeded partly because the FBI and CIA didn’t follow the leads before the attacks.

Even the London liquid bombers were caught through traditional investigation and intelligence, but this doesn’t stop Secretary of Homeland Security Michael Chertoff from using them to justify (.pdf) access to airline passenger data.

Of course, even incompetent terrorists can cause damage. This has been repeatedly proven in Israel, and if shoe-bomber Richard Reid had been just a little less stupid and ignited his shoes in the lavatory, he might have taken out an airplane.

So these people should be locked up … assuming they are actually guilty, that is. Despite the initial press frenzies, the actual details of the cases frequently turn out to be far less damning. Too often it’s unclear whether the defendants are actually guilty, or if the police created a crime where none existed before.

The JFK Airport plotters seem to have been egged on by an informant, a twice-convicted drug dealer. An FBI informant almost certainly pushed the Fort Dix plotters to do things they wouldn’t have ordinarily done. The Miami gang’s Sears Tower plot was suggested by an FBI undercover agent who infiltrated the group. And in 2003, it took an elaborate sting operation involving three countries to arrest an arms dealer for selling a surface-to-air missile to an ostensible Muslim extremist. Entrapment is a very real possibility in all of these cases.

The rest of them stink of exaggeration. Jose Padilla was not actually prepared to detonate a dirty bomb in the United States, despite histrionic administration claims to the contrary. Now that the trial is proceeding, the best the government can charge him with is conspiracy to murder, kidnap and maim, and it seems unlikely that the charges will stick. An alleged ringleader of the U.K. liquid bombers, Rashid Rauf, had charges of terrorism dropped for lack of evidence (of the 25 arrested, only 16 were charged). And now it seems like the JFK mastermind was more talk than action, too.

Remember the “Lackawanna Six,” those terrorists from upstate New York who pleaded guilty in 2003 to “providing support or resources to a foreign terrorist organization”? They entered their plea because they were threatened with being removed from the legal system altogether. We have no idea if they were actually guilty, or of what.

Even under the best of circumstances, these are difficult prosecutions. Arresting people before they’ve carried out their plans means trying to prove intent, which rapidly slips into the province of thought crime. Regularly the prosecution uses obtuse religious literature found in the defendants’ homes to prove what they believe, and this can result in courtroom debates on Islamic theology. And then there’s the issue of demonstrating a connection between a book on a shelf and an idea in the defendant’s head, as if your reading of this article—or purchasing of my book—proves that you agree with everything I say. (The Atlantic recently published a fascinating article on this.)

I’ll be the first to admit that I don’t have all the facts in any of these cases. None of us do. So let’s have some healthy skepticism. Skepticism when we read about these terrorist masterminds who were poised to kill thousands of people and do incalculable damage. Skepticism when we’re told that their arrest proves that we need to give away our own freedoms and liberties. And skepticism that those arrested are even guilty in the first place.

There is a real threat of terrorism. And while I’m all in favor of the terrorists’ continuing incompetence, I know that some will prove more capable. We need real security that doesn’t require us to guess the tactic or the target: intelligence and investigation—the very things that caught all these terrorist wannabes—and emergency response. But the “war on terror” rhetoric is more politics than rationality. We shouldn’t let the politics of fear make us less safe.

This essay originally appeared on Wired.com.

EDITED TO ADD (6/14): Another essay on the topic.

Posted on June 14, 2007 at 8:28 AM • 107 Comments

Teaching Viruses and Worms

Over two years ago, George Ledin wrote an essay in Communications of the ACM, where he advocated teaching worms and viruses to computer science majors:

Computer science students should learn to recognize, analyze, disable, and remove malware. To do so, they must study currently circulating viruses and worms, and program their own. Programming is to computer science what field training is to police work and clinical experience is to surgery. Reading a book is not enough. Why does industry hire convicted hackers as security consultants? Because we have failed to educate our majors.

This spring semester, he taught the course at Sonoma State University. It got a lot of press coverage.

No one wrote a virus for a class project. No new malware got into the wild. No new breed of supervillain graduated.

Teaching this stuff is just plain smart.

Posted on June 12, 2007 at 2:30 PM26 Comments

Bush's Watch Stolen?

Watch this video very carefully; it’s President Bush working the crowds in Albania. At 0:50 into the clip, Bush has a watch. At 1:04 into the clip, he doesn’t.

The U.S. is denying that his watch was stolen:

Photographs showed Bush, surrounded by five bodyguards, putting his hands behind his back so one of the bodyguards could remove his watch.

I simply don’t see that in the video. Bush’s arm is out in front of him during the entire nine seconds between those stills.

Another denial:

An Albanian bodyguard who accompanied Bush in the town told The Associated Press he had seen one of his U.S. colleagues close to Bush bend down and pick up the watch.

That’s certainly possible; it may have fallen off.

But possibly the pickpocket of the century. (Although would anyone actually be stupid enough to try? There must be a zillion easier-to-steal watches in that crowd, many of them nicer than Bush’s.)

EDITED TO ADD (6/12): This article says that he wears a $50 Timex. It also has some more odd denials.

EDITED TO ADD (6/13): In this video, from another angle, it seems clear that Bush removes the watch himself.

Posted on June 12, 2007 at 10:52 AM103 Comments

"Data Mining and the Security-Liberty Debate"

Good paper: “Data Mining and the Security-Liberty Debate,” by Daniel J. Solove.

Abstract: In this essay, written for a symposium on surveillance for the University of Chicago Law Review, I examine some common difficulties in the way that liberty is balanced against security in the context of data mining. Countless discussions about the trade-offs between security and liberty begin by taking a security proposal and then weighing it against what it would cost our civil liberties. Often, the liberty interests are cast as individual rights and balanced against the security interests, which are cast in terms of the safety of society as a whole. Courts and commentators defer to the government’s assertions about the effectiveness of the security interest. In the context of data mining, the liberty interest is limited by narrow understandings of privacy that neglect to account for many privacy problems. As a result, the balancing concludes with a victory in favor of the security interest. But as I argue, important dimensions of data mining’s security benefits require more scrutiny, and the privacy concerns are significantly greater than currently acknowledged. These problems have undermined the balancing process and skewed the results toward the security side of the scale.

My only complaint: it’s not a liberty vs. security debate. Liberty is security. It’s a liberty vs. control debate.

Posted on June 12, 2007 at 7:11 AM16 Comments

License Plate Cloning

It’s a growing problem in the UK:

“There are different levels of cloning. There is the simple cloning, just stealing a plate to drive into, say, the Congestion Charge zone or to evade a speed camera.

“It ranges up to a higher level which is the car criminal who wants to sell on a stolen car.”

Tony Bullock’s car was cloned even though his plates were not physically stolen, and he was threatened with prosecution after “his” car was repeatedly caught speeding in Leicester.

He said: “It was horrendous. You are guilty until you can prove you’re not. It’s the first time that I’ve thought that English law is on its head.”

Metropolitan Police Federation chairman Glen Smyth said the problem has grown because of the amount of camera-based enforcement of traffic offences, which relies on computer records of who owns which car.

Posted on June 11, 2007 at 1:52 PM58 Comments

More on Kish's Encryption Scheme

Back in 2005, I wrote about Laszlo Kish’s encryption scheme, which promises the security of quantum encryption using thermal noise. I found, and continue to find, the research fascinating—although I don’t have the electrical engineering expertise to know whether or not it’s secure.

There have been developments. Kish has a new paper that not only describes a physical demonstration of the scheme, but also addresses many of the criticisms of his earlier work. And Feng Hao has a new paper that claims the scheme is totally insecure.

Again, I don’t have the EE background to know who’s right. But this is exactly the sort of back-and-forth I want to see.

Posted on June 11, 2007 at 6:49 AM29 Comments

Friday Squid Blogging: "Invisibility Cloak Materials Made from Reflective Self-Assembling Squid Proteins"

Security and squid:

A new study into the biophysical properties of a highly reflective and self-organizing squid protein called reflectin will inform researchers about the process of “bottom-up” synthesis of nanoscale structures and could lead to the development of thin-film coatings for microstructured materials, bringing scientists one step closer to the development of an “invisibility cloak.”

Posted on June 8, 2007 at 5:53 PM6 Comments

Evasive Malicious Code

New developments in malware:

Finjan reports an increasing trend for “evasive” web attacks, which keep track of visitors’ IP addresses. Attack toolkits restrict access to a single-page view from each unique IP address. The second time an IP address tries to access the malicious page, a benign page is displayed in its place.

Evasive attacks can also identify the IP addresses of crawlers used by URL filtering, reputation services and search engines, and reply to these engines with legitimate content such as news. The malicious code on the host website accesses a database of IP addresses to determine whether to serve up malware or legitimate content.

Just another step in the neverending arms race of network security.

Posted on June 8, 2007 at 1:53 PM24 Comments

Watermarking DNA

It’s not cryptography—despite the name—but it’s interesting:

DNA-based watermarks using the DNA-Crypt algorithm

Background

The aim of this paper is to demonstrate the application of watermarks based on DNA sequences to identify the unauthorized use of genetically modified organisms (GMOs) protected by patents. Predicted mutations in the genome can be corrected by the DNA-Crypt program, leaving the encrypted information intact. Existing DNA cryptographic and steganographic algorithms use synthetic DNA sequences to store binary information; however, although these sequences can be used for authentication, they may change the target DNA sequence when introduced into living organisms.

Results

The DNA-Crypt algorithm and image steganography are based on the same watermark-hiding principle, namely using the least significant base in the case of DNA-Crypt and the least significant bit in the case of image steganography. It can be combined with binary encryption algorithms like AES, RSA or Blowfish. DNA-Crypt is able to correct mutations in the target DNA with several mutation correction codes such as the Hamming code or the WDH code. Mutations, which can occur infrequently, may destroy the encrypted information; however, an integrated fuzzy controller decides on a set of heuristics based on three input dimensions, and recommends whether or not to use a correction code. These three input dimensions are the length of the sequence, the individual mutation rate and the stability over time, which is represented by the number of generations. In silico experiments using the Ypt7 gene in Saccharomyces cerevisiae show that the DNA watermarks produced by DNA-Crypt do not alter the translation of mRNA into protein.

Conclusions

The program is able to store watermarks in living organisms and can maintain the original information by correcting mutations itself. Pairwise or multiple sequence alignments show that DNA-Crypt, like all steganographic algorithms, produces few mismatches between the sequences.

Paper here.

Posted on June 8, 2007 at 11:47 AM14 Comments

Inventorying "Dangerous" Chemicals for the DHS

The DHS wants universities to inventory a long list of chemicals:

Unusual paranoia over chemical attack in the US takes many forms. It can be seen in a recent piece of trouble from the Department of Homeland Security, a long list of “chemicals of interest” it wishes to require all university settings to inventory.

“Academic institutions across the country claim they will have to spend countless hours and scarce resources on documenting very small amounts of chemicals in many different labs that are scattered across sometimes sprawling campuses,” reported a recent Chemical & Engineering News, the publication of the American Chemical Society.

“For 104 chemicals on the list, the threshold is ‘any amount.’”

[…]

If one has a little bit of background in chemical weapons synthesis, one can see DHS is possessed by the idea that terrorists might storm into universities and plunder chem labs for precursors to nerve gases.

Interesting stuff about specific chemicals in the article.

Posted on June 8, 2007 at 6:12 AM45 Comments

Nonsecurity Considerations in Security Decisions

Security decisions are generally made for nonsecurity reasons. For security professionals and technologists, this can be a hard lesson. We like to think that security is vitally important. But anyone who has tried to convince the sales VP to give up her department’s Blackberries or the CFO to stop sharing his password with his secretary knows security is often viewed as a minor consideration in a larger decision. This issue’s articles on managing organizational security make this point clear.

Below is a diagram of a security decision. At its core are assets, which a security system protects. Security can fail in two ways: either attackers can successfully bypass it, or it can mistakenly block legitimate users. There are, of course, more users than attackers, so the second kind of failure is often more important. There’s also a feedback mechanism with respect to security countermeasures: both users and attackers learn about the security and its failings. Sometimes they learn how to bypass security, and sometimes they learn not to bother with the asset at all.

Threats are complicated: attackers have certain goals, and they implement specific attacks to achieve them. Attackers can be legitimate users of assets, as well (imagine a terrorist who needs to travel by air, but eventually wants to blow up a plane). And a perfectly reasonable outcome of defense is attack diversion: the attacker goes after someone else’s asset instead.

Asset owners control the security system, but not directly. They implement security through some sort of policy—either formal or informal—that some combination of trusted people and trusted systems carries out. Owners are affected by risks … but really, only by perceived risks. They’re also affected by a host of other considerations, including those legitimate users mentioned previously, and the trusted people needed to implement the security policy.

Looking over the diagram, it’s obvious that the effectiveness of security is only a minor consideration in an asset owner’s security decision. And that’s how it should be.

Whether a security countermeasure repels or allows attacks (green and red arrows, respectively) is just a small consideration when making a security trade-off.

This essay originally appeared in IEEE Security and Privacy.

Posted on June 7, 2007 at 11:25 AM24 Comments

Childhood Risks: Perception vs. Reality

Great article on perceived vs actual risks to children:

The risk of abduction remains tiny. In Britain, there are now half as many children killed every year in road accidents as there were in 1922—despite a more than 25-fold increase in traffic.

Today the figure is under 9%. Escorting children is now the norm—often in the back of a 4×4.

We are rearing our children in captivity—their habitat shrinking almost daily.

In 1970 the average nine-year-old girl would have been free to wander 840 metres from her front door. By 1997 it was 280 metres.

Now the limit appears to have come down to the front doorstep.

[…]

The picket fence marks the limit of their play area. They wouldn’t dare venture beyond it.

“You might get kidnapped or taken by a stranger,” says Jojo.

“In the park you might get raped,” agrees Holly.

Don’t they yearn to go off to the woods, to climb trees and get muddy?

No, they tell me. The woods are scary. Climbing trees is dangerous. Muddy clothes get you in trouble.

One wonders what they think of Just William, Swallows And Amazons or The Famous Five—fictional tales of strange children from another time, an age of adventures where parents apparently allowed their offspring to be out all day and didn’t worry about a bit of mud.

There is increasing concern that today’s “cotton-wool kids” are having their development hampered.

They are likely to be risk-averse, stifled by fears which are more phobic than real.

EDITED TO ADD (6/9): More commentary.

Posted on June 7, 2007 at 5:54 AM68 Comments

DHS Data Privacy and Integrity Advisory Committee's Report on REAL ID

The Data Privacy and Integrity Advisory Committee of the Department of Homeland Security has issued an excellent report on REAL ID:

The REAL ID Act is one of the largest identity management undertakings in history. It would bring more than 200 million people from a large, diverse, and mobile country within a uniformly defined identity system, jointly operated by state governments. This has never been done before in the USA, and it raises numerous policy, privacy, and data security issues that have had only brief scrutiny, particularly given the scope and scale of the undertaking.

It is critical that specific issues be carefully considered before developing and deploying a uniform identity management system in the 21st century. These include, but are not limited to, the implementation costs, the privacy consequences, the security of stored identity documents and personal information, redress and fairness, “mission creep”, and, perhaps most importantly, provisions for national security protections.

The Department of Homeland Security’s Notice of Proposed Rulemaking touched on some of these issues, though it did not explore them in the depth necessary for a system of such magnitude and such consequence. Given that these issues have not received adequate consideration, the Committee feels it is important that the following comments do not constitute an endorsement of REAL ID or the regulations as workable or appropriate.

I’ve written about REAL ID here.

Posted on June 6, 2007 at 2:55 PM15 Comments

Remote Metal Sensors Used to Detect Poachers

Interesting use of the technology, although I’m sure it has more value on the battlefield than in detecting poachers.

The system consists of a network of foot-long metal detectors similar to those used in airports. When moving metal objects such as a machete or a rifle trip the sensor, it sends a radio signal to a wireless Internet gateway camouflaged in the tree canopy as far as a kilometer away. This signal is transmitted via satellite to the Internet, where the incident is logged and messages revealing the poachers’ position and direction are sent instantly to park headquarters, where patrols can then be dispatched.

Posted on June 6, 2007 at 11:06 AM25 Comments

Department of Homeland Security Research Solicitation

Interesting document.

Lots of good stuff. The nine research areas:

  • Botnets and Other Malware: Detection and Mitigation
  • Composable and Scalable Secure Systems
  • Cyber Security Metrics
  • Network Data Visualization for Information Assurance
  • Internet Tomography/Topography
  • Routing Security Management Tool
  • Process Control System Security
  • Data Anonymization Tools and Techniques
  • Insider Threat Detection and Mitigation

And this implies they’ve accepted the problem:

Cyber attacks are increasing in frequency and impact. Even though these attacks have not yet had a significant impact on our Nation’s critical infrastructures, they have demonstrated that extensive vulnerabilities exist in information systems and networks, with the potential for serious damage. The effects of a successful cyber attack might include: serious consequences for major economic and industrial sectors, threats to infrastructure elements such as electric power, and disruption of the response and communications capabilities of first responders.

It’s good to see research money going to this stuff.

Posted on June 6, 2007 at 6:07 AM19 Comments

Terrorism Statistics

Interesting:

The majority of terrorist attacks result in no fatalities, with just 1 percent of such attacks causing the deaths of 25 or more people.

And terror incidents began rising in 1998, and that level remained relatively constant through 2004.

These and other myth-busting facts about global terrorism are now available on a new online database open to the public.

The database identifies more than 30,000 bombings, 13,400 assassinations and 3,200 kidnappings. Also, it details more than 1,200 terrorist attacks within the United States.

A lot of this depends on your definition of “terrorism,” but it’s interesting stuff.

The database was developed by the National Consortium for the Study of Terrorism and Responses to Terrorism (START) based at the University of Maryland, with funding from the U.S. Department of Homeland Security. It includes unclassified information about 80,000 terror incidents that occurred from 1970 through 2004.

The database is here:

The Global Terrorism Database (GTD) is an open-source database including information on terrorist events around the world since 1970 (currently updated through 2004). Unlike many other event databases, the GTD includes systematic data on international as well as domestic terrorist incidents that have occurred during this time period and now includes almost 80,000 cases. For each GTD incident, information is available on the date and location of the incident, the weapons used and nature of the target, the number of casualties, and—when identifiable—the identity of the perpetrator.

Posted on June 5, 2007 at 2:38 PM

Second Annual Movie-Plot Threat Contest Semi-Finalists

On April 1, I announced the Second Annual Movie-Plot Threat Contest:

Your goal: invent a terrorist plot to hijack or blow up an airplane with a commonly carried item as a key component. The component should be so critical to the plot that the TSA will have no choice but to ban the item once the plot is uncovered. I want to see a plot horrific and ridiculous, but just plausible enough to take seriously.

Make the TSA ban wristwatches. Or laptop computers. Or polyester. Or zippers over three inches long. You get the idea.

Your entry will be judged on the common item that the TSA has no choice but to ban, as well as the cleverness of the plot. It has to be realistic; no science fiction, please. And the write-up is critical; last year the best entries were the most entertaining to read.

Well, the submissions are in; the blog entry has 334 comments. I’ve read them all, and here are the semi-finalists:

Cast your vote; I’ll announce the winner on the 15th.

Posted on June 5, 2007 at 12:01 PM116 Comments

Third Party Consent and Computer Searches

U.S. courts are weighing in with opinions:

When Ray Andrus’ 91-year-old father gave federal agents permission to search his son’s password-protected computer files and they found child pornography, the case turned a spotlight on how appellate courts grapple with third-party consents to search computers.

[…]

The case was a first for the 10th U.S. Circuit Court of Appeals, and only two other circuits have touched on the issue, the 4th and 6th circuits. The 10th Circuit held that although password-protected computers command a high level of privacy, the legitimacy of a search turns on an officer’s belief that the third party had authority to consent.

The 10th Circuit’s recent 2-1 decision in U.S. v. Andrus, No. 06-3094 (April 25, 2007), recognized for the first time that a password-protected computer is like a locked suitcase or a padlocked footlocker in a bedroom. The digital locks raise the expectation of privacy by the owner. The majority nonetheless refused to suppress the evidence.

Excellent commentary from Jennifer Granick:

The Fourth Amendment generally prohibits warrantless searches of an individual’s home or possessions. There is an exception to the warrant requirement when someone consents to the search. Consent can be given by the person under investigation, or by a third party with control over or mutual access to the property being searched. Because the Fourth Amendment only prohibits “unreasonable searches and seizures,” permission given by a third party who lacks the authority to consent will nevertheless legitimize a warrantless search if the consenter has “apparent authority,” meaning that the police reasonably believed that the person had actual authority to control or use the property.

Under existing case law, only people with a key to a locked closet have apparent authority to consent to a search of that closet. Similarly, only people with the password to a locked computer have apparent authority to consent to a search of that device. In Andrus, the father did not have the password (or know how to use the computer) but the police say they did not have any reason to suspect this because they did not ask and did not turn the computer on. Then, they used forensic software that automatically bypassed any installed password.

The majority held that the police officers not only weren’t obliged to ask whether the father used the computer, they had no obligation to check for a password before performing their forensic search. In dissent, Judge Monroe G. McKay criticized the agents’ intentional blindness to the existence of password protection, when physical or digital locks are such a fundamental part of ascertaining whether a consenting person has actual or apparent authority to permit a police search. “(T)he unconstrained ability of law enforcement to use forensic software such as the EnCase program to bypass password protection without first determining whether such passwords have been enabled … dangerously sidestep(s) the Fourth Amendment.”

[…]

If courts are going to treat computers as containers, and if owners must lock containers in order to keep them private from warrantless searches, then police should be required to look for those locks. Password-protected computers and locked containers are an inexact analogy, but if that is how courts are going to do it, then it’s inappropriate to diminish protections for computers simply because law enforcement chooses to use software that turns a blind eye to owners’ passwords.

Posted on June 5, 2007 at 6:43 AM56 Comments

Information Leakage in the Slingbox

Interesting:

…despite the use of encryption, a passive eavesdropper can still learn private information about what someone is watching via their Slingbox Pro.

[…]

First, in order to conserve bandwidth, the Slingbox Pro uses something called variable bitrate (VBR) encoding. VBR is a standard approach for compressing streaming multimedia. At a very abstract level, the idea is to only transmit the differences between frames. This means that if a scene changes rapidly, the Slingbox Pro must still transmit a lot of data. But if the scene changes slowly, the Slingbox Pro will only have to transmit a small amount of data—a great bandwidth saver.

Now notice that different movies have different visual effects (e.g., some movies have frequent and rapid scene changes, others don’t). The use of VBR encodings therefore means that the amount of data transmitted over time can serve as a fingerprint for a movie. And, since encryption alone won’t fully conceal the number of bytes transmitted, this fingerprint can survive encryption!

We experimented with fingerprinting encrypted Slingbox Pro movie transmissions in our lab. We took 26 of our favorite movies (we tried to pick movies from the same director, or multiple movies in a series), and we played them over our Slingbox Pro. Sometimes we streamed them to a laptop attached to a wired network, and sometimes we streamed them to a laptop connected to an 802.11 wireless network. In all cases the laptop was one hop away.

We trained our system on some of those traces. We then took new query traces for these movies and tried to match them to our database. For over half of the movies, we were able to correctly identify the movie over 98% of the time. This is well above the less than 4% accuracy that one would get by random chance.

More details in the paper.

Posted on June 4, 2007 at 1:24 PM27 Comments

Cyberwar

I haven’t posted anything about the cyberwar between Russia and Estonia because, well, because I didn’t think there was anything new to say. We know that this kind of thing is possible. We don’t have any definitive proof that Russia was behind it. But it would be foolish to think that the world’s various militaries don’t have capabilities like this.

And anyway, I wrote about cyberwar back in January 2005.

But it seems that the essay never made it into the blog. So here it is again.


Cyberwar

The first problem with any discussion about cyberwar is definitional. I’ve been reading about cyberwar for years now, and there seem to be as many definitions of the term as there are people who write about the topic. Some people try to limit cyberwar to military actions taken during wartime, while others are so inclusive that they include the script kiddies who deface websites for fun.

I think the restrictive definition is more useful, and would like to define four different terms as follows:

Cyberwar—Warfare in cyberspace. This includes warfare attacks against a nation’s military—forcing critical communications channels to fail, for example—and attacks against the civilian population.

Cyberterrorism—The use of cyberspace to commit terrorist acts. An example might be hacking into a computer system to cause a nuclear power plant to melt down, a dam to open, or two airplanes to collide. In a previous Crypto-Gram essay, I discussed how realistic the cyberterrorism threat is.

Cybercrime—Crime in cyberspace. This includes much of what we’ve already experienced: theft of intellectual property, extortion based on the threat of DDOS attacks, fraud based on identity theft, and so on.

Cybervandalism—The script kiddies who deface websites for fun are technically criminals, but I think of them more as vandals or hooligans. They’re like the kids who spray paint buses: in it more for the thrill than anything else.

At first glance, there’s nothing new about these terms except the “cyber” prefix. War, terrorism, crime, even vandalism are old concepts. That’s correct: the only thing new is the domain; it’s the same old stuff occurring in a new arena. But because the arena of cyberspace is different from other arenas, there are differences worth considering.

One thing that hasn’t changed is that the terms overlap: although the goals are different, many of the tactics used by armies, terrorists, and criminals are the same. Just as all three groups use guns and bombs, all three groups can use cyberattacks. And just as every shooting is not necessarily an act of war, every successful Internet attack, no matter how deadly, is not necessarily an act of cyberwar. A cyberattack that shuts down the power grid might be part of a cyberwar campaign, but it also might be an act of cyberterrorism, cybercrime, or even—if it’s done by some fourteen-year-old who doesn’t really understand what he’s doing—cybervandalism. Which it is will depend on the motivations of the attacker and the circumstances surrounding the attack…just as in the real world.

For it to be cyberwar, it must first be war. And in the 21st century, war will inevitably include cyberwar. For just as war moved into the air with the development of kites and balloons and then aircraft, and war moved into space with the development of satellites and ballistic missiles, war will move into cyberspace with the development of specialized weapons, tactics, and defenses.

The Waging of Cyberwar

There should be no doubt that the smarter and better-funded militaries of the world are planning for cyberwar, both attack and defense. It would be foolish for a military to ignore the threat of a cyberattack and not invest in defensive capabilities, or to disregard the strategic or tactical possibility of launching an offensive cyberattack against an enemy during wartime. And while history has taught us that many militaries are indeed foolish and ignore the march of progress, cyberwar has been discussed too much in military circles to be ignored.

This implies that at least some of our world’s militaries have Internet attack tools that they’re saving in case of wartime. They could be denial-of-service tools. They could be exploits that would allow military intelligence to penetrate military systems. They could be viruses and worms similar to what we’re seeing now, but perhaps country- or network-specific. They could be Trojans that eavesdrop on networks, disrupt network operations, or allow an attacker to penetrate still other networks.

Script kiddies are attackers who run exploit code written by others, but don’t really understand the intricacies of what they’re doing. Conversely, professional attackers spend an enormous amount of time developing exploits: finding vulnerabilities, writing code to exploit them, figuring out how to cover their tracks. The real professionals don’t release their code to the script kiddies; the stuff is much more valuable if it remains secret until it is needed. I believe that militaries have collections of vulnerabilities in common operating systems, generic applications, or even custom military software that their potential enemies are using, and code to exploit those vulnerabilities. I believe that these militaries are keeping these vulnerabilities secret, and that they are saving them in case of wartime or other hostilities. It would be irresponsible for them not to.

The most obvious cyberattack is the disabling of large parts of the Internet, at least for a while. Certainly some militaries have the capability to do this, but in the absence of global war I doubt that they would do so; the Internet is far too useful an asset and far too large a part of the world economy. More interesting is whether they would try to disable national pieces of it. If Country A went to war with Country B, would Country A want to disable Country B’s portion of the Internet, or remove connections between Country B’s Internet and the rest of the world? Depending on the country, a low-tech solution might be the easiest: disable whatever undersea cables they’re using as access. Could Country A’s military turn its own Internet into a domestic-only network if they wanted?

For a more surgical approach, we can also imagine cyberattacks designed to destroy particular organizations’ networks; e.g., the denial-of-service attack against the Al Jazeera website during the recent Iraqi war, allegedly by pro-American hackers but possibly by the government. We can imagine a cyberattack against the computer networks at a nation’s military headquarters, or the computer networks that handle logistical information.

One important thing to remember is that destruction is the last thing a military wants to do with a communications network. A military only wants to shut an enemy’s network down if they aren’t getting useful information from it. The best thing to do is to infiltrate the enemy’s computers and networks, spy on them, and surreptitiously disrupt select pieces of their communications when appropriate. The next best thing is to passively eavesdrop. After that, the next best is to perform traffic analysis: analyze who is talking to whom and the characteristics of that communication. Only if a military can’t do any of that do they consider shutting the thing down. Or if, as sometimes but rarely happens, the benefits of completely denying the enemy the communications channel outweigh all of the advantages.

Properties of Cyberwar

Because attackers and defenders use the same network hardware and software, there is a fundamental tension between cyberattack and cyberdefense. The National Security Agency has referred to this as the “equities issue,” and it can be summarized as follows. When a military discovers a vulnerability in a common product, they can either alert the manufacturer and fix the vulnerability, or not tell anyone. It’s not an easy decision. Fixing the vulnerability gives both the good guys and the bad guys a more secure system. Keeping the vulnerability secret means that the good guys can exploit the vulnerability to attack the bad guys, but it also means that the good guys are vulnerable. As long as everyone uses the same microprocessors, operating systems, network protocols, applications software, etc., the equities issue will always be a consideration when planning cyberwar.

Cyberwar can take on aspects of espionage, and does not necessarily involve open warfare. (In military talk, cyberwar is not necessarily “hot.”) Since much of cyberwar will be about seizing control of a network and eavesdropping on it, there may not be any obvious damage from cyberwar operations. This means that the same tactics might be used in peacetime by national intelligence agencies. There’s considerable risk here. Just as U.S. U-2 flights over the Soviet Union could have been viewed as an act of war, the deliberate penetration of a country’s computer networks might be as well.

Cyberattacks target infrastructure. In this way they are no different than conventional military attacks against other networks: power, transportation, communications, etc. All of these networks are used by both civilians and the military during wartime, and attacks against them inconvenience both groups of people. For example, when the Allies bombed German railroad bridges during World War II, that affected both civilian and military transport. And when the United States bombed Iraqi communications links in both the First and Second Iraqi Wars, that affected both civilian and military communications. Cyberattacks, even attacks targeted as precisely as today’s smart bombs, are likely to have collateral effects.

Cyberattacks can be used to wage information war. Information war is another topic that’s received considerable media attention of late, although it is not new. Dropping leaflets on enemy soldiers to persuade them to surrender is information war. Broadcasting radio programs to enemy troops is information war. As people get more and more of their information over cyberspace, cyberspace will increasingly become a theater for information war. It’s not hard to imagine cyberattacks designed to co-opt the enemy’s communications channels and use them as a vehicle for information war.

Because cyberwar targets information infrastructure, the waging of it can be more damaging to countries that have significant computer-network infrastructure. The idea is that a technologically poor country might decide that a cyberattack that affects the entire world would disproportionately affect its enemies, because rich nations rely on the Internet much more than poor ones. In some ways this is the dark side of the digital divide, and one of the reasons countries like the United States are so worried about cyberdefense.

Cyberwar is asymmetric, and can be a guerrilla attack. Unlike conventional military offensives involving divisions of men and supplies, cyberattacks are carried out by a few trained operatives. In this way, cyberattacks can be part of a guerrilla warfare campaign.

Cyberattacks also make effective surprise attacks. For years we’ve heard dire warnings of an “electronic Pearl Harbor.” These are largely hyperbole today. I discuss this more in that previous Crypto-Gram essay on cyberterrorism, but right now the infrastructure just isn’t sufficiently vulnerable in that way.

Cyberattacks do not necessarily have an obvious origin. Unlike in other forms of warfare, misdirection is likely to be a feature of a cyberattack. It’s possible to have damage being done but not know where it’s coming from. This is a significant difference; there’s something terrifying about not knowing your opponent—or knowing it, and then being wrong. Imagine if, after Pearl Harbor, we had not known who attacked us.

Cyberwar is a moving target. In the previous paragraph, I said that today the risks of an electronic Pearl Harbor are unfounded. That’s true; but this, like all other aspects of cyberspace, is continually changing. Technological improvements affect everyone, including cyberattack mechanisms. And the Internet is becoming critical to more of our infrastructure, making cyberattacks more attractive. There will be a time in the future, perhaps not too far into the future, when a surprise cyberattack becomes a realistic threat.

And finally, cyberwar is a multifaceted concept. It’s part of a larger military campaign, and attacks are likely to have both real-world and cyber components. A military might target the enemy’s communications infrastructure through both physical attack—bombings of selected communications facilities and transmission cables—and virtual attack. An information warfare campaign might include dropping of leaflets, usurpation of a television channel, and mass sending of e-mail. And many cyberattacks still have easier non-cyber equivalents: A country wanting to isolate another country’s Internet might find a low-tech solution, involving the acquiescence of backbone companies like Cable & Wireless, easier than a targeted worm or virus. Cyberwar doesn’t replace war; it’s just another arena in which the larger war is fought.

People overplay the risks of cyberwar and cyberterrorism. It’s sexy, and it gets media attention. And at the same time, people underplay the risks of cybercrime. Today crime is big business on the Internet, and it’s getting bigger all the time. But luckily, the defenses are the same. The countermeasures aimed at preventing both cyberwar and cyberterrorist attacks will also defend against cybercrime and cybervandalism. So even if organizations secure their networks for the wrong reasons, they’ll do the right thing.

Here’s my previous essay on cyberterrorism.

Posted on June 4, 2007 at 6:13 AM17 Comments

Friday Squid Blogging: Humboldt Squid Returns to Southern California

They’re back:

For the third time in ten years, massive amounts of Humboldt squid have been flourishing in the waters of Southern California.

“There is more population of Humboldt squid than is naturally proper,” Cassell said.

In Newport Beach, fishermen climbed aboard the Freelance, eager for their chance to land a jumbo squid.

Captain Damon Davis knows exactly where the squid are running.

“What’s unique about the last couple times they’ve been around is they’ve been huge, they’ve been big, they’ve been 20 up to almost 40 pounds, where in years past they’ve been five to 10 pounds,” Davis said.

[…]

But Cassell says the declining number of sharks is probably what’s behind the squid invasion.

“Their population has increased a little bit because we’ve wiped out their predators,” Cassell said.

Posted on June 1, 2007 at 5:33 PM9 Comments
