May 2009 Archives

Obama's Cybersecurity Speech

I am optimistic about President Obama's new cybersecurity policy and the appointment of a new "cybersecurity coordinator," though much depends on the details. What we do know is that the threats are real, from identity theft to Chinese hacking to cyberwar.

His principles were all welcome -- securing government networks, coordinating responses, working to secure the infrastructure in private hands (the power grid, the communications networks, and so on), although I think he's overly optimistic that legislation won't be required. I was especially heartened to hear his commitment to funding research. Much of the technology we currently use to secure cyberspace was developed from university research, and the more of it we finance today the more secure we'll be in a decade.

Education is also vital, although sometimes I think my parents need more cybersecurity education than my grandchildren do. I also appreciate the president's commitment to transparency and privacy, both of which are vital for security.

But the details matter. Centralizing security responsibilities has the downside of making security more brittle by instituting a single approach and a uniformity of thinking. Unless the new coordinator distributes responsibility, cybersecurity won't improve.

As the administration moves forward on the plan, two principles should apply. One, security decisions need to be made as close to the problem as possible. Protecting networks should be done by people who understand those networks, and threats need to be assessed by people close to the threats. But distributed responsibility has more risk, so oversight is vital.

Two, security coordination needs to happen at the highest level possible, whether that's evaluating information about different threats, responding to an Internet worm or establishing guidelines for protecting personal information. The whole picture is larger than any single agency.


This essay originally appeared on The New York Times website, along with several others commenting on Obama's speech. All the essays are worth reading, although I want to specifically quote James Bamford making an important point I've repeatedly made:

The history of White House czars is not a glorious one as anyone who has followed the rise and fall of the drug czars can tell. There is a lot of hype, a White House speech, and then things go back to normal. Power, the ability to cause change, depends primarily on who controls the money and who is closest to the president's ear.

Because the new cyber czar will have neither a checkbook nor direct access to President Obama, the role will be more analogous to a traffic cop than a czar.

Gus Hosein wrote a good essay on the need for privacy:

Of course raising barriers around computer systems is certainly a good start. But when these systems are breached, our personal information is left vulnerable. Yet governments and companies are collecting more and more of our information.

The presumption should be that all data collected is vulnerable to abuse or theft. We should therefore collect only what is absolutely required.

As I said, they're all worth reading. And here are some more links.

I wrote something similar in 2002 about the creation of the Department of Homeland Security:

The human body defends itself through overlapping security systems. It has a complex immune system specifically to fight disease, but disease fighting is also distributed throughout every organ and every cell. The body has all sorts of security systems, ranging from your skin to keep harmful things out of your body, to your liver filtering harmful things from your bloodstream, to the defenses in your digestive system. These systems all do their own thing in their own way. They overlap each other, and to a certain extent one can compensate when another fails. It might seem redundant and inefficient, but it's more robust, reliable, and secure. You're alive and reading this because of it.

EDITED TO ADD (6/2): Gene Spafford's opinion.

EDITED TO ADD (6/4): Good commentary from Bob Blakley.

Posted on May 29, 2009 at 3:01 PM

No Smiling in Driver's License Photographs

In other biometric news, four states have banned smiling in driver's license photographs.

The serious poses are urged by DMVs that have installed high-tech software that compares a new license photo with others that have already been shot. When a new photo seems to match an existing one, the software sends alarms that someone may be trying to assume another driver's identity.

But there's a wrinkle in the technology: a person's grin. Face-recognition software can fail to match two photos of the same person if facial expressions differ in each photo, says Carnegie Mellon University robotics professor Takeo Kanade.

Posted on May 29, 2009 at 11:19 AM

News from the Fingerprint Biometrics World

Wacky:

A Singapore cancer patient was held for four hours by immigration officials in the United States when they could not detect his fingerprints -- which had apparently disappeared because of a drug he was taking.

[...]

The drug, capecitabine, is commonly used to treat cancers in the head and neck, breast, stomach and colorectum.

One side-effect is chronic inflammation of the palms or soles of the feet and the skin can peel, bleed and develop ulcers or blisters -- or what is known as hand-foot syndrome.

"This can give rise to eradication of fingerprints with time," explained Tan, senior consultant in the medical oncology department at Singapore's National Cancer Center.

Posted on May 29, 2009 at 6:37 AM

Faking Background Checks for Security Clearances

What do you do if you have too many background checks to do, and not enough time to do them? You fake them, of course:

Eight current and former security clearance investigators say they have been pressured to work faster and take on crushing workloads in recent years, as the government tried to eliminate a backlog that once topped 531,000 cases.

Investigators have eliminated that backlog, but they now are trying to meet congressionally mandated deadlines to speed up the security clearance process. The 2004 Intelligence Reform and Terrorism Prevention Act requires agencies to issue at least 80 percent of initial security clearances within 120 days after receiving a completed application. This December, agencies must issue at least 90 percent of their initial security clearances within 60 days.

"This job is a shredder, and agents are grist for the mill," said K.C. Smith, an OPM investigator in Austin, Texas, with 23 years of experience. "There are people who are getting sick, under a lot of stress, their family life is suffering. They are just beat down."

Investigators say it is common practice to spend nights, weekends and holidays writing up reports, and some don't report the overtime they work for fear it will be held against them in their performance evaluations.

Some say their superiors have made it clear that the priority is to close cases, and they say they have felt pressure to turn in even incomplete cases that lack crucial interviews or records if it will help them keep their numbers up. A recent Government Accountability Office report found that the Defense Department's security clearance process is plagued by such incomplete cases: 87 percent of the 3,500 initial top-secret security clearance cases Defense approved last year were missing at least one interview or important record.

It's all a matter of incentives. The investigators were rewarded for completing investigations, not for doing them well.

Posted on May 28, 2009 at 2:40 PM

Steganography Using TCP Retransmission

Research:

Hiding Information in Retransmissions

Wojciech Mazurczyk, Milosz Smolarczyk, Krzysztof Szczypiorski

The paper presents a new steganographic method called RSTEG (Retransmission Steganography), which is intended for a broad class of protocols that utilises retransmission mechanisms. The main innovation of RSTEG is to not acknowledge a successfully received packet in order to intentionally invoke retransmission. The retransmitted packet carries a steganogram instead of user data in the payload field. RSTEG is presented in the broad context of network steganography, and the utilisation of RSTEG for TCP (Transmission Control Protocol) retransmission mechanisms is described in detail. Simulation results are also presented with the main aim to measure and compare the steganographic bandwidth of the proposed method for different TCP retransmission mechanisms as well as to determine the influence of RSTEG on the network retransmissions level.

I don't think these sorts of things have any large-scale applications, but they are clever.
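The trick is simple enough to sketch. Here's a toy simulation, in Python, of what a passive observer would see on the wire; it illustrates the idea only, and the names and payloads are mine, not the paper's:

```python
# Toy illustration of the RSTEG idea -- not the authors' implementation.
# A colluding receiver deliberately withholds the ACK for one segment;
# the sender's "retransmission" of that segment carries covert data.

def transmit(segments, steganogram, trigger_seq):
    """Yield the packets a passive observer would see on the wire."""
    for seq, payload in enumerate(segments):
        yield seq, payload, False          # original transmission
        if seq == trigger_seq:
            # The ACK for this segment never arrives (on purpose), so
            # the segment is retransmitted -- with the steganogram inside.
            yield seq, steganogram, True

segments = [b"GET /index.html", b"Host: example.com", b"Accept: */*"]
for seq, payload, is_retx in transmit(segments, b"meet at midnight", trigger_seq=1):
    label = "retransmit" if is_retx else "original"
    print(f"seq={seq}  {label:10}  {payload!r}")
```

Note that a warden who compares a retransmitted segment's payload against the original will spot the substitution immediately, which is presumably why the paper spends its time measuring how RSTEG affects the overall retransmission rate.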

Posted on May 28, 2009 at 6:40 AM

Automatic Dice Thrower

Impressive:

The Dice-O-Matic is 7 feet tall, 18 inches wide and 18 inches deep. It has an aluminum frame covered with Plexiglas panels. A 6x4 inch square Plexiglas tube runs vertically up the middle almost the entire height. Inside this tube a bucket elevator carries dice from a hopper at the bottom, past a camera, and tosses them onto a ramp at the top. The ramp spirals down between the tube and the outer walls. The camera and synchronizing disk are near the top, the computer, relay board, elevator motor and power supplies are at the bottom.

Click on the link and watch the short video.

As someone who has designed random number generators professionally, I find this to be an overly complex hardware solution to a relatively straightforward software problem. But the sheer beauty of the machine cannot be denied.

What I am curious about is what kind of statistical anomalies there are in the dice themselves. At 1,330,000 rolls a day, we can certainly learn something about the randomness of commercial dice.
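At that volume, even a textbook goodness-of-fit test has plenty of power. Here's a minimal sketch of the kind of analysis I mean, using simulated rolls in place of the machine's real output (the 1,330,000 figure is from the quote above; everything else is invented):

```python
# Chi-square test of a d6 for fairness, standard library only.
import random
from collections import Counter

def chi_square_uniform(rolls):
    """Chi-square statistic against the hypothesis that the die is fair."""
    counts = Counter(rolls)
    expected = len(rolls) / 6
    return sum((counts.get(face, 0) - expected) ** 2 / expected
               for face in range(1, 7))

rolls = [random.randint(1, 6) for _ in range(1_330_000)]  # one day's output
stat = chi_square_uniform(rolls)

# With 5 degrees of freedom, a fair die stays below about 11.07
# in 95% of runs.
print(f"chi-square = {stat:.2f}")
```

Biases far too small to notice in casual play would show up after a few days of Dice-O-Matic output.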

Posted on May 27, 2009 at 6:44 AM

Defending Against Movie-Plot Threats with Movie Characters

Excellent:

Seeking to quell fears of terrorists somehow breaking out of America's top-security prisons and wreaking havoc on the defenseless heartland, President Barack Obama moved quickly to announce an Anti-Terrorist Strike Force headed by veteran counterterrorism agent Jack Bauer and mutant superhero Wolverine. Already dubbed a "dream team," their appointment is seen by experts as a crucial step in reducing the mounting incidents of national conservatives and congressional Democrats crapping their pants.

"I believe a fictional threat is best met with decisive fictional force," explained President Obama. "Jack Bauer and Wolverine are among the very best we have when in comes to combating fantasy foes." Mr. Bauer said, "We're quite certain that our prisons are secure. Osama bin Laden and his agents wouldn't dare attempt a break-out, and would fail miserably if they tried. But I love this country. And should Lex Luthor, Magneto or the Loch Ness Monster attack, we'll be there to stop them."

Posted on May 26, 2009 at 6:09 AM

Secret Questions

In 2004, I wrote about the prevalence of secret questions as backup passwords. The problem is that the answers to these "secret questions" are often much easier to guess than random passwords. Mother's maiden name isn't very secret. Name of first pet, name of favorite teacher: there are some common names. Favorite color: I could probably guess that in no more than five attempts.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

Here's some actual research on the issue:

It's no secret: Measuring the security and reliability of authentication via 'secret' questions

Abstract:

All four of the most popular webmail providers -- AOL, Google, Microsoft, and Yahoo! -- rely on personal questions as the secondary authentication secrets used to reset account passwords. The security of these questions has received limited formal scrutiny, almost all of which predates webmail. We ran a user study to measure the reliability and security of the questions used by all four webmail providers. We asked participants to answer these questions and then asked their acquaintances to guess their answers. Acquaintances with whom participants reported being unwilling to share their webmail passwords were able to guess 17% of their answers. Participants forgot 20% of their own answers within six months. What's more, 13% of answers could be guessed within five attempts by guessing the most popular answers of other participants, though this weakness is partially attributable to the geographic homogeneity of our participant pool.
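That last attack, guessing the most popular answers, requires knowing nothing about the victim at all. Here's a sketch of it; the answer distribution below is invented, but favorite-color answers really are about this concentrated:

```python
# Statistical guessing attack on secret questions: spend the guess
# budget on the most popular answers. The answer distribution is invented.
from collections import Counter

population_answers = (["blue"] * 30 + ["green"] * 20 + ["red"] * 15 +
                      ["black"] * 10 + ["purple"] * 8 + ["teal"] * 2)

def top_guesses(answers, budget=5):
    """The attacker's best guesses: the most common answers."""
    return [ans for ans, _ in Counter(answers).most_common(budget)]

def fraction_guessable(answers, budget=5):
    """Fraction of accounts that fall to those guesses."""
    guesses = set(top_guesses(answers, budget))
    return sum(a in guesses for a in answers) / len(answers)

print(top_guesses(population_answers))
print(f"{fraction_guessable(population_answers):.0%} of accounts fall within five guesses")
```

The study's 13% figure is lower than this toy number because real answer distributions are flatter than my invented one, but the attack is exactly the same.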

Posted on May 25, 2009 at 9:56 AM

Friday Squid Blogging: How to Capture a Giant Squid

Three methods:

Method 2: Offer Squid a Tasty Treat

If your preferred squid looks hungry, try luring it with a delicious oil tanker. During the course of the 1930s, the Norwegian tanker Brunswick was attacked not once, not twice, but three times by giant squid. Metal boats don't sound especially appetizing, but scientists think squid mistake the large, gray objects for whales—a decidedly yummy entree giant squid have been known to dine upon. Unfortunately, it's more difficult to get a good grip on the steel hull of a tanker, than on the pliable hide of a whale. Whenever a squid tried to put the Brunswick in a choke hold, its tentacles would slip, and the squid would end up making a fatal slide into the ship's propellers.

Posted on May 22, 2009 at 4:00 PM

Schneier and Ranum on Face-Off Video

Marcus Ranum and I did two video versions of our Face-Off column: one on cloud computing, and the other on who should be in charge of cyber-security.

Posted on May 22, 2009 at 2:33 PM

The Doghouse: Net1

They have technology:

The FTS Patent has been acclaimed by leading cryptographic authorities around the world as the most innovative and secure protocol ever invented to manage offline and online smart card related transactions. Please see the independent report by Bruce Schneider [sic] in his book entitled Applied Cryptography, 2nd Edition published in the late 1990s.

I have no idea what this is referring to.

EDITED TO ADD (5/20): Someone, probably from the company, said in comments that this is referring to the UEPS protocol, discussed on page 589. I still don't like the hyperbole and the implied endorsement in the quote.

Posted on May 22, 2009 at 11:29 AM

This Week's Terrorism Arrests

Four points. One: There was little danger of an actual terrorist attack:

Authorities said the four men have long been under investigation and there was little danger they could actually have carried out their plan, NBC News' Pete Williams reported.

[...]

In their efforts to acquire weapons, the defendants dealt with an informant acting under law enforcement supervision, authorities said. The FBI and other agencies monitored the men and provided an inactive missile and inert C-4 to the informant for the defendants, a federal complaint said.

The investigation had been under way for about a year.

"They never got anywhere close to being able to do anything," one official told NBC News. "Still, it's good to have guys like this off the street."

Of course, politicians are using this incident to peddle more fear:

"This was a very serious threat that could have cost many, many lives if it had gone through," Representative Peter T. King, Republican from Long Island, said in an interview with WPIX-TV. "It would have been a horrible, damaging tragedy. There's a real threat from homegrown terrorists and also from jailhouse converts."

Two, they were caught by traditional investigation and intelligence. Not airport security. Not warrantless eavesdropping. But old-fashioned investigation and intelligence. This is what works. This is what keeps us safe. Here's an essay I wrote in 2004 that says exactly that.

The only effective way to deal with terrorists is through old-fashioned police and intelligence work -- discovering plans before they're implemented and then going after the plotters themselves.

Three, they were idiots:

The ringleader of the four-man homegrown terror cell accused of plotting to blow up synagogues in the Bronx and military planes in Newburgh admitted to a judge today that he had smoked pot before his bust last night.

When U.S. Magistrate Judge Lisa M. Smith asked James Cromitie if his judgment was impaired during his appearance in federal court in White Plains, the 55-year-old confessed: "No. I smoke it regularly. I understand everything you are saying."

Four, an "informant" helped this group a lot:

In April, Mr. Cromitie and the three other men selected the synagogues as their targets, the statement said. The informant soon helped them get the weapons, which were incapable of being fired or detonated, according to the authorities.

The warning I wrote in "Portrait of the Modern Terrorist as an Idiot" is timely again:

Despite the initial press frenzies, the actual details of the cases frequently turn out to be far less damning. Too often it's unclear whether the defendants are actually guilty, or if the police created a crime where none existed before.

The JFK Airport plotters seem to have been egged on by an informant, a twice-convicted drug dealer. An FBI informant almost certainly pushed the Fort Dix plotters to do things they wouldn't have ordinarily done. The Miami gang's Sears Tower plot was suggested by an FBI undercover agent who infiltrated the group. And in 2003, it took an elaborate sting operation involving three countries to arrest an arms dealer for selling a surface-to-air missile to an ostensible Muslim extremist. Entrapment is a very real possibility in all of these cases.

Actually, that whole 2007 essay is timely again. Some things never change.

Posted on May 22, 2009 at 6:11 AM

IEDs Are Now Weapons of Mass Destruction

In an article on the recent arrests in New York:

On Wednesday night, they planted one of the mock improvised explosive devices in a trunk of a car outside the temple and two mock bombs in the back seat of a car outside the Jewish center, the authorities said. Shortly thereafter, police officers swooped in and broke the windows on the suspects' black sport utility vehicle and charged them with conspiracy to use weapons of mass destruction within the United States and conspiracy to acquire and use antiaircraft missiles.

I've covered this before. According to the law, almost any weapon is a weapon of mass destruction.

From the complaint:

... knowingly did combine, conspire, confederate and agree together and with each other to use a weapon of mass destruction, to wit, a surface-to-air guided missile system and an improvised explosive device ("IED") containing over 30 pounds of Composition 4 ("C-4") military grade plastic explosive material against persons and property within the United States.

Posted on May 21, 2009 at 3:54 PM

On the Anonymity of Home/Work Location Pairs

Interesting:

Philippe Golle and Kurt Partridge of PARC have a cute paper on the anonymity of geo-location data. They analyze data from the U.S. Census and show that for the average person, knowing their approximate home and work locations -- to a block level -- identifies them uniquely.

Even if we look at the much coarser granularity of a census tract -- tracts correspond roughly to ZIP codes; there are on average 1,500 people per census tract -- for the average person, there are only around 20 other people who share the same home and work location. There's more: 5% of people are uniquely identified by their home and work locations even if it is known only at the census tract level. One reason for this is that people who live and work in very different areas (say, different counties) are much more easily identifiable, as one might expect.

"On the Anonymity of Home/Work Location Pairs," by Philippe Golle and Kurt Partridge:

Abstract:

Many applications benefit from user location data, but location data raises privacy concerns. Anonymization can protect privacy, but identities can sometimes be inferred from supposedly anonymous data. This paper studies a new attack on the anonymity of location data. We show that if the approximate locations of an individual's home and workplace can both be deduced from a location trace, then the median size of the individual's anonymity set in the U.S. working population is 1, 21 and 34,980, for locations known at the granularity of a census block, census track and county respectively. The location data of people who live and work in different regions can be re-identified even more easily. Our results show that the threat of re-identification for location data is much greater when the individual's home and work locations can both be deduced from the data. To preserve anonymity, we offer guidance for obfuscating location traces before they are disclosed.

This is all very troubling, given the number of location-based services springing up and the number of databases that are collecting location data.
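The measurement itself is simple to reproduce on any dataset of home/work pairs: group people by the pair and count. Here's a sketch with an invented five-person population (the paper, of course, used actual census data):

```python
# Anonymity set sizes for (home, work) location pairs. The population
# below is invented; each person's anonymity set is everyone who shares
# both their home tract and their work tract.
from collections import Counter
from statistics import median

population = [
    ("tract_12", "tract_40"), ("tract_12", "tract_40"),
    ("tract_12", "tract_41"), ("tract_07", "tract_40"),
    ("tract_07", "tract_99"),   # lives and works far apart: unique
]

pair_counts = Counter(population)
anonymity_sets = [pair_counts[pair] for pair in population]

print("median anonymity set size:", median(anonymity_sets))
print(sum(1 for s in anonymity_sets if s == 1), "of", len(population),
      "people are uniquely identified by the pair alone")
```

Coarsening the granularity means bigger groups -- the paper's medians go from 1 at the block level to about 35,000 at the county level -- but people who live and work in different regions stay easy to re-identify.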

Posted on May 21, 2009 at 6:15 AM

Me on Full-Body Scanners in Airports

I'm very happy with this quote in a CNN.com story on "whole-body imaging" at airports:

Bruce Schneier, an internationally recognized security technologist, said whole-body imaging technology "works pretty well," privacy rights aside. But he thinks the financial investment was a mistake. In a post-9/11 world, he said, he knows his position isn't "politically tenable," but he believes money would be better spent on intelligence-gathering and investigations.

"It's stupid to spend money so terrorists can change plans," he said by phone from Poland, where he was speaking at a conference. If terrorists are swayed from going through airports, they'll just target other locations, such as a hotel in Mumbai, India, he said.

"We'd be much better off going after bad guys ... and back to pre-9/11 levels of airport security," he said. "There's a huge 'cover your ass' factor in politics, but unfortunately, it doesn't make us safer."

I've written about "cover your ass" security in the past, but it's nice to see it in the press.

Posted on May 20, 2009 at 2:34 PM

Microsoft Bans Memcpy()

This seems smart:

Microsoft plans to formally banish the popular programming function that's been responsible for an untold number of security vulnerabilities over the years, not just in Windows but in countless other applications based on the C language. Effective later this year, Microsoft will add memcpy(), CopyMemory(), and RtlCopyMemory() to its list of function calls banned under its secure development lifecycle.

Here's the list of banned function calls. This doesn't help secure legacy code, of course, but you have to start somewhere.

Posted on May 20, 2009 at 6:17 AM

"Lost" Puzzle in Wired Magazine

For the April 09 issue of Wired Magazine, I was asked to create a cryptographic puzzle based on the television show Lost. Specifically, I was given a "clue" to encrypt.

Here are details of the puzzle and solving attempts. Near as I can tell, no one has published a solution.

Creating something like this is very hard. The puzzle needs to be hard enough so that people don't figure it out immediately, and easy enough so that people eventually figure it out. To make matters even more complicated, people will share their ideas on the Internet. So if the solution requires -- and I'm making this up -- expertise in Mayan history, carburetor design, algebraic topology, and Russian folk dancing, those people are likely to come together on the Internet. The puzzle has to be challenging for the group mind, not just for individual minds.

Do I need to give people a hint?

EDITED TO ADD (5/20): No hints required; there's a solution posted.

Posted on May 19, 2009 at 1:06 PM

Invisible Ink Pen

This is cool. It writes like a normal pen, but if you run a hair dryer over the written words they disappear. And if you put the paper in the freezer the words reappear. Fantastic.

EDITED TO ADD (5/20): This is the same technology as the widely available Pilot Frixion pen. Here's a temperature sensitivity test, and a freezer test.

Posted on May 19, 2009 at 6:49 AM

Pirate Terrorists in Chesapeake Bay

This is a great movie-plot threat:

Pirates could soon find their way to the waters of the Chesapeake Bay. That's assuming that a liquefied natural gas terminal gets built at Sparrows Point.

The folks over at the LNG Opposition Team have long said that building an LNG plant on the shores of the bay would surely invite terrorists to attack. They say a recent increase in piracy off the Somali coast is fodder for their argument.

Remember—if you don't like something, claim that it will enable, embolden, or entice terrorists. Works every time.

Posted on May 18, 2009 at 1:38 PM

Kylin: New Chinese Operating System

Interesting:

China has developed more secure operating software for its tens of millions of computers and is already installing it on government and military systems, hoping to make Beijing's networks impenetrable to U.S. military and intelligence agencies.

The secure operating system, known as Kylin, was disclosed to Congress during recent hearings that provided new details on how China's government is preparing to wage cyberwarfare with the United States.

"We are in the early stages of a cyber arms race and need to respond accordingly," said Kevin G. Coleman, a private security specialist who advises the government on cybersecurity. He discussed Kylin during a hearing of the U.S. China Economic and Security Review Commission on April 30.

The deployment of Kylin is significant, Mr. Coleman said, because the system has "hardened" key Chinese servers. U.S. offensive cyberwar capabilities have been focused on getting into Chinese government and military computers outfitted with less secure operating systems like those made by Microsoft Corp.

"This action also made our offensive cybercapabilities ineffective against them, given the cyberweapons were designed to be used against Linux, UNIX and Windows," he said.

Is this real, or yet more cybersecurity hype pushed by agencies looking for funding and power? My guess is the latter. Anyone know?

Posted on May 18, 2009 at 6:06 AM

No Warrant Required for GPS Tracking

At least, according to a Wisconsin appeals court ruling:

As the law currently stands, the court said police can mount GPS on cars to track people without violating their constitutional rights -- even if the drivers aren't suspects.

Officers do not need to get warrants beforehand because GPS tracking does not involve a search or a seizure, Judge Paul Lundsten wrote for the unanimous three-judge panel based in Madison.

That means "police are seemingly free to secretly track anyone's public movements with a GPS device," he wrote.

The court wants the legislature to fix it:

However, the District 4 Court of Appeals said it was "more than a little troubled" by that conclusion and asked Wisconsin lawmakers to regulate GPS use to protect against abuse by police and private individuals.

I think the odds of that happening are approximately zero.

Posted on May 15, 2009 at 6:30 AM

Detecting Liars by Content

Interesting:

Kevin Colwell, a psychologist at Southern Connecticut State University, has advised police departments, Pentagon officials and child protection workers, who need to check the veracity of conflicting accounts from parents and children. He says that people concocting a story prepare a script that is tight and lacking in detail.

"It's like when your mom busted you as a kid, and you made really obvious mistakes," Dr. Colwell said. "Well, now you're working to avoid those."

By contrast, people telling the truth have no script, and tend to recall more extraneous details and may even make mistakes. They are sloppier.

[...]

In several studies, Dr. Colwell and Dr. Hiscock-Anisman have reported one consistent difference: People telling the truth tend to add 20 to 30 percent more external detail than do those who are lying. "This is how memory works, by association," Dr. Hiscock-Anisman said. "If you're telling the truth, this mental reinstatement of contexts triggers more and more external details."

Not so if you've got a concocted story and you're sticking to it. "It's the difference between a tree in full flower in the summer and a barren stick in winter," said Dr. Charles Morgan, a psychiatrist at the National Center for Post-Traumatic Stress Disorder, who has tested it for trauma claims and among special-operations soldiers.

This is new research, and there are limitations to the approach, but it's interesting.

Posted on May 14, 2009 at 1:30 PM

Attacking the Food Supply

Terrorists attacking our food supply is a nightmare scenario that has been given new life during the recent swine flu outbreak. Although it seems easy to do, understanding why it hasn't happened is important. G.R. Dalziel, at the Nanyang Technological University in Singapore, has written a report chronicling every confirmed case of malicious food contamination in the world since 1950: 365 cases in all, plus 126 additional unconfirmed cases. What he found demonstrates the reality of terrorist food attacks.

It turns out 72% of the food poisonings occurred at the end of the food supply chain -- at home -- typically by a friend, relative, neighbour, or co-worker trying to kill or injure a specific person. A characteristic example is Heather Mook of York, who in 2007 tried to kill her husband by putting rat poison in his spaghetti.

Most of these cases resulted in fewer than five casualties -- Mook only injured her husband in this incident -- although 16% resulted in five or more. Of the 19 cases that claimed 10 or more lives, four involved serial killers operating over several years.

Another 23% of cases occurred at the retail or food service level. A 1998 incident in Japan, where someone put arsenic in a curry sold at a summer festival, killing four and hospitalising 63, is a typical example. Only 11% of these incidents resulted in 100 or more casualties, while 44% resulted in none.

There are very few incidents of people contaminating the actual food supply. People deliberately contaminated a water supply seven times, resulting in three deaths. There is only one example of someone deliberately contaminating a crop before harvest -- in Australia in 2006 -- and the crops were recalled before they could be sold. And in the three cases of someone deliberately contaminating food during packaging and distribution, including a 2005 case in the UK where glass and needles were baked into loaves of bread, no one died or was injured.

This isn't the stuff of bioterrorism. The closest example occurred in 1984 in the US, where members of a religious group known as the Rajneeshees contaminated several restaurant salad bars with salmonella enterica typhimurium, sickening 751, hospitalising 45, but killing no one. In fact, no one knew this was malicious until a year later, when one of the perpetrators admitted it.

Almost all of the food contaminations used conventional poisons such as cyanide, drain cleaner, mercury, or weed killer. There were nine incidents of biological agents, including salmonella, ricin, and faecal matter, and eight cases of radiological matter. The 2006 London poisoning of the former KGB agent Alexander Litvinenko with polonium-210 in his tea is an example of the latter.

And that assassination illustrates the real risk of malicious food poisonings. What is discussed in terrorist training manuals, and what the CIA is worried about, is the use of contaminated food in targeted assassinations. The quantities involved for mass poisonings are too great, the nature of the food supply too vast and the details of any plot too complicated and unpredictable to be a real threat. That becomes crystal clear as you read the details of the different incidents: it's hard to kill one person, and very hard to kill dozens. Hundreds, thousands: it's just not going to happen any time soon. The fear of bioterror is much greater, and the panic from any bioterror scare will injure more people, than bioterrorism itself.

Far more dangerous are accidental contaminations due to negligent industry practices, such as the 2006 spinach E coli and, more recently, peanut salmonella contaminations in the US, the 2008 milk contaminations in China, and the BSE-infected beef from earlier this decade. And the systems we have in place to deal with these accidental contaminations also work to mitigate any intentional ones.

In 2004, the then US secretary of health and human services, Tommy Thompson, said on Fox News: "I cannot understand why terrorists have not attacked our food supply. Because it is so easy to do."

Guess what? It's not at all easy to do.

This essay previously appeared in The Guardian.

Posted on May 14, 2009 at 6:24 AM

Software Problems with a Breath Alcohol Detector

This is an excellent lesson in the security problems inherent in trusting proprietary software:

After two years of attempting to get the computer based source code for the Alcotest 7110 MKIII-C, defense counsel in State v. Chun were successful in obtaining the code, and had it analyzed by Base One Technologies, Inc.

Draeger, the manufacturer, maintained that the system was perfect, and that revealing the source code would be damaging to its business. They were right about the second part, of course, because it turned out that the code was terrible.

2. Readings are Not Averaged Correctly: When the software takes a series of readings, it first averages the first two readings. Then, it averages the third reading with the average just computed. Then the fourth reading is averaged with the new average, and so on. There is no comment or note detailing a reason for this calculation, which would cause the first reading to have more weight than successive readings. Nonetheless, the comments say that the values should be averaged, and they are not.

3. Results Limited to Small, Discrete Values: The A/D converters measuring the IR readings and the fuel cell readings can produce values between 0 and 4095. However, the software divides the final average(s) by 256, meaning the final result can only have 16 values to represent the five-volt range (or less), or, represent the range of alcohol readings possible. This is a loss of precision in the data; of a possible twelve bits of information, only four bits are used. Further, because of an attribute in the IR calculations, the result value is further divided in half. This means that only 8 values are possible for the IR detection, and this is compared against the 16 values of the fuel cell.

4. Catastrophic Error Detection Is Disabled: An interrupt that detects that the microprocessor is trying to execute an illegal instruction is disabled, meaning that the Alcotest software could appear to run correctly while executing wild branches or invalid code for a period of time. Other interrupts ignored are the Computer Operating Property (a watchdog timer), and the Software Interrupt.

Basically, the system was designed to return some sort of result regardless.
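Both numeric complaints are easy to reproduce. Here's a sketch with invented readings, following the description in the report:

```python
# The averaging scheme as described: fold each new reading into the
# running average, so the readings carry wildly unequal weight.
def flawed_average(readings):
    avg = (readings[0] + readings[1]) / 2
    for r in readings[2:]:
        avg = (avg + r) / 2   # the newest reading always counts for half
    return avg

readings = [1000, 1000, 1000, 4000]    # invented A/D values, one outlier
print(flawed_average(readings))        # 2500.0 -- the outlier counts for half
print(sum(readings) / len(readings))   # 1750.0 -- the correct mean

# The precision complaint: a 12-bit A/D value (0..4095) divided by 256
# leaves only 16 possible results.
print(4095 // 256, 2500 // 256)        # 15 9 -- twelve bits squeezed into four
```

A true average is a one-line fix, which is the reviewers' point: the code didn't even do what its own comments said it should.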

This is important. As we become more and more dependent on software for evidentiary and other legal applications, we need to be able to carefully examine that software for accuracy, reliability, etc. Every government contract for breath alcohol detectors needs to include the requirement for public source code. "You can't look at our code because we don't want you to" simply isn't good enough.

Posted on May 13, 2009 at 2:07 PM

Using Surveillance Cameras to Detect Cashier Cheating

It's called "sweethearting": when cashiers pass free merchandise to friends. And some stores are using security cameras to detect it:

Mathematical algorithms embedded in the stores' new security system pick out sweethearting on their own. There's no need for a security guard watching banks of video monitors or reviewing hours of grainy footage. When the system thinks it's spotted evidence, it alerts management on a computer screen and offers up the footage.

[...]

Big Y's security system comes from a Cambridge, Mass.-based company called StopLift Inc. The technology works by scouring video pixels for various gestures and deciding whether they add up to a normal transaction at the register or not.

How good is it? My guess is that it's not very good, but this is an instance where that may be good enough. As long as there aren't a lot of false positives -- as long as a person can quickly review the suspect footage and dismiss it as a false positive -- the cost savings might be worth the expense.
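The economics are easy to sketch. With invented numbers -- every figure below is mine, not Big Y's or StopLift's -- even a mediocre detector can pay for itself as long as reviewing an alert is cheap:

```python
# Back-of-the-envelope payoff for a sweethearting detector.
# All numbers are invented for illustration.
transactions_per_day = 20_000
sweetheart_rate     = 0.001   # fraction of transactions that are sweethearting
detection_rate      = 0.5     # fraction of those the system flags
false_positive_rate = 0.002   # fraction of honest transactions flagged
loss_per_incident   = 30.00   # dollars of merchandise per incident
review_cost         = 0.50    # dollars of staff time per flagged clip

caught       = transactions_per_day * sweetheart_rate * detection_rate
false_alarms = transactions_per_day * (1 - sweetheart_rate) * false_positive_rate
net          = caught * loss_per_incident - (caught + false_alarms) * review_cost

print(f"{caught:.0f} catches and {false_alarms:.0f} false alarms per day")
print(f"net benefit: ${net:.2f} per day")
```

Crank the false-positive rate up by a factor of fifty, though, and the review costs swamp the recovered losses -- which is exactly the "good enough" threshold I'm talking about.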

Posted on May 13, 2009 at 7:55 AM

Fourth Movie-Plot Threat Contest Winner

For this contest, the goal was:

...to find an existing event somewhere in the industrialized world—Third World events are just too easy—and provide a conspiracy theory to explain how the terrorists were really responsible.

I thought it was straightforward enough but, honestly, I wasn't very impressed with the submissions. Nothing surprised me with its cleverness. There were scary entries and there were plausible entries, but hardly any were both at the same time. And I was amazed by how many people didn't bother to read the rules at all, and just submitted movie plot threats.

But, after reading through the entries, I have chosen a winner. It's HJohn, for his kidnap-blackmail-terrorist connection:

Though recent shooting sprees in churches, nursing homes, and at family outings appear unrelated, a terrifying link has been discovered. All perpetrators had small children who were abducted by terrorists, and perpetrators received a video of their children with hooded terrorists warning that their children would be beheaded if they do not engage in the suicidal rampage. The terror threat level has been raised to red as profiling, known associations, and criminal history are now useless in detecting who will be the next terrorist sniper or airline hijacker. Anyone who loves their children may be a potential terrorist.

Fairly plausible, and definitely scary. Congratulations, HJohn. E-mail me and I'll get you your fabulous prizes—as soon as I figure out what they are.

For historical purposes: The First Movie-Plot Threat Contest rules and winner. The Second Movie-Plot Threat Contest rules, semifinalists, and winner. The Third Movie-Plot Threat Contest rules, semifinalists, and winner.

Posted on May 12, 2009 at 6:40 AM

Zeus Trojan has Self-Destruct Option

From Brian Krebs at The Washington Post:

One of the scarier realities about malicious software is that these programs leave ultimate control over victim machines in the hands of the attacker, who could simply decide to order all of the infected machines to self-destruct. Most security experts will tell you that while this so-called "nuclear option" is an available feature in some malware, it is hardly ever used. Disabling infected systems is counterproductive for attackers, who generally focus on hoovering as much personal and financial data as they can from the PCs they control.

But try telling that to Roman Hüssy, a 21-year-old Swiss information technology expert, who last month witnessed a collection of more than 100,000 hacked Microsoft Windows systems tearing themselves apart at the command of their cyber criminal overlords.

This is bad. I see it as a sign that the botnet wars are heating up, and botnet designers would rather destroy their networks than have them fall into "enemy" hands.

Posted on May 11, 2009 at 12:25 PM

Researchers Hijack a Botnet

A bunch of researchers at the University of California Santa Barbara took control of a botnet for ten days, and learned a lot about how botnets work:

The botnet in question is controlled by Torpig (also known as Sinowal), a malware program that aims to gather personal and financial information from Windows users. The researchers gained control of the Torpig botnet by exploiting a weakness in the way the bots try to locate their command and control servers—the bots would generate a list of domains that they planned to contact next, but not all of those domains were registered yet. The researchers then registered the domains that the bots would resolve, and then set up servers where the bots could connect to find their commands. This method lasted for a full ten days before the botnet's controllers updated the system and cut the observation short.

During that time, however, UCSB's researchers were able to gather massive amounts of information on how the botnet functions as well as what kind of information it's gathering. Almost 300,000 unique login credentials were gathered over the time the researchers controlled the botnet, including 56,000 passwords gathered in a single hour using "simple replacement rules" and a password cracker. They found that 28 percent of victims reused their credentials for accessing 368,501 websites, making it an easy task for scammers to gather further personal information. The researchers noted that they were able to read through hundreds of e-mail, forum, and chat messages gathered by Torpig that "often contain detailed (and private) descriptions of the lives of their authors."
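The takeover technique generalizes: if the bots derive their rendezvous domains from the date, anyone who can predict the list can register tomorrow's domains today. Here's a toy sketch of the idea -- the generator below is invented for illustration and is not Torpig's actual algorithm:

```python
# Toy domain-generation algorithm (DGA). Bots and researchers compute
# the same candidate list; whoever registers a domain first owns that
# day's command-and-control traffic.
import datetime
import hashlib

def candidate_domains(day, count=3):
    """Deterministic, date-seeded domain candidates."""
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        yield hashlib.md5(seed).hexdigest()[:10] + ".com"

for domain in candidate_domains(datetime.date(2009, 1, 25)):
    print(domain)

# The researchers' move: register these names before the botmaster does,
# stand up a server there, and log every bot that phones home.
```

The countermeasure is just as clear, and it's what Torpig's controllers eventually did: push an update that changes the generation scheme, and the sinkhole goes dark.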

Here's the paper:

Abstract:

Botnets, networks of malware-infected machines that are controlled by an adversary, are the root cause of a large number of security threats on the Internet. A particularly sophisticated and insidious type of bot is Torpig, a malware program that is designed to harvest sensitive information (such as bank account and credit card data) from its victims. In this paper, we report on our efforts to take control of the Torpig botnet for ten days. Over this period, we observed more than 180 thousand infections and recorded more than 70 GB of data that the bots collected. While botnets have been "hijacked" before, the Torpig botnet exhibits certain properties that make the analysis of the data particularly interesting. First, it is possible (with reasonable accuracy) to identify unique bot infections and relate that number to the more than 1.2 million IP addresses that contacted our command and control server. This shows that botnet estimates that are based on IP addresses are likely to report inflated numbers. Second, the Torpig botnet is large, targets a variety of applications, and gathers a rich and diverse set of information from the infected victims. This opens the possibility to perform interesting data analysis that goes well beyond simply counting the number of stolen credit cards.

Another article.

Posted on May 11, 2009 at 6:56 AM

Marc Rotenberg on Security vs. Privacy

Nice essay:

In the modern era, the right of privacy represents a vast array of rights that include clear legal standards, government accountability, judicial oversight, the design of techniques that are minimally intrusive and the respect for the dignity and autonomy of individuals.

The choice that we are being asked to make is not simply whether to reduce our expectation of privacy, but whether to reduce the rule of law, whether to diminish the role of the judiciary, whether to cast a shroud of secrecy over the decisions made by government.

In other words, we are being asked to become something other than the strong America that could promote innovation and safeguard privacy that could protect the country and its Constitutional traditions. We are being asked to become a weak nation that accepts surveillance without accountability that cannot defend both security and freedom.

That is a position we must reject. If we agree to reduce our expectation of privacy, we will erode our Constitutional democracy.

Posted on May 8, 2009 at 6:41 AM

MI6 and a Lost Memory Stick

Oops:

The United Kingdom's MI6 agency acknowledged this week that in 2006 it had to scrap a multi-million-dollar undercover drug operation after an agent left a memory stick filled with top-secret data on a transit coach.

The general problem. The general solution.

Posted on May 7, 2009 at 1:27 PM

Virginia Data Ransom

This is bad:

On Thursday, April 30, the secure site for the Virginia Prescription Monitoring Program (PMP) was replaced with a $US10M ransom demand:

"I have your shit! In *my* possession, right now, are 8,257,378 patient records and a total of 35,548,087 prescriptions. Also, I made an encrypted backup and deleted the original. Unfortunately for Virginia, their backups seem to have gone missing, too. Uhoh :(For $10 million, I will gladly send along the password."

More details:

Hackers last week broke into a Virginia state Web site used by pharmacists to track prescription drug abuse. They deleted records on more than 8 million patients and replaced the site's homepage with a ransom note demanding $10 million for the return of the records, according to a posting on Wikileaks.org, an online clearinghouse for leaked documents.

[...]

Whitley Ryals said the state discovered the intrusion on April 30, after which time it shut down Web site access to dozens of pages serving the Department of Health Professions. The state also has temporarily discontinued e-mail to and from the department pending the outcome of a security audit, Whitley Ryals said.

More. This doesn't seem like a professional extortion/ransom demand, but still....

EDITED TO ADD (5/13): There are backups, and here's a Q&A with details on exactly what they were storing.

Posted on May 7, 2009 at 7:10 AM

Lie Detector Charlatans

This is worth reading:

Five years ago I wrote a Language Log post entitled "BS conditional semantics and the Pinocchio effect" about the nonsense spouted by a lie detection company, Nemesysco. I was disturbed by the marketing literature of the company, which suggested a 98% success rate in detecting evil intent of airline passengers, and included crap like this:

The LVA uses a patented and unique technology to detect "Brain activity finger prints" using the voice as a "medium" to the brain and analyzes the complete emotional structure of your subject. Using wide range spectrum analysis and micro-changes in the speech waveform itself (not micro tremors!) we can learn about any anomaly in the brain activity, and furthermore, classify it accordingly. Stress ("fight or flight" paradigm) is only a small part of this emotional structure

The 98% figure, as I pointed out, and as Mark Liberman made even clearer in a follow up post, is meaningless. There is no type of lie detector in existence whose performance can reasonably be compared to the performance of finger printing. It is meaningless to talk about someone's "complete emotional structure", and there is no interesting sense in which any current technology can analyze it. It is not the case that looking at speech will provide information about "any anomaly in the brain activity": at most it will tell you about some anomalies. Oh, the delicious irony, a lie detector company that engages in wanton deception.

So, ok, Nemesysco, as I said in my earlier post, is clearly trying to pull the wool over people's eyes. Disturbing, yes, but it doesn't follow from the fact that its marketing is wildly misleading that the company's technology is of no merit. However, we now know that the company's technology is, in fact, of no merit. How do we know? Because two phoneticians, Anders Eriksson and Francisco Lacerda, studied the company's technology, based largely on the original patent, and provided a thorough analysis in a 2007 article, Charlatanry in forensic speech science: A problem to be taken seriously, which appeared in the International Journal of Speech Language and the Law (IJSLL), vol 14.2 2007, 169–193, Equinox Publishing. Eriksson and Lacerda conclude, regarding the original technology on which Nemesysco's products are based, Layered Voice Analysis (LVA), that:

Any qualified speech scientist with some computer background can see at a glance, by consulting the documents, that the methods on which the program is based have no scientific validity.

Most of the lie detector industry is based on, well, lies.

EDITED TO ADD (5/13): The paper is available here. More details here. Nemesysco's systems are being used to bully people out of receiving government aid in the UK.

Posted on May 6, 2009 at 12:14 PM

Secure Version of Windows Created for the U.S. Air Force

I have long argued that the government should use its massive purchasing power to pressure software vendors to improve security. Seems like the U.S. Air Force has done just that:

The Air Force, on the verge of renegotiating its desktop-software contract with Microsoft, met with Ballmer and asked the company to deliver a secure configuration of Windows XP out of the box. That way, Air Force administrators wouldn't have to spend time re-configuring, and the department would have uniform software across the board, making it easier to control and maintain patches.

Surprisingly, Microsoft quickly agreed to the plan, and Ballmer got personally involved in the project.

"He has half-a-dozen clients that he personally gets involved with, and he saw that this just made a lot of sense," Gilligan said. "They had already done preliminary work themselves trying to identify what would be a more secure configuration. So we fine-tuned and added to that."

The NSA got together with the National Institute of Standards and Technology, the Defense Information Systems Agency and the Center for Internet Security to decide what to lock down in the Air Force special edition.

Many of the changes were complex and technical, but Gilligan says one of the most important and simplest was an obvious fix to how Windows XP handled passwords. The Air Force insisted the system be configured so administrative passwords were unique, and different from general user passwords, preventing an average user from obtaining administrative privileges. Specifications were added to increase the length and complexity of passwords and expire them every 60 days.

It then took two years for the Air Force to catalog and test all the software applications on its networks against the new configuration to uncover conflicts. In some cases, where internally designed software interacted with Windows XP in an insecure way, they had to change the in-house software.

Now I want Microsoft to offer this configuration to everyone.

EDITED TO ADD (5/6): Microsoft responds:

Thanks for covering this topic, but unfortunately the reporter for the original article got a lot of the major facts, which you relied upon, wrong. For instance, there isn't a special version of Windows for the Air Force. They use the same SKUs as everyone else. We didn't deliver special settings that only the Air Force can access. The Air Force asked us to help them create hardened GPOs and images, which the AF could use as the standard image. We agreed to assist, as we do with any company that hires us to assist in setting their own security policy as implemented in Windows.

The work from the AF ended up morphing into the Federal Desktop Core Configuration (FDCC) recommendations maintained by NIST. There are differences, but they are essentially the same thing. NIST initially used even more secure settings in the hardening process (many of which have since been relaxed because of operational issues, and is now even closer to what the AF created).

Anyone can download the FDCC settings, documentation, and even complete images. I worked on the FDCC project for little over a year, and Aaron Margosis has been involved for many years, and continues to be involved. He offers all sorts of public knowledge and useful tools. Here, Aaron has written a couple of tools that anyone can use to apply FDCC settings to local group policy. It includes the source code, if anyone wants to customize them.

In the initial article, a lot of the other improvements, such as patching, came from the use of better tools (SCCM, etc.), and were not necessarily solely due to the changes in the base image (although that certainly didn't hurt). So, it seems the author mixed up some of the different technology pushes and wrapped them up into a single story. He also seems to imply that this is something special and secret, but the truth is there is more openness with the FDCC program and the surrounding security outcomes than anything we've ever done before. Even better, there are huge agencies that have already gone first in trying to use these hardened settings, and essentially been beta testers for the rest of the world. The FDCC settings may not be the best fit for every company, but it is a good model to compare against.

Let me know if you have any questions.

Roger A. Grimes, Security Architect, ACE Team, Microsoft

EDITED TO ADD (5/12): FDCC policy specs.

Posted on May 6, 2009 at 6:43 AM

Security Considerations in the Evolution of the Human Penis

Fascinating bit of evolutionary biology:

So how did natural selection equip men to solve the adaptive problem of other men impregnating their sexual partners? The answer, according to Gallup, is their penises were sculpted in such a way that the organ would effectively displace the semen of competitors from their partner's vagina, a well-synchronized effect facilitated by the "upsuck" of thrusting during intercourse. Specifically, the coronal ridge offers a special removal service by expunging foreign sperm. According to this analysis, the effect of thrusting would be to draw other men's sperm away from the cervix and back around the glans, thus "scooping out" the semen deposited by a sexual rival.

Evolution is the result of a struggle for survival, so you'd expect security considerations to be important.

Posted on May 5, 2009 at 1:39 PM

An Expectation of Online Privacy

If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender's ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization -- domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service exacerbate this problem even more.

Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you're relying on that company to keep your data private. If you use Google Docs, you're relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google's security, but we don't know what it is.

This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it's on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don't even know exist.

And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation -- or maybe a conversation inside Facebook.

Remember when Facebook recently changed its terms of service to take further control over your data? They can do that whenever they want, you know.

We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, Lexis Nexis, Bank of America, nor T-Mobile bears the costs of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don't need one to read it from the backup tapes at your ISP.

This isn't a technological problem; it's a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don't have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant -- even though it occurred at the phone company switching office and not in the target's home or office -- the Supreme Court must recognize that reading personal e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.

Posted on May 5, 2009 at 6:06 AM

Mathematical Illiteracy

This may be the stupidest example of risk assessment I've ever seen. It's a video clip from a recent Daily Show, about the dangers of the Large Hadron Collider. The segment starts off slow, but then there's an exchange with high school science teacher Walter L. Wagner, who insists the device has a 50-50 chance of destroying the world:

"If you have something that can happen, and something that won't necessarily happen, it's going to either happen or it's going to not happen, and so the best guess is 1 in 2."

"I'm not sure that's how probability works, Walter."

This is followed by clips of news shows taking the guy seriously.

In related news, almost four-fifths of Americans don't know that a trillion is a million million, and most think it's less than that. Is it any wonder why we're having so much trouble with national budget debates?

Posted on May 4, 2009 at 6:19 AM

Googling Justice Scalia

Nice hack:

Last year, when law professor Joel Reidenberg wanted to show his Fordham University class how readily private information is available on the Internet, he assigned a group project. It was collecting personal information from the Web about himself.

This year, after U.S. Supreme Court Justice Antonin Scalia made public comments that seemingly may have questioned the need for more protection of private information, Reidenberg assigned the same project. Except this time Scalia was the subject, the prof explains to the ABA Journal in a telephone interview.

His class turned in a 15-page dossier that included not only Scalia's home address, home phone number and home value, but his food and movie preferences, his wife's personal e-mail address and photos of his grandchildren, reports Above the Law.

And, as Scalia himself made clear in a statement to Above the Law, he isn't happy about the invasion of his privacy:

"Professor Reidenberg's exercise is an example of perfectly legal, abominably poor judgment. Since he was not teaching a course in judgment, I presume he felt no responsibility to display any," the justice says, among other comments.

Somehow, I don't think "poor judgment" is going to be much of a defense against those with agendas more malicious than Professor Reidenberg.

Posted on May 1, 2009 at 12:52 PM

Yet Another New York Times Cyberwar Article

It's the season, I guess:

The United States has no clear military policy about how the nation might respond to a cyberattack on its communications, financial or power networks, a panel of scientists and policy advisers warned Wednesday, and the country needs to clarify both its offensive capabilities and how it would respond to such attacks.

The report, based on a three-year study by a panel assembled by the National Academy of Sciences, is the first major effort to look at the military use of computer technologies as weapons. The potential use of such technologies offensively has been widely discussed in recent years, and disruptions of communications systems and Web sites have become a standard occurrence in both political and military conflicts since 2000.

Here's the report summary, which I have not read yet.

I was particularly disturbed by the last paragraph of the newspaper article:

Introducing the possibility of a nuclear response to a catastrophic cyberattack would be expected to serve the same purpose.

Nuclear war is not a suitable response to a cyberattack.

Posted on May 1, 2009 at 10:46 AM
