Schneier on Security
A blog covering security and security technology.
March 15, 2006
Long, and interesting, article on bioterrorism.
When you read this, don't concentrate too much on what's possible right now. If the techniques discussed in the article are beyond the reach of government laboratories now, they won't be in five or ten years. And then they'll become cheaper and easier. Attackers look for leverage, and technology gives attackers leverage.
Posted on March 15, 2006 at 1:46 PM
Interesting... They make synthesizing DNA sound about as simple as writing a script.
Hopefully counter-defence mechanisms will improve at a similar speed: if anybody can produce these viruses in the near future, then we should expect things like portable biological firewalls to detect areas of deployment, etc.
"If the techniques discussed in the article are beyond the reach of government laboratories now, they won't be in five or ten years."
So they should in no way be confused with a movie-plot threat?!
Quite. I was thinking the same thing:
"don't concentrate too much on what's possible"
Beware the impossible.
"So, they should be, in no way confused with a movie-plot threat?!"
You misunderstand. They are movie-plot threats. But as security technologists, we should be thinking about how to deal with them when they are not.
I misunderstand more now than ever before, apparently.
"we should be thinking about how to deal with them when they are not."
So, we SHOULD be thinking about bioterrorism movie-plot threats, but SHOULD NOT be thinking about Bird Flu, Hurricanes, Anti-Missile Defense or Unmanned Aircraft so we know how to deal with them when they are not?
I know I sound like I'm being sarcastic, but I really don't understand. Please help me understand the difference.
"So, we SHOULD be thinking about bioterrorism movie-plot threats, but SHOULD NOT be thinking about Bird Flu, Hurricanes, Anti-Missile Defense or Unmanned Aircraft so we know how to deal with them when they are not?"
This is starting to get weird. I don't know who said we should not be thinking about those other things, but it wasn't me.
I'm all for research, even into things like data mining. I'm against spending money on deployment until there is 1) a real threat, and 2) the countermeasure has been proven to be effective. Bird flu: definitely a real threat. Hurricanes: ditto.
This was a post about an interesting article on bioterrorism. It wasn't meant to be a comprehensive list of things we should worry about, with the implication that everything not on the list should not be worried about.
No, Bruce, the weird part is that typically you call out a movie-plot threat as a "movie-plot threat." This time you didn't. It makes me think further about agenda. Thanks! No further clarification required!
You're right; I didn't call it a movie-plot threat.
I guess that's because I read the piece, and it seemed to be reasoned and non-sensationalist. I was thinking about it as a future possibility, and not a call to immediate action.
I probably should have used the phrase to temper my comments.
Good on 'ya! I appreciate the clarification.
I look forward to hearing the discussion here every day (even though sometimes the opinions expressed are contrary to my own)!
Keep on Bloggin' and long live the cephalopods!
Candidate pathogens could easily be tested in populous poor countries, where the correct diagnosis will never happen. Almost certainly there will be no autopsy, at most somebody picking a likely cause of death, and that's that.
Even if some sharp observer notices a large 'cluster' of apparent auto-immune diseases causing quick death, the hunt will be for some inadvertent poison, say a toxic metal or insecticide or something. When no good answer turns up, the cases will be closed.
Well, here's my humble, probably poorly thought out idea (for a government) that I will submit as a sacrificial lab animal to the forum:
Emulate what was done (at least as the conventional thinking goes) with cryptography: hire as many "big brains" as you can, jam them into a large, ridiculously funded research and development center, and make the thing as black as you possibly can. Essentially, build an NSA of biochemistry, or a Fort Detrick an order of magnitude or two larger. If you can't know exactly what's coming or completely control the proliferation of knowledge and technology, the next best thing is to be as far ahead of your opponents as possible in knowledge and capabilities. This gives you the ability to better predict actual threats (and avoid 'movie plots'), and gives you the best chance of being able to counter whatever comes your way.
Essentially, you leverage your main advantage, having the size and resources of a nation-state, to the greatest degree possible. Having hundreds of separate, scattered research facilities creates more security problems and allows less interaction and communication between teams (and sometimes prohibits it entirely).
I do think that it is possible to create "designer pathogens" now; it might require a big government-funded lab today, but the capability will be generally available in a few years.
Will terrorist groups or rogue nations be able to use "designer biological weapons" in an attack soon? I don't think that a nation will decide to use a biological weapon except in the most dire of circumstances: it is so hard to control the spread of an infection that the country deploying the weapon is likely to be hit just as hard as its opponent.
That leaves us the terrorists. They might be able to build designer microbes, but with a rather small chance of building a successful one. They would have to try out dozens of potential pathogens to find one that works; the best candidates would wipe out their makers before they had produced enough for deployment.
After clearing the hurdle of finding the "right" pathogen, it has to be produced in sufficient quantity (with the risk of errors that would be fatal to the project), and one has to find a way of spreading the microbe at the target location. This last part is significantly harder than just sending in an infected terrorist; standard medical (quarantine) procedures could stop that attack before it reaches critical mass.
Yes; I agree with MathFox that the real danger is a state deciding to try to use biological weapons (likely a non-nuclear state in response to a nuclear attack); building a successful virus against humans is far more difficult than building a successful computer virus.
Even worse, most viruses are difficult to control. Unless your faith believes in killing all humans on the planet, you'd want a virus which would be unlikely to hit yourself or your country/faith/organization. This means that you'd either want immunization, or an agent that is self-limiting. Either reduces the risk of the agent.
I read the article. I wonder how much expertise might be involved in "helping" the bird flu virus over whatever barrier is preventing it from recombining with a humanly-transmissible flu virus. Since we keep hearing that this is likely or even inevitable, one might guess that it would be easier than creating a virus from mail-order parts.
It seems to me that the best defense is to improve our anti-viral and anti-bacterial medicine. Even if no terrorists were about, we would benefit from better prevention and treatment of wild diseases.
Our present system depends on for-profit big pharma to do the R&D. This is a great mechanism for producing many competing boner pills. For making vaccines, it's hopelessly pathetic. Everybody knows you don't get rich curing real diseases, or even worse, preventing them in the first place.
Starting with an existing virus is, of course, easier than creating one from scratch. However, it will likely have the weaknesses of the existing virus. Case in point, it is far easier to create a new avian strain of flu from an old one than one that will jump to humans and can then continue to jump between humans.
Making the arbitrary assumption that your computer science is better grounded than your organic chemistry, think of it like creating a computer virus with the following targets:
Bird = Windows 98/ Intel x86 hardware
Human = Multics/GE-645 hardware
Given those two targets, how do you create a computer virus that goes between the two, and thrives and replicates in both? Is it possible? Almost certainly. However, it's more difficult than creating a virus for only one target. As the article suggests, it would probably be better to start with an existing successful human virus (like chickenpox) and modify it. Of course, then you have another barrier: many countries now vaccinate against that virus, and/or your base virus is well studied, which gives your opponent a greater head start on building a defense (like a vaccine).
While my computer knowledge is much greater than my biochemistry, the latter is not quite zero.
The gene expression mechanisms of birds and mammals are far more similar than the instruction execution mechanisms of x86 and GE-645 machines. In the biological systems, much of the "code" is directly "executable" in both platforms without modification, which is certainly not true in the computers.
With H5N1 bird flu, we are, in fact, presented with a "program" that already "runs" on both platforms, replicates, and kills.
What the bird flu virus is missing is a suite of "features" possessed by human flu viruses that enhance transmission between humans (like the runny-nose stuff).
As I understand the "species jumping" scenario, recombination is expected to spontaneously occur when a human is simultaneously infected with both viruses. In that case, the assembly process in some cells would be presented with "parts" of both viruses, and would combine them randomly.
With sufficient trials, a workable recombination of the two viruses would occur, and a lethal pandemic would be off and running.
My speculation is that some lower-knowledge and cruder sort of labwork might produce favorable conditions for this recombination to occur. Maybe something as crude as a human population deliberately (probably not voluntarily) infected with both viruses?
Here are three examples of scenarios I worry about:
1) A crazed national leader secretly includes smallpox vaccine in a routine nationwide (or oligarchy-wide) vaccination, then releases smallpox in a few major cities around the world. See "A Planet for the President."
2) Criminals or terrorists release crop pathogens (easy to get and safe to handle; see BioScience 52:569) to profit from commodity trading. They could then threaten to attack humans next unless we meet their demands.
3) Radical environmentalists develop and release a virus designed to reduce human fertility. See "The Tide Turners." People have already done this with mice, although the virus turned out to be lethal. Actually, that would be less scary in a way, because you could at least track the epidemic more easily.
I think these are all either possible today or plausible extrapolations of current trends in biotechnology. What to do? Assuming, for the sake of argument, that a police state could prevent this sort of activity on its own territory, the problem is that a human-transmitted virus released anywhere will spread everywhere.
The only solution I've thought of is social distancing, i.e., taking measures to reduce spread of viruses among people, without knowing who's infected with what. But social distancing has its own problems. See "Bowling Alone."
First, I am now confused by what "movie plot threat" means. I had thought that Bruce meant focussing on some specific, elaborate scenario at the expense of developing a comprehensive strategy -- a useful concept with a catchy name.
But in this thread, everyone, including Bruce at one point, seemed to be using it to mean "frankly incredible; ridiculous". If that's what it's supposed to mean, then a lot of things that have been called "movie plot threats" on this blog aren't; and in fact there arguably aren't any movie plot threats at all, just attack strategies that require very high payoffs to be justified. I would go so far as to say that *if* that is what the phrase is meant to mean (and I was pretty sure it wasn't, until a moment ago) then it's actually quite a naive concept, and is the reason hapless householders keep tripping themselves up with the old "but no-one would look for cash in the freezer!" line.
If you hear some zany attack strategy and find yourself thinking "that's just ridiculous", then I'm afraid you're probably one of the suckers; if you find yourself thinking, "hmm, that's so elaborate it would take $75,000 cash and two man-years to set up, so it wouldn't be done for an expected payoff of less than a quarter mill", then you're probably a security guy. Or a reasonably competent thief.
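The security guy's break-even reasoning in that example can be sketched as a toy calculation. Note that the $87,500 man-year valuation below is an assumption chosen so the totals line up with the comment's "quarter mill" figure; the function name is illustrative, not from any real tool:

```python
def attack_is_rational(setup_cash, man_years, expected_payoff, man_year_cost=87_500):
    """Crude attacker cost-benefit test: the attack is worth mounting
    only if the expected payoff exceeds the total setup cost
    (cash outlay plus the assumed value of the labor)."""
    total_cost = setup_cash + man_years * man_year_cost
    return expected_payoff > total_cost

# The comment's example: $75,000 cash plus two man-years of setup.
# With the assumed labor valuation, total cost is $250,000, so the
# break-even payoff is roughly "a quarter mill".
print(attack_is_rational(75_000, 2, 200_000))  # below break-even: False
print(attack_is_rational(75_000, 2, 300_000))  # above break-even: True
```

The point of the exercise is that "ridiculous" is not a fixed property of a scenario; it moves with the expected payoff and with the attacker's costs, both of which change over time.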
The second thing is that the journal is clearly labelled as being about the impact of emerging technologies. As Bruce points out, it is clearly appropriate to at least think about such things before they hit us. For example, because it takes a long time to change an installed cryptographic infrastructure, it is appropriate to speculate about future computational power and design algorithms to resist speculated future attacks. Designing an algorithm solely to resist what is possible right now would be a complete waste of time.
The third thing, I'll get back to later 'coz I've got work to do. 8^)
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.