Schneier on Security
A blog covering security and security technology.
June 29, 2012
On Securing Potentially Dangerous Virology Research
Abstract: The problem of securing biological research data is a difficult and complicated one. Our ability to secure data on computers is not robust enough to ensure the security of existing data sets. Lessons from cryptography illustrate that neither secrecy measures, such as deleting technical details, nor national solutions, such as export controls, will work.
Science and Nature have each published papers on the H5N1 virus in humans after considerable debate about whether the research results in those papers could help terrorists create a bioweapon. This notion of "dual use" research is an important one for the community, and one that will sooner or later become critical. Perhaps these two papers are not dangerous in the wrong hands, but eventually there will be research results that are.
My background is in cryptography and computer security. I cannot comment on the potential value or harm from any particular piece of biological research, but I can discuss what works and what does not to keep research data secure. The cryptography and computer security communities have been wrestling for decades now with dual-use research: for example, whether to publish new Microsoft Windows vulnerabilities that can be immediately used to attack computers but whose publication helps us make the operating system more secure in the long run. From this experience, I offer five points to the virology community.
First, security based on secrecy is inherently fragile. The more secrets a system has, the less secure it is. A door lock that has a secret but unchangeable locking mechanism is less secure than a commercially purchased door lock with an easily changeable key. In cryptography, this is known as Kerckhoffs' principle: Put all your secrecy into the key and none into the cryptographic algorithm. The key is unique and easily changeable; the algorithm is system-wide and much more likely to become public. In fact, algorithms are deliberately published so that they get analyzed broadly. The lesson for dual-use virology research is that it is risky to base your security on keeping research secret. Militaries spend an enormous amount of money trying to maintain secret research laboratories, and even they do not always get security right. Once secret data become public, there is no way to go back.
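As a minimal illustration of Kerckhoffs' principle (the example and every name in it are mine, not from the essay), here is a short Python sketch using the standard library's HMAC construction: the algorithm is entirely public, and all of the secrecy lives in a key that is trivial to replace.

```python
import hashlib
import hmac
import secrets

# Kerckhoffs' principle in miniature: the algorithm (HMAC-SHA256) is
# public and heavily analyzed; only the key is secret.
key = secrets.token_bytes(32)
tag = hmac.new(key, b"research data", hashlib.sha256).hexdigest()

# If the key leaks, security is restored by generating a fresh one.
# The public algorithm never has to change.
new_key = secrets.token_bytes(32)
new_tag = hmac.new(new_key, b"research data", hashlib.sha256).hexdigest()

assert tag != new_tag  # same public algorithm, different secret, different output
```

The changeable key plays the role of the door lock's key; the published, widely analyzed algorithm plays the role of the lock mechanism.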
Second, omitting technical details from published research is a poor security measure. We tried this in computer security with regard to vulnerabilities, announcing general information but not publishing specifics. The problem is that once the general information is announced, it is much easier for another researcher to replicate the results and generate the details. This is probably even more true in virology research than in computer security research, where the very existence of a result can provide much of the road map to that result.
Third, technical difficulty as a security measure has only short-term value. Technology only gets better; it never gets worse. To believe that some research cannot be replicated by amateurs because it requires equipment only available to state-of-the-art research institutions is short-sighted at best. What is impossible today will be a Ph.D. thesis in 20 years, and what was a Ph.D. thesis 20 years ago is a high-school science fair project today.
Fourth, securing research data in computer networks is risky at best. If you read newspapers, you know the current state of the art in computer security: Everything gets hacked. Cyber criminals steal money from banks. Cyber spies steal data from military computers. Although people talk about H5N1 research in terms of securing the research papers, that is largely a red herring; even if no papers existed, the research data would still be on a network-connected computer somewhere.
Not all computers are hacked and not all data gets stolen, but the risks are there. There are two basic types of threats in cyberspace. There are the opportunists: for example, criminals who want to break into a retail merchant's system and steal a thousand credit card numbers. Against these attackers, relative security is what matters. Because the criminals do not care whom they attack, you are safe if you are more secure than other networks. The other type of threat is a targeted attack. These are attackers who, for whatever reason, want to attack a particular network. The buzzword in Internet security for this is "advanced persistent threat." It is almost impossible to secure a network against a sufficiently skilled and tenacious adversary. All we can do is make the attacker's job harder.
This does not mean that all virology data will be stolen via computer networks, but it does mean that, once the existence of that data becomes public knowledge, you should assume that the bad guys will be able to get their hands on it.
Lastly, national measures that prohibit publication will not work in an international community, especially in the Internet age. If either Science or Nature had refused to publish the H5N1 papers, they would have been published somewhere else. Even if some countries stop funding -- or ban -- this sort of research, it will still happen in another country.
The U.S. cryptography community saw this in the 1970s and early 1980s. At that time, the National Security Agency (NSA) controlled cryptography research, which included denying funding for research, classifying results after the fact, and using export-control laws to limit what ended up in products. This was the pre-Internet world, and it worked for a while. In the 1980s they gave up on classifying research, because an international community arose. The limited ability for U.S. researchers to get funding for block-cipher cryptanalysis merely moved that research to Europe and Asia. The NSA continued to limit the spread of cryptography via export-control laws; the U.S.-centric nature of the computer industry meant that this was effective. In the 1990s they gave up on controlling software because the international online community became mainstream; this period was called "the Crypto Wars." Export-control laws did prevent Microsoft from embedding cryptography into Windows for over a decade, but they did nothing to prevent products made in other countries from filling the market gaps.
Today, there are no restrictions on cryptography, and many U.S. government standards are the result of public international competitions. Right now the National Institute of Standards and Technology is working on a new Secure Hash Algorithm standard. When it is announced next year, it will be the product of a public call for algorithms that resulted in 64 submissions from over a dozen countries and then years of international analysis. The practical effects of unrestricted research are seen in the computer security you use today: on your computer, as you browse the Internet and engage in commerce, and on your cell phone and other smart devices. Sure, the bad guys make use of this research, too, but the beneficial uses far outweigh the malicious ones.
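Algorithms that survive this kind of open international scrutiny end up in everyone's toolchain. As a small illustrative sketch (the language choice is mine), Python's standard library ships the publicly specified SHA-2 family, and anyone can check an implementation against the open standard:

```python
import hashlib

# SHA-256 is an openly published NIST standard: no secrets in the
# algorithm, so independent implementations in any language can be
# checked against each other and against the public specification.
digest = hashlib.sha256(b"hello world").hexdigest()

# Every conforming implementation produces the same 64-character
# hex digest for the same input.
print(digest)
```

That reproducibility across vendors and countries is only possible because the algorithm itself holds no secrets.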
The computer security community has also had to wrestle with these dual-use issues. In the early days of public computing, researchers who discovered vulnerabilities would quietly tell the product vendors so as not to alert hackers as well. But all too often, the vendors would ignore the researchers. Because the vulnerability was not public, there was no urgency to fix it. Fixes might go into the next product release. Researchers, tired of this, started publishing the existence of vulnerabilities but not the details. Vendors, in response, tried to muzzle the researchers. They threatened them with lawsuits and belittled them in the press, calling the vulnerabilities only theoretical and not practical. The response from the researchers was predictable: They started publishing full details, and sometimes even code, demonstrating the vulnerabilities they found. This was called "full disclosure" and is the primary reason vendors now patch vulnerabilities quickly. Faced with published vulnerabilities that they could not pretend did not exist and that the hackers could use, they started building internal procedures to quickly issue patches. If you use Microsoft Windows, you know about "Patch Tuesday," the once-a-month automatic download and installation of security patches.
Once vendors started taking security patches seriously, the research community (university researchers, security consultants, and informal hackers) moved to something called "responsible disclosure." Now it is common for researchers to alert vendors before publication, giving them a month or two head start to release a security patch. But without the threat of full disclosure, responsible disclosure would not work, and vendors would go back to ignoring security vulnerabilities.
Could a similar process work for viruses? That is, could researchers work in concert with people who develop vaccines, so that vaccines become available at the same time as the original results are released? Certainly this is not easy in practice, but perhaps it is a goal to work toward.
Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once this sort of research slows down, the increasing ignorance hurts us all.
On the other hand, the current vibrant fields of cryptography and computer security are a direct result of our willingness to publish methods of attack. Making and breaking systems are one and the same; you cannot learn one without the other. (Some universities even offer classes in computer virus writing.) Cryptography is better, and computers and networks are more secure, because our communities openly publish details on how to attack systems.
Virology is not computer science. A biological virus is not the same as a computer virus. A vulnerability that affects every individual copy of Windows is not as bad as a vulnerability that affects every individual person. Still, the lessons from computer security are valuable to anyone considering policies intended to encourage life-saving research in virology while at the same time prevent that research from being used to cause harm. This debate will not go away; it will only get more urgent.
This essay was originally published in Science.
EDITED TO ADD (7/14): Related article: "What Biology Can Learn from Infosec."
Great essay. I'm pleasantly surprised to see that I used pretty much the same arguments in my post What Biosecurity Can Learn from Infosec. Though you make the points more thoroughly and clearly.
Why no mention of social engineering attacks? Basically, unless your data is secure against an insider being tricked, bribed, or threatened with the death of a family member then it's not secure - regardless of how clever your encryption may be.
"That is, could the makers work in concert with people who develop vaccines so that vaccines become available at the same time as the original results are released?"
I fear this idea may vastly underestimate the complexity of drug/vaccine development. Once a vulnerability in a computer system is known, it is usually reasonable to expect that a fix, patch or other defense can be achieved with a few weeks of concentrated engineering effort. In medicine, we might be talking several years (or decades) of intense research and widespread international collaboration from when the mechanism of a disease is first understood until that understanding yields a cure or prophylactic that can be administered to humans. Maintaining secrecy during that period sounds like a non-starter.
The biggest difference I see between computer security and this sort of biological research is that, for the most part, people can't be patched. My (albeit limited) understanding is that the problem with the H5N1 research wasn't that they created a virus that could be used as a bioweapon (and could have a vaccine created to counter it), but that they created a process for creating a virus that could be used as a bioweapon. So until they create a process that could create a cure, or find a general solution to the problem of viruses (good luck!), there's a threat of bad guys using this process.
Of course, the other possibility is that this is a case of responsible disclosure. They've established that they have created an exploit to vulnerabilities that could be an existential threat to the entire userbase (humanity) in the hopes of forcing the vendor to issue a patch. If you believe the vendor is God, that's a risky bet, as He's been very restrained on the subject of patching. If you believe the vendor is evolution, that's even worse, because believe me, you don't want to be a part of the evolutionary patching process.
The original virus was deadly because it infected the deep lung, where it caused a deadly pneumonia. However, getting deep into the lungs is difficult, and from there the virus cannot easily get back out to infect others.
The virus could only become more infectious by infecting tissues higher up the respiratory tract. But that made the infection less severe, as it was the deep-lung infection site that made the virus dangerous. It had become like a common cold.
However, all the security stuff made it a crime for the researchers to tell the media that the virus had become unable to kill even a single lab animal.
In the end, the research was classified because the initial press reports said it was dangerous. Because it was classified, the researchers were not allowed to tell the media that the virus had become as harmless as the common cold.
>people can't be patched.
That is a good analogy for what a vaccine is.
Some people at the Singularity Institute have been suggesting hiding AI research, maybe even regulating it, in order to ensure "Friendly" AI is created before an "Unfriendly" AI.
I wonder what the situation would have been if the researchers had sought a patent on their work rather than sending it out to academic journals?
[Reminds me of the comment made by a work colleague: "Unsuccessful researchers publish; successful researchers patent."]
@billswift: No, how a vaccine works is more closely analogous to updating a malware signature database. If you encounter a threat that enters through a channel that your anti-malware is not even scanning for matches (or that it is ineffective at clearing out even when it does find a match), then there is no way to patch how the human system actually works.
So, if a researcher is withholding publication until a vaccine can be developed, with whom do they share the research?
With a software vulnerability, that's easy - you share with the software vendor, the one party with the ability to create and distribute a patch.
With a virus or bacterium, there is no single party responsible for the vulnerability and uniquely able to produce a patch. You could try to share the research only with God, but that's unlikely to produce a vaccine in any useful timeframe.
@ lazlo "believe me, you don't want to be a part of the evolutionary patching process."
Believe me, you already are. Or more accurately, we already are.
There are quite a number of viruses (most famously AIDS) where there has been no success whatsoever in developing a vaccine. This has certainly not been for want of trying, funding or intelligent involved people. Until that is solved there isn't much chance of guaranteeing a vaccine to match every publication.
There is a great difference between computer programs, which are almost infinitely malleable but very brittle, and viruses, which are extremely difficult to manipulate in a controllable way but are very resilient.
@Tauki: the effect would be that people who follow the law wouldn't do further research till the patent ran out, but people who don't care about the law would be able to. I'm not sure that's a good idea.
Bruce--It seems to me a counter-argument to your argument could be developed along the lines of the 'classified at birth' policy for nuclear weapons technology. Why shouldn't virus research be modeled on nuclear research rather than computer viruses? I'm not sure that it is a valid line of attack against your position but I think it behooves you to answer such a line of attack.
The guru in blue
Thank you for this interesting article.
I read the article and was reminded of the rationale behind security through obscurity: an approach that is still used and still disappoints the user.
Consider the "evil" virus. First, one has to have specific knowledge about genetics and genetic engineering to use that genome. It isn't like there are gene kits out there! One then needs a rather complicated lab to "build" the "evil" virus. Then one has to figure out how to weaponize it, something even governments have had grave difficulty doing with many pathogens.
THEN, one has to deliver it to the target.
So, for simplicity, we'll consider some disgruntled PhD who has managed to acquire the genome and the expensive lab; that isn't very hard to imagine at all. That researcher now has to manage to develop that virus from a raw genome, which is SUBSTANTIALLY more laborious and error-prone than writing mere software. Said researcher then has to figure out how to weaponize it. For simplicity, we'll use a willing volunteer to become infected and share the joy with others.
Now, it's a straightforward reverse of epidemiology. Spreading a pathogen isn't as easy as one may think. ESPECIALLY in these days of hand sterilizers all over the place! Airborne transmission is highly limited, as one needs a lot of potential victims in a low air circulation area.
Let's say, an airplane with 300 people on it. Of them, one can count on (some old research gives numbers) perhaps 10% infection rate, so we'll be generous and make it 25%. Those people now have to become infectious and transmit the virus further.
For a lethal or high morbidity disease, that is highly limited. For a low mortality or morbidity disease, it is not, but then is only a nuisance.
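The commenter's arithmetic can be made concrete with a toy branching-process sketch (all numbers are hypothetical, and this deterministic model ignores real epidemiological detail): an outbreak only sustains itself when each case produces more than one new case on average, and anything that removes hosts quickly, such as high lethality or isolation, pushes that average down.

```python
def expected_cases(initial_cases, r_eff, generations):
    """Toy model: each case causes r_eff new cases per generation."""
    cases = float(initial_cases)
    total = float(initial_cases)
    for _ in range(generations):
        cases *= r_eff   # next generation of infections
        total += cases   # running total of everyone ever infected
    return total

# The comment's generous figure: 25% of a 300-passenger plane infected.
seed = int(300 * 0.25)  # 75 initial cases

fizzle = expected_cases(seed, r_eff=0.8, generations=10)  # below 1: dies out
spread = expected_cases(seed, r_eff=1.5, generations=10)  # above 1: grows
```

With an effective reproduction number below 1 the total stays in the hundreds; above 1 it grows geometrically into the thousands, which is why a pathogen that kills or incapacitates its hosts too quickly tends to limit its own spread.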
We experience biological warfare every year during influenza season, though it's nature warring against us. Every year, dozens of new strains proliferate and a few infect humans, typically one becomes the dominant pathogen of the year.
This is more of a movie script threat than a real one.
But then, reactive "security" measures make for good security theater. And good security theater makes people feel safe.
Even IF it's utterly ineffective.
I work on this sort of thing. I even worked on some of the SARS and H5N1 data.
First, let's get the facts straight. H5N1 was a normal seasonal flu outbreak. This is after-the-fact data. The WHO even commissioned a review into how the hell it got labeled an outbreak. It turns out the consultants who pushed for it are big shareholders and board members of the company behind Tamiflu. It has been kept under wraps very well.
Even this data is highly suspect. Yes, that is right: even the data showing that H5N1 was nothing but a normal flu season is in doubt. The best example is NZ, where large numbers of cases were reported. No one tested anyone. If you had flu symptoms, you had H5N1, "'cause we know that is what everyone gets this year."
The next thing is how far away we are from really understanding what would even make a superbug. Oh, we will get there... if such a thing is possible, but it's a way off yet.
Finally, where are all these so-called terrorists?
Very disappointed by this essay. Mr Schneier's mental rules of thumb about what works and what doesn't have been honed in the field of computer security whose assumptions just don't transfer to bioweapons. As several commenters have already noted, people can't be patched and that's the nub of the matter. Security through obscurity might be weak, but it might also be all we've got.
My opinion is we have it bass akwards. Computer malware was modeled after biological pathogens.
The nomenclature was borrowed from the biological domain to start with. Computer viruses are not really viruses. They just share some attributes.
We quarantine a virus-infected computer (by keeping it off the network until it's cleaned) just like we quarantine a sick human away from the population. We can immunize humans, and some anti-virus vendors adopted the same terminology, offering to "immunize" a file or a program. There are viruses and bacteria in the biological world; we have viruses and worms (and other device pathogens) in the silicon-based world. But we cannot stretch the analogy too far. We can encrypt media in the computer world, but we cannot encrypt the DNA of a human to protect against a pathogen without turning the human into a different creature (a vegetable, for example). We try to decrypt the DNA of biological viruses to understand how they work or how they relate to other strains, but that kind of DNA decryption is not the same as file decryption.
We cannot reboot a human from clean media to have a known state. A human cannot infect another human by "talking" to her on a phone.
What we can do is see where common security principles are shared between the two worlds. There is no guarantee that what works for one will work for the other. They are two completely different domains -- so far, until androids are more common and can be infected with either type of pathogens.
Actually, there are gene kits out there, everywhere. We call them "life".
The contentious papers really showed that almost all you needed to do to get a highly transmissible flu virus was... force the virus to go through a couple of generations of airborne transmission. Equipment required: a bunch of animals and maybe a cotton swab. A farmer could do it. A farmer could do it *by accident*.
If you wanted to make it deadly as well, that's selection for two factors (probably!) - harder but still not impossible.
Unlike computer software and nuclear weapons, there isn't some magic sequence of amino acids that's required to make a weapon...
I don't think that computer security is a good analog for virus/genomic research security.
One problem is that a weapon wielder and friends are usually just as vulnerable as the target. There is no IFF (Identification Friend or Foe) that keeps 2nd missiles from targeting oneself or one's allies after missile 1 has taken out the target.
The best way to defend against a disease (or computer virus) outbreak is to identify, isolate and cure. For biological diseases, we should preemptively reduce exposure: by sterilising recycled air in planes, by bowing rather than shaking hands, by having innately antiseptic surfaces, etc. This would help whether the biological danger were natural or man-made.
We know more now, and learn faster, than when the plague killed half of Europe, or when the influenza pandemic killed millions after WW1.
It would take a very intelligent, skilled, and rich misanthrope to make a big dent in humanity.
@ common visitors
There is a catch here: humans are the most intelligent natural machines, and computers are the most intelligent human-made machines. Humans can learn exponentially, and so can viruses. Christof Paar has mentioned in his books that computers will learn exponentially when we efficiently use quantum technology in hardware and software algorithms, but that would take decades at least. Bruce's Friday blog is a nice look at the race between natural computing and quantum computing.
I work in crypto and my wife works in infectious diseases. We talked about the two papers a lot.
There's another way to make an analogy with computer security: The bad guy here is actually "mother nature" (MN). MN evolves her viruses continually. The question studied by those two research papers is not "how can we weaponize a virus", it is "what might MN do next?". Hence, their research is exactly analogous to standard cryptanalysis: they are trying to predict a possible next move of the adversary. As we have learned from computer security research, the right thing to do is to put that knowledge in the public domain. So it was correct that those papers were published in the literature.
Terrorists replicating the work is a red-herring. The argument that "not publishing the result prevents attackers from knowing it" does not apply as MN is developing many such attacks in a massively parallel way all the time. MN herself does not read the scientific literature -- she is much better at creating deadly diseases than humans are.
First off, Happy 4th of July to all those with the day off.
Secondly, hopefully for those in DC and similar areas the temperature has dropped and the power is back on.
@ Steven Galbraith,
MN [Mother Nature] herself does not read the scientific literature -- she is much better at creating deadly diseases than humans
She's also one heck of a sight better at delivery mechanisms as well, which is much more important.
For those who don't get why the delivery mechanism is actually more important, look at it this way,
On its own, a round of ammunition is not particularly dangerous to anyone more than a very short distance away. However, put it in a good delivery mechanism, such as a sniper rifle made by the likes of Accuracy International, and it becomes deadly at anything up to 2,500 m (approximately the current combat sniper record, held by Corporal of Horse (sniper) Harrison of the UK's Blues and Royals, much to the short-lived surprise of a couple of Taliban fighters).
Since people can't really be patched, the analogy breaks down. People have been studying HIV for almost three decades and there's still no vaccine in sight.
classified nuclear weapons technology? I don't see any sense in classifying that.
The principles are well understood, and given the material just about anyone can build a nuclear bomb.
However, the fission-material is rather difficult to get, or to produce. And I think the ingredients have to be kept secure, not the know-how secret.
Same with biological weapons, but there's also a more pressing reason to have this know-how in the open: Everyone doing research on viruses or bacteria is a potential ally, can potentially find a cure. Just as amateur computer-security people are an asset. The more people who know, the better off is society as a whole.
Secrecy does not equal security; often secrecy is even the arch-enemy of security.
His argument lacks self-consistency. He believes bio-research should be treated more like computer security research. One of his reasons is this: "This does not mean that all virology data will be stolen via computer networks, but it does mean that, once the existence of that data becomes public knowledge, you should assume that the bad guys will be able to get their hands on it." However, if everything gets hacked, then computer security research has been a failure; in that case, why should it be considered a model for other research?
I'm also disappointed in this article. An analogy proves nothing. The fundamental difficulty of bioweapons research is that we have techniques for creating weapons, but our ability to create defenses runs far behind.
We can't cure AIDS. If somebody is considering whether to publish a paper describing a simple method, requiring $50,000 worth of lab equipment, for making AIDS transmissible by mosquitoes... then all these grand lessons learned from cryptography seem worth [expletive deleted], except to claim that this paper should not be put on a computer. Humanity would have an extremely serious problem in this case, and taking "lessons" from computer science so as to claim the answer is obviously "Publish an advisory, then release the details 2 months later" would be suicidally stupid.
When Leo Szilard realized that you could get net nuclear energy from fission via a chain reaction, he didn't tell the world, he told Rabi and Fermi. When they discovered that purified graphite was a cheap and effective neutron moderator, Fermi wanted to publish for the sake of the great international project of science that was above nationalism. Rabi voted for secrecy and Fermi abided by the majority. Years later, the only neutron moderator the Nazis knew about was deuterium, and the Allies destroyed their main deuterium source, at a cost of some innocent civilian lives, and that was the end of the Nazi nuclear weapons program. I do not think humanity would have been better off if Szilard had blurted out everything and the Manhattan Project's results had been published openly; the Nazis might have really gotten started earlier, and devoted enough resources in the right direction, to have nuclear weapons when WWII started.
Of course, that analogy doesn't prove anything either, since biotech is so much cheaper and widely available than nuclear materials. Humanity has a problem here, and this article did not solve it.
Can't solve it until enough people are convinced by articles like this that a problem exists.