Securing Medical Research: A Cybersecurity Point of View

ABSTRACT: The problem of securing biological research data is a difficult and complicated one. Our ability to secure data on computers is not robust enough to ensure the security of existing data sets. Lessons from cryptography illustrate that neither secrecy measures, such as deleting technical details, nor national solutions, such as export controls, will work.


Science and Nature have each published papers on the H5N1 virus in humans after considerable debate about whether the research results in those papers could help terrorists create a bioweapon (1, 2). This notion of “dual-use” research is an important one for the community, and one that will sooner or later become critical. Perhaps these two papers are not dangerous in the wrong hands, but eventually there will be research results that are.

My background is in cryptography and computer security. I cannot comment on the potential value or harm from any particular piece of biological research, but I can discuss what works and what does not to keep research data secure. The cryptography and computer security communities have been wrestling for decades now with dual-use research: for example, whether to publish new Windows (Microsoft Corporation) vulnerabilities that can be immediately used to attack computers but whose publication helps us make the operating system more secure in the long run. From this experience, I offer five points to the virology community.

First, security based on secrecy is inherently fragile. The more secrets a system has, the less secure it is. A door lock that has a secret but unchangeable locking mechanism is less secure than a commercially purchased door lock with an easily changeable key. In cryptography, this is known as Kerckhoffs’ principle: Put all your secrecy into the key and none into the cryptographic algorithm (3, 4). The key is unique and easily changeable; the algorithm is system-wide and much more likely to become public. In fact, algorithms are deliberately published so that they get analyzed broadly. The lesson for dual-use virology research is that it is risky to base your security on keeping research secret. Militaries spend an enormous amount of money trying to maintain secret research laboratories, and even they do not always get security right. Once secret data become public, there is no way to go back.
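
To make Kerckhoffs’ principle concrete, here is a minimal sketch in Python, using only the standard library; it is an illustration only, not anything drawn from the papers above. It authenticates a record with HMAC-SHA-256, a fully published algorithm: the only secret is the key, and recovering from a leaked key means generating a new one rather than redesigning the system.

    import hashlib
    import hmac
    import secrets

    # The algorithm (HMAC-SHA-256) is public and standardized; following
    # Kerckhoffs' principle, all of the secrecy lives in the key.
    key = secrets.token_bytes(32)
    record = b"sensitive research record"

    tag = hmac.new(key, record, hashlib.sha256).digest()

    # Anyone may inspect the algorithm, but without the key the tag cannot
    # be forged. Constant-time comparison avoids timing side channels.
    assert hmac.compare_digest(tag, hmac.new(key, record, hashlib.sha256).digest())

    # If the key is ever compromised, replace it with a fresh 32-byte value;
    # the published algorithm never has to change.
    key = secrets.token_bytes(32)

The same division of labor holds for encryption: published ciphers such as AES, with all of the secrecy confined to the keys.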

Second, omitting technical details from published research is a poor security measure. We tried this in computer security with regard to vulnerabilities, announcing general information but not publishing specifics. The problem is that once the general information is announced, it is much easier for another researcher to replicate the results and generate the details. This is probably even more true in virology research than in computer security research: the very existence of a result can provide much of the road map to reproducing it.

Third, technical difficulty as a security measure has only short-term value. Technology only gets better; it never gets worse. To believe that some research cannot be replicated by amateurs because it requires equipment only available to state-of-the-art research institutions is short-sighted at best. What is impossible today will be a Ph.D. thesis in 20 years, and what was a Ph.D. thesis 20 years ago is a high-school science fair project today.

Fourth, securing research data in computer networks is risky at best. If you read newspapers, you know the current state of the art in computer security: Everything gets hacked. Cyber criminals steal money from banks. Cyber spies steal data from military computers. Although people talk about H5N1 research in terms of securing the research papers, that is largely a red herring; even if no papers existed, the research data would still be on a network-connected computer somewhere.

Not all computers are hacked and not all data gets stolen, but the risks are there. There are two basic types of threats in cyberspace. There are the opportunists: for example, criminals who want to break into a retail merchant’s system and steal a thousand credit card numbers. Against these attackers, relative security is what matters. Because the criminals do not care whom they attack, you are safe if you are more secure than other networks. The other type of threat is a targeted attack. These are attackers who, for whatever reason, want to attack a particular network. The buzzword in Internet security for this is “advanced persistent threat” (5). It is almost impossible to secure a network against a sufficiently skilled and tenacious adversary. All we can do is make the attacker’s job harder.

This does not mean that all virology data will be stolen via computer networks, but it does mean that, once the existence of that data becomes public knowledge, you should assume that the bad guys will be able to get their hands on it.

Lastly, national measures that prohibit publication will not work in an international community, especially in the Internet age. If either Science or Nature had refused to publish the H5N1 papers, they would have been published somewhere else. Even if some countries stop funding—or ban—this sort of research, it will still happen in another country.

The U.S. cryptography community saw this in the 1970s and early 1980s. At that time, the National Security Agency (NSA) controlled cryptography research, which included denying funding for research, classifying results after the fact, and using export-control laws to limit what ended up in products. This was the pre-Internet world, and it worked for a while. In the 1980s, the NSA gave up on classifying research because an international research community arose (6). The limited ability of U.S. researchers to get funding for block-cipher cryptanalysis merely moved that research to Europe and Asia. The NSA continued to limit the spread of cryptography via export-control laws; the U.S.-centric nature of the computer industry meant that this was effective. In the 1990s, the agency gave up on controlling software because the international online community became mainstream; this period was called “the Crypto Wars” (7). Export-control laws did prevent Microsoft from embedding cryptography into Windows for over a decade, but they did nothing to prevent products made in other countries from filling the market gaps.

Today, there are no restrictions on cryptography, and many U.S. government standards are the result of public international competitions. Right now the National Institute of Standards and Technology is working on a new Secure Hash Algorithm standard (8). When it is announced next year, it will be the product of a public call for algorithms that resulted in 64 submissions from over a dozen countries and then years of international analysis. The practical effects of unrestricted research are seen in the computer security you use today: on your computer, as you browse the Internet and engage in commerce, and on your cell phone and other smart devices. Sure, the bad guys make use of this research, too, but the beneficial uses far outweigh the malicious ones.
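
A minimal sketch of that practical effect: the hash functions that came out of open standardization are available in any commodity library. The example below assumes Python 3.6 or later, whose standard hashlib module includes both the earlier SHA-2 family and the SHA-3 functions that emerged from the competition described above.

    import hashlib

    data = b"openly published research results"

    # Both functions are fully specified public standards, analyzed in the
    # open for years; none of their security depends on secrecy.
    print("SHA-256 :", hashlib.sha256(data).hexdigest())
    print("SHA3-256:", hashlib.sha3_256(data).hexdigest())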

The computer security community has also had to wrestle with these dual-use issues. In the early days of public computing, researchers who discovered vulnerabilities would quietly tell the product vendors so as not to alert hackers as well. But all too often, the vendors would ignore the researchers. Because the vulnerability was not public, there was no urgency to fix it. Fixes might go into the next product release. Researchers, tired of this, started publishing the existence of vulnerabilities but not the details. Vendors, in response, tried to muzzle the researchers. They threatened them with lawsuits and belittled them in the press, calling the vulnerabilities only theoretical and not practical. The response from the researchers was predictable: They started publishing full details, and sometimes even code, demonstrating the vulnerabilities they found. This was called “full disclosure” and is the primary reason vendors now patch vulnerabilities quickly (9). Faced with published vulnerabilities that they could not pretend did not exist and that hackers could use, vendors started building internal procedures to quickly issue patches. If you use Microsoft Windows, you know about “patch Tuesday,” the once-a-month automatic download and installation of security patches.

Once vendors started taking security patches seriously, the research community (university researchers, security consultants, and informal hackers) moved to something called “responsible disclosure.” Now it is common for researchers to alert vendors before publication, giving them a month or two of head start to release a security patch. But without the threat of full disclosure, responsible disclosure would not work, and vendors would go back to ignoring security vulnerabilities (10, 11).

Could a similar process work for viruses? That is, could the researchers who produce these results work in concert with the people who develop vaccines, so that vaccines become available at the same time as the original results are released? Certainly this is not easy in practice, but perhaps it is a goal to work toward.

Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once research in these fields slows down, the resulting ignorance hurts us all.

On the other hand, the current vibrant fields of cryptography and computer security are a direct result of our willingness to publish methods of attack. Making and breaking systems are one and the same; you cannot learn one without the other. (Some universities even offer classes in computer virus writing.) Cryptography is better, and computers and networks are more secure, because our communities openly publish details on how to attack systems.

Virology is not computer science. A biological virus is not the same as a computer virus. A vulnerability that affects every individual copy of Windows is not as bad as a vulnerability that affects every individual person. Still, the lessons from computer security are valuable to anyone considering policies intended to encourage life-saving research in virology while at the same time preventing that research from being used to cause harm. This debate will not go away; it will only get more urgent.

References and Notes

  1. M. Imai et al., Experimental adaptation of an influenza H5 HA confers respiratory droplet transmission to a reassortant H5 HA/H1N1 virus in ferrets. Nature, published online 2 May 2012; doi:10.1038/nature10831.
  2. S. Herfst et al., Airborne transmission of influenza A/H5N1 virus between ferrets. Science 336, 1534 (2012).
  3. A. Kerckhoffs, La cryptographie militaire. J. Sci. Milit. 9, 5 (1883).
  4. A. Kerckhoffs, La cryptographie militaire. J. Sci. Milit. 9, 161 (1883).
  5. M. K. Daly, presentation at the 23rd Large Installation System Administration (LISA) Conference, Baltimore, MD, 4 November 2009.
  6. B. Schneier, Applied Cryptography: Protocols, Algorithms and Source Code in C (Wiley, New York, 1994).
  7. S. Levy, Crypto: How the Code Rebels Beat the Government–Saving Privacy in the Digital Age (Penguin, New York, 2001).
  8. National Institute of Standards and Technology, Fed. Regist. 72, 62212 (7 November 2007).
  9. B. Schneier, “Full disclosure and the window of exposure,” Crypto-Gram, 15 September 2000; www.schneier.com/crypto-gram-0009.html#1.
  10. B. Schneier, “Publicizing vulnerabilities,” Crypto-Gram, 15 February 2000; www.schneier.com/crypto-gram-0002.html#PublicizingVulnerabilities.
  11. B. Schneier, “Full disclosure,” Crypto-Gram, 15 November 2001; www.schneier.com/crypto-gram-0111.html#1.
  12. This article was based on a talk given at the meeting H5N1 Research, Biosafety, Biosecurity, and Bioethics, Royal Society, London, 3 April 2012.

