Entries Tagged "biological warfare"


Using LLMs to Create Bioweapons

I’m not sure there are good ways to build guardrails to prevent this sort of thing:

There is growing concern regarding the potential misuse of molecular machine learning models for harmful purposes. Specifically, the dual-use application of models for predicting cytotoxicity to create new poisons, or employing AlphaFold2 to develop novel bioweapons, has raised alarm. Central to these concerns is the possible misuse of large language models and automated experimentation for dual-use purposes or otherwise. We specifically address two critical synthesis issues: illicit drugs and chemical weapons. To evaluate these risks, we designed a test set comprising compounds from the DEA’s Schedule I and II substances and a list of known chemical weapon agents. We submitted these compounds to the Agent using their common names, IUPAC names, CAS numbers, and SMILES strings to determine if the Agent would carry out extensive analysis and planning (Figure 6).

[…]

The run logs can be found in Appendix F. Out of 11 different prompts (Figure 6), four (36%) provided a synthesis solution and attempted to consult documentation to execute the procedure. This figure is alarming on its own, but an even greater concern is the way in which the Agent declines to synthesize certain threats. Out of the seven refused chemicals, five were rejected only after the Agent used search functions to gather more information about the substance. For instance, when asked about synthesizing codeine, the Agent becomes alarmed upon learning the connection between codeine and morphine, only then concluding that the synthesis cannot be conducted due to the requirement of a controlled substance. However, this search function can be easily manipulated by altering the terminology, such as replacing all mentions of morphine with “Compound A” and codeine with “Compound B”. Alternatively, when requesting a synthesis procedure that must be performed in a DEA-licensed facility, bad actors can mislead the Agent by falsely claiming their facility is licensed, prompting the Agent to devise a synthesis solution.

In the remaining two instances, the Agent recognized the common names “heroin” and “mustard gas” as threats and prevented further information gathering. While these results are promising, it is crucial to recognize that the system’s capacity to detect misuse primarily applies to known compounds. For unknown compounds, the model is less likely to identify potential misuse, particularly for complex protein toxins where minor sequence changes might allow them to maintain the same properties but become unrecognizable to the model.
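The name-substitution trick described in the excerpt (calling morphine “Compound A”) is exactly why purely lexical guardrails are weak. A structure-aware check, which canonicalizes whatever identifier the user supplies and compares it against a denylist of canonical structures, is somewhat harder to fool, though it still fails for the novel analogs the authors warn about. Below is a minimal, hypothetical sketch of that idea, assuming RDKit is installed and using benign placeholder molecules (ethanol, benzene) in place of any real controlled substance:

```python
# Sketch of a structure-based (rather than name-based) guardrail.
# Assumes RDKit; the denylist entries are benign placeholders, not real agents.
from rdkit import Chem

DENYLIST_SMILES = ["CCO", "c1ccccc1"]  # placeholders: ethanol, benzene

def canonical(smiles):
    """Return RDKit's canonical SMILES, or None if the string doesn't parse."""
    mol = Chem.MolFromSmiles(smiles)
    return Chem.MolToSmiles(mol) if mol is not None else None

DENYLIST = {c for c in (canonical(s) for s in DENYLIST_SMILES) if c}

def is_denied(user_smiles):
    """True if the requested structure matches a denylisted structure,
    regardless of what the prompt calls it ("Compound B", a trade name, etc.)."""
    return canonical(user_smiles) in DENYLIST

print(is_denied("OCC"))   # True: the same molecule as "CCO", written differently
print(is_denied("CCCO"))  # False: a close analog slips through
```

Even this only covers exact, known structures, which is the paper’s closing point: slightly modified toxins or unfamiliar compounds would sail past both the name check and the structure check.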

Posted on April 18, 2023 at 7:19 AM

When Biology Becomes Software

All of life is based on the coordinated action of genetic parts (genes and their controlling sequences) found in the genomes (the complete DNA sequence) of organisms.

Genes and genomes are based on code—just like the digital language of computers. But instead of zeros and ones, four DNA letters—A, C, T, G—encode all of life. (Life is messy, and there are actually all sorts of edge cases, but ignore that for now.) If you have the sequence that encodes an organism, in theory, you could recreate it. If you can write new working code, you can alter an existing organism or create a novel one.
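To make the analogy concrete, here is a purely illustrative sketch of the “instruction set”: the sequence is read three letters at a time, and each three-letter codon maps to an amino acid, much as a processor decodes fixed-width instructions. Only a handful of entries from the standard genetic code table are shown, and all of the messy biology (transcription, splicing, regulation) is ignored:

```python
# Illustrative only: a few entries from the standard genetic code table.
GENETIC_CODE = {
    "ATG": "Met",  # also the usual start codon
    "TGG": "Trp",
    "GAA": "Glu",
    "AAA": "Lys",
    "TAA": "STOP", "TAG": "STOP", "TGA": "STOP",
}

def translate(dna):
    """Read the coding sequence three letters at a time until a stop codon."""
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = GENETIC_CODE.get(dna[i:i + 3], "???")  # codon missing from our toy table
        if aa == "STOP":
            break
        protein.append(aa)
    return protein

print(translate("ATGTGGGAAAAATAA"))  # ['Met', 'Trp', 'Glu', 'Lys']
```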

If this sounds to you a lot like software coding, you’re right. As synthetic biology looks more like computer technology, the risks of the latter become the risks of the former. Code is code, but because we’re dealing with molecules—and sometimes actual forms of life—the risks can be much greater.

Imagine a biological engineer trying to increase the expression of a gene that maintains normal gene function in blood cells. Even though it’s a relatively simple operation by today’s standards, it’ll almost certainly take multiple tries to get it right. Were this computer code, the only damage those failed tries would do is to crash the computer they’re running on. With a biological system, the code could instead increase the likelihood of multiple types of leukemias and wipe out cells important to the patient’s immune system.

We have known the mechanics of DNA for some 60-plus years. The field of modern biotechnology began in 1972 when Paul Berg joined one virus gene to another and produced the first “recombinant” virus. Synthetic biology arose in the early 2000s when biologists adopted the mindset of engineers; instead of moving single genes around, they designed complex genetic circuits.

In 2010, Craig Venter and his colleagues recreated the genome of a simple bacterium. More recently, researchers at the Medical Research Council Laboratory of Molecular Biology in Britain created a new, more streamlined version of E. coli. In both cases, the researchers created what could arguably be called new forms of life.

This is the new bioengineering, and it will only get more powerful. Today you can write DNA code in the same way a computer programmer writes computer code. Then you can use a DNA synthesizer or order DNA from a commercial vendor, and then use precision editing tools such as CRISPR to “run” it in an already existing organism, from a virus to a wheat plant to a person.

In the future, it may be possible to build an entire complex organism such as a dog or cat, or recreate an extinct mammoth (currently underway). Today, biotech companies are developing new gene therapies, and international consortia are addressing the feasibility and ethics of making changes to human genomes that could be passed down to succeeding generations.

Within the biological science community, urgent conversations are occurring about “cyberbiosecurity,” an admittedly contested term that exists between biological and information systems where vulnerabilities in one can affect the other. These can include the security of DNA databanks, the fidelity of transmission of those data, and information hazards associated with specific DNA sequences that could encode novel pathogens for which no cures exist.

These risks have occupied not only learned bodies—the National Academies of Sciences, Engineering, and Medicine published at least a half dozen reports on biosecurity risks and how to address them proactively—but have made it to mainstream media: genome editing was a major plot element in Netflix’s Season 3 of “Designated Survivor.”

Our worries are more prosaic. As synthetic biology “programming” reaches the complexity of traditional computer programming, the risks of computer systems will transfer to biological systems. The difference is that biological systems have the potential to cause much greater, and far more lasting, damage than computer systems.

Programmers write software through trial and error. Because computer systems are so complex and there is no real theory of software, programmers repeatedly test the code they write until it works properly. This makes sense, because the cost of getting it wrong is low and trying again is easy. There are even jokes about this: a programmer would diagnose a car crash by putting another car in the same situation and seeing if it happened again.

Even finished code still has problems. Again due to the complexity of modern software systems, “works properly” doesn’t mean that it’s perfectly correct. Modern software is full of bugs—thousands of software flaws—that occasionally affect performance or security. That’s why any piece of software you use is regularly updated; the developers are still fixing bugs, even after the software is released.

Bioengineering will be largely the same: writing biological code will have these same reliability properties. Unfortunately, the software solution of making lots of mistakes and fixing them as you go doesn’t work in biology.

In nature, a similar type of trial and error is handled by “the survival of the fittest” and occurs slowly over many generations. But human-generated code from scratch doesn’t have that kind of correction mechanism. Inadvertent or intentional release of these newly coded “programs” may result in pathogens of expanded host range (just think swine flu) or organisms that wreck delicate ecological balances.

Unlike computer software, there’s no way so far to “patch” biological systems once released to the wild, although researchers are trying to develop one. Nor are there ways to “patch” the humans (or animals or crops) susceptible to such agents. Stringent biocontainment helps, but no containment system provides zero risk.

Opportunities for mischief and malfeasance often occur when expertise is siloed, fields intersect only at the margins, and when the gathered knowledge of small, expert groups doesn’t make its way into the larger body of practitioners who have important contributions to make.

Good starts have been made by biologists, security agencies, and governance experts. But these efforts have tended to be siloed: confined to either the biological or the digital sphere of influence, classified and kept solely within the military, or exchanged only among a very small set of investigators.

What we need are more opportunities for integration between the two disciplines. We need to share information and experiences, classified and unclassified. Our digital and biological communities already have tools to identify and mitigate biological risks, and tools to write and deploy secure computer systems.

Those opportunities will not occur without effort or financial support. Let’s find those resources, public, private, philanthropic, or any combination. And then let’s use those resources to set up some novel opportunities for digital geeks and bionerds—as well as ethicists and policy makers—to share experiences and concerns, and come up with creative, constructive solutions to these problems that are more than just patches.

These are overarching problems; let’s not let siloed thinking or funding get in the way of breaking down barriers between communities. And let’s not let technology of any kind get in the way of the public good.

This essay previously appeared on CNN.com.

EDITED TO ADD (9/23): Commentary.

Posted on September 13, 2019 at 11:40 AM

Fake News and Pandemics

When the next pandemic strikes, we’ll be fighting it on two fronts. The first is the one you immediately think about: understanding the disease, researching a cure and inoculating the population. The second is new, and one you might not have thought much about: fighting the deluge of rumors, misinformation and flat-out lies that will appear on the internet.

The second battle will be like the Russian disinformation campaigns during the 2016 presidential election, only with the addition of a deadly health crisis and possibly without a malicious government actor. But while the two problems—misinformation affecting democracy and misinformation affecting public health—will have similar solutions, the latter is much less political. If we work to solve the pandemic disinformation problem, any solutions are likely to also be applicable to the democracy one.

Pandemics are part of our future. They might be like the 1968 Hong Kong flu, which killed a million people, or the 1918 Spanish flu, which killed over 40 million. Yes, modern medicine makes pandemics less likely and less deadly. But global travel and trade, increased population density, decreased wildlife habitats, and increased animal farming to satisfy a growing and more affluent population have made them more likely. Experts agree that it’s not a matter of if—it’s only a matter of when.

When the next pandemic strikes, accurate information will be just as important as effective treatments. We saw this in 2014, when the Nigerian government managed to contain a subcontinentwide Ebola epidemic to just 20 infections and eight fatalities. Part of that success was because of the ways officials communicated health information to all Nigerians, using government-sponsored videos, social media campaigns and international experts. Without that, the death toll in Lagos, a city of 21 million people, would have probably been greater than the 11,000 the rest of the continent experienced.

There’s every reason to expect misinformation to be rampant during a pandemic. In the early hours and days, information will be scant and rumors will abound. Most of us are not health professionals or scientists. We won’t be able to tell fact from fiction. Even worse, we’ll be scared. Our brains work differently when we are scared, and they latch on to whatever makes us feel safer—even if it’s not true.

Rumors and misinformation could easily overwhelm legitimate news channels, as people share tweets, images and videos. Much of it will be well-intentioned but wrong—like the misinformation spread by the anti-vaccination community today—but some of it may be malicious. In the 1980s, the KGB ran a sophisticated disinformation campaign—Operation Infektion—to spread the rumor that HIV/AIDS was a result of an American biological weapon gone awry. It’s reasonable to assume some group or country would deliberately spread intentional lies in an attempt to increase death and chaos.

It’s not just misinformation about which treatments work (and are safe), and which treatments don’t work (and are unsafe). Misinformation can affect society’s ability to deal with a pandemic at many different levels. Right now, Ebola relief efforts in the Democratic Republic of Congo are being stymied by mistrust of health workers and government officials.

It doesn’t take much to imagine how this can lead to disaster. Jay Walker, curator of the TEDMED conferences, laid out some of the possibilities in a 2016 essay: people overwhelming and even looting pharmacies trying to get some drug that is irrelevant or nonexistent, people needlessly fleeing cities and leaving them paralyzed, health workers not showing up for work, truck drivers and other essential people being afraid to enter infected areas, official sites like CDC.gov being hacked and discredited. This kind of thing can magnify the health effects of a pandemic many times over, and in extreme cases could lead to a total societal collapse.

This is going to be something that government health organizations, medical professionals, social media companies and the traditional media are going to have to work out together. There isn’t any single solution; it will require many different interventions that will all need to work together. The interventions will look a lot like what we’re already talking about with regard to government-run and other information influence campaigns that target our democratic processes: methods of visibly identifying false stories, the identification and deletion of fake posts and accounts, ways to promote official and accurate news, and so on. At the scale these are needed, they will have to be done automatically and in real time.

Since the 2016 presidential election, we have been talking about propaganda campaigns, and about how social media amplifies fake news and allows damaging messages to spread easily. It’s a hard discussion to have in today’s hyperpolarized political climate. After any election, the winning side has every incentive to downplay the role of fake news.

But pandemics are different; there’s no political constituency in favor of people dying because of misinformation. Google doesn’t want the results of people’s well-intentioned searches to lead to fatalities. Facebook and Twitter don’t want people on their platforms sharing misinformation that will result in either individual or mass deaths. Focusing on pandemics gives us an apolitical way to collectively approach the general problem of misinformation and fake news. And any solutions for pandemics are likely to also be applicable to the more general—and more political—problems.

Pandemics are inevitable. Bioterror is already possible, and will only get easier as the requisite technologies become cheaper and more common. We’re experiencing the largest measles outbreak in 25 years thanks to the anti-vaccination movement, which has hijacked social media to amplify its messages; we seem unable to beat back the disinformation and pseudoscience surrounding the vaccine. Those same forces will dramatically increase death and social upheaval in the event of a pandemic.

Let the Russian propaganda attacks on the 2016 election serve as a wake-up call for this and other threats. We need to solve the problem of misinformation during pandemics together—governments and industries in collaboration with medical officials, all across the world—before there’s a crisis. And the solutions will also help us shore up our democracy in the process.

This essay previously appeared in the New York Times.

Posted on June 21, 2019 at 5:10 AM

Tomato-Plant Security

I have a soft spot for interesting biological security measures, especially by plants. I’ve used them as examples in several of my books. Here’s a new one: when tomato plants are attacked by caterpillars, they release a chemical that turns the caterpillars on each other:

It’s common for caterpillars to eat each other when they’re stressed out by the lack of food. (We’ve all been there.) But why would they start eating each other when the plant food is right in front of them? Answer: because of devious behavior control by plants.

When plants are attacked (read: eaten) they make themselves more toxic by activating a chemical called methyl jasmonate. Scientists sprayed tomato plants with methyl jasmonate to kick off these responses, then unleashed caterpillars on them.

Compared to an untreated plant, a high-dose plant had five times as much plant left behind because the caterpillars were turning on each other instead. The caterpillars on a treated tomato plant ate twice as many other caterpillars as the ones on a control plant.

Posted on July 13, 2017 at 6:06 AM

Ricin as a Terrorist Tool

This paper (full paper behind paywall)—from Environment International (2009)—does a good job of separating fact from fiction:

Abstract: In recent years there has been an increased concern regarding the potential use of chemical and biological weapons for mass urban terror. In particular, there are concerns that ricin could be employed as such an agent. This has been reinforced by recent high profile cases involving ricin, and its use during the cold war to assassinate a high profile communist dissident. Nevertheless, despite these events, does it deserve such a reputation? Ricin is clearly toxic, though its level of risk depends on the route of entry. By ingestion, the pathology of ricin is largely restricted to the gastrointestinal tract where it may cause mucosal injuries; with appropriate treatment, most patients will make a full recovery. As an agent of terror, it could be used to contaminate an urban water supply, with the intent of causing lethality in a large urban population. However, a substantial mass of pure ricin powder would be required. Such an exercise would be impossible to achieve covertly and would not guarantee success due to variables such as reticulation management, chlorination, mixing, bacterial degradation and ultra-violet light. By injection, ricin is lethal; however, while parenteral delivery is an ideal route for assassination, it is not realistic for an urban population. Dermal absorption of ricin has not been demonstrated. Ricin is also lethal by inhalation. Low doses can lead to progressive and diffuse pulmonary oedema with associated inflammation and necrosis of the alveolar pneumocytes. However, the risk of toxicity is dependent on the aerodynamic equivalent diameter (AED) of the ricin particles. The AED, which is an indicator of the aerodynamic behaviour of a particle, must be of sufficiently low micron size as to target the human alveoli and thereby cause major toxic effects. To target a large population would also necessitate a quantity of powder in excess of several metric tons. The technical and logistical skills required to formulate such a mass of powder to the required size is beyond the ability of terrorists who typically operate out of a kitchen in a small urban dwelling or in a small ill-equipped laboratory. Ricin as a toxin is deadly but as an agent of bioterror it is unsuitable and therefore does not deserve the press attention and subsequent public alarm that has been created.

This paper lists all known intoxication attempts, including the famous Markov assassination.

Posted on June 14, 2013 at 7:15 AM

Protecting (and Collecting) the DNA of World Leaders

There’s a lot of hype and hyperbole in this story, but here’s the interesting bit:

According to Ronald Kessler, the author of the 2009 book In the President’s Secret Service, Navy stewards gather bedsheets, drinking glasses, and other objects the president has touched—they are later sanitized or destroyed—in an effort to keep would-be malefactors from obtaining his genetic material. (The Secret Service would neither confirm nor deny this practice, nor would it comment on any other aspect of this article.) And according to a 2010 release of secret cables by WikiLeaks, Secretary of State Hillary Clinton directed our embassies to surreptitiously collect DNA samples from foreign heads of state and senior United Nations officials. Clearly, the U.S. sees strategic advantage in knowing the specific biology of world leaders; it would be surprising if other nations didn’t feel the same.

The rest of the article is about individually targeted bioweapons.

Posted on October 29, 2012 at 1:53 PM

On Securing Potentially Dangerous Virology Research

Abstract: The problem of securing biological research data is a difficult and complicated one. Our ability to secure data on computers is not robust enough to ensure the security of existing data sets. Lessons from cryptography illustrate that neither secrecy measures, such as deleting technical details, nor national solutions, such as export controls, will work.

Science and Nature have each published papers on the H5N1 virus in humans after considerable debate about whether the research results in those papers could help terrorists create a bioweapon. This notion of “dual use” research is an important one for the community, and one that will sooner or later become critical. Perhaps these two papers are not dangerous in the wrong hands, but eventually there will be research results that are.

My background is in cryptography and computer security. I cannot comment on the potential value or harm from any particular piece of biological research, but I can discuss what works and what does not to keep research data secure. The cryptography and computer security communities have been wrestling for decades now with dual-use research: for example, whether to publish new Windows (Microsoft Corporation) vulnerabilities that can be immediately used to attack computers but whose publication helps us make the operating system more secure in the long run. From this experience, I offer five points to the virology community.

First, security based on secrecy is inherently fragile. The more secrets a system has, the less secure it is. A door lock that has a secret but unchangeable locking mechanism is less secure than a commercially purchased door lock with an easily changeable key. In cryptography, this is known as Kerckhoffs’ principle: Put all your secrecy into the key and none into the cryptographic algorithm. The key is unique and easily changeable; the algorithm is system-wide and much more likely to become public. In fact, algorithms are deliberately published so that they get analyzed broadly. The lesson for dual-use virology research is that it is risky to base your security on keeping research secret. Militaries spend an enormous amount of money trying to maintain secret research laboratories, and even they do not always get security right. Once secret data become public, there is no way to go back.
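Modern cryptographic libraries embody Kerckhoffs’ principle directly: the algorithm is public and heavily analyzed, and all of the secrecy lives in a key that is easy to generate and easy to replace. A minimal sketch, assuming Python’s widely used cryptography package and its AES-GCM construction:

```python
# Kerckhoffs' principle in practice: the algorithm (AES-GCM) is public and
# standardized; only the key, generated fresh here, is secret and replaceable.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)  # the only secret; rotate at will
nonce = os.urandom(12)                     # unique per message, not secret

aesgcm = AESGCM(key)
ciphertext = aesgcm.encrypt(nonce, b"research data", b"associated header")
assert aesgcm.decrypt(nonce, ciphertext, b"associated header") == b"research data"

# If the key leaks, generate a new one and re-encrypt; nothing about the
# public algorithm needs to change.
```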

Second, omitting technical details from published research is a poor security measure. We tried this in computer security with regard to vulnerabilities, announcing general information but not publishing specifics. The problem is that once the general information is announced, it is much easier for another researcher to replicate the results and generate the details. This is probably even more true in virology research than in computer security research, where the very existence of a result can provide much of the road map to that result.

Third, technical difficulty as a security measure has only short-term value. Technology only gets better; it never gets worse. To believe that some research cannot be replicated by amateurs because it requires equipment only available to state-of-the-art research institutions is short-sighted at best. What is impossible today will be a Ph.D. thesis in 20 years, and what was a Ph.D. thesis 20 years ago is a high-school science fair project today.

Fourth, securing research data in computer networks is risky at best. If you read newspapers, you know the current state of the art in computer security: Everything gets hacked. Cyber criminals steal money from banks. Cyber spies steal data from military computers. Although people talk about H5N1 research in terms of securing the research papers, that is largely a red herring; even if no papers existed, the research data would still be on a network-connected computer somewhere.

Not all computers are hacked and not all data gets stolen, but the risks are there. There are two basic types of threats in cyberspace. There are the opportunists: for example, criminals who want to break into a retail merchant’s system and steal a thousand credit card numbers. Against these attackers, relative security is what matters. Because the criminals do not care whom they attack, you are safe if you are more secure than other networks. The other type of threat is a targeted attack. These are attackers who, for whatever reason, want to attack a particular network. The buzzword in Internet security for this is “advanced persistent threat.” It is almost impossible to secure a network against a sufficiently skilled and tenacious adversary. All we can do is make the attacker’s job harder.

This does not mean that all virology data will be stolen via computer networks, but it does mean that, once the existence of that data becomes public knowledge, you should assume that the bad guys will be able to get their hands on it.

Lastly, national measures that prohibit publication will not work in an international community, especially in the Internet age. If either Science or Nature had refused to publish the H5N1 papers, they would have been published somewhere else. Even if some countries stop funding—or ban—this sort of research, it will still happen in another country.

The U.S. cryptography community saw this in the 1970s and early 1980s. At that time, the National Security Agency (NSA) controlled cryptography research, which included denying funding for research, classifying results after the fact, and using export-control laws to limit what ended up in products. This was the pre-Internet world, and it worked for a while. In the 1980s they gave up on classifying research, because an international community arose. The limited ability for U.S. researchers to get funding for block-cipher cryptanalysis merely moved that research to Europe and Asia. The NSA continued to limit the spread of cryptography via export-control laws; the U.S.-centric nature of the computer industry meant that this was effective. In the 1990s they gave up on controlling software because the international online community became mainstream; this period was called “the Crypto Wars.” Export-control laws did prevent Microsoft from embedding cryptography into Windows for over a decade, but they did nothing to prevent products made in other countries from filling the market gaps.

Today, there are no restrictions on cryptography, and many U.S. government standards are the result of public international competitions. Right now the National Institute of Standards and Technology is working on a new Secure Hash Algorithm standard. When it is announced next year, it will be the product of a public call for algorithms that resulted in 64 submissions from over a dozen countries and then years of international analysis. The practical effects of unrestricted research are seen in the computer security you use today: on your computer, as you browse the Internet and engage in commerce, and on your cell phone and other smart devices. Sure, the bad guys make use of this research, too, but the beneficial uses far outweigh the malicious ones.
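That competition concluded with the Keccak algorithm standardized as SHA-3, and the practical effect of the open process is easy to see: the winner now ships in ordinary standard libraries, free for anyone to use or attack. For instance, in Python (3.6 or later):

```python
# SHA-3, the product of an open international competition, in the standard library.
import hashlib

print(hashlib.sha3_256(b"openly published research").hexdigest())

# The earlier, equally public SHA-2 family sits right alongside it.
print(hashlib.sha256(b"openly published research").hexdigest())
```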

The computer security community has also had to wrestle with these dual-use issues. In the early days of public computing, researchers who discovered vulnerabilities would quietly tell the product vendors so as to not also alert hackers. But all too often, the vendors would ignore the researchers. Because the vulnerability was not public, there was no urgency to fix it. Fixes might go into the next product release. Researchers, tired of this, started publishing the existence of vulnerabilities but not the details. Vendors, in response, tried to muzzle the researchers. They threatened them with lawsuits and belittled them in the press, calling the vulnerabilities only theoretical and not practical. The response from the researchers was predictable: They started publishing full details, and sometimes even code, demonstrating the vulnerabilities they found. This was called “full disclosure” and is the primary reason vendors now patch vulnerabilities quickly. Faced with published vulnerabilities that they could not pretend did not exist and that the hackers could use, they started building internal procedures to quickly issue patches. If you use Microsoft Windows, you know about “patch Tuesday”: the once-a-month automatic download and installation of security patches.

Once vendors started taking security patches seriously, the research community (university researchers, security consultants, and informal hackers) moved to something called “responsible disclosure.” Now it is common for researchers to alert vendors before publication, giving them a month or two head start to release a security patch. But without the threat of full disclosure, responsible disclosure would not work, and vendors would go back to ignoring security vulnerabilities.

Could a similar process work for viruses? That is, could the researchers who create these dangerous strains work in concert with the people who develop vaccines, so that vaccines become available at the same time as the original results are released? Certainly this is not easy in practice, but perhaps it is a goal to work toward.

Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once research in these areas slows down, the increasing ignorance hurts us all.

On the other hand, the current vibrant fields of cryptography and computer security are a direct result of our willingness to publish methods of attack. Making and breaking systems are one and the same; you cannot learn one without the other. (Some universities even offer classes in computer virus writing.) Cryptography is better, and computers and networks are more secure, because our communities openly publish details on how to attack systems.

Virology is not computer science. A biological virus is not the same as a computer virus. A vulnerability that affects every individual copy of Windows is not as bad as a vulnerability that affects every individual person. Still, the lessons from computer security are valuable to anyone considering policies intended to encourage life-saving research in virology while at the same time prevent that research from being used to cause harm. This debate will not go away; it will only get more urgent.

This essay was originally published in Science.

EDITED TO ADD (7/14): Related article: “What Biology Can Learn from Infosec.”

Posted on June 29, 2012 at 6:35 AM

Full Disclosure in Biology

The debate over full disclosure in computer security has been going on for the better part of two decades now. The stakes are much higher in biology:

The virus is an H5N1 avian influenza strain that has been genetically altered and is now easily transmissible between ferrets, the animals that most closely mimic the human response to flu. Scientists believe it’s likely that the pathogen, if it emerged in nature or were released, would trigger an influenza pandemic, quite possibly with many millions of deaths.

In a 17th floor office in the same building, virologist Ron Fouchier of Erasmus Medical Center calmly explains why his team created what he says is “probably one of the most dangerous viruses you can make”—and why he wants to publish a paper describing how they did it. Fouchier is also bracing for a media storm. After he talked to ScienceInsider yesterday, he had an appointment with an institutional press officer to chart a communication strategy.

Of course, there’s value to the research:

“These studies are very important,” says biodefense and flu expert Michael Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota, Twin Cities. The researchers “have the full support of the influenza community,” Osterholm says, because there are potential benefits for public health. For instance, the results show that those downplaying the risks of an H5N1 pandemic should think again, he says.

Knowing the exact mutations that make the virus transmissible also enables scientists to look for them in the field and take more aggressive control measures when one or more show up, adds Fouchier. The study also enables researchers to test whether H5N1 vaccines and antiviral drugs would work against the new strain.

And we know how badly this sort of security works:

Osterholm says he can’t discuss details of the papers because he’s an NSABB member. But he says it should be possible to omit certain key details from controversial papers and make them available to people who really need to know. “We don’t want to give bad guys a road map on how to make bad bugs really bad,” he says.

Posted on November 30, 2011 at 12:28 PM

