July 15, 2012
by Bruce Schneier
Chief Security Technology Officer, BT
schneier@schneier.com
http://www.schneier.com
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-1207.html>. These same essays and news items appear in the “Schneier on Security” blog at <http://www.schneier.com/>, along with a lively comment section. An RSS feed is available.
In this issue:
- So You Want to Be a Security Expert
- Rand Paul Takes on the TSA
- News
- On Securing Potentially Dangerous Virology Research
- The Failure of Anti-Virus Companies to Catch Military Malware
- Schneier News
- E-Mail Accounts More Valuable than Bank Accounts
- “Top Secret America” on the Post-9/11 Cycle of Fear and Funding
So You Want to Be a Security Expert
I regularly receive e-mail from people who want advice on how to learn more about computer security, either as a course of study in college or as an IT person considering it as a career choice.
First, know that there are many subspecialties in computer security. You can be an expert in keeping systems from being hacked, or in creating unhackable software. You can be an expert in finding security problems in software, or in networks. You can be an expert in viruses, or policies, or cryptography. There are many, many opportunities for many different skill sets. You don’t have to be a coder to be a security expert.
In general, though, I have three pieces of advice to anyone who wants to learn computer security.
*Study.* Studying can take many forms. It can be classwork, either at universities or at training conferences like SANS and Offensive Security. (See below for some good self-starter resources.) It can be reading; there are a lot of excellent books—and blogs—out there that teach different aspects of computer security. Don’t limit yourself to computer science, either. You can learn a lot by studying other areas of security, and soft sciences like economics, psychology, and sociology.
*Do.* Computer security is fundamentally a practitioner’s art, and that requires practice. This means using what you’ve learned to configure security systems, design new security systems, and—yes—break existing security systems. This is why many courses have strong hands-on components; you won’t learn much without it.
*Show.* It doesn’t matter what you know or what you can do if you can’t demonstrate it to someone who might want to hire you. This doesn’t just mean sounding good in an interview. It means sounding good on mailing lists and in blog comments. You can show your expertise by making podcasts and writing your own blog. You can teach seminars at your local user group meetings. You can write papers for conferences, or books.
I am a fan of security certifications, which can often demonstrate all of these things to a potential employer quickly and easily.
I’ve really said nothing here that isn’t also true for a gazillion other areas of study, but security also requires a particular mindset—one I consider essential for success in this field. I’m not sure it can be taught, but it certainly can be encouraged. “This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.” This is especially true if you want to design security systems and not just implement them. Remember Schneier’s Law: “Any person can invent a security system so clever that she or he can’t think of how to break it.” The only way your designs are going to be trusted is if you’ve made a name for yourself breaking other people’s designs.
One final word about cryptography. Modern cryptography is particularly hard to learn. In addition to everything above, it requires graduate-level knowledge in mathematics. And, as in computer security in general, your prowess is demonstrated by what you can break. The field has progressed a lot since I wrote my guide to becoming a cryptographer and self-study cryptanalysis course a dozen years ago, but they’re not bad places to start.
This essay originally appeared on “Krebs on Security,” as the second in a series of answers to the question of how to break into the security field.
http://krebsonsecurity.com/2012/07/…
This is the first essay in the series; there will be more.
http://krebsonsecurity.com/2012/06/…
Blog entry URL:
https://www.schneier.com/blog/archives/2012/07/…
Training classes:
http://www.sans.org/
http://www.offensive-security.com/
Self-starter training resources:
http://www.offensive-security.com/…
http://www.backtrack-linux.org/tutorials/
http://www.hackthissite.org/
https://www.owasp.org/index.php/…
http://www.irongeek.com/i.php?page=mutillidae/…
Computer security book recommendations:
http://www.schneier.com/book-ce.html
http://www.amazon.com/gp/product/0470068523/…
http://www.amazon.com/gp/product/0321814908/…
http://www.amazon.com/gp/product/0321501950/…
http://www.amazon.com/gp/product/0470395362/…
http://www.amazon.com/gp/product/0929408233/…
http://www.schneier.com/book-sandl.html
http://www.amazon.com/gp/search?…
http://taosecurity.blogspot.com/search/label/bestbook
Other book recommendations:
http://www.schneier.com/book-beyondfear.html
http://www.schneier.com/book-lo.html
http://www.cl.cam.ac.uk/~rja14/econsec.html
Blog:
http://seclists.org/
Psychology of security:
http://www.schneier.com/essay-155.html
http://mail.pauldotcom.com/cgi-bin/mailman/listinfo/…
Mailing lists and blogs:
https://lists.sans.org/mailman/listinfo/dfir
https://lists.sans.org/mailman/listinfo/gpwn-list
http://www.schneier.com
http://www.cigital.com/silver-bullet/podcast
http://www.lightbluetouchpaper.org/
Security certifications:
https://www.schneier.com/blog/archives/2006/07/…
http://www.starmind.org/2012/01/13/…
The security mindset:
https://www.schneier.com/blog/archives/2008/03/…
https://www.schneier.com/blog/archives/2012/06/…
Schneier’s Law:
https://www.schneier.com/blog/archives/2011/04/…
Cryptography resources:
http://www.schneier.com/…
http://www.schneier.com/paper-self-study.pdf
Rand Paul Takes on the TSA
Rand Paul has introduced legislation to rein in the TSA. There are two bills: “One bill would require that the mostly federalized program be turned over to private screeners and allow airports with Department of Homeland Security approval to select companies to handle the work.”
This seems to be a result of a fundamental misunderstanding of the economic incentives involved here, combined with magical thinking that a market solution solves all. In airport screening, the passenger isn’t the customer. (Technically he is, but only indirectly.) The airline isn’t even the customer. The customer is the U.S. government, which is in the grip of an irrational fear of terrorism.
It doesn’t matter if an airport screener receives a paycheck signed by the Department of the Treasury or Private Airport Screening Services, Inc. As long as a terrorized government—one that needs to be seen by voters as “tough on terror” and wants to stop every terrorist attack, regardless of the cost, and is willing to sacrifice all for the illusion of security—gets to set the security standards, we’re going to get TSA-style security.
We can put the airlines, either directly or via airport fees, in charge of security, but that has problems in the other direction. Airlines don’t really care about terrorism; it’s rare, the costs to the airline are relatively small (remember that the government bailed the industry out after 9/11), and the rest of the costs are externalities and are borne by other people. So if airlines are in charge, we’re likely to get less security than makes sense.
It makes sense for a government to be in charge of airport security—either directly or by setting standards for contractors to follow, I don’t care—but we’ll only get sensible security when the government starts behaving sensibly.
“The second bill would permit travelers to opt out of pat-downs and be rescreened, allow them to call a lawyer when detained, increase the role of dogs in explosive detection, let passengers ‘appropriately object to mistreatment,’ allow children 12 years old and younger to avoid ‘unnecessary pat-downs’ and require the distribution of the new rights at airports. That legislation also would let airports decide to privatize if wanted and expand TSA’s PreCheck program for trusted travelers.”
This is a mixed bag. Airports can already privatize security—SFO has done so already—and TSA’s PreCheck is being expanded. Opting out of pat downs and being rescreened only makes sense if the pat down request was the result of an anomaly in the screening process; my guess is that rescreening will just produce the same anomaly and still require a pat down. The right to call a lawyer when detained is a good one, although in reality we passengers just want to make our flights; that’s why we let ourselves be subjected to this sort of treatment at airports. And the phrase “unnecessary pat-downs” all comes down to what is considered necessary. If a 12-year-old goes through a full-body scanner and a gun-shaped image shows up on the screen, is the subsequent pat down necessary? What if it’s a long and thin image? What if he goes through a metal detector and it beeps? And who gets to decide what’s necessary? If it’s the TSA, nothing will change.
And dogs: a great idea, but a logistical nightmare. Dogs require space to eat, sleep, run, poop, and so on. They just don’t fit into your typical airport setup.
The problem isn’t government-run airport security, full-body scanners, the screening of children and the elderly, or even a paucity of dogs. The problem is that we were so terrorized that we demanded our government keep us safe at all costs. The problem is that our government was so terrorized after 9/11 that it gave an enormous amount of power to our security organizations. The problem is that the security-industrial complex has gotten large and powerful—and good at advancing its agenda—and that we’ve made our public officials so scared that they don’t notice when security goes too far.
I too want to rein in the TSA, but the only way to do that is to change the TSA’s mission. And the only way to do that is to change the government that gives the TSA its mission. We need to refuse to be terrorized, and we need to elect non-terrorized legislators.
But that’s a long way off. In the near term, I’d like to see legislation that forces the TSA, the DHS, and anyone working in counterterrorism, to justify their systems, procedures, and expenditures with cost-benefit analyses. It’s not going to magically dismantle the security-industrial complex, eliminate the culture of fear, or imbue our elected officials with common sense—but it’s a start.
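As a sketch of what such a justification might look like, here is a toy break-even calculation in the spirit of the cost-benefit work cited below. Every number in it is a hypothetical placeholder, not a real figure, and the function itself is only one simplified way to frame the question.

    # A toy break-even analysis. All inputs are hypothetical placeholders.

    def break_even_attacks_per_year(annual_cost, loss_per_attack, risk_reduction):
        """Number of otherwise-successful attacks per year a measure must avert
        for its benefits to equal its cost.

        annual_cost     -- yearly cost of the security measure, in dollars
        loss_per_attack -- direct plus indirect losses from one successful attack
        risk_reduction  -- fraction of attempted attacks the measure actually stops
        """
        return annual_cost / (loss_per_attack * risk_reduction)

    if __name__ == "__main__":
        attacks = break_even_attacks_per_year(
            annual_cost=1_000_000_000,    # hypothetical: $1B/year program
            loss_per_attack=500_000_000,  # hypothetical: $500M per successful attack
            risk_reduction=0.5,           # hypothetical: stops half of all attempts
        )
        print(f"Break-even point: {attacks:.1f} attempted attacks per year")

The point of the exercise is not the specific output but the discipline: a measure whose break-even point requires implausibly many foiled attacks per year is hard to justify.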
The legislation:
http://www.politico.com/news/stories/0612/77475.html
http://images.politico.com/global/2012/06/…
http://images.politico.com/global/2012/06/…
The TSA on privatization:
http://www.tsa.gov/press/releases/2006/…
“Refuse to be Terrorized”:
http://www.schneier.com/essay-124.html
Me on the TSA performing cost-benefit analyses:
http://www.schneier.com/essay-395.html
More writings on the DHS and cost-benefit analyses:
http://www.slate.com/articles/news_and_politics/…
http://www.amazon.com/gp/product/0199795762/…
A rebuttal to my essay. It’s too insulting to respond directly to, but there are points worth debating.
http://snallabolaget.com/?p=2256
News
Many roadside farm stands in the U.S. are unstaffed. They work on the honor system: take what you want, and pay what you owe. I like systems that leverage personal moral codes for security. But I’ll bet that the pay boxes are bolted to the tables. It’s one thing for someone to take produce without paying. It’s quite another for him to take the entire day’s receipts.
http://www.npr.org/s/thesalt/2012/06/11/…
Britain’s Prince Philip on banning guns:
https://www.schneier.com/blog/archives/2012/06/…
Clever attack against a point-of-sale terminal: replacing the machine with a modified one. Presumably these hacked point-of-sale terminals look and function normally, and additionally save a copy of the credit card information. Note that this attack works despite any customer-focused security, like chip-and-pin systems.
http://www.thestar.com/news/crime/article/1203949
Interesting blog post about John McPhee’s book on Switzerland’s national defense.
http://bldgblog.blogspot.co.uk/2012/06/…
It’s not a new idea, but Apple Computer has received a patent on “techniques to pollute electronic profiling”:
https://www.schneier.com/blog/archives/2012/06/…
Similar technology has already been developed by Breadcrumbs Solutions, and will be released as free beta software in a few months.
http://www.youtube.com/watch?v=sOCfvdr3jaY
http://breadcrumbssolutions.com/…
Interesting conclusion by Cormac Herley, in this paper: “Why Do Nigerian Scammers Say They are From Nigeria?”
“Our analysis suggests that is an advantage to the attacker, not a disadvantage. Since his attack has a low density of victims the Nigerian scammer has an over-riding need to reduce false positives. By sending an email that repels all but the most gullible the scammer gets the most promising marks to self-select, and tilts the true to false positive ratio in his favor.”
http://research.microsoft.com/pubs/167719/…
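Herley’s argument is easy to see with a toy calculation. The numbers below are invented for illustration, but the structure follows his reasoning: every reply costs the scammer follow-up effort, so precision matters more than reach.

    # Toy model of Herley's point. All rates are invented for illustration.

    def reply_stats(population, gullible_rate, reply_rate_gullible, reply_rate_other):
        gullible = population * gullible_rate
        others = population - gullible
        true_pos = gullible * reply_rate_gullible   # replies worth pursuing
        false_pos = others * reply_rate_other       # replies that waste effort
        precision = true_pos / (true_pos + false_pos)
        return round(true_pos), round(false_pos), round(precision, 3)

    POP = 10_000       # hypothetical mailing-list size
    GULLIBLE = 0.001   # hypothetical fraction who would ever actually pay

    # A plausible-sounding pitch draws many replies, almost all of them dead ends.
    print("plausible pitch:  ", reply_stats(POP, GULLIBLE, 0.5, 0.05))
    # An obviously absurd pitch repels everyone except the most gullible.
    print("implausible pitch:", reply_stats(POP, GULLIBLE, 0.4, 0.002))

Fewer total marks respond to the absurd pitch, but the ones who do are far more likely to pay, which is exactly the self-selection Herley describes.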
An economic analysis of bank robberies shows that it’s not worth it.
http://arstechnica.com/science/2012/06/…
We all kind of knew this—that’s why most of us aren’t bank robbers. The interesting question, at least to me, is why anyone is a bank robber. Why do people do things that, by any rational economic analysis, are irrational? The answer is that people are terrible at figuring this sort of stuff out. They’re terrible at estimating the probability that any of their endeavors will succeed, and they’re terrible at estimating what their reward will be if they do succeed. There is a lot of research supporting this, but the most entertaining thing on the topic I’ve seen recently is this TED talk by Daniel Gilbert.
http://www.ted.com/talks/…
Note the bonus discussion of terrorism at the very end.
Bank robbery and the Dunning-Kruger effect:
http://opinionator.blogs.nytimes.com/2010/06/20/…
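For anyone who wants the “rational economic analysis” spelled out, here is a back-of-the-envelope expected-value sketch. The figures are invented stand-ins, not the numbers from the paper.

    # Expected value of one robbery attempt, with invented figures.

    def expected_value(haul, p_caught, cost_if_caught):
        """Average payoff: the haul when you get away, minus the cost when you don't."""
        return (1 - p_caught) * haul - p_caught * cost_if_caught

    ev = expected_value(
        haul=20_000,             # hypothetical take from one robbery
        p_caught=0.33,           # hypothetical chance of being caught
        cost_if_caught=250_000,  # hypothetical cost of arrest: lost years and income
    )
    print(f"Expected value per robbery attempt: ${ev:,.0f}")  # well below zero

Anyone who runs numbers like these and still robs banks is mis-estimating either the probabilities or the payoffs, which is Gilbert’s point.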
Very funny video exposé by Stephen Colbert of the cyberthreat posed by giving iPads to orangutans. The best part is near the end, when Richard Clarke suddenly realizes that he’s being interviewed about orangutans—and not the Chinese.
http://www.colbertnation.com/…
Good essay by Max Abrahms explaining why terrorism doesn’t work.
http://www.baltimoresun.com/news/opinion/oped/…
I’ve written about his research before:
https://www.schneier.com/blog/archives/2008/10/…
https://www.schneier.com/blog/archives/2012/01/…
There was a conference on resilience earlier this year.
http://www.slate.com/articles/technology/…
http://www.slate.com/s/future_tense/2012/04/03/…
http://futuretense.newamerica.net/events/2012/…
Here’s an interview with Professor Sander van der Leeuw on the topic. Although he never mentions security, it’s all about security.
http://www.slate.com/articles/technology/…
And here’s sort of a counter-argument, that resilience in national security is overrated:
http://www.slate.com/articles/technology/…
Honestly, this essay doesn’t make much sense to me. Yes, resilience can be done badly. Yes, relying solely on resilience can be sub-optimal. But that doesn’t make resilience bad, or even overrated.
Paper on resilience and public control systems:
http://www.inl.gov/technicalpublications/Documents/…
Stratfor on the Phoenix serial flashlight bomber:
http://www.stratfor.com/weekly/serial-bomber-phoenix
This article talks about the backup procedure for the Russian nuclear launch codes. If the safe doesn’t open, use a sledgehammer.
http://en.rian.ru/mlitary_news/20120606/173873812.html
British nukes used to be protected by bike locks:
https://www.schneier.com/blog/archives/2007/11/…
http://news.bbc.co.uk/2/hi/7097101.stm
Interesting review—by David Ropeik—of “The Rise of Nuclear Fear,” by Spencer Weart.
http://s.scientificamerican.com/guest-blog/2012/…
http://www.amazon.com/gp/product/0674052331/…
Last week, I was at the Workshop on the Economics of Information Security (WEIS) in Berlin. Excellent conference, as always. Ross Anderson liveblogged the event; see the comments for summaries of the talks. On the second day, Ross and I debated—well, discussed—cybersecurity spending. At the first WEIS, he and I had a similar discussion: I argued that we weren’t spending enough on cybersecurity, and he argued that we were spending too much. For this discussion, we reversed our positions.
http://weis2012.econinfosec.org/
http://www.lightbluetouchpaper.org/2012/06/25/…
http://www.digitalbond.com/2012/06/26/…
A poet reflects on the nature of fear.
http://www.poetryfoundation.org/poetrymagazine/…
A virus is designed to steal blueprints and send them to China. Note that although this is circumstantial evidence that the virus is from China, it is possible that the Chinese e-mail accounts that are collecting the blueprints are simply drops, and the controllers are elsewhere on the planet.
http://www.telegraph.co.uk/technology/news/9346734/…
Children are being warned that the name of their first pet should contain at least eight characters and a digit.
http://www.newsbiscuit.com/2012/06/08/…
A team at the University of Texas successfully spoofed the GPS and took control of a drone, for about $1,000 in off-the-shelf parts. Does anyone think that the bad guys won’t be able to do this?
http://rt.com/usa/news/texas-1000-us-government-906/
http://www.foxnews.com/scitech/2012/06/25/…
http://s.computerworld.com/security/20593/…
http://www.bbc.com/news/technology-18643134
Two sensible comments about terrorism:
“Bee stings killed as many in UK as terrorists, says watchdog.”
http://www.telegraph.co.uk/news/uknews/…
“Americans Are as Likely to Be Killed by Their Own Furniture as by Terrorism.”
http://m.theatlantic.com/international/archive/2012/…
Is this a new trend in common sense?
In case you forgot, here’s a comprehensive list of ridiculous predictions about terrorist attacks.
http://polisci.osu.edu/faculty/jmueller/predict.pdf
http://nationalinterest.org//the-skeptics/…
And here’s the best data on U.S. terrorism deaths since 9/11.
https://www.schneier.com/blog/archives/2011/08/…
From an article on the cocaine trade between Mexico and the U.S.: “‘They erect this fence,’ he said, ‘only to go out there a few days later and discover that these guys have a catapult, and they’re flinging hundred-pound bales of marijuana over to the other side.’ He paused and looked at me for a second. ‘A catapult,’ he repeated. ‘We’ve got the best fence money can buy, and they counter us with a 2,500-year-old technology.'”
http://www.nytimes.com/2012/06/17/magazine/…
William Gibson’s Agrippa Code is available for cryptanalysis. Break the code, win a prize.
http://www.crackingagrippa.net/
This is important: it’s a petition to try to force the TSA to follow the law. Details here:
http://www.cato-at-liberty.org/…
https://petitions.whitehouse.gov/petition/…
Please sign it.
For years, it’s been a clever trick to drop USB sticks in parking lots of unsuspecting businesses, and track how many people plug them into computers. I have long argued that the problem isn’t that people are plugging the sticks in, but that the computers trust them enough to run software off of them. This is the first time I’ve heard of criminals trying this trick.
http://boingboing.net/2012/07/10/…
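One concrete way to stop trusting removable media enough to run software off of them, at least on Windows, is to disable AutoRun for all drive types. Here is a minimal sketch using Python’s standard winreg module; it assumes Windows, administrator rights, and that a local registry policy is how you want to manage the setting.

    # Sketch: disable Windows AutoRun for every drive type (Windows-only,
    # requires administrator rights). The value 0xFF covers all drive-type bits.

    import winreg

    KEY_PATH = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"

    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)

    print("AutoRun disabled for all drive types; takes effect at next logon.")

This addresses only the autorun vector; a stick that pretends to be a keyboard or other trusted device needs different defenses.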
This paper looks at access control for mobile phones. Basically, it’s all or nothing: either you have a password that protects everything, or you have no password and protect nothing. The authors argue that there should be more user choice: some applications should be available immediately without a password, and the rest should require a password. This makes a lot of sense to me. Also, if only important applications required a password, people would be more likely to choose strong passwords.
http://cups.cs.cmu.edu/soups/2012/proceedings/…
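A minimal sketch of the kind of policy the paper argues for might look like the following; the app names, categories, and password check are all illustrative, not taken from the paper.

    # Sketch: per-application locking instead of all-or-nothing. Illustrative only.

    import getpass

    APP_POLICY = {
        "camera":  "open",    # available without a password
        "weather": "open",
        "email":   "locked",  # requires the (stronger) device password
        "banking": "locked",
    }

    def unlock():
        # Placeholder check; a real phone would verify against a stored hash.
        return getpass.getpass("Device password: ") == "correct horse battery staple"

    def launch(app):
        policy = APP_POLICY.get(app, "locked")   # unknown apps default to locked
        if policy == "open" or unlock():
            print(f"{app}: launched")
        else:
            print(f"{app}: access denied")

    if __name__ == "__main__":
        launch("camera")    # no prompt
        launch("banking")   # prompts for the password

Because the password now guards only the applications that matter, users may also tolerate a longer, stronger one, which is the authors’ other point.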
It’s surprisingly easy to hack BMW’s remote keyless entry system.
https://www.schneier.com/blog/archives/2012/07/…
On Securing Potentially Dangerous Virology Research
Abstract: The problem of securing biological research data is a difficult and complicated one. Our ability to secure data on computers is not robust enough to ensure the security of existing data sets. Lessons from cryptography illustrate that neither secrecy measures, such as deleting technical details, nor national solutions, such as export controls, will work. “Science” and “Nature” have each published papers on the H5N1 virus in humans after considerable debate about whether the research results in those papers could help terrorists create a bioweapon. This notion of “dual use” research is an important one for the community, and one that will sooner or later become critical. Perhaps these two papers are not dangerous in the wrong hands, but eventually there will be research results that are.
My background is in cryptography and computer security. I cannot comment on the potential value or harm from any particular piece of biological research, but I can discuss what works and what does not to keep research data secure. The cryptography and computer security communities have been wrestling for decades now with dual-use research: for example, whether to publish new Windows (Microsoft Corporation) vulnerabilities that can be immediately used to attack computers but whose publication helps us make the operating system more secure in the long run. From this experience, I offer five points to the virology community.
First, security based on secrecy is inherently fragile. The more secrets a system has, the less secure it is. A door lock that has a secret but unchangeable locking mechanism is less secure than a commercially purchased door lock with an easily changeable key. In cryptography, this is known as Kerckhoffs’ principle: Put all your secrecy into the key and none into the cryptographic algorithm. The key is unique and easily changeable; the algorithm is system-wide and much more likely to become public. In fact, algorithms are deliberately published so that they get analyzed broadly. The lesson for dual-use virology research is that it is risky to base your security on keeping research secret. Militaries spend an enormous amount of money trying to maintain secret research laboratories, and even they do not always get security right. Once secret data become public, there is no way to go back.
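To make Kerckhoffs’ principle concrete, here is a minimal sketch using the third-party Python “cryptography” package: the algorithm (Fernet, built from AES and HMAC) is public and widely analyzed, and the only secret is a key that is cheap to generate and cheap to replace.

    # Sketch of Kerckhoffs' principle: public algorithm, secret (and replaceable) key.
    # Requires the third-party "cryptography" package.

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()               # the only secret in the system
    ciphertext = Fernet(key).encrypt(b"sensitive research data")

    # If the key leaks, recovery is cheap: generate a new key and re-encrypt.
    new_key = Fernet.generate_key()
    ciphertext = Fernet(new_key).encrypt(Fernet(key).decrypt(ciphertext))

    # If the *algorithm* had to stay secret, there would be no comparable recovery.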
Second, omitting technical details from published research is a poor security measure. We tried this in computer security with regard to vulnerabilities, announcing general information but not publishing specifics. The problem is that once the general information is announced, it is much easier for another researcher to replicate the results and generate the details. This is probably even more true in virology research than in computer security research, where the very existence of a result can provide much of the road map to that result.
Third, technical difficulty as a security measure has only short-term value. Technology only gets better; it never gets worse. To believe that some research cannot be replicated by amateurs because it requires equipment only available to state-of-the-art research institutions is short-sighted at best. What is impossible today will be a Ph.D. thesis in 20 years, and what was a Ph.D. thesis 20 years ago is a high-school science fair project today.
Fourth, securing research data in computer networks is risky at best. If you read newspapers, you know the current state of the art in computer security: Everything gets hacked. Cyber criminals steal money from banks. Cyber spies steal data from military computers. Although people talk about H5N1 research in terms of securing the research papers, that is largely a red herring; even if no papers existed, the research data would still be on a network-connected computer somewhere.
Not all computers are hacked and not all data gets stolen, but the risks are there. There are two basic types of threats in cyberspace. There are the opportunists: for example, criminals who want to break into a retail merchant’s system and steal a thousand credit card numbers. Against these attackers, relative security is what matters. Because the criminals do not care whom they attack, you are safe if you are more secure than other networks. The other type of threat is a targeted attack. These are attackers who, for whatever reason, want to attack a particular network. The buzzword in Internet security for this is “advanced persistent threat.” It is almost impossible to secure a network against a sufficiently skilled and tenacious adversary. All we can do is make the attacker’s job harder.
This does not mean that all virology data will be stolen via computer networks, but it does mean that, once the existence of that data becomes public knowledge, you should assume that the bad guys will be able to get their hands on it.
Lastly, national measures that prohibit publication will not work in an international community, especially in the Internet age. If either “Science” or “Nature” had refused to publish the H5N1 papers, they would have been published somewhere else. Even if some countries stop funding—or ban—this sort of research, it will still happen in another country.
The U.S. cryptography community saw this in the 1970s and early 1980s. At that time, the National Security Agency (NSA) controlled cryptography research, which included denying funding for research, classifying results after the fact, and using export-control laws to limit what ended up in products. This was the pre-Internet world, and it worked for a while. In the 1980s they gave up on classifying research, because an international community arose. The limited ability for U.S. researchers to get funding for block-cipher cryptanalysis merely moved that research to Europe and Asia. The NSA continued to limit the spread of cryptography via export-control laws; the U.S.-centric nature of the computer industry meant that this was effective. In the 1990s they gave up on controlling software because the international online community became mainstream; this period was called “the Crypto Wars.” Export-control laws did prevent Microsoft from embedding cryptography into Windows for over a decade, but they did nothing to prevent products made in other countries from filling the market gaps.
Today, there are no restrictions on cryptography, and many U.S. government standards are the result of public international competitions. Right now the National Institute of Standards and Technology is working on a new Secure Hash Algorithm standard. When it is announced next year, it will be the product of a public call for algorithms that resulted in 64 submissions from over a dozen countries and then years of international analysis. The practical effects of unrestricted research are seen in the computer security you use today: on your computer, as you browse the Internet and engage in commerce, and on your cell phone and other smart devices. Sure, the bad guys make use of this research, too, but the beneficial uses far outweigh the malicious ones.
The computer security community has also had to wrestle with these dual-use issues. In the early days of public computing, researchers who discovered vulnerabilities would quietly tell the product vendors so as to not also alert hackers. But all too often, the vendors would ignore the researchers. Because the vulnerability was not public, there was no urgency to fix it. Fixes might go into the next product release. Researchers, tired of this, started publishing the existence of vulnerabilities but not the details. Vendors, in response, tried to muzzle the researchers. They threatened them with lawsuits and belittled them in the press, calling the vulnerabilities only theoretical and not practical. The response from the researchers was predictable: They started publishing full details, and sometimes even code, demonstrating the vulnerabilities they found. This was called “full disclosure” and is the primary reason vendors now patch vulnerabilities quickly. Faced with published vulnerabilities that they could not pretend did not exist and that the hackers could use, they started building internal procedures to quickly issue patches. If you use Microsoft Windows, you know about “Patch Tuesday”: the once-a-month automatic download and installation of security patches.
Once vendors started taking security patches seriously, the research community (university researchers, security consultants, and informal hackers) moved to something called “responsible disclosure.” Now it is common for researchers to alert vendors before publication, giving them a month or two head start to release a security patch. But without the threat of full disclosure, responsible disclosure would not work, and vendors would go back to ignoring security vulnerabilities.
Could a similar process work for viruses? That is, could the makers work in concert with people who develop vaccines so that vaccines become available at the same time as the original results are released? Certainly this is not easy in practice, but perhaps it is a goal to work toward.
Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once this sort of research slows down, the increasing ignorance hurts us all.
On the other hand, the current vibrant fields of cryptography and computer security are a direct result of our willingness to publish methods of attack. Making and breaking systems are one and the same; you cannot learn one without the other. (Some universities even offer classes in computer virus writing.) Cryptography is better, and computers and networks are more secure, because our communities openly publish details on how to attack systems.
Virology is not computer science. A biological virus is not the same as a computer virus. A vulnerability that affects every individual copy of Windows is not as bad as a vulnerability that affects every individual person. Still, the lessons from computer security are valuable to anyone considering policies intended to encourage life-saving research in virology while at the same time prevent that research from being used to cause harm. This debate will not go away; it will only get more urgent.
This essay was originally published in “Science.”
http://www.sciencemag.org/content/336/6088/1527.full
Related article: “What Biology Can Learn from Infosec.”
http://beauwoods.blogspot.com/2012/04/…
The Failure of Anti-Virus Companies to Catch Military Malware
Mikko Hypponen of F-Secure attempts to explain why anti-virus companies didn’t catch Stuxnet, DuQu, and Flame. His conclusion is that the attackers—in this case, military intelligence agencies—are simply better than commercial-grade anti-virus programs.
I don’t buy this. It isn’t just the military that tests its malware against commercial defense products; criminals do it, too. Virus and worm writers do it. Spam writers do it. This is the never-ending arms race between attacker and defender, and it’s been going on for decades. Probably the people who wrote Flame had a larger budget than a large-scale criminal organization, but their evasive techniques weren’t magically better. Note that F-Secure and others had samples of Flame; they just didn’t do anything about them.
I think the difference has more to do with the ways in which these military malware programs spread. That is, slowly and stealthily. It was never a priority to understand—and then write signatures to detect—the Flame samples because they were never considered a problem. Maybe they were classified as a one-off. Or as an anomaly. I don’t know, but it seems clear that conventional non-military malware writers who want to evade detection should adopt the propagation techniques of Flame, Stuxnet, and DuQu.
http://www.wired.com/threatlevel/2012/06/…
http://volokh.com/2012/06/03/…
F-Secure responded. Unfortunately, it’s not a very substantive response. It’s a pity; I think there’s an interesting discussion to be had about why the anti-virus companies all missed Flame for so long.
http://www.f-secure.com/weblog/archives/00002388.html
Schneier News
FireDogLake Book Salon for “Liars and Outliers”:
http://fdlbooksalon.com/2012/06/17/…
I did a short Q&A for Network World on military cyberattacks and cyberweapons treaties.
http://www.networkworld.com/news/2012/…
E-Mail Accounts More Valuable than Bank Accounts
This informal survey produced the following result: “45% of the users found their email accounts more valuable than their bank accounts.”
The author believes this is evidence of some sophisticated security reasoning on the part of users: “From a security standpoint, I can’t agree more with these people. Email accounts are used most commonly to reset other websites’ account passwords, so if it gets compromised, the others will fall like dominos.”
I disagree. I think something a lot simpler is going on. People believe that if their bank account is hacked, the bank will help them clean up the mess and they’ll get their money back. And in most cases, they will. They know that if their e-mail is hacked, all the damage will be theirs to deal with. I think this is public opinion reflecting reality.
“Top Secret America” on the Post-9/11 Cycle of Fear and Funding
I’m reading “Top Secret America: The Rise of the New American Security State,” by Dana Priest and William M. Arkin. Both work for The Washington Post. The book talks about the rise of the security-industrial complex in post 9/11 America. This short quote is from Chapter 3:
Such dread was a large part of the post-9/11 decade. A culture of fear had created a culture of spending to control it, which, in turn, had led to a belief that the government had to be able to stop every single plot before it took place, regardless of whether it involved one network of twenty terrorists or one single deranged person. This expectation propelled more spending and even more zero-defect expectations. There were tens of thousands of unsolved murders in the United States by 2010, but few newspapers ever blared this across their front pages or even tried to investigate how their police departments had failed to solve them all over the years. But when it came to terrorism, newspapers and other media outlets amplified each mistake, which amplified the threat, which amplified the fear, which prompted more spending, and on and on and on.
It’s a really good book so far. I recommend it.
http://www.amazon.com/gp/product/0316182214/…
The project’s website has a lot of interesting information as well:
http://projects.washingtonpost.com/top-secret-america/
Since 1998, CRYPTO-GRAM has been a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Schneier on Security,” “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, Threefish, Helix, Phelix, and Skein algorithms. He is the Chief Security Technology Officer of BT BCSG, and is on the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT.
Copyright (c) 2012 by Bruce Schneier.