MOOC on Cybersecurity
The University of Adelaide is offering a new MOOC on “Cyberwar, Surveillance and Security.” Here’s a teaser video. I was interviewed for the class, and make a brief appearance in the teaser.
Cory Doctorow argues that computer security is analogous to public health:
I think there’s a good case to be made for security as an exercise in public health. It sounds weird at first, but the parallels are fascinating and deep and instructive.
Last year, when I finished that talk in Seattle, a talk about all the ways that insecure computers put us all at risk, a woman in the audience put up her hand and said, “Well, you’ve scared the hell out of me. Now what do I do? How do I make my computers secure?”
And I had to answer: “You can’t. No one of us can. I was a systems administrator 15 years ago. That means that I’m barely qualified to plug in a WiFi router today. I can’t make my devices secure and neither can you. Not when our governments are buying up information about flaws in our computers and weaponising them as part of their crime-fighting and anti-terrorism strategies. Not when it is illegal to tell people if there are flaws in their computers, where such a disclosure might compromise someone’s anti-copying strategy.
But: If I had just stood here and spent an hour telling you about water-borne parasites; if I had told you about how inadequate water-treatment would put you and everyone you love at risk of horrifying illness and terrible, painful death; if I had explained that our very civilisation was at risk because the intelligence services were pursuing a strategy of keeping information about pathogens secret so they can weaponise them, knowing that no one is working on a cure; you would not ask me ‘How can I purify the water coming out of my tap?'”
Because when it comes to public health, individual action only gets you so far. It doesn’t matter how good your water is, if your neighbour’s water gives him cholera, there’s a good chance you’ll get cholera, too. And even if you stay healthy, you’re not going to have a very good time of it when everyone else in your country is stricken and has taken to their beds.
If you discovered that your government was hoarding information about water-borne parasites instead of trying to eradicate them; if you discovered that they were more interested in weaponising typhus than they were in curing it, you would demand that your government treat your water-supply with the gravitas and seriousness that it is due.
This article, from some internal NSA publication, is about Lambros Callimahos, who taught an intensive 18-week course on cryptology for many years and died in 1977. Be sure to notice the great redacted photo of him and his students on page 17.
A real-world one-way function:
Alice and Bob procure the same edition of the white pages book for a particular town, say Cambridge. For each letter Alice wants to encrypt, she finds a person in the book whose last name starts with this letter and uses his/her phone number as the encryption of that letter.
To decrypt the message Bob has to read through the whole book to find all the numbers.
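To make the asymmetry concrete, here is a minimal Python sketch of the idea; the directory entries, names, and numbers are invented for illustration. Encrypting a letter is a quick lookup in the name-sorted book, but decrypting a number forces a scan of every entry, because nothing indexes the book by phone number.

```python
# Toy sketch of the phone-book one-way function. The "book" below is a
# made-up stand-in for a real directory with thousands of entries.
import random

# last name -> phone number (Alice and Bob both own the same book)
PHONE_BOOK = {
    "Adams": "617-555-0101",
    "Baker": "617-555-0178",
    "Brown": "617-555-0123",
    "Clark": "617-555-0144",
    # ... imagine many thousands more entries
}

def encrypt_letter(letter: str) -> str:
    """Easy direction: pick any listed person whose last name starts with the letter."""
    candidates = [num for name, num in PHONE_BOOK.items()
                  if name.upper().startswith(letter.upper())]
    return random.choice(candidates)

def decrypt_number(number: str) -> str:
    """Hard direction: without an index by number, Bob must scan the whole book."""
    for name, num in PHONE_BOOK.items():  # linear search over every entry
        if num == number:
            return name[0]
    raise ValueError("number not listed")

ciphertext = [encrypt_letter(c) for c in "CAB"]
print("".join(decrypt_number(n) for n in ciphertext))  # -> "CAB"
```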
And a way to break it:
I still use this example, with an assumption that there is no reverse look-up. I recently taught it to my AMSA students. And one of my 8th graders said, “If I were Bob, I would just call all the phone numbers and ask their last names.”
In the fifteen years since I’ve been using this example, this idea never occurred to me. I am very shy so it would never enter my mind to call a stranger and ask for their last name. My student made me realize that my own personality affected my mathematical inventiveness.
I’ve written about the security mindset in the past, and this is a great example of it.
Should companies spend money on security awareness training for their employees? It’s a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time, and that the money can be spent better elsewhere. Moreover, I believe that our industry’s focus on training serves to obscure greater failings in security design.
In order to understand my argument, it’s useful to look at training’s successes and failures. One area where it doesn’t work very well is health. We are forever trying to train people to have healthier lifestyles: eat better, exercise more, whatever. And people are forever ignoring the lessons. One basic reason is psychological: we just aren’t very good at trading off immediate gratification for long-term benefit. A healthier you is an abstract eventuality; sitting in front of the television all afternoon with a McDonald’s Super Monster Meal sounds really good right now. Similarly, computer security is an abstract benefit that gets in the way of enjoying the Internet. Good practices might protect me from a theoretical attack at some time in the future, but they’re a lot of bother right now and I have more fun things to think about. This is the same trick Facebook uses to get people to give away their privacy; no one reads through new privacy policies; it’s much easier to just click “OK” and start chatting with your friends. In short: security is never salient.
Another reason health training works poorly is that it’s hard to link behaviors with benefits. We can train anyone—even laboratory rats—with a simple reward mechanism: push the button, get a food pellet. But with health, the connection is more abstract. If you’re unhealthy, what caused it? It might have been something you did or didn’t do years ago, it might have been one of the dozen things you have been doing and not doing for months, or it might have been the genes you were born with. Computer security is a lot like this, too.
Training laypeople in pharmacology also isn’t very effective. We expect people to make all sorts of medical decisions at the drugstore, and they’re not very good at it. Turns out that it’s hard to teach expertise. We can’t expect every mother to have the knowledge of a doctor or pharmacist or RN, and we certainly can’t expect her to become an expert when most of the advice she’s exposed to comes from manufacturers’ advertising. In computer security, too, a lot of advice comes from companies with products and services to sell.
One area of health that is a training success is HIV prevention. HIV may be very complicated, but the rules for preventing it are pretty simple. And aside from certain sub-Saharan countries, we have taught people a new model of their health, and have dramatically changed their behavior. This is important: most lay medical expertise stems from folk models of health. Similarly, people have folk models of computer security. Maybe they’re right and maybe they’re wrong, but they’re how people organize their thinking. This points to a possible way that computer security training can succeed. We should stop trying to teach expertise, and pick a few simple metaphors of security and train people to make decisions using those metaphors.
On the other hand, we still have trouble teaching people to wash their hands—even though it’s easy, fairly effective, and simple to explain. Notice the difference, though. The risks of catching HIV are huge, and the cause of the security failure is obvious. The risks of not washing your hands are low, and it’s not easy to tie the resultant disease to a particular not-washing decision. Computer security is more like hand washing than HIV.
Another area where training works is driving. We trained, either through formal courses or one-on-one tutoring, and passed a government test, to be allowed to drive a car. One reason that works is because driving is a near-term, really cool, obtainable goal. Another reason is even though the technology of driving has changed dramatically over the past century, that complexity has been largely hidden behind a fairly static interface. You might have learned to drive thirty years ago, but that knowledge is still relevant today. On the other hand, password advice from ten years ago isn’t relevant today. Can I bank from my browser? Are PDFs safe? Are untrusted networks okay? Is JavaScript good or bad? Are my photos more secure in the cloud or on my own hard drive? The ‘interface’ we use to interact with computers and the Internet changes all the time, along with best practices for computer security. This makes training a lot harder.
Food safety is my final example. We have a bunch of simple rules—cooking temperatures for meat, expiration dates on refrigerated goods, the three-second rule for food being dropped on the floor—that are mostly right, but often ignored. If we can’t get people to follow these rules, what hope do we have for computer security training?
To those who think that training users in security is a good idea, I want to ask: “Have you ever met an actual user?” They’re not experts, and we can’t expect them to become experts. The threats change constantly, the likelihood of failure is low, and there is enough complexity that it’s hard for people to understand how to connect their behavior to eventual outcomes. So they turn to folk remedies that, while simple, don’t really address the threats.
Even if we could invent an effective computer security training program, there’s one last problem. HIV prevention training works because affecting what the average person does is valuable. Even if only half the population practices safe sex, those actions dramatically reduce the spread of HIV. But computer security is often only as strong as the weakest link. If four-fifths of company employees learn to choose better passwords, or not to click on dodgy links, one-fifth still get it wrong and the bad guys still get in. As long as we build systems that are vulnerable to the worst case, raising the average case won’t make them more secure.
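A back-of-the-envelope sketch shows why raising the average doesn’t help much; the per-user rates below are hypothetical, and the model assumes employees behave independently, which real attacks only need approximately.

```python
# Hypothetical numbers: the point is the shape of the curve, not the exact values.
def p_at_least_one_failure(per_user_rate: float, num_users: int) -> float:
    """Chance that at least one user slips up, assuming independent behavior."""
    return 1 - (1 - per_user_rate) ** num_users

for rate in (0.20, 0.05):  # per-user click rate before and after "successful" training
    print(f"per-user rate {rate:.0%}: "
          f"{p_at_least_one_failure(rate, 100):.1%} chance someone in a "
          f"100-person company clicks")

# Even cutting the per-user rate from 20% to 5% leaves the attacker
# near-certain to find a victim: ~100% vs ~99.4%.
```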
The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won’t let users choose lousy passwords and don’t care what links a user clicks on. We should be designing systems that conform to their folk beliefs of security, rather than forcing them to learn new ones. Microsoft has a great rule about system messages that require the user to make a decision. They should be NEAT: necessary, explained, actionable, and tested. That’s how we should be designing security interfaces. And we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system.
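As one small illustration of design over training, here is a hedged sketch of a system that simply refuses lousy passwords rather than expecting users to know better; the length threshold and banned list are arbitrary placeholders, not a vetted policy.

```python
# Minimal sketch: enforce password rules in the system itself instead of
# training users. Thresholds and the banned list are illustrative only.
BANNED = {"password", "123456", "qwerty", "letmein"}
MIN_LENGTH = 12

def acceptable(password: str) -> bool:
    """Reject passwords the system already knows are lousy."""
    if len(password) < MIN_LENGTH:
        return False
    if password.lower() in BANNED:
        return False
    return True

print(acceptable("letmein"))                        # False: too short and banned
print(acceptable("correct horse battery staple"))   # True: long passphrase
```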
If we security engineers do our job right, users will get their awareness training informally and organically, from their colleagues and friends. People will learn the correct folk models of security, and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense.
This essay originally appeared on DarkReading.com.
There is lots of commentary on this one.
EDITED TO ADD (4/4): Another commentary.
EDITED TO ADD (4/8): More commentary.
EDITED TO ADD (4/23): Another opinion.
Dan Boneh of Stanford University is offering a free online cryptography course. The course runs for six weeks, and has five to seven hours of coursework per week. It just started last week.
ETA 11/14: A second part of the course will be starting on 21 January 2013.
I regularly receive e-mail from people who want advice on how to learn more about computer security, either as a course of study in college or as an IT person considering it as a career choice.
First, know that there are many subspecialties in computer security. You can be an expert in keeping systems from being hacked, or in creating unhackable software. You can be an expert in finding security problems in software, or in networks. You can be an expert in viruses, or policies, or cryptography. There are many, many opportunities for many different skill sets. You don’t have to be a coder to be a security expert.
In general, though, I have three pieces of advice to anyone who wants to learn computer security.
I am a fan of security certifications, which can often demonstrate all of these things to a potential employer quickly and easily.
I’ve really said nothing here that isn’t also true for a gazillion other areas of study, but security also requires a particular mindset—one I consider essential for success in this field. I’m not sure it can be taught, but it certainly can be encouraged. “This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.” This is especially true if you want to design security systems and not just implement them. Remember Schneier’s Law: “Any person can invent a security system so clever that she or he can’t think of how to break it.” The only way your designs are going to be trusted is if you’ve made a name for yourself breaking other people’s designs.
One final word about cryptography. Modern cryptography is particularly hard to learn. In addition to everything above, it requires graduate-level knowledge in mathematics. And, as in computer security in general, your prowess is demonstrated by what you can break. The field has progressed a lot since I wrote this guide and self-study cryptanalysis course a dozen years ago, but they’re not bad places to start.
This essay originally appeared on “Krebs on Security,” the second in a series of answers to that question. This is the first. There will be more.
In 2008, I wrote about the security mindset and how difficult it is to teach. Two professors teaching a cyberwarfare class gave an exam where they expected their students to cheat:
Our variation of the Kobayashi Maru utilized a deliberately unfair exam—write the first 100 digits of pi (3.14159…) from memory—and took place in the pilot offering of a governmental cyber warfare course. The topic of the test itself was somewhat arbitrary; we only sought a scenario that would be too challenging to meet through traditional studying. By design, students were given little advance warning for the exam. Insurrection immediately followed. Why were we giving them such an unfair exam? What conceivable purpose would it serve? Now that we had their attention, we informed the class that we had no expectation that they would actually memorize the digits of pi, we expected them to cheat. How they chose to cheat was entirely up to the student. Collaborative cheating was also encouraged, but importantly, students would fail the exam if caught.
Excerpt:
Students took diverse approaches to cheating, and of the 20 students in the course, none were caught. One student used his Mandarin Chinese skills to hide the answers. Another built a small PowerPoint presentation consisting of three slides (all black slide, digits of pi slide, all black slide). The idea being that the student could flip to the answer when the proctor wasn’t looking and easily flip forwards or backward to a blank screen to hide the answer. Several students chose to hide answers on a slip of paper under the keyboards on their desks. One student hand wrote the answers on a blank sheet of paper (in advance) and simply turned it in, exploiting the fact that we didn’t pass out a formal exam sheet. Another just memorized the first ten digits of pi and randomly filled in the rest, assuming the instructors would be too lazy to check every digit. His assumption was correct.
Read the whole paper. This is the conclusion:
Teach yourself and your students to cheat. We’ve always been taught to color inside the lines, stick to the rules, and never, ever, cheat. In seeking cyber security, we must drop that mindset. It is difficult to defeat a creative and determined adversary who must find only a single flaw among myriad defensive measures to be successful. We must not tie our hands, and our intellects, at the same time. If we truly wish to create the best possible information security professionals, being able to think like an adversary is an essential skill. Cheating exercises provide long term remembrance, teach students how to effectively evaluate a system, and motivate them to think imaginatively. Cheating will challenge students’ assumptions about security and the trust models they envision. Some will find the process uncomfortable. That is OK and by design. For it is only by learning the thought processes of our adversaries that we can hope to unleash the creative thinking needed to build the best secure systems, become effective at red teaming and penetration testing, defend against attacks, and conduct ethical hacking activities.
Here’s a Boing Boing post, including a video of a presentation about the exercise.
Normally I just delete these as spam, but this summer program for graduate students 1) looks interesting, and 2) has some scholarship money available.
Dan Boneh of Stanford University is teaching a free cryptography class starting in January.