Entries Tagged "security education"


Second SHB Workshop Liveblogging (3)

The second session was about fraud. (These session subjects are only general. We tried to stick related people together, but there was the occasional oddball—and scheduling constraint—to deal with.)

Julie Downs, Carnegie Mellon University (suggested reading: Behavioral Response to Phishing Risk; Parents’ vaccination comprehension and decisions; The Psychology of Food Consumption), is a psychologist who studies how people make decisions, and talked about phishing. To determine how people respond to phishing attempts—what e-mails they open and when they click on links—she watched as people interacted with their e-mail. She found that most people’s strategies for dealing with phishing attacks might have been effective 5-10 years ago, but are no longer sufficient now that phishers have adapted. She also found that educating people about phishing didn’t make them more effective at spotting phishing attempts, but did make them more afraid of doing anything online. She found the same overreaction among people who had recently been the victims of phishing attacks, but again they were no better at separating real e-mail from phishing attempts. What does make a difference is contextual understanding: how to parse a URL, how and why the scams happen, what SSL does and doesn’t do.
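
To make "how to parse a URL" concrete, here is a minimal Python sketch (the links are invented for illustration): the part that matters is the hostname the browser will actually connect to, not the familiar brand name buried earlier in the string.

```python
from urllib.parse import urlparse

def hostname_of(url: str) -> str:
    """Return the host a browser would actually connect to."""
    return urlparse(url).hostname or ""

# The brand name appears in both links, but only one actually goes to paypal.com.
links = [
    "https://www.paypal.com/signin",
    "https://www.paypal.com.account-update.evil.example/signin",  # deceptive subdomain
]

for url in links:
    host = hostname_of(url)
    # A rough check for illustration; a real one would consult the public-suffix list.
    looks_genuine = host == "paypal.com" or host.endswith(".paypal.com")
    print(f"{host!r:60} genuine domain: {looks_genuine}")
```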

Jean Camp, Indiana University (suggested reading: Experimental Evaluation of Expert and Non-expert Computer Users’ Mental Models of Security Risks), studies people taking risks online. Four points: 1) “people create mental models from internal narratives about risk,” 2) “risk mitigating action is taken only if the risk is perceived as relevant,” 3) “contextualizing risk can show risks as relevant,” and 4) “narrative can increase desire and capacity to use security tools.” Stories matter: “people are willing to wash out their cat food cans and sweep up their sweet gum balls to be a good neighbor, but allow their computers to join zombie networks” because there’s a good story in the former and none in the latter. She presented two experiments to demonstrate this. One was a video experiment watching business majors try to install PGP. No one was successful: there was no narrative, and the mixed metaphor of physical and cryptographic “key” confused people.

Matt Blaze, University of Pennsylvania (his blog), talked about electronic voting machines and fraud. He related this anecdote about actual electronic voting machine vote fraud in Kentucky. In the question session, he speculated about the difficulty of having a security model that would have captured the problem, and how to know whether that model was complete enough.

Jeffrey Friedberg, Microsoft (suggested reading: Internet Fraud Battlefield; End to End Trust and the Trust User Experience; Testimony on “spyware”), discussed research at Microsoft around the Trust User Experience (TUX). He talked about the difficulty of verifying SSL certificates. Then he talked about how Microsoft added a “green bar” to signify trusted sites, and how people who learned to trust the green bar were fooled by “picture-in-picture attacks,” in which a hostile site embeds a green-bar browser window inside its own page. Most people don’t understand that the information inside the browser window is arbitrary, but that the stuff around it is not. The user interface, the user experience, and users’ mental models all matter. Designing and evaluating TUX is hard. From the questions: training doesn’t help much, because given a plausible story, people will do things counter to their training.
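
For readers who have never seen what "verifying an SSL certificate" actually involves, here is a minimal Python sketch of the check the browser performs before showing any trust indicator: validate the certificate chain and confirm it was issued for the host you asked for. The hostname is just a placeholder.

```python
import socket
import ssl

def fetch_certificate(host: str, port: int = 443) -> dict:
    """Connect with chain and hostname verification, then return the peer certificate."""
    context = ssl.create_default_context()  # verification is on by default
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()

cert = fetch_certificate("www.example.com")  # placeholder host
print("issued to:", cert.get("subject"))
print("expires:  ", cert.get("notAfter"))
```

Note that passing this check only establishes which server you are talking to; it says nothing about the pixels drawn around the page, which is exactly what the picture-in-picture attack exploits.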

Stuart Schechter, Microsoft, presented this research on secret questions. Basically, secret questions don’t work. They’re easily guessable based on the most common answers; friends and relatives of people can easily predict unique answers; and people forget their answers. Even worse, the more memorable the question/answers are, the easier they are to guess. Having people write their own questions is no better: “What’s my blood type?” “How tall am I?”
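
A back-of-the-envelope illustration of the "most common answers" problem; the frequencies below are invented for the example, not figures from Schechter's study.

```python
# Hypothetical answer distribution for a typical secret question.
answer_frequency = {
    "blue": 0.30,
    "red": 0.15,
    "green": 0.12,
    "purple": 0.08,
    "black": 0.07,
}

def success_within(frequencies, guesses):
    """Chance of hitting the right answer within `guesses` tries,
    guessing the most popular answers first."""
    return sum(sorted(frequencies.values(), reverse=True)[:guesses])

for k in (1, 3, 5):
    print(f"{k} guess(es): succeeds against ~{success_within(answer_frequency, k):.0%} of accounts")
```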

Tyler Moore, Harvard University (suggested reading: The Consequences of Non-Cooperation in the Fight against Phishing; Information Security Economics—and Beyond), discussed his empirical studies on online crime and defense. Fraudsters are good at duping users, but they’re also effective at exploiting failures among IT professionals to perpetuate the infrastructure necessary to carry out these exploits on a large scale (hosting fake web pages, sending spam, laundering the profits via money mules, and so on). There is widespread refusal among the defenders to cooperate with each other, and attackers exploit these limitations. We are better at removing phishing websites than we are at defending against the money mules. Defenders tend to fix immediate problems, but not underlying problems.

In the discussion phase, there was a lot of talk about the relationships between websites, like banks, and users—and how that affects security for both good and bad. Jean Camp doesn’t want a relationship with her bank, because that unduly invests her in the bank. (Someone from the audience pointed out that, as a U.S. taxpayer, she is already invested in her bank.) Angela Sasse said that the correct metaphor is “rules of engagement,” rather than relationships.

Adam Shostack’s liveblogging. Ross Anderson’s liveblogging is in his blog post’s comments.

Matt Blaze is taping the sessions—except for the couple of presenters who would rather not be taped—I’ll post his links as soon as the files are online.

EDITED TO ADD (6/11): Audio of the session is here.

Posted on June 11, 2009 at 11:42 AM

Teaching Children to Spot Terrorists

You can’t make this stuff up:

More than 2,000 10 and 11-year-olds [in the UK] will see a short film, which urges them to tell the police, their parents or a teacher if they hear anyone expressing extremist views.

[…]

A lion explains that terrorists can look like anyone, while a cat tells pupils that [they] should get help if they are being bullied and a toad tells them how to cross the road.

The terrorism message is also illustrated with a re-telling of the story of Guy Fawkes, saying that his strong views began forming when he was at school in York. It has been designed to deliver the message of fighting terrorism in [an] accessible way for children.

I’ve said this before:

If you ask amateurs to act as front-line security personnel, you shouldn’t be surprised when you get amateur security.

Posted on June 9, 2009 at 2:45 PM

Obama’s Cybersecurity Speech

I am optimistic about President Obama’s new cybersecurity policy and the appointment of a new “cybersecurity coordinator,” though much depends on the details. What we do know is that the threats are real, from identity theft to Chinese hacking to cyberwar.

His principles were all welcome—securing government networks, coordinating responses, working to secure the infrastructure in private hands (the power grid, the communications networks, and so on), although I think he’s overly optimistic that legislation won’t be required. I was especially heartened to hear his commitment to funding research. Much of the technology we currently use to secure cyberspace was developed from university research, and the more of it we finance today the more secure we’ll be in a decade.

Education is also vital, although sometimes I think my parents need more cybersecurity education than my grandchildren do. I also appreciate the president’s commitment to transparency and privacy, both of which are vital for security.

But the details matter. Centralizing security responsibilities has the downside of making security more brittle by instituting a single approach and a uniformity of thinking. Unless the new coordinator distributes responsibility, cybersecurity won’t improve.

As the administration moves forward on the plan, two principles should apply. One, security decisions need to be made as close to the problem as possible. Protecting networks should be done by people who understand those networks, and threats need to be assessed by people close to the threats. But distributed responsibility has more risk, so oversight is vital.

Two, security coordination needs to happen at the highest level possible, whether that’s evaluating information about different threats, responding to an Internet worm or establishing guidelines for protecting personal information. The whole picture is larger than any single agency.

This essay originally appeared on The New York Times website, along with several others commenting on Obama’s speech. All the essays are worth reading, although I want to specifically quote James Bamford making an important point I’ve repeatedly made:

The history of White House czars is not a glorious one as anyone who has followed the rise and fall of the drug czars can tell. There is a lot of hype, a White House speech, and then things go back to normal. Power, the ability to cause change, depends primarily on who controls the money and who is closest to the president’s ear.

Because the new cyber czar will have neither a checkbook nor direct access to President Obama, the role will be more analogous to a traffic cop than a czar.

Gus Hosein wrote a good essay on the need for privacy:

Of course raising barriers around computer systems is certainly a good start. But when these systems are breached, our personal information is left vulnerable. Yet governments and companies are collecting more and more of our information.

The presumption should be that all data collected is vulnerable to abuse or theft. We should therefore collect only what is absolutely required.

As I said, they’re all worth reading. And here are some more links.

I wrote something similar in 2002 about the creation of the Department of Homeland Security:

The human body defends itself through overlapping security systems. It has a complex immune system specifically to fight disease, but disease fighting is also distributed throughout every organ and every cell. The body has all sorts of security systems, ranging from your skin to keep harmful things out of your body, to your liver filtering harmful things from your bloodstream, to the defenses in your digestive system. These systems all do their own thing in their own way. They overlap each other, and to a certain extent one can compensate when another fails. It might seem redundant and inefficient, but it’s more robust, reliable, and secure. You’re alive and reading this because of it.

EDITED TO ADD (6/2): Gene Spafford’s opinion.

EDITED TO ADD (6/4): Good commentary from Bob Blakley.

Posted on May 29, 2009 at 3:01 PM

IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft’s User Account Control (UAC). Introduced in Vista, UAC is meant to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.
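
One way to picture "security systems that assume uneducated users," sketched here in Python rather than any particular product's API: ship with safe defaults and require an explicit administrative step to weaken them, instead of exposing a checkbox that a plausible pop-up can talk the user into clicking.

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class SecuritySettings:
    # Safe defaults: the ordinary user never has to make these choices.
    automatic_updates: bool = True
    firewall_enabled: bool = True
    remote_administration: bool = False

def weaken(settings: SecuritySettings, admin_confirmed: bool, **overrides) -> SecuritySettings:
    """Relaxing security requires an explicit, out-of-band administrative decision."""
    if not admin_confirmed:
        raise PermissionError("security settings can only be relaxed by an administrator")
    return replace(settings, **overrides)

defaults = SecuritySettings()
# weaken(defaults, admin_confirmed=False, firewall_enabled=False)  # raises PermissionError
relaxed = weaken(defaults, admin_confirmed=True, firewall_enabled=False)
```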

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.
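
The general lesson from Time Machine is to take the remembering out of the user's hands. A minimal sketch in Python (the paths and the daily schedule are placeholders, not how Time Machine itself works):

```python
import shutil
import time
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "Documents"      # placeholder: what to protect
DESTINATION = Path("/Volumes/Backup")   # placeholder: the external drive

def run_backup() -> Path:
    """Copy SOURCE into a new timestamped folder on the backup drive."""
    target = DESTINATION / datetime.now().strftime("backup-%Y%m%d-%H%M%S")
    shutil.copytree(SOURCE, target)
    return target

if __name__ == "__main__":
    # The important part is the loop, not the copy: the user never has to remember.
    while True:
        run_backup()
        time.sleep(24 * 60 * 60)  # once a day
```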

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM

The Security Mindset

Uncle Milton Industries has been selling ant farms to children since 1956. Some years ago, I remember opening one up with a friend. There were no actual ants included in the box. Instead, there was a card that you filled in with your address, and the company would mail you some ants. My friend expressed surprise that you could get ants sent to you in the mail.

I replied: “What’s really interesting is that these people will send a tube of live ants to anyone you tell them to.”

Security requires a particular mindset. Security professionals—at least the good ones—see the world differently. They can’t walk into a store without noticing how they might shoplift. They can’t use a computer without wondering about the security vulnerabilities. They can’t vote without trying to figure out how to vote twice. They just can’t help it.

SmartWater is a liquid with a unique identifier linked to a particular owner. “The idea is for me to paint this stuff on my valuables as proof of ownership,” I wrote when I first learned about the idea. “I think a better idea would be for me to paint it on your valuables, and then call the police.”

Really, we can’t help it.

This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.

I’ve often speculated about how much of this is innate, and how much is teachable. In general, I think it’s a particular way of looking at the world, and that it’s far easier to teach someone domain expertise—cryptography or software security or safecracking or document forgery—than it is to teach someone a security mindset.

Which is why CSE 484, an undergraduate computer-security course taught this quarter at the University of Washington, is so interesting to watch. Professor Tadayoshi Kohno is trying to teach a security mindset.

You can see the results in the blog the students are keeping. They’re encouraged to post security reviews about random things: smart pill boxes, Quiet Care Elder Care monitors, Apple’s Time Capsule, GM’s OnStar, traffic lights, safe deposit boxes, and dorm room security.

One recent review is about an automobile dealership. The poster described how she was able to retrieve her car after service just by giving the attendant her last name. Now any normal car owner would be happy about how easy it was to get her car back, but someone with a security mindset immediately thinks: “Can I really get a car just by knowing the last name of someone whose car is being serviced?”

The rest of the blog post speculates on how someone could steal a car by exploiting this security vulnerability, and whether it makes sense for the dealership to have this lax security. You can quibble with the analysis—I’m curious about the liability that the dealership has, and whether their insurance would cover any losses—but that’s all domain expertise. The important point is to notice, and then question, the security in the first place.

The lack of a security mindset explains a lot of bad security out there: voting machines, electronic payment cards, medical devices, ID cards, internet protocols. The designers are so busy making these systems work that they don’t stop to notice how they might fail or be made to fail, and then how those failures might be exploited. Teaching designers a security mindset will go a long way toward making future technological systems more secure.

That part’s obvious, but I think the security mindset is beneficial in many more ways. If people can learn how to think outside their narrow focus and see a bigger picture, whether in technology or politics or their everyday lives, they’ll be more sophisticated consumers, more skeptical citizens, less gullible people.

If more people had a security mindset, services that compromise privacy wouldn’t have such a sizable market share—and Facebook would be totally different. Laptops wouldn’t be lost with millions of unencrypted Social Security numbers on them, and we’d all learn a lot fewer security lessons the hard way. The power grid would be more secure. Identity theft would go way down. Medical records would be more private. If people had the security mindset, they wouldn’t have tried to look at Britney Spears’ medical records, since they would have realized that they would be caught.

There’s nothing magical about this particular university class; anyone can exercise his security mindset simply by trying to look at the world from an attacker’s perspective. If I wanted to evade this particular security device, how would I do it? Could I follow the letter of this law but get around the spirit? If the person who wrote this advertisement, essay, article or television documentary were unscrupulous, what could he have done? And then, how can I protect myself from these attacks?

The security mindset is a valuable skill that everyone can benefit from, regardless of career path.

This essay originally appeared on Wired.com.

EDITED TO ADD (3/31): Comments from Ed Felten. And another comment.

EDITED TO ADD (4/30): Another comment.

Posted on March 25, 2008 at 5:27 AM

Teaching Viruses and Worms

Over two years ago, George Ledin wrote an essay in Communications of the ACM, where he advocated teaching worms and viruses to computer science majors:

Computer science students should learn to recognize, analyze, disable, and remove malware. To do so, they must study currently circulating viruses and worms, and program their own. Programming is to computer science what field training is to police work and clinical experience is to surgery. Reading a book is not enough. Why does industry hire convicted hackers as security consultants? Because we have failed to educate our majors.

This spring semester, he taught the course at Sonoma State University. It got a lot of press coverage.

No one wrote a virus for a class project. No new malware got into the wild. No new breed of supervillain graduated.

Teaching this stuff is just plain smart.

Posted on June 12, 2007 at 2:30 PM

Educating Users

I’ve met users, and they’re not fluent in security. They might be fluent in spreadsheets, eBay, or sending jokes over e-mail, but they’re not technologists, let alone security people. Of course, they’re making all sorts of security mistakes. I too have tried educating users, and I agree that it’s largely futile.

Part of the problem is generational. We’ve seen this with all sorts of technologies: electricity, telephones, microwave ovens, VCRs, video games. Older generations approach newfangled technologies with trepidation, distrust and confusion, while the children who grew up with them understand them intuitively.

But while the don’t-get-it generation will die off eventually, we won’t suddenly enter an era of unprecedented computer security. Technology moves too fast these days; there’s no time for any generation to become fluent in anything.

Earlier this year, researchers ran an experiment in London’s financial district. Someone stood on a street corner and handed out CDs, saying they were a “special Valentine’s Day promotion.” Many people, some working at sensitive bank workstations, ran the program on the CDs on their work computers. The program was benign—all it did was alert some computer on the Internet that it was running—but it could just as easily have been malicious. The researchers concluded that users don’t care about security. That’s simply not true. Users care about security—they just don’t understand it.

I don’t see a failure of education; I see a failure of technology. It shouldn’t have been possible for those users to run that CD, or for a random program stuffed into a banking computer to “phone home” across the Internet.

The real problem is that computers don’t work well. The industry has convinced everyone that people need a computer to survive, and at the same time it’s made computers so complicated that only an expert can maintain them.

If I try to repair my home heating system, I’m likely to break all sorts of safety rules. I have no experience in that sort of thing, and honestly, there’s no point in trying to educate me. But the heating system works fine without my having to learn anything about it. I know how to set my thermostat and to call a professional if anything goes wrong.

Punishment isn’t something you do instead of education; it’s a form of education—a very primal form of education best suited to children and animals (and experts aren’t so sure about children). I say we stop punishing people for failures of technology, and demand that computer companies market secure hardware and software.

This originally appeared in the April 2006 issue of Information Security Magazine, as the second part of a point/counterpoint with Marcus Ranum. You can read Marcus’s essay here, if you are a subscriber. (Subscriptions are free to “qualified” people.)

EDITED TO ADD (9/11): Here’s Marcus’s half.

Posted on August 22, 2006 at 12:35 PM
