Entries Tagged "security education"


Using Science Fiction to Teach Computer Security

Interesting paper: “Science Fiction Prototyping and Security Education: Cultivating Contextual and Societal Thinking in Computer Security Education and Beyond,” by Tadayoshi Kohno and Brian David Johnson.

Abstract: Computer security courses typically cover a breadth of technical topics, including threat modeling, applied cryptography, software security, and Web security. The technical artifacts of computer systems—and their associated computer security risks and defenses—do not exist in isolation, however; rather, these systems interact intimately with the needs, beliefs, and values of people. This is especially true as computers become more pervasive, embedding themselves not only into laptops, desktops, and the Web, but also into our cars, medical devices, and toys. Therefore, in addition to the standard technical material, we argue that students would benefit from developing a mindset focused on the broader societal and contextual issues surrounding computer security systems and risks. We used science fiction (SF) prototyping to facilitate such societal and contextual thinking in a recent undergraduate computer security course. We report on our approach and experiences here, as well as our recommendations for future computer security and other computer science courses.

Posted on August 1, 2011 at 6:03 AM

Degree Plans of the Future

You can now get a Master of Science in Strategic Studies in Weapons of Mass Destruction. Well, maybe you can’t:

“It’s not going to be open enrollment (or) traditional students,” Giever said. “You worry about whether you might be teaching the wrong person this stuff.”

At first, the FBI will select students from within its ranks, though Giever wants to open it to other law enforcement agencies. Rather than traditional tuition, agencies will contract with the school, paying about $300,000 a year for groups of 15 to 20 full-time students, according to documents submitted to the board of governors of the State System of Higher Education.

Posted on July 15, 2011 at 6:31 AM

Folk Models in Home Computer Security

This is a really interesting paper: “Folk Models of Home Computer Security,” by Rick Wash. It was presented at SOUPS, the Symposium on Usable Privacy and Security, last year.

Abstract:

Home computer systems are frequently insecure because they are administered by untrained, unskilled users. The rise of botnets has amplified this problem; attackers can compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I investigate how home computer users make security-relevant decisions about their computers. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which security advice to follow: four different conceptualizations of ‘viruses’ and other malware, and four different conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring some security advice. Finally, I describe one reason why botnets are so difficult to eliminate: they have been cleverly designed to take advantage of gaps in these models so that many home computer users do not take steps to protect against them.

I’d list the models, but it’s more complicated than that. Read the paper.

Posted on March 22, 2011 at 7:12 AM

High School Teacher Assigns Movie-Plot Threat Contest Problem

In Australia:

A high school teacher who assigned her class to plan a terrorist attack that would kill as many innocent people as possible had no intent to promote terrorism, the school principal said yesterday.

The Year-10 students at Kalgoorlie-Boulder Community High School were asked to pretend they were terrorists making a political statement by releasing a chemical or biological agent on “an unsuspecting Australian community”.

The task included choosing the best time to attack and explaining their choice of victims and what effects the attack would have on a human body.

“Your goal is to kill the MOST innocent civilians,” the assignment read.

Principal Terry Martino said he withdrew the assignment for the class on contemporary conflict and terrorism as soon as he heard of it. He said the teacher was “relatively inexperienced” and it was a “well-intentioned but misguided attempt to engage the students”.

Sounds like me:

It is in this spirit I announce the (possibly First) Movie-Plot Threat Contest. Entrants are invited to submit the most unlikely, yet still plausible, terrorist attack scenarios they can come up with.

Your goal: cause terror. Make the American people notice. Inflict lasting damage on the U.S. economy. Change the political landscape, or the culture. The more grandiose the goal, the better.

Assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.

For the record, 1) I have no interest in promoting terrorism—I’m not even sure how I could promote terrorism without actually engaging in terrorism, 2) I’m pretty experienced, and 3) my movie-plot threat contests are not misguided. You can’t understand security defense without also understanding attack.

Australian police are claiming the assignment was illegal, so Australians who enter my movie-plot threat contests should think twice. Also anyone writing a thriller novel about terrorism, perhaps.

An AFP spokeswoman said it was an offence to collect or make documents preparing for or assisting a terrorist attack.

It was also illegal to be “reckless as to whether these documents may assist or prepare for a terrorist attack”.

Posted on August 31, 2010 at 6:42 AM

Terrorizing Ourselves

Who needs actual terrorists?

How’s this for an ill-conceived emergency preparedness drill? An off-duty cop pretending to be a terrorist stormed into a hospital intensive care unit brandishing a handgun, which he pointed at nurses while herding them down a corridor and into a room.

There, after harrowing moments, he explained that the whole caper was a training exercise.

[…]

The staff at St. Rose Dominican Hospitals-Siena Campus, where the incident took place Monday morning, found the exercise more traumatizing than instructive.

Perhaps a better way to phrase it is that they learned to be terrorized.

Posted on June 1, 2010 at 5:54 AM

Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.

Sounds like me.
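Herley’s arithmetic is easy to sanity-check. Below is a minimal back-of-the-envelope sketch in Python; the user count, wage rate, and loss figure are illustrative assumptions of mine, not numbers taken from the paper.

```python
# Back-of-the-envelope check of Herley's cost-benefit argument.
# All figures below are illustrative assumptions, not data from the paper.

ONLINE_USERS = 180e6            # assumed number of online users in one country
MINUTES_PER_DAY = 1             # time spent inspecting URLs, per the abstract's example
HOURLY_WAGE = 7.25              # assumed value of an hour of user time (USD)
ANNUAL_PHISHING_LOSSES = 60e6   # assumed direct annual phishing losses (USD)

# Aggregate annual cost of following the advice, valued at the assumed wage.
hours_per_year = ONLINE_USERS * MINUTES_PER_DAY * 365 / 60
advice_cost = hours_per_year * HOURLY_WAGE

# The benefit can be at most the losses the advice could prevent.
benefit_upper_bound = ANNUAL_PHISHING_LOSSES

print(f"Cost of advice:        ${advice_cost:,.0f}")
print(f"Benefit (upper bound): ${benefit_upper_bound:,.0f}")
print(f"Cost/benefit ratio:    {advice_cost / benefit_upper_bound:,.0f}x")
```

With these assumed figures the time cost lands roughly two orders of magnitude above the upper bound on the benefit, which is the shape of Herley’s argument.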

EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Actual Security Theater

As part of their training, federal agents engage in mock exercises in public places. Sometimes, innocent civilians get involved.

Every day, as Washingtonians go about their overt lives, the FBI, CIA, Capitol Police, Secret Service and U.S. Marshals Service stage covert dramas in and around the capital where they train. Officials say the scenarios help agents and officers integrate the intellectual, physical and emotional aspects of classroom instruction. Most exercises are performed inside restricted compounds. But they also unfold in public parks, suburban golf clubs and downtown transit stations.

Curtain up on threat theater—a growing, clandestine art form. Joseph Persichini, Jr., assistant director of the FBI’s Washington field office, says, “What better way to adapt agents or analysts to cultural idiosyncrasies than role play?”

For the public, there are rare, startling peeks: At a Holiday Inn, a boy in water wings steps out of his seventh floor room into a stampede of federal agents; at a Bowie retirement home, an elderly woman panics as a role-player collapses, believing his seizure is real; at a county museum, a father sweeps his daughter into his arms, running for the exit, while a raving, bearded man resists arrest.

EDITED TO ADD (9/11): It happened in D.C., in the Potomac River, with the Coast Guard.

Posted on August 25, 2009 at 6:43 AM

Second SHB Workshop Liveblogging (4)

Session three was titled “Usability.” (For the record, the Stata Center is one ugly building.)

Andrew Patrick, NRC Canada until he was laid off four days ago (suggested reading: Fingerprint Concerns: Performance, Usability, and Acceptance of Fingerprint Biometric Systems), talked about biometric systems and human behavior. Biometrics are used everywhere: for gym membership, at Disneyworld, at international borders. The government of Canada is evaluating iris recognition at a distance for events like the 2010 Olympics. There are two different usability issues: with respect to the end user, and with respect to the authenticator. People’s acceptance of biometrics is very much dependent on the context. And of course, biometrics are not secret. Patrick suggested that individuals should publish their own biometrics as a defense against this proliferation of biometric authentication; the rationale is that we’re publishing them anyway, so we might as well do it knowingly.

Luke Church, Cambridge University (suggested reading: SHB Position Paper; Usability and the Common Criteria), talked about what he called “user-centered design.” There’s an economy of usability: “in order to make some things easier, we have to make some things harder”—so it makes sense to make the commonly done things easier at the expense of the rarely done things. This has a lot of parallels with security. The result is “appliancisation” (with a prize for anyone who comes up with a better name): the culmination of security behaviors and what the system can do embedded in a series of user choices. Basically, giving users meaningful control over their security. Luke discussed several benefits and problems with the approach.

Diana Smetters, Palo Alto Research Center (suggested reading: Breaking out of the browser to defend against phishing attacks; Building secure mashups; Ad-hoc guesting: when exceptions are the rule), started with these premises: you can teach users, but you can’t teach them very much, so you’d better carefully design systems so that you 1) minimize what they have to learn, 2) make it easier for them to learn it, and 3) maximize the benefit from what they learn. Too often, security is at odds with getting the job done. “As long as configuration errors (false alarms) are common, any technology that requires users to observe security indicators and react to them will fail as attacks can simply masquerade as errors, and users will rationally ignore them.” She recommends meeting the user halfway by building new security models that actually fit the users’ needs. (For example: phishing is a mismatch problem between what’s in the user’s head and where the URL is actually going. SSL doesn’t work, but how should websites authenticate themselves to users? Her solution is protected links: a set of secure bookmarks in protected browsers.) She went on to describe a prototype and tests run with user subjects.
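The paper describes the actual design; purely to illustrate the idea, here is a toy sketch of secure bookmarks in Python. The data model and the certificate-fingerprint check are my assumptions for the example, not Smetters’s implementation.

```python
# Toy sketch of "protected links": navigation is only allowed through
# bookmarks that pin both the destination and the expected server identity.
# The field names and fingerprint pinning are illustrative assumptions.
from dataclasses import dataclass


@dataclass(frozen=True)
class SecureBookmark:
    label: str              # what the user sees, e.g. "My Bank"
    url: str                # where the bookmark actually goes
    cert_fingerprint: str   # expected server certificate fingerprint


class ProtectedBrowser:
    def __init__(self) -> None:
        self._bookmarks: dict[str, SecureBookmark] = {}

    def enroll(self, bm: SecureBookmark) -> None:
        # Bookmarks are created out of band (e.g. from a mailed letter),
        # so a phishing email can't add or modify them.
        self._bookmarks[bm.label] = bm

    def navigate(self, label: str, observed_fingerprint: str) -> str:
        bm = self._bookmarks.get(label)
        if bm is None:
            raise PermissionError("no protected link with that name")
        if observed_fingerprint != bm.cert_fingerprint:
            # The mismatch between intent and destination is resolved by
            # refusing, rather than by showing a warning users will ignore.
            raise PermissionError("server identity does not match the bookmark")
        return bm.url
```

The point of the sketch is that the user never types or reads a URL: intent is expressed by picking a bookmark, and the browser checks that the destination matches.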

Jon Callas, PGP Corporation (suggested reading: Improving Message Security with a Self-Assembling PKI), used the metaphor of the “security cliff”: you have to keep climbing until you get to the top and that’s hard, so it’s easier to just stay at the bottom. He wants more of a “security ramp,” so people can reasonably stop somewhere in the middle. His idea is to have a few policies—e-mail encryption, rules about USB drives—and enforce them. This works well in organizations, where IT has dictatorial control over user configuration. If we can’t teach users much, we need to enforce policies on users.

Rob Reeder, Microsoft (suggested reading: Expanding Grids for Visualizing and Authoring Computer Security Policies), presented a possible solution to the secret questions problem: social authentication. The idea is to use people you know (trustees) to authenticate who you are, and have them attest to the fact that you lost your password. He went on to describe how the protocol works, as well as several potential attacks against the protocol and defenses, and experiments that tested the protocol. In the question session he talked about people designating themselves as trustees, and how that isn’t really a problem.
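Reeder’s paper spells out the real protocol; as a rough illustration only, here is a k-of-n trustee recovery sketch in Python. The threshold, code format, and verification flow are assumptions for the example, not the protocol from the talk.

```python
# Rough illustration of trustee-based ("social") account recovery: the
# account owner designates trustees in advance, and recovering the account
# requires valid vouching codes from at least K of them. All details here
# are illustrative assumptions.
import hashlib
import hmac
import secrets

K = 3  # assumed threshold of trustees required for recovery


class RecoveryService:
    def __init__(self, account: str, trustees: list[str]) -> None:
        self.account = account
        # A per-trustee secret shared with the service at enrollment time.
        self.trustee_keys = {t: secrets.token_bytes(32) for t in trustees}

    def issue_code(self, trustee: str, session: str) -> str:
        # A trustee vouches for the user (say, after a phone call) and
        # relays a code bound to this specific recovery session.
        key = self.trustee_keys[trustee]
        return hmac.new(key, session.encode(), hashlib.sha256).hexdigest()

    def recover(self, session: str, codes: dict[str, str]) -> bool:
        valid = sum(
            1
            for t, code in codes.items()
            if t in self.trustee_keys
            and hmac.compare_digest(code, self.issue_code(t, session))
        )
        return valid >= K  # enough distinct trustees vouched for the user
```

Requiring several distinct trustees is what makes the obvious single-trustee attacks harder, which is presumably why the threshold matters in the real protocol.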

Lorrie Cranor, Carnegie Mellon University (suggested reading: A Framework for Reasoning about the Human in the Loop; Timing Is Everything? The Effects of Timing and Placement of Online Privacy Indicators; School of Phish: A Real-World Evaluation of Anti-Phishing Training; You’ve Been Warned: An Empirical Study of the Effectiveness of Web Browser Phishing Warnings), talked about security warnings. The best option is to fix the hazard; the second best is to guard against it—but far too often we just warn people about it. But since hazards are generally not very hazardous, most people just ignore them. “Often, software asks the user and provides little or no information to help user make this decision.” Better is to use some sort of automated analysis to assist the user in responding to warnings. For websites, for example, the system should block sites with a high probability of danger, not bother users if there is a low probability of danger, and help the user make the decision in the grey area. She went on to describe a prototype and user studies done with the prototype; her paper will be presented at USENIX Security in August.
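Cranor’s prototype is described in her paper; the triage she outlines (block when danger is likely, stay quiet when it isn’t, and help the user only in the grey area) is easy to state as a decision rule. The thresholds and the danger-score source below are assumptions for illustration.

```python
# Three-way triage for security warnings, following the block / don't-bother /
# help-decide split described above. The thresholds and the danger score are
# illustrative assumptions, not values from Cranor's prototype.

BLOCK_THRESHOLD = 0.9   # assumed: above this, just block the site
ALLOW_THRESHOLD = 0.1   # assumed: below this, don't interrupt the user

def triage(danger_score: float) -> str:
    """Decide how to handle a site given an automated danger estimate in [0, 1]."""
    if danger_score >= BLOCK_THRESHOLD:
        return "block"              # high probability of danger: no user decision needed
    if danger_score <= ALLOW_THRESHOLD:
        return "allow"              # low probability: don't train users to ignore warnings
    return "warn with context"      # grey area: show the warning plus the evidence behind it
```

The idea is to reserve user attention for the cases where it can actually help, and let the automated analysis absorb the clear-cut ones.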

Much of the discussion centered on how bad the problem really is, and how much security is good enough. The group also talked about economic incentives companies have to either fix or ignore security problems, and whether market approaches (or, as Jean Camp called it, “the happy Libertarian market pony”) are sufficient. Some companies have incentives to convince users to do the wrong thing, or at the very least to do nothing. For example, social networking sites are more valuable if people share their information widely.

Further discussion was about whitelisting, and whether it worked or not. There’s the problem of the bad guys getting on the whitelist, and the risk that organizations like the RIAA will use the whitelist to enforce copyright, or that large banks will use the whitelist as a tool to block smaller start-up banks. Another problem is that the user might not understand what a whitelist signifies.

Dave Clark from the audience: “It’s not hard to put a seat belt on, and if you need a lesson, take a plane.”

Kind of a one-note session. We definitely need to invite more psych people.

Adam Shostack’s liveblogging is here. Ross Anderson’s liveblogging is in his blog post’s comments. Matt Blaze’s audio is here.

Posted on June 11, 2009 at 2:56 PM
