Entries Tagged "security awareness"

Security vs. Usability

Good essay: “When Security Gets in the Way.”

The numerous incidents of defeating security measures prompt my cynical slogan: The more secure you make something, the less secure it becomes. Why? Because when security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security. Hence the prevalence of doors propped open by bricks and wastebaskets, of passwords pasted on the fronts of monitors or hidden under the keyboard or in the drawer, of home keys hidden under the mat or above the doorframe or under fake rocks that can be purchased for this purpose.

We are being sent a mixed message: on the one hand, we are continually forced to use arbitrary security procedures. On the other hand, even the professionals ignore many of them. How is the ordinary person to know which ones matter and which don’t? The confusion has unexpected negative side-effects. I once discovered a computer system that was missing essential security patches. When I queried the computer’s user, I discovered that the continual warnings against clicking on links or agreeing to requests from pop-up windows had been too effective. This user was so frightened of unwittingly agreeing to install all those nasty things from “out there” that all requests were denied, even the ones for essential security patches. On reflection, this is sensible behavior: It is very difficult to distinguish the legitimate from the illegitimate. Even experts slip up, as the confessions occasionally reported in various computer digests attest.

Posted on August 5, 2009 at 6:10 AM

Too Many Security Warnings Results in Complacency

Research that proves what we already knew:

Crying Wolf: An Empirical Study of SSL Warning Effectiveness

Abstract. Web users are shown an invalid certificate warning when their browser cannot validate the identity of the websites they are visiting. While these warnings often appear in benign situations, they can also signal a man-in-the-middle attack. We conducted a survey of over 400 Internet users to examine their reactions to and understanding of current SSL warnings. We then designed two new warnings using warnings science principles and lessons learned from the survey. We evaluated warnings used in three popular web browsers and our two warnings in a 100-participant, between-subjects laboratory study. Our warnings performed significantly better than existing warnings, but far too many participants exhibited dangerous behavior in all warning conditions. Our results suggest that, while warnings can be improved, a better approach may be to minimize the use of SSL warnings altogether by blocking users from making unsafe connections and eliminating warnings in benign situations.
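
That last suggestion, blocking rather than warning, is how most non-browser TLS clients already behave. Here is a minimal Python sketch of the fail-closed approach; expired.badssl.com is a real test host that deliberately serves an expired certificate:

```python
import socket
import ssl

def fetch_homepage(host: str) -> bytes:
    """Fetch / over TLS, failing closed on any certificate problem."""
    # create_default_context() turns on chain verification and hostname
    # checking, so a bad certificate raises an exception instead of
    # presenting the user with a warning to click through.
    context = ssl.create_default_context()
    with socket.create_connection((host, 443), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
            tls.sendall(request.encode())
            chunks = []
            while data := tls.recv(4096):
                chunks.append(data)
            return b"".join(chunks)

try:
    fetch_homepage("expired.badssl.com")
except ssl.SSLCertVerificationError as err:
    # No dialog, no override: the unsafe connection simply never happens.
    print("Connection blocked:", err.verify_message)
```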

Posted on August 4, 2009 at 10:01 AM

Tips for Staying Safe Online

This is funny:

Tips for Staying Safe Online

All citizens can follow a few simple guidelines to keep themselves safe in cyberspace. In doing so, they not only protect their personal information but also contribute to the security of cyberspace.

  • Install anti-virus software, a firewall, and anti-spyware software to your computer, and update as necessary.
  • Create strong passwords on your electronic devices and change them often. Never record your password or provide it to someone else.
  • Back up important files.
  • Ignore suspicious e-mail and never click on links asking for personal information.
  • Only open attachments if you’re expecting them and know what they contain.
  • If shelter is not available, lie flat in a ditch or other low-lying area. Do not get under an overpass or bridge. You are safer in a low, flat location.
  • Additional tips are available at www.staysafeonline.org.

Those must be some pretty nasty attachments.

Here’s the current version of the page, with the misplaced bullet point removed. And here’s where it was copied and pasted from.

Posted on July 27, 2009 at 4:16 PM

IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft’s User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.
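
What makes this work is the design, not anything the user knows: the backup runs whenever the drive is present and never asks a question. A toy sketch of that pattern in Python, with hypothetical paths standing in for the real thing (Time Machine itself is far more sophisticated, using hard links to deduplicate snapshots):

```python
import shutil
import time
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "Documents"   # hypothetical: what to protect
DRIVE = Path("/Volumes/Backup")      # hypothetical: an external drive

def snapshot() -> Path:
    """Copy SOURCE into a timestamped folder on the backup drive."""
    target = DRIVE / datetime.now().strftime("%Y-%m-%d_%H%M%S")
    shutil.copytree(SOURCE, target)
    return target

# The user's only job is to plug the drive in; the loop does the rest.
while True:
    if DRIVE.exists():
        snapshot()
    time.sleep(24 * 60 * 60)  # once a day, with no prompts to ignore
```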

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM

Privacy Policies: Perception vs. Reality

New paper: “What Californians Understand About Privacy Online,” by Chris Jay Hoofnagle and Jennifer King. From the abstract:

A gulf exists between California consumers’ understanding of online rules and common business practices. For instance, Californians who shop online believe that privacy policies prohibit third-party information sharing. A majority of Californians believes that privacy policies create the right to require a website to delete personal information upon request, a general right to sue for damages, a right to be informed of security breaches, a right to assistance if identity theft occurs, and a right to access and correct data.

These findings show that California consumers overvalue the mere fact that a website has a privacy policy, and assume that websites carrying the label have strong, default rules to protect personal data. In a way, consumers interpret “privacy policy” as a quality seal that denotes adherence to some set of standards. Website operators have little incentive to correct this misperception, thus limiting the ability of the market to produce outcomes consistent with consumers’ expectations. Drawing upon earlier work, we conclude that because the term “privacy policy” has taken on a specific meaning in the minds of consumers, its use should be limited to contexts where businesses provide a set of protections that meet consumers’ expectations.

Posted on September 4, 2008 at 1:15 PM

Police Helping Thieves

This is a weird article. Local police are putting yellow stickers on cars with visible packages, making it easier for thieves to identify which cars are worth breaking into.

How odd.

EDITED TO ADD (12/19): According to a comment, this was misreported in the news. The police didn’t just put signs on cars with visible packages, but on all cars. Cars with no visible packages got a note saying: “Nothing Observed (Good Job!).” So a thief would have to read the sign, which means he’s already close enough to look in the car. Much better.

Posted on December 12, 2007 at 8:18 AM

Security by Letterhead

This otherwise amusing story has some serious lessons:

John: Yes, I’m calling to find out why request number 48931258 to transfer somedomain.com was rejected.

ISP: Oh, it was rejected because the request wasn’t submitted on company letterhead.

John: Oh… sure… but… uh, just so we’re on the same page, can you define exactly what you mean by ‘company letterhead?’

ISP: Well, you know, it has the company’s logo, maybe a phone number and web site address… that sort of thing. I mean, your fax looks like it could’ve been typed by anyone!

John: So you know what my company letterhead looks like?

ISP: Ye… no. Not specifically. But, like, we’d know it if we saw it.

John: And what if we don’t have letterhead? What if we’re a startup? What if we’re redesigning our logo?

ISP: Well, you’d have to speak to customer—

John (clicking and typing): I could probably just pick out a semi-professional-looking MS Word template and paste my request in that and resubmit it, right?

ISP: Look, our policy—

John: Oh, it’s ok, I just sent the request back in on letterhead.

Ha ha. The idiot ISP guy doesn’t realize how easy it is for anyone with a word processor and a laser printer to fake a letterhead. But what this story really shows is how hard it is for people to change their security intuition. Security-by-letterhead was fairly robust when printing was hard, and faking a letterhead was real work. Today it’s easy, but people—especially people who grew up under the older paradigm—don’t act as if it is. They would if they thought about it, but most of the time our security runs on intuition and not on explicit thought.

This kind of thing bites us all the time. Mother’s maiden name is no longer a good password. An impressive-looking storefront on the Internet is not the same as an impressive-looking storefront in the real world. The headers on an e-mail are not a good authenticator of its origin. It’s an effect of technology moving faster than our ability to develop a good intuition about that technology.
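
On the e-mail point: the From: header is plain text chosen entirely by the sender, so it proves nothing. A few lines of Python, with made-up addresses, produce a perfectly well-formed message claiming any origin:

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "ceo@bigcorp.example"      # entirely sender-chosen
msg["To"] = "registrar@example.net"
msg["Subject"] = "Domain transfer approval"
msg.set_content("Please approve the pending transfer request.")

# A syntactically perfect message; nothing in it proves who wrote it.
print(msg.as_string())
```

What origin evidence receivers do have comes from Received: lines and from checks like SPF and DKIM applied on delivery, not from anything the sender wrote into the message.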

And, as technology changes ever faster, this will only get worse.

Posted on October 30, 2007 at 6:33 AM

Home Users: A Public Health Problem?

To the average home user, security is an intractable problem. Microsoft has made great strides improving the security of their operating system “out of the box,” but there is still a dizzying array of rules, options, and choices that users have to make. How should they configure their anti-virus program? What sort of backup regime should they employ? What are the best settings for their wireless network? And so on and so on and so on.

How is it possible that we in the computer industry have created such a shoddy product? How have we foisted on people a product that is so difficult to use securely, that requires so many add-on products?

It’s even worse than that. We have sold the average computer user a bill of goods. In our race for an ever-increasing market, we have convinced every person that he needs a computer. We have provided application after application—IM, peer-to-peer file sharing, eBay, Facebook—to make computers both useful and enjoyable to the home user. At the same time, we’ve made them so hard to maintain that only a trained sysadmin can do it.

And then we wonder why home users have such problems with their buggy systems, why they can’t seem to do even the simplest administrative tasks, and why their computers aren’t secure. They’re not secure because home users don’t know how to secure them.

At work, I have an entire IT department I can call on if I have a problem. They filter my net connection so that I don’t see spam, and most attacks are blocked before they even get to my computer. They tell me which updates to install on my system and when. And they’re available to help me recover if something untoward does happen to my system. Home users have none of this support. They’re on their own.

This problem isn’t simply going to go away as computers get smarter and users get savvier. The next generation of computers will be vulnerable to all sorts of different attacks, and the next generation of attack tools will fool users in all sorts of different ways. The security arms race isn’t going away any time soon, but it will be fought with ever more complex weapons.

This isn’t simply an academic problem; it’s a public health problem. In the hyper-connected world of the Internet, everyone’s security depends in part on everyone else’s. As long as there are insecure computers out there, hackers will use them to eavesdrop on network traffic, send spam, and attack other computers. We are all more secure if all those home computers attached to the Internet via DSL or cable modems are protected against attack. The only question is: what’s the best way to get there?

I wonder about those who say “educate the users.” Have they tried? Have they ever met an actual user? It’s unrealistic to expect home users to be responsible for their own security. They don’t have the expertise, and they’re not going to learn. And it’s not just user actions we need to worry about; these computers are insecure right out of the box.

The only possible way to solve this problem is to force the ISPs to become IT departments. There’s no reason why they can’t provide home users with the same level of support my IT department provides me with. There’s no reason why they can’t provide “clean pipe” service to the home. Yes, it will cost home users more. Yes, it will require changes in the law to make this mandatory. But what’s the alternative?

In 1991, Walter S. Mossberg debuted his “Personal Technology” column in The Wall Street Journal with the words: “Personal computers are just too hard to use, and it isn’t your fault.” Sixteen years later, the statement is still true—and doubly true when it comes to computer security.

If we want home users to be secure, we need to design computers and networks that are secure out of the box, without any work by the end users. There simply isn’t any other way.

This essay is the first half of a point/counterpoint with Marcus Ranum in the September issue of Information Security. You can read his reply here.

Posted on September 14, 2007 at 2:01 PM

The Psychology of Password Generation

Nothing too surprising in this study of password generation practices:

The majority of participants in the current study most commonly reported password generation practices that are simplistic and hence very insecure. Particular practices reported include using lowercase letters, numbers or digits, personally meaningful words and numbers (e.g., dates). It is widely known that users typically use birthdates, anniversary dates, telephone numbers, license plate numbers, social security numbers, street addresses, apartment numbers, etc. Likewise, personally meaningful words are typically derived from predictable areas and interests in the person’s life and could be guessed through basic knowledge of his or her interests.

The finding that participants in the current study use such simplistic practices to develop passwords is supported by similar research by Bishop and Klein (1995) and Vu, Bhargav & Proctor (2003) who found that even with the application of password guidelines, users would tend to revert to the simplest possible strategies (Proctor et al., 2002). In the current study, nearly 60% of the respondents reported that they do not vary the complexity of their passwords depending on the nature of the site and 53% indicated that they never change their password if they are not required to do so. These practices are most likely encouraged by the fact that users maintain multiple accounts (average = 8.5) and have difficulty recalling too many unique passwords.

It would seem to be a logical assumption that the practices and behaviors users engage in would be related to what they think they should do in order to create secure passwords. This does not seem to be the case as participants in the current study were able to identify many of the recommended practices, despite the fact that they did not use the practices themselves. These findings contradict the ideas put forth in Adams & Sasse (1999) and Gehringer (2002) who state that users are largely unaware of the methods and practices that are effective for creating strong passwords. Davis and Ganesan (1993) point out that the majority of users are not aware of the vulnerability of password protected systems, the prevalence of password cracking, the ease with which it can be accomplished, or the damage that can be caused by it. While the majority of this sample of password users demonstrated technical knowledge of password practices, further education regarding the vulnerability of password protected systems would help users form a more accurate mental model of computer security.
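
The alternative to personally meaningful passwords is machine-chosen randomness, which trades memorability for unpredictability, exactly the tension the study describes. A sketch using modern Python’s secrets module (an anachronism for a 2006 post, but the idea is the same):

```python
import secrets
import string

def random_password(length: int = 16) -> str:
    """Draw characters with a CSPRNG; no dates, names, or dictionary words."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# Example output (illustrative only): k;V2w}qZ8@dLr#Mo
print(random_password())
```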

Posted on March 2, 2006 at 11:46 AM
