IT Security: Blaming the Victim

Blaming the victim is common in IT: users are to blame because they don’t patch their systems, choose lousy passwords, fall for phishing attacks, and so on. But, while users are, and will continue to be, a major source of security problems, focusing on them is an unhelpful way to think.

People regularly don’t do things they are supposed to: changing the oil in their cars, going to the dentist, replacing the batteries in their smoke detectors. Why? Because people learn from experience. If something is immediately harmful, e.g., touching a hot stove or petting a live tiger, they quickly learn not to do it. But if someone skips an oil change, ignores a computer patch, or chooses a lousy password, it’s unlikely to matter. No feedback, no learning.

We’ve tried to solve this in several ways. We give people rules of thumb: oil change every 5,000 miles; secure password guidelines. Or we send notifications: smoke alarms beep at us, dentists send postcards, Google warns us if we are about to visit a website suspected of hosting malware. But, again, the effects of ignoring these aren’t generally felt immediately.

This makes security primarily a hindrance to the user. It’s a recurring obstacle: something that interferes with the seamless performance of the user’s task. And it’s human nature, wired into our reasoning skills, to remove recurring obstacles. So, if the consequences of bypassing security aren’t obvious, then people will naturally do it.

This is the problem with Microsoft’s User Account Control (UAC). Introduced in Vista, the idea is to improve security by limiting the privileges applications have when they’re running. But the security prompts pop up too frequently, and there’s rarely any ill-effect from ignoring them. So people do ignore them.

This doesn’t mean user education is worthless. On the contrary, user education is an important part of any corporate security program. And at home, the more users understand security threats and hacker tactics, the more secure their systems are likely to be. But we should also recognise the limitations of education.

The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.

For example, we all know that backups are a good thing. But if you forget to do a backup this week, nothing terrible happens. In fact, nothing terrible happens for years on end when you forget. So, despite what you know, you start believing that backups aren’t really that important. Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.
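As a rough illustration of the principle (not how Time Machine is actually implemented), here is a minimal sketch of a hands-off versioned backup in Python: each run creates a timestamped snapshot, hard-linking files that haven’t changed so they cost no extra space. The source and destination paths are placeholder assumptions, and the destination drive is assumed to be mounted.

    import os, shutil, time

    SOURCE = os.path.expanduser("~/Documents")   # what to protect (assumed)
    DEST = "/Volumes/Backup"                     # the external drive (assumed)

    def snapshot(source, dest):
        stamp = time.strftime("%Y-%m-%d-%H%M%S")
        new = os.path.join(dest, stamp)
        snapshots = sorted(d for d in os.listdir(dest) if d[:1].isdigit())
        last = os.path.join(dest, snapshots[-1]) if snapshots else None
        for root, _dirs, files in os.walk(source):
            rel = os.path.relpath(root, source)
            os.makedirs(os.path.join(new, rel), exist_ok=True)
            for name in files:
                src = os.path.join(root, name)
                dst = os.path.join(new, rel, name)
                old = os.path.join(last, rel, name) if last else None
                # Unchanged since the last snapshot? Hard-link instead of copying.
                if old and os.path.exists(old) and \
                        os.path.getmtime(old) == os.path.getmtime(src):
                    os.link(old, dst)
                else:
                    shutil.copy2(src, dst)   # copy2 preserves timestamps

    if __name__ == "__main__":
        snapshot(SOURCE, DEST)   # run hourly from a scheduler; no user action needed

Run from a scheduler, this gives the property the essay is praising: once it is set up, forgetting about it is safe.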

For its part, Microsoft has made great strides in securing its operating system, providing default security settings in Windows XP and even more in Windows Vista to ensure that, when a naive user plugs a computer in, it’s not defenceless.

Unfortunately, blaming the user can be good business. Mobile phone companies save money if they can bill their customers when a calling card number is stolen and used fraudulently. British banks save money by blaming users when they are victims of chip-and-pin fraud. This is continuing, with some banks going so far as to accuse the victim of perpetrating the fraud, despite evidence of large-scale fraud by organised crime syndicates.

The legal system needs to fix the business problems, but system designers need to work on the technical problems. They must accept that security systems that require the user to do the right thing are doomed to fail. And then they must design resilient security nevertheless.

This essay originally appeared in The Guardian.

Posted on March 12, 2009 at 12:39 PM

Comments

eyesoars March 12, 2009 1:38 PM

We have yet another local (MN) instance of the probably-guilty blaming the bystanders: Norm Coleman’s campaign is blaming its inexcusable security sloppiness on “hackers”, and has called in the FBI/Secret Service. This after they left credit card data (including (!) the “security number” data from the cards — a distinct DON’T!) available and readable in a public webserver directory.

No doubt VISA will be going after the campaign for breach of contract, but since Coleman’s almost certainly leaving elected political office, it may not amount to much.

Clive Robinson March 12, 2009 1:54 PM

Part of the problem is the ease of “usability” of modern operating systems, where gratification is just a click of the mouse away on a drop down menu.

The problem is that sometimes a user is just a click away from disaster as well, but doesn’t know the difference between one drop-down entry and the next…

The underlying issue is that users are expecting a car or telephone they can just use, but get a system which has an inconsistent interface and does not recover from user errors gracefully.

The reason for this is the age at which we meet a computer for the first time. If we are over twenty five then on average we are set in our ways and do not take kindly to learning something new.

When we are young learning how to use a new tool is usually fun and we pick it up quickly because of that.

The classic example from the previous generation is the video recorder.

This suggests that many “user” issues will disappear with the current under-10s, who are learning to use a keyboard and mouse faster than a pen or pencil.

Adam G March 12, 2009 2:28 PM

Great statements Bruce! 110% correct.

Can’t say I agree with the twist on Microsoft UAC.

The solution should have more to do with people using common sense. If a company or product is leaving you open to being taken advantage of, DON’T USE IT.

Pat Cahalan March 12, 2009 2:30 PM

They must accept that security systems that require the
user to do the right thing are doomed to fail. And then they
must design resilient security nevertheless.

That’s the money quote, right there.

But as you point out earlier, Bruce, there is an economic disincentive for designers to do this; designing secure systems is hard, and takes time and development cycles that can instead be used to make pretty widgets that sell to people who don’t care about security, or just to get a system out the door faster.

Systems designers who insist on security over visible functionality are going to be let go and pretty widget makers are going to be hired unless the business is forced to pay attention to security. But the customers don’t force that behavior directly.

I’m reminded of an interesting talk I heard given by an Infosec manager of a regional bank. In that talk, he argued that the bank cannot be held responsible for maintaining the security of end-user systems, as it is clearly an impossible task. And yet, when queried by an audience member (guilty) about the practical outcome of this stance (users are going to lose their money to fraud), he pointed out that his bank couldn’t simply stop offering online banking, or customers would pull out of his bank and go elsewhere. The bank would effectively force itself to fold, as it could no longer compete in the financial services playing field.

billswift March 12, 2009 2:38 PM

Maybe Vista is more secure, but after using an eMachines box with Windows 98 for years with no problems, this Compaq with Vista that I have used for the last year and a half SUCKS. I don’t know what I’m doing next, probably diving in and learning to use more Linux, but I am not buying another Microsoft or Hewlett-Packard product, ever again.

HJohn March 12, 2009 2:51 PM

@Clive: “Part of the problem is the ease of “usability” of modern operating systems, where gratification is just a click of the mouse away on a drop down menu.”


The usability vs security dynamic for Microsoft (or any vendor) can’t be easy. Since the users are customers, gratification is necessary. Likewise, since users are customers, satisfaction is necessary (which means limiting vulnerabilities should be a priority too).

As I described in writings I have done on eCommerce, if security is too cumbersome then customers are lost, resulting in failure; and if convenience makes it too vulnerable then loss is incurred (reputation, lawsuits, customers), also resulting in failure.

I guess the holy grail of IT is a product whose security is matched only by its usability. Maybe some day…

billswift March 12, 2009 2:51 PM

I patch regularly, and every patch breaks something. One regular problem, very minor (which is probably why M$ doesn’t appear to pay attention to it), is the disappearing recycle bin icon: first it disappears from my desktop when I empty it; after the next patch it stops disappearing; after the patch after that it disappears again; and around and around.

More irritatingly, sometimes IE will bring up an older cached version of a webpage rather than the new one, sometimes even a day old one when I’ve just clicked the Back button. This has happened a couple of times for a few weeks at a time over the last year or so.

Maybe as you said their security has improved, but their product in general hasn’t.

Sean March 12, 2009 3:36 PM

The “blame the victim” routine has come to its logical end and makes me curse mightily every time I come across it. I administer a computer network. How can I guard my users when the best advice is “don’t go to an untrusted site”? Having beaten that drum for the past couple of months, the very next thing I do is go to a well-known web development website (trusted in the past), only to have Kaspersky jump up with its “dying pig squeal” because the trusted site was vulnerable to a SQL injection attack and is trying to suck in some nefarious garbage through a hidden iframe.

HJohn March 12, 2009 3:55 PM

@Sean

You make a good point. The “forces of darkness”, so to speak, are adept at tricking even smart and careful users from time to time. Especially when they take something that is trusted and turn it into something that isn’t.

Alan Kaminsky March 12, 2009 3:55 PM

“Apple got the solution right with its backup utility Time Machine. Install it, plug in an external hard drive, and you are automatically backed up against hardware failure and human error. It’s easier to use it than not.”

No, Bruce, Apple didn’t get it quite right. The user still has to (a) buy a hard drive, (b) plug it in, and (c) install the backup utility. That’s time, and that’s money — not to mention the money the user will have to spend for electricity to keep the machine running all the time so the utility can do the backups.

If protection against disk crashes is essential, then computer manufacturers should be putting a RAID array in every box. This would protect the data against disk crashes with no user action whatsoever needed.

Why doesn’t every computer come with a RAID array? Because the users wouldn’t pay the extra price. So if the user doesn’t do backups and the disk crashes, she has no one but herself to blame. And we’re back to blaming the victim.

HJohn March 12, 2009 4:03 PM

@Alan Kaminsky

Good points. Goes back to the economics of security–the system that backs up automatically will lose out to the system that doesn’t (because it is cheaper), until users get sick of losing their data (if that even happens). And if there is never a crash, the cheaper computer may even seem faster because backups cause overhead.

Maybe some day users will see a computer without backup as the virtual equivalent of a car without an air bag. It may be cheaper, may work just as well, but the airbag sure makes life better after the crash.

Steve King March 12, 2009 4:24 PM

@Alan Kaminsky:

You assume that Time Machine exists only to recover from a disk failure. Not at all so. It’s also there to recover from a common user failure; to wit, deleting or overwriting something important. A RAID array doesn’t protect you from yourself. Time Machine is intended to.

Brandioch Conner March 12, 2009 5:05 PM

Just some points:

#1. Forget about preventing the attacks. Spend some time on a RECOVERY/MITIGATION plan.

#2. It is impossible to identify all the viruses / trojans / worms / whatever out there. Impossible. Do NOT attempt this approach. You will fail.

#3. It SHOULD be easy for any software vendor to identify every single version of every single file they’ve ever released.

#4. So all you need is a way to cleanly boot a system and quarantine every file that cannot be identified as known good (a rough sketch of this follows the list). At the very least you’d end up with a cleanly booted OS.

#5. The OS vendors (I’m looking at you, Microsoft) need to enforce a distinction between OS and apps. Installing anything to the OS partition/folders/whatever should require a LOT more effort than installing a new game.

Those points are just to keep your basic system running. But your basic system is not as important to you as the DATA that it contains.

#6. Application vendors. Write your software in such a way that there is a BUTTON on the screen that will BACKUP the installed software and default data directories COMPLETELY. In such a way that I can simply re-install the app and data WITHOUT hunting for the unlocking key. Including registry entries, files, config settings … EVERYTHING.
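A rough sketch of what point #4 could look like, in Python. The manifest of known-good hashes is hypothetical (real vendors would have to publish such lists); the point is that everything not on the list gets quarantined, not deleted.

    import hashlib, json, os, shutil

    MANIFEST = "known_good_sha256.json"   # assumed: a JSON list of vendor hashes
    SCAN_ROOT = "/mnt/suspect_system"     # system volume, mounted after a clean boot
    QUARANTINE = "/mnt/quarantine"

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    with open(MANIFEST) as f:
        known_good = set(json.load(f))

    for root, _dirs, files in os.walk(SCAN_ROOT):
        for name in files:
            path = os.path.join(root, name)
            if sha256_of(path) not in known_good:
                dest = os.path.join(QUARANTINE, os.path.relpath(path, SCAN_ROOT))
                os.makedirs(os.path.dirname(dest), exist_ok=True)
                shutil.move(path, dest)   # quarantine, don't delete

The hard part is not the code; it is getting vendors to identify every version of every file they have ever released, as point #3 demands.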

NAte March 12, 2009 5:11 PM

This is why I like the ‘secure by default’ approach of some things like OpenBSD.

If you make it difficult to do dangerous things and make it easy to do safe things then people will do safe things.

If they have to do extra work to do safe things, then you’re setting them up for failure.

Phil March 12, 2009 5:11 PM

It’s not so much that UAC’s prompts pop up too frequently (they do) but that your average user (and, in some cases, your above average user) hasn’t got a clue what the correct answer should be when they do pop up.

And that is the fundamental flaw in the whole concept.

Do you really want to do this? Yes/No/I haven’t a f***ing clue. (If MS provided the third option, it would probably be the most popular response).

Mark March 12, 2009 5:16 PM

@Alan Kaminsky

Time Machine is included in OS 10.5, not a separate application.

If you are concerned about leaving the computer on, then turn it off – if it’s on and not doing anything, no backups are needed; if you’re working on the computer, the backups happen automatically.

Yes, I’m on a Mac, using OS 10.5.6.

Davi Ottenheimer March 12, 2009 5:17 PM

“People regularly don’t do things they are supposed to”

Perhaps you meant to say people regularly take more risk than we think they should because the impact to us/them is…

The “supposed to” assumes some kind of governance/requirement assigned by a group rather than an individual decision of free will. Last time I checked, dental visits, smoke detectors, and oil changes are not governed. So more appropriate examples would be things people are supposed to do, like stopping at stop signs, or investing money instead of running a massive Ponzi scheme.

RH March 12, 2009 5:29 PM

The 2nd article mentions something I’ve been sounding off about (and Bruce too). The better your security, the harder it will be to assert mistakes under it. I’m reminded of a Choose Your Own Adventure book I read over a decade ago: clones were commonplace, and marked with an impossible-to-remove wristband. In the book, the cloner makes a mistake and bands the original. Of course, once banded, no one believes you when you say you’re not the clone!

As an added bonus, the book also includes a successful search to find someone who can remove the impossible-to-remove wristband!

reinkefj March 12, 2009 6:44 PM

With all due respect to all the security “experts”, I still don’t understand why “operating systems” don’t boot from “secure media”. A bootable CD-ROM from Microsoft would prevent ineradicable malware from taking over a user’s computer forever by hiding in the “registry”. An un-rewriteable USB fob could serve the same function.

Seems so simple to me. Cars have a unique key that starts the ignition.

All too often, the security experts and IT gurus overlook very simple ways to “secure” their environment. Like ROBOFORM, for passwords. Like regularly using a non-administrator ID for regular work. For using IE as opposed to FFox. Argh!

Lawrence D'Oliveiro March 12, 2009 7:19 PM

“A bootable CD-ROM from Microsoft…”

would make it too easy to reuse on another machine, thereby defeating Windows Product Activation.

Guess what Microsoft cares more about?

Lesser Whark March 12, 2009 9:09 PM

@Steve King,
You’re right, and it provides valuable confirmation that your backup scheme is working. There are innumerable stories of backup tapes being meticulously created, stored, and labelled… only for people to find, after a disaster, that the backup script was faulty and the tapes are blank.

Every few months, I wonder what some file contained last year. With Time Machine, it’s easy to check – so, as part of my usual work cycle, I regularly verify that my backup strategy is working.

The downside is that I burn fewer backup DVDs than I used to. If my house burns down I’ll lose both my computer and my backup hard drive, and will have to fall back on the disk I stored off-site three months back…

Steve March 12, 2009 9:47 PM

Am I the only one who, on seeing the first demonstrations of Microsoft UAC, thought “This seems to have been designed to nag users into disabling it”?

Gweihir March 12, 2009 11:06 PM

Every machine I use (except the laptop, just no space, but I use it only if I have no other choice) has RAID1. Every document/code that has value goes into my or my employer’s SVN repository. I work on Linux. I use Windows only for gaming, and there I stick to XP.

Compare that to the average user’s setup, and the alternative to blaming the user is telling the user the truth, namely that computers are still machines for experts and that everybody else cannot operate them reliably or safely. Maybe in a decade or two, some more decline of the main “cheap trash” vendors (Microsoft and those inspired by it) will get us there. But I expect it will take a lot longer.

I now have 25 years’ experience with computing, a master’s degree in CS, and a practical doctorate in CS, and I still get bitten and run into unexpected problems in everyday things I try to do with computers. Computer technology is not mature yet, much less usable by normal people. (Well, Unix/Linux/BSD is much more mature and much less easy to use. Windows is much easier to use, but so immature it is a bad joke. OS X is nice, but also not there yet. And it is too expensive and comes with vendor lock-in.)

If you look at how long other technologies took from first demonstration to successful general adoption, maybe that is not a surprise after all. A 50-100 year timeframe seems to be the norm, rather than the exception. I think, given the complexity, a 100 year estimate is the more realistic one for computing.

Anton March 13, 2009 1:12 AM

I’m amazed how many banks and government departments that have huge purchasing power buy into the “cheap and sexy” culture at the expense of security.

Jesse McNelis March 13, 2009 1:49 AM

I generally blame the developers. I’m a computer geek, running Gentoo, and most of the time I don’t even know what my computer is doing. I find it very strange that everyday people are expected to deal with the complex nature of Microsoft Windows.

Expecting users to regularly apply patches because the developer made a program so complicated that they are finding huge bugs in it even years later is just silly.
On Windows it’s basically impossible to apply patches, because the system relies on individual programs to have a facility to alert the user to patches, and most don’t have this feature.

  • Jessta

Nostromo March 13, 2009 3:06 AM

@Clive Robinson:
“If we are over twenty five then on average we are set in our ways and do not take kindly to learning something new.”

Ageist rubbish.

Readiness to learn new things is correlated with level of education, and may possibly be correlated with intelligence, but in my experience (and I would guess I have at least twice as many years of that as Clive Robinson) it is not strongly correlated with age.

Rob Schneider March 13, 2009 3:07 AM

Re the Vista UAC prompt … I’ve been using Vista for about 18 months on a couple of machines. I don’t get why the world complains about it.

It only “prompts” me when I try to do something that clearly needs admin rights, e.g. install something, change a system setting. And I don’t do that but a few times (or less) a month. I’d appreciate it prompting me when the machine wants to do such a thing “on its own” or due to malware – doesn’t happen, though.

What’s the big deal? (I suspect related to Microsoft bashing.)

Or maybe my machines are insecure because UAC is not “in charge” as is on everyone else’s machines.

francois March 13, 2009 4:32 AM

@ Rob Schneider
As a user of both Mac OS X (primary) and Vista (secondary), I agree with you that for an end-user who doesn’t fiddle with his machine much, Vista’s UAC is not a (major) problem, and it would be even less of one if it weren’t so slow sometimes…
As a user, I don’t mind UAC.
As an administrator I hate it. It is an unnecessary annoyance; it is irritating to have your computer “shout” at you at virtually every click.
I wish it had something like sudo, where you have a period of time during which it doesn’t ask you again while doing administrative work, or a “fully admin” mode that you can set on or off but that always defaults to off. Or, as in Mac OS X, where in most (sensitive) control panel items you can set a little padlock to choose whether or not it’s going to prompt you for admin rights the next time you access it.
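For readers unfamiliar with it, the sudo behaviour francois is asking for is just a cached authorisation with a timeout. A toy sketch in Python (the five-minute window mirrors sudo’s traditional default; the prompt function is a stand-in, not a real API):

    import time

    GRACE_SECONDS = 5 * 60   # sudo's traditional default timestamp timeout
    _last_auth = None

    def authorise(prompt_user):
        """Prompt only if the last approval has expired."""
        global _last_auth
        now = time.monotonic()
        if _last_auth is not None and now - _last_auth < GRACE_SECONDS:
            return True                 # still inside the grace period
        if prompt_user():               # e.g. show a password dialog (assumed)
            _last_auth = now
            return True
        return False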

Clive Robinson March 13, 2009 5:25 AM

@ Nostromo,

“and I would guess I have at least twice as many years of that as Clive Robinson”

Perhaps you would care to tell me your memories of the first world war?

For the record, I’m actually older than Bruce and have suffered for a number of years from ageism in the workplace and when moving jobs.

“Readiness to learn new things is correlated with level of education, and may possibly be correlated with intelligence, but in my experience it is not strongly correlated with age.”

If you care to go back to my post and read it again you will see I covered part of this point.

Also, your comment says much about whom you choose to associate with and the breadth of your experience.

When I entered the workplace, jobs for life were very much the norm, and further education was in most cases limited to “sitting next to Gerty”, or “on the job training” as it would be called today.

The concept of lifelong learning was held by very few, and they were mainly in the academic and technical communities.

I still come across senior managers older than 45 who will just not learn new tools to help them. The reason is not ability but desire: they know how to get most of what they do done by their established methods. The possible speed increase from learning new tools is not justified by the time needed to get up the learning curve.

This is not to say that they cannot or will not learn to use new tools, but the tool really has to offer them something considerably more than saving a bit of time.

Well-learned skills free up thinking time. The obvious example is driving a car. Most drivers with more than three or four years of daily driving really do not drive with their conscious mind but with their hindbrain; it has become like walking.

The trend of the “Silver Surfer” is due to active and able minds not having other, more satisfying things to do, and consequently has little or nothing to do with the level of education of the person.

But as I said, the issue is “taking kindly”, not the ability.

So please do not accuse me of being ageist, especially as I am subject to it on frequent occasions.

alistair March 13, 2009 5:44 AM

@nostromo

“Ageist rubbish.

Readiness to learn new things is correlated with level of education, and may possibly be correlated with intelligence, but in my experience …. it is not strongly correlated with age.”

I’ve no idea of anyone’s age, but my experience tallies with CR’s – but with a qualification.
Older people went through an education system that told them facts (and consequently they often learnt a lot). They expect to be told how to operate new equipment. More recently, education has involved more in the way of exploring, and so younger people are happier finding out how to work something by themselves.
The very young are the best examples of this, because they are still programmed to explore.
Those of us of a technical disposition seem to have retained this original curiosity and will explore the options of any new function.
My experience is that, no matter what the level of education, it is this lack of inherent curiosity in most people that limits their adaptation to new technology.

Winter March 13, 2009 6:17 AM

I think the Bitfrost system underlying the original design of the OLPC’s XO computer deployment is going in the right direction:

http://wiki.laptop.org/go/Bitfrost

Open design
The laptop’s security must not depend upon a secret design implemented in hardware or software.

No lockdown
Though in their default settings, the laptop’s security systems may impose various prohibitions on the user’s actions, there must exist a way for these security systems to be disabled. When that is the case, the machine will grant the user complete control.

No reading required
Security cannot depend upon the user’s ability to read a message from the computer and act in an informed and sensible manner. While disabling a particular security mechanism may require reading, a machine must be secure out of the factory if given to a user who cannot yet read.

Unobtrusive security
Whenever possible, the security on the machines must be behind the scenes, making its presence known only through subtle visual or audio cues, and never getting in the user’s way. Whenever in conflict with slight user convenience, strong unobtrusive security is to take precedence, though utmost care must be taken to ensure such allowances do not seriously or conspicuously reduce the usability of the machines. As an example, if a program is found attempting to violate a security setting, the user will not be prompted to permit the action; the action will simply be denied. If the user wishes to grant permission for such an action, she can do so through the graphical security center interface.


http://en.wikipedia.org/wiki/Bitfrost

Every program, when first installed, requests certain bundles of rights, for instance “accessing the camera”, or “accessing the internet”. The system keeps track of these rights, and the program is later executed in an environment which makes only the requested resources available. The implementation is not specified by Bitfrost, but dynamic creation of security contexts is required. The first implementation was based on vserver, the second and current implementation is based on user IDs and group IDs (/etc/password is edited when an activity is started), and a future implementation might involve SE Linux or some other technology.

By default, the system denies certain combinations of rights; for instance, a program would not be granted both the right to access the camera and to access the internet. Anybody can write and distribute programs that request allowable right combinations. Programs that require normally unapproved right combinations need a cryptographic signature by some authority. The laptop’s user can use the built-in security panel to grant additional rights to any application.
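To make the quoted rule concrete, here is a toy sketch of the denied-combinations idea in Python. This is not actual Bitfrost code; the right names and the authority check are invented for illustration.

    DENIED_COMBINATIONS = [{"camera", "internet"}, {"microphone", "internet"}]

    def grant_rights(requested, signed_by_authority=False):
        """Grant a program's requested rights bundle, or refuse it outright."""
        requested = set(requested)
        for combo in DENIED_COMBINATIONS:
            if combo <= requested and not signed_by_authority:
                raise PermissionError("denied combination: %s" % sorted(combo))
        return requested

    grant_rights({"camera", "filesystem"})        # fine: no denied pairing
    grant_rights({"camera", "internet"}, True)    # fine: signed by an authority
    # grant_rights({"camera", "internet"})        # would raise PermissionError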

Winter

Olaf March 13, 2009 6:20 AM

It’s the usual move by companies these days. Banks are expert at it.

Introduce some half baked measure, claim a huge increase in security and in the small print shift liability from the company to the customer.

Chip and Pin, Verified by Visa etc etc.

Marc March 13, 2009 6:29 AM

While it is usually not really helpful to blame the victims, I strongly believe that stupidity must carry a price tag.

And all “victims” of phishing and advance fee fraud have acted stupidly. A government agency that puts confidential or secret documents on public servers acts stupidly – even if the documents are encrypted, no matter if the key is a good one or “progress”.

Greg March 13, 2009 7:54 AM

@Clive Robinson

I wouldn’t hold your breath for the under-10s. For starters, they don’t learn a keyboard but a number pad (cell phone) and a “controller”. If they can’t play the game with the start button, they get distracted rather easily.

I teach a class at uni, an introduction to computers for Bio students. (We have nicknamed the course “stupid computers for stupid biologists”.) This “younger” generation really has no idea what they are doing and no idea at all about computers. They change the format of a Word document by changing the name from test.doc to test.pdf.

So much for the grew-up-with-technology generation. My Gran (89) does far better.

David March 13, 2009 8:38 AM

@Winter:

A close-grained security model is another one of those security myths. They’re great if you can make sure everything’s set up right. The moment somebody slips up and gets the permissions wrong, it all falls apart.

If it stops a user from doing what he or she wants, it fails in the marketplace. If the user can reset the permissions, the user will reset them to overly permissive (after all, the normal user doesn’t understand what most permissions mean, or what the program is actually going to want to do).

If no program can access both the camera and the internet, that means the bad guys have to write slightly more sophisticated malware, that’s all.

Finally, it’s really hard to set permissions to stop bad things from happening. An application that needs to access the file system will have permission to, for good or for evil. It isn’t possible to set permissions to “only do what the user wants, not what the bad guy wants”.

Anonymous March 13, 2009 8:39 AM

“The solution is to better design security systems that assume uneducated users: to prevent them from changing security settings that would leave them exposed to undue risk, or—even better—to take security out of their hands entirely.”

That’s the worst advice I’ve ever seen you give. Really mind-bogglingly bad. Good advice is the “opt-out” model, where “undue risk” requires activity on the part of the user — they need to show that they know what they’re doing, that they’ve understood the risk, and are actively accepting that risk because they happen to know better than the designer.

But giving the user no option? You’d force the educated user to accept extra risk in order to create a necessary work-around — the designer is destroying security by making it impossible to adapt the security.

Really bad — a thoughtless statement. How the hell is the “designer” supposed to know what is “undue risk”?

Clive Robinson March 13, 2009 8:54 AM

@ Greg,

“I wouldn’t hold your breath for the under 10”

Not sure where your Uni is.

I used to manage a central ICT Help Desk in a UK Uni just over a decade ago, one that used to have visiting students from various parts of the world.

The different levels of IT literacy and self-motivation on a national basis were very noticeable.

The difference was such that we used to dread the arrival of one particular (non-EU) nation’s students. They virtually all required “spoon-feeding” on how to log on to their accounts, how to print, etc., even though it was all in the handbook they were given.

It got so bad that we eventually held a compulsory IT introduction class for them and literally talked them through the basic mechanics…

In more recent times I have helped out with IT at one or two places where there are quite young (4y+) students, and I have been impressed with how quickly they learn to use (Open Source) education programs and OSs, how to log into their own accounts, and how to browse the Internet (and yes, they can remember their passwords from week to week 8)

I guess mileage varies with access to resources and “fun factor”. I’ve noticed Open Source education progs get more use, and the “kids” appear to enjoy them more (even though, to an adult like myself, they are less user-friendly).

I would urge people with young kids to give Edubuntu ( http://www.edubuntu.org ) a spin.

Winter March 13, 2009 9:20 AM

@David:
“Finally, it’s really hard to set permissions to stop bad things from happening. An application that needs to access the file system will have permission to, for good or for evil. It isn’t possible to set permissions to “only do what the user wants, not what the bad guy wants”.”

There are several points here:
1. This is not based on Windows, but on a custom-built Linux kernel. Policies can be enforced until the user switches kernels.

2. Every application runs inside a (KVM) virtual machine (see the sketch after these points). Each virtual machine image (i.e., application) gets access to a custom chroot file system and devices. Malware has to breach the KVM to get out of the chroot jail. Only files and devices the application is allowed to access are inside the KVM image.

3. A user can change the security settings, but that means she has to understand how to do that. For some highly dangerous actions (like reflashing the boot image), special keys must be requested.
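As a sketch of the jailing idea in point 2 (the real implementations use vserver or per-activity user IDs, as quoted earlier; this shows only the bare chroot mechanism), in Python. It must run as root, and the jail path and the “nobody” IDs are assumptions:

    import os

    def run_jailed(jail_dir, argv):
        """Run a program that can only see the files placed inside jail_dir."""
        pid = os.fork()
        if pid == 0:                  # child: confine, drop privileges, exec
            os.chroot(jail_dir)       # the filesystem root becomes the jail
            os.chdir("/")
            os.setgid(65534)          # conventional "nogroup" (assumed)
            os.setuid(65534)          # conventional "nobody" (assumed)
            os.execvp(argv[0], argv)  # never returns on success
        os.waitpid(pid, 0)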

Ivan Krstić really did try to build security from the ground up to be usable by 5-year-olds. The leading idea was that users must be allowed to break their machines, but only after they have mastered a certain level of competence.

Winter

Chaz March 13, 2009 10:48 AM

@billswift:

“Maybe as you said their security has improved, but their product in general hasn’t.”

But you are still using it, as are many others. Why? Maybe if we could figure out the answer we’d learn something.

BadRap March 13, 2009 12:54 PM

Regarding Windows Vista UAC, I agree with the comments about it not being a hindrance. I find that those who complain about Vista UAC have either never actually used Vista (they’re just passing on hearsay) or are performing some complex admin task (where one would expect to get UAC-type prompts as verification/confirmation when performing an admin operation).

I use Vista every day on several computers, and for routine, everyday tasks I never encounter Vista UAC prompts.

The biggest problems I have found with Vista UAC are related to third party applications written by stupid/lazy programmers who are clueless about how to write good applications. Typically these are applications that unnecessarily expect admin level privileges by trying to write to protected areas or some such thing.

Unfortunately, Microsoft has to try to accommodate these lazy developers, and Microsoft usually ends up taking the blame when these poorly written applications do dumb things.

Raymond Chen has written several very enlightening articles on the lengths Microsoft has had to go to over the years to maintain things like unpublished APIs in the Windows operating system, because lazy programmers used them and eliminating them would break their 3rd-party applications.

David March 13, 2009 3:40 PM

@Winter

  1. I wasn’t assuming any particular OS with what I said (although thinking vaguely of SELinux, which I have no experience with). I don’t see how the OS is relevant to what I was saying.
  2. How do you define, in advance, what needs to be in the chroot jail for each individual application? This is the general problem I have, and it applies whether you use chroot, access lists, virtual machines, or magic transistor fairy dust. My text editor has to be able to read and modify any text file I have reason to read or modify, including my source code, my configuration files, and my fiction. (Doesn’t everybody write short stories using vim?) Whatever I’m using to upload files has to be able to read those files and send them out.

If my internet-connected software is limited to certain directories, I have to move the files in and out, and that’s going to inconvenience people. If I can’t download a program, edit it, compile it, test it, and upload source and executable, that’s unacceptable. If I can’t delete old files easily, that’s unacceptable. If I can’t programmatically read a batch of files, change them, and save them in a different place, that’s unacceptable.

Is there some sort of tagging of commands from the keyboard and mouse, so they can be considered reliable, and which can’t easily be forged by software? If so, does this stop me from doing file manipulation with Perl scripts?

The fundamental problem here is that I have to be allowed to do things that essentially trash my system. I have to be able to upload any financial data or diary or other sensitive stuff I’ve got. I have to be able to delete files. I need to be able to do those things with a script or program. Moreover, I need to be able to do those things with what I write myself, or download, and if I’m going to have to add a crypto signature it’s going to have to be quick, easy, and preferably automatable.

Since I have to be able to do that, then all a bad guy has to do is get me to run the appropriate program. Of course, no intelligently designed browser or email client will make it easy to do this by accident, but that won’t stop the bad guys.

If we’re talking about computers for learning purposes, or computers intended for limited purposes, this could work reasonably well, although it won’t stop all attacks. For general-purpose computers, I don’t see it working.

Jonadab the Unsightly One March 14, 2009 6:28 AM

focusing on [users] is an unhelpful way to think.

Actually I think a lot of problems, both in terms of security and just in general, result from the fact that developers don’t focus on users in the way they should, and specifically, that developers don’t try to understand ordinary users and how they think.

Certainly, relying on users to behave in a prescribed fashion is an extremely unhelpful way to think. But I don’t think that means developers shouldn’t think about users at all. Quite the contrary.

UAC:

the idea is to improve security by limiting
the privileges applications have when
they’re running.

Although, they still have unlimited access to all of the user’s personal data, which is not really desirable in the general case. This is the traditional security model on Unix systems also, but ultimately it would be better if applications were more limited. A web browser, for instance, shouldn’t have access to the user’s email address book, and a mailreader shouldn’t be able to change files it didn’t create. Neither of them should be able to create executable files or launch processes with privileges they don’t have themselves. But that would all require a larger redesign.

Back to UAC:

But the security prompts pop up too frequently,

That’s because Windows application developers were trained for years developing for the Windows 95/98/Me product line, where user account security didn’t exist. Windows XP introduced limited user accounts, and good developers figured out what changes they needed to make to their practices, so when Vista came along, they had no problem. But you still had lots of devs out there (Symantec, for example) who refused to support that and just expected everyone to run as admin all the time under XP. Now along comes Vista, and those badly-behaved apps bug the user with privilege-escalation prompts unnecessarily, when a well-behaved app would not. Hopefully this puts some pressure on the developers to fix their habits, not that I’m holding my breath or anything.

There’s no reason why any normal process running out of a user account would need to trigger a UAC prompt on anything resembling a regular basis. The Linux world has sudo (and gksudo, and so on), which is essentially the same as UAC in principle, but the only time you ever get a prompt is when you go poking around in the administrative section of the menu (e.g., to set up a new printer or install some software). Normal applications don’t trigger it, because they don’t DO things that need privileges.

This implies that privileged tasks that DO need to be done on a regular basis, such as installing new virus definitions, should be done in the background by a process that is not particularly connected to the currently logged-in user. In the Unix world, this is standard practice, but a lot of Windows developers still need to learn this technique.

Even Mozilla Firefox is guilty here, with its update mechanism. Auto-update policy should be decided at install time by the admin who does the install, so then the user SHOULD NOT need to be prompted. If the admin said yes, do updates automatically, then a background administrative process, not tied to the logged-in user’s account, should handle that. (If the admin says no auto updates, then the admin becomes responsible for doing them manually, which is fine if you manage one or two computers, not so much fun if you manage thirty of the blessed things.)

and there’s rarely any ill-effect from
ignoring them. So people do ignore them.

Yes, but what does it mean to “ignore” them, i.e., what’s the easy-lazy action?

You’ve probably seen systems where if the user clicks the “Ok” button, the privs are granted. This is bad and wrong and dangerous, because users have been trained by decades of boy-crying-wolf prompts to click Ok without reading. Do you really want to exit this wonderful media-player program, which you obviously aren’t still using to play any media? Do you really want to send search terms to Google unencrypted? Do you really want to visit this site whose SSL certificate expires in 2014, when your computer’s clock thinks it’s 2099 because your CMOS battery is out and we couldn’t be bothered to make ntp the default for internet-connected systems? Et cetera, ad infinitum, ad nauseam, ad bedlam. Of course the users have learned to click Ok without reading. I catch myself doing it sometimes, when I think I know what the dialog box says, just by the context in which it is occurring.

However, if the system is set up correctly (from a security perspective) when it’s installed, the user has to type in an admin password to grant the requested privileges. I’ll say that again, because it bears repeating: if the user accounts are set up in a secure fashion, the user has to type in an admin password to grant a process admin privileges.

That changes everything, because users HATE typing in passwords. Now smacking cancel is much easier, so that’s what the user will naturally learn to do if the prompts are too frequent and/or occur when the user isn’t deliberately trying to do an administrative task. Of course, this relies on the person doing the install, in most cases the OEM, to do the right thing. I’m not sure how to get OEMs to set up the user accounts with security in mind. (Also, in an actual multi-user environment, you don’t have to give out the admin password to all users, so then users who don’t have it CAN’T grant the apps admin permissions.)

UAC is a big step in the right direction. It’s better than the XP approach of having limited accounts that most users don’t actually use, and the ones who do have to log out and back in before and after every administrative action. But yeah, there are still problems.

This doesn’t mean user education is worthless.

User education is vital, but it had best not be your whole approach to security.

For example, we all know that backups are
a good thing. But if you forget to do a
backup this week, nothing terrible happens.

The problem with an external hard drive is that it’s probably still plugged into the computer when lightning strikes or the building burns or whatever. As cheap as storage space and bandwidth have gotten, we really should work toward making automatic daily over-the-network backups the norm. Think in terms of rsync on a cron job, with encryption during transport at least, possibly for the remote stored content as well. Hands-off, and then your data is not just on a second device, but physically elsewhere. Then we just need to get it to notify the user (email, txt msg to a cellphone, whatever) if it fails three days in a row.
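A minimal sketch of that cron-driven idea in Python, wrapping rsync over ssh for transport encryption and nagging only after three consecutive failures. The remote host, paths, and notification stub are placeholder assumptions.

    import pathlib, subprocess, sys

    REMOTE = "backup@offsite.example.com:backups/"   # hypothetical off-site target
    SOURCE = pathlib.Path.home() / "Documents"
    COUNTER = pathlib.Path.home() / ".backup_failures"

    # Push the backup; ssh provides encryption during transport.
    result = subprocess.run(
        ["rsync", "-az", "--delete", "-e", "ssh", str(SOURCE), REMOTE],
        capture_output=True)

    failures = int(COUNTER.read_text()) if COUNTER.exists() else 0
    failures = 0 if result.returncode == 0 else failures + 1
    COUNTER.write_text(str(failures))

    if failures >= 3:
        # Hands-off until it matters: only now bother the user (stub notification).
        print("backup has failed three days in a row", file=sys.stderr)

Scheduled daily from cron, this gives exactly the hands-off, physically-elsewhere property described above.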

But your conclusion is dead-on. Security needs to be designed to work regardless of whether the user does the right thing or not.

greg March 14, 2009 8:51 AM

@Clive Robinson

I’m at Vienna University. (My IP number will give that much away; it’s usually a static IP too.)

Before I left NZ we were helping get some computers set up for education in Papua New Guinea. Edubuntu + a DVD Wikipedia set is what we used. It was a lot cheaper than a library, even with e-book costs.

I only teach my classes with Linux now. If the students don’t like it, perhaps I will get less teaching ;). My next course is in the UK, so I will see how we go.

Winter March 14, 2009 10:52 AM

@David:
“Since I have to be able to do that, then all a bad guy has to do is get me to run the appropriate program. ”

Bitfrost was developed for children. The idea here is that a child should not accidentally trash her system. If she wants to do so, or even take the risk, she should learn how to deactivate the protections. A little like child-safe bottle caps.

With respect to applications, this means that the default security model limits every application to accessing only files created by itself. That is, AbiWord can open text-processing files, the Gimp bit-mapped graphics, and an email program can open email and connect to the internet. Neither AbiWord nor the Gimp has any legitimate reason to access the internet or the address book. An email client has no business with the camera or text-processing files.

The complete system is more elaborate than this; you can read about it in the links.

My point is, there is much more that can be done to secure computers than the MS-DOS security model, where an MS Word macro can feed your camera pictures to the internet when the right macro arrives in the mail.

Winter

EvenBiggerPedant March 15, 2009 10:56 AM

@SpellingPedant

Perhaps you didn’t realiSe that the blog post, having been published in British periodical ‘The Guardian’, favoUrs spellings that are coloUred by divergences of cultural and linguistic history, since the colonies were established in the Western Hemisphere.

It is not my purpose to come the defence of Britain’s refusal to kowtow to American superiority; I write this only to expose it.

Now, I must dash off to change a tyre on my aeroplane.

Paeniteo March 16, 2009 3:33 AM

@David: “I have to be allowed to do things that essentially trash my system.”

Yes, you do.
Just not by default, but only after setting the “YES, I’m willingly shooting myself in the foot!” configuration switch in /etc/harmfuloptions.conf.

Pat Cahalan March 16, 2009 3:21 PM

Catching up, a couple of quick notes:

If protection against disk crashes is essential, then
computer manufacturers should be putting a RAID array
in every box. This would protect the data against disk crashes
with no user action whatsoever needed.

It doesn’t help you much against simultaneous disk failure, which isn’t unheard of since drive failure probability is strongly correlated to batch production processes. It doesn’t help you against non-simultaneous but dual disk failure, surprisingly common (one drive fails, nobody bothers to replace it “because they have better things to do” before the other drive likewise fails), failure of the RAID controller resulting in a degraded raid coupled with a drive failure, or any one of a long list of other failures, including “the box catches on fire” or “I couldn’t rescue the entire computer when the house caught on fire.”

Good advice is the “opt-out” model, where “undue risk”
requires activity on part of the user — they need to show that
they know what they’re doing, that they’ve understood
the risk, and are actively accepting that risk

You’ve missed the point. This is the model we more or less have now, and history has shown it doesn’t work. The users will routinely accept the risk because they don’t understand it fully.

But giving the user no option? You’d force the educated user
to accept extra risk in order to create a necessary
work-around — the designer is destroying security
by making it impossible to adapt the security.

Again, I don’t think you’ve parsed the article thoroughly. Perhaps you just haven’t read enough of Bruce’s stuff; this is certainly not what he’s advocating.
