Security vs. Usability

Good essay: “When Security Gets in the Way.”

The numerous incidents of defeating security measures prompt my cynical slogan: The more secure you make something, the less secure it becomes. Why? Because when security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security. Hence the prevalence of doors propped open by bricks and wastebaskets, of passwords pasted on the fronts of monitors or hidden under the keyboard or in the drawer, of home keys hidden under the mat or above the doorframe or under fake rocks that can be purchased for this purpose.

We are being sent a mixed message: on the one hand, we are continually forced to use arbitrary security procedures. On the other hand, even the professionals ignore many of them. How is the ordinary person to know which ones matter and which don’t? The confusion has unexpected negative side effects. I once discovered a computer system that was missing essential security patches. When I queried the computer’s user, I discovered that the continual warnings against clicking on links or agreeing to requests from pop-up windows had been too effective. This user was so frightened of unwittingly agreeing to install all those nasty things from “out there” that all requests were denied, even the ones for essential security patches. On reflection, this is sensible behavior: it is very difficult to distinguish the legitimate from the illegitimate. Even experts slip up, as the confessions occasionally reported in various computer digests attest.

Posted on August 5, 2009 at 6:10 AM

Comments

Clive Robinson August 5, 2009 6:29 AM

….

“The more secure you make something, the less secure it becomes.”

This is somewhat true, due to the human aspect (i.e., it is less usable from the user’s point of view, so they work around it).

The so-called “Usability -v- Security see-saw”.

However, what is less commonly known is that there is another: the

“Efficiency -v- Security see-saw”.

Put simply, the more efficient a system is at serving out its resources, the less secure it is.

Basically “efficiency” opens up “side/covert” channels that can be exploited actively/passively to the detriment of the system owner/user.

And unlike crypto “certification failures”, these are really nasty: implementation-dependent, quite easily exploitable, and, depending on the bandwidth of the channel, very difficult to spot.

Worse, an ordinary user on the system can effectively turn these channels on and off simply by “doing their work”, through the load they place on the system.

These are currently very real, very practical attacks on secure systems built from COTS components, and little if anything is being done about them.
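To make the idea concrete, here is a toy sketch (in Python, with entirely invented timings and workloads) of such a load-modulated channel: a sender encodes bits by alternately loading and idling the CPU, and a co-resident receiver recovers them by timing a fixed probe. Real channels are far noisier and need proper coding and error correction, but the principle is the same.

```python
# Toy covert timing channel: the sender modulates system load, and the
# receiver decodes bits by timing a fixed workload. All timings and the
# probe workload are invented for illustration.
import time

BIT_PERIOD = 0.5          # seconds per transmitted bit (assumed)

def send(bits):
    """Encode each bit as high CPU load (1) or idle (0) for one period."""
    for b in bits:
        end = time.time() + BIT_PERIOD
        if b:
            while time.time() < end:
                pass                      # busy-loop: load the CPU
        else:
            time.sleep(BIT_PERIOD)        # idle: leave the CPU quiet

def probe():
    """Time a fixed amount of work; contention makes it run slower."""
    start = time.perf_counter()
    sum(i * i for i in range(100_000))
    return time.perf_counter() - start

def receive(nbits):
    """Sample once per bit period and threshold the timings into bits."""
    samples = []
    for _ in range(nbits):
        t0 = time.time()
        samples.append(probe())
        time.sleep(max(0.0, BIT_PERIOD - (time.time() - t0)))
    threshold = (min(samples) + max(samples)) / 2
    return [1 if s > threshold else 0 for s in samples]
```

Run send() in one process and receive() in another on the same machine to see the effect; no explicit communication path exists between the two, which is exactly what makes such channels hard to close.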

mimir August 5, 2009 6:36 AM

As we all know, no matter what security is put in place, the end user can make or break it. How to combat user security holes? Now that is a question worth debating (and it has been debated… over and over and…).

My personal opinion is that the general level of security will improve as some of the older generations move on to the great server farm in the sky and the younger generations (with a higher percentage of computer background and training) take their place. Perhaps a morbid view of security, and a slow implementation. But are the billions spent on user training really cost-effective?

just my 2 bits.

Clive Robinson August 5, 2009 7:08 AM

From the article,

“What the community needs is a set of standardized scripts, templates, and system tools that allows them to implement best practices in ways that are both effective and efficient, standardizing interactions across systems in order to simplify the life of users, but still tailoring the requirements to any special needs of the organization. These tools do not exist today.”

Oh dear, this is exactly what the industry does not need (except as a stopgap).

What is needed long-term is metrics, for both usability and security: not just simple, mind-numbing, useless ones (“my system has not been broken into this year”), but quantifiable ones based on sound principles.

That is, the “scientific method” of:

1, Observe,
2, Hypothesize,
3, Quantify,
4, Measure,
5, Test,
6, Reflect,
7, Accept / reject the hypothesis,
8, Publish for peer review.

The main point is “reflect”: effectively, this step makes the process recursive around the preceding ones.

Steps 3 and 4 are especially important: if there are no current standards of measurement, then you have no choice, you have to find “new measures” that not only work (“best practice”) but are explainable and repeatable by all (the last being the most important).

Importantly, you have to have an understanding of metrology and statistics.

The main difference between “Best Practice” (guesses) and Science (reliable measurement) is the correct usage of the “null hypothesis” to remove false assumptions.
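As a concrete (and entirely invented) example of using the null hypothesis: suppose a new awareness course appears to cut phishing-click rates from 30% to 18%. A permutation test asks how often a difference that large would appear if the course actually did nothing:

```python
# Permutation test: is the observed difference in phishing-click rates
# explainable by chance? (All counts below are invented.)
import random

def click_rate_diff(group_a, group_b):
    return sum(group_a) / len(group_a) - sum(group_b) / len(group_b)

def permutation_test(group_a, group_b, trials=10_000):
    """P(seeing a difference this large if the groups are identical)."""
    observed = click_rate_diff(group_a, group_b)
    pooled = group_a + group_b
    extreme = 0
    for _ in range(trials):
        random.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(click_rate_diff(a, b)) >= abs(observed):
            extreme += 1
    return extreme / trials

# 1 = clicked the phishing link, 0 = did not (hypothetical data)
before = [1] * 30 + [0] * 70      # 30% clicked before training
after  = [1] * 18 + [0] * 82      # 18% clicked after training
p = permutation_test(before, after)
print(f"p-value = {p:.3f}  (reject the null hypothesis only if small)")
```

Only if the p-value is small do you get to claim the measure worked; otherwise the “improvement” is a false assumption of exactly the kind best-practice guesswork never filters out.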

But you have to know what you are measuring, how, and what its limitations are, which is where the science of metrology comes in.

“Theoretically, metrology, as the science of measurement, attempts to validate the data obtained from test equipment.

Though metrology is the science of measurement, in practical applications, it is the enforcement, verification and validation of predefined standards for precision, accuracy, traceability, and reliability.

1, Accuracy : This is the degree of exactness to which the final product corresponds to the measurement standard.

2, Precision : This refers to the ability of a measurement to be consistently reproduced.

3, Reliability : This refers to the consistency of accurate results over consecutive measurements over time.

4, Traceability : This refers to the ongoing validations that the measurement of the final product conforms to the original standard of measurement.”

From: “Fundamentals of Dimensional Metrology” by Ted Busch of the Wilkie Bros. Foundation, ISBN 0-8273-2127-9.
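A minimal numerical illustration of the first two definitions (the reference value and readings below are invented): a gauge can be precise, giving tightly clustered readings, yet inaccurate, because its mean is biased away from the standard.

```python
# Accuracy vs. precision for repeated measurements of a known standard.
# The reference value and readings are invented for illustration.
import statistics

REFERENCE = 10.000                         # the "true" standard value

readings = [10.12, 10.09, 10.11, 10.10, 10.13]   # tight, but offset

mean = statistics.mean(readings)
accuracy_error = mean - REFERENCE          # systematic (accuracy) error
precision = statistics.stdev(readings)     # spread (precision)

print(f"mean reading     = {mean:.3f}")
print(f"accuracy error   = {accuracy_error:+.3f}  (bias from standard)")
print(f"precision (s.d.) = {precision:.3f}  (repeatability)")
```

Reliability and traceability are then about whether those two numbers stay stable over time and can be chained back to the original standard.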

Chris Bulow August 5, 2009 7:09 AM

@mimir – I’d disagree with that, actually. The younger generation seem willing to share anything and everything on social sites, to the ultimate detriment of their own and their employers’ security. The “old timers” tend to be more paranoid 🙂

Kashif August 5, 2009 7:11 AM

@mimir: the level of security will not improve, because security is not only about remembering passwords etc. but also about privacy. And unfortunately the privacy ideal is sinking very fast, especially with the younger generation.

casey August 5, 2009 7:16 AM

Security is dependent upon the thing being secured, but most efforts address the behavior of a potential attacker. For example, a social security number is 9 digits with only the last 4 being random. The first 3 are regional and the second 2 are sequenced in a published pattern. It should never be enough to establish identity or open an account. It is insecure because of how it is used. The protection should not be to encrypt the SSN (because of all the ways it may be decrypted), but to stop using it as a substitute for diligence in setting up an account.

Tynk August 5, 2009 7:33 AM

@ Casey

You are both correct and incorrect in your statement. First, the social security number is great for identification, in the same way that your name is great for identification.
It was never intended for use as authentication, which is where the trouble comes in. It is the private sector that took up the SSN and decided to use it as authentication against a name’s identification.
The difference between authentication and identification is a subtle but important one.
But you are correct: an SSN should never be used for authentication. Authentication should only be initially established through physical contact, such as photo IDs and face-to-face interviews that lead to a unique RSA token, which is then used for authentication over the internet while your SSN serves as the identification (the username).
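As a sketch of that separation (using the open RFC 6238 TOTP scheme rather than RSA’s proprietary SecurID algorithm, and a hypothetical in-memory user store): the identifier merely selects the record, and only the token code proves anything.

```python
# Identification (username) vs. authentication (time-based token code).
# TOTP per RFC 6238; the user store and secret below are hypothetical.
import hashlib, hmac, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // step
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The identifier only *selects* the record; the shared token secret,
# provisioned face to face, is what *proves* who is asking.
users = {"123-45-6789": b"per-user-secret-provisioned-in-person"}

def authenticate(identifier: str, code: str) -> bool:
    secret = users.get(identifier)
    return secret is not None and hmac.compare_digest(code, totp(secret))
```

Knowing the identifier (as with a leaked SSN) gains an attacker nothing here, which is the whole point of keeping the two roles distinct.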

bethan August 5, 2009 7:39 AM

Newer products and systems are being designed for security/privacy and usability from first spec through code-complete. That kind of process, applied to networks, products, and services, will create a more usable and more secure experience.

Another thing that would help is if consumers realized how much they contribute to security and their own privacy. Consumers need to monitor both more vigilantly, whether it’s announcing a vacation on Twitter, reviewing their Visa bill, or holding the badge-access door open for someone at work.

Grant Gould August 5, 2009 7:45 AM

@Chris Bulow — The fewer things you choose to secure, the more effectively you can secure what remains. If people today choose not to secure information that they mean to share, it means that they will not have to break the security paradigms protecting the data they don’t mean to share.

Securing fewer, less useful things is an improvement.

Clive Robinson August 5, 2009 8:19 AM

In my earlier post I only gave a reference for metrology, and forgot to give one for a suitable statistics book.

Due to the nature of the problems involved (humans), ordinary statistical tests used in manufacturing etc. will not be of real use (people are well-meaning and tell you what they think you want to hear). You need to use biostatistics, as developed by the medical profession from the mid 20th century, though it originated in Victorian London with the likes of John Snow and Florence Nightingale (she of the lamp).

The book I would suggest is,

Martin Bland, “An Introduction to Medical Statistics” (Oxford Medical Publications), 3rd edition, Oxford University Press, ISBN 0-19-263269-8.

Martin used to work at St George’s Hospital Medical School in Tooting, S.W. London, which in the mid-to-late 1990s was developing ties with Kingston University.

I quote the following from his book (I hope he does not mind)

“The important thing is to know why particular calculations should be done and what the results of these calculations actually mean. Indeed, the danger in the computer age is not so much that people carry out complex calculations wrongly, but that they apply very complicated statistical methods without knowing why or what the computer output means. More than once I have been approached by a researcher bearing a two-inch-thick computer printout, and asking what it all means. Sadly, too often, it means that another tree has died in vain.”

Sadly, the first time I met him, I was one of those “tree slaughterers” he refers to…

Pete Austin August 5, 2009 8:26 AM

IMO the real problem with computer security is that it’s too hard to adjust. We lack an easy equivalent of using a brick to hold doors open.

Systems are usually run with security settings that are too relaxed, to allow for the rare cases when you need “admin” access, because this is better than tightening the rules and finding that whenever something unexpected happens you need to beg for access from some smug BOFH administrator.

casey August 5, 2009 8:27 AM

Tynk-

I agree with your post, but I am not sure what part of my post is incorrect according to you.

I just wanted to state that when we have a bad practice (using the SSN as authentication), we try to continue the bad practice by calling on security to make it less bad, instead of admitting we have a problem. Using email as a platform for active content is a bad practice. We should stop doing that instead of creating an industry to protect a bad idea.

If security appears to be in the way, that should be a red flag indicating that the underlying practice should be evaluated.

Tynk August 5, 2009 8:35 AM

@ Casey

“a social security number is 9 digits with only the last 4 being random. The first 3 are regional and the second 2 are sequenced in a published pattern.”

The first 3 are regional; originally they were segregated by state, but in 1972 this was changed to groups of ZIP codes to deal with population growth.

The next 2 are used to break numbers down within the ZIP code grouping.

The last 4 are not random at all; they are distributed sequentially.
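For concreteness, a small sketch of the structure being described (the field names just follow the comment); parsing is purely structural and proves nothing about who is presenting the number:

```python
# Split an SSN into the fields discussed above: area (first 3),
# group (next 2), serial (last 4). Purely structural; it cannot tell
# a valid number from an invented one.
import re

def parse_ssn(ssn: str) -> dict:
    m = re.fullmatch(r"(\d{3})-?(\d{2})-?(\d{4})", ssn)
    if not m:
        raise ValueError(f"not a 9-digit SSN: {ssn!r}")
    area, group, serial = m.groups()
    return {"area": area, "group": group, "serial": serial}

print(parse_ssn("123-45-6789"))
# {'area': '123', 'group': '45', 'serial': '6789'}
```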

“It should never be enough to establish identity or open an account.”

This is just the point I made about distinguishing between identification and authentication. It is OK to use an SSN to establish identity, as long as that identity is then authenticated.

“It is insecure because of how it is used.”

This is 100% correct: the tool has been misused, and that has helped create the much larger issue of identity theft.

Pete Austin August 5, 2009 8:39 AM

I think that many people faced with Northwestern University’s extremely detailed password requirements (from the article) would choose something close to “f*cknwu”; ironically, such passwords are probably very easy to guess.

Chasmosaur August 5, 2009 8:54 AM

I’m a usability professional, but unlike many, I actually spent time as a back-end programmer. While I always advocate security as part of the user experience, I know it is a hard thing to control for a web site.

Depending upon the browser people use and their settings, different windows and flows can occur, so there’s that. But that doesn’t mean it’s an insurmountable obstacle that should therefore just be ignored (as many developers are wont to do).

Then there’s the politics of the development cycle. I can’t count the times I have had to argue with programmers and project managers about how unusable something becomes when you let the programmers determine how an application should flow, because programming something in the way that is easy for the average end user is “too hard” or “takes too long”. In reality, it’s not that much harder and doesn’t take that much longer; it just means branching beyond and improving your code library.

Beta-testing (or, if I’m lucky and persistent, actual mid-cycle user testing) usually reveals that people get frustrated with the security process, and I’m vindicated. But whether anything is done about it depends upon the willingness and budget of the client to spend the time and money (a client who will also get upset and ask why we didn’t anticipate the problem, which is a fair question), or upon the willingness of the PM to break the bad news to the client at all, rather than decide it’s not THAT important an issue because of the individuality of security settings.

I’m not saying usability professionals are free from flaws; I know some who push too hard or don’t think things through completely. But the tendency in application development is to defer to the programmers, because, you know, that’s the HARD stuff. (I used to gently deflate that notion by pointing out how something could be done within the underlying database setup and the current programming capabilities, and that doing it NOW, instead of later when it was harder to fix, would save time and money and make ALL of us look brilliant and prescient.)

I think more UI people should make the effort to understand the technology, but I also think more companies and educational institutions should strive to include basic UI concepts in their best-practice standards and programming curricula. There’s a middle ground, but few teams seem to want to meet there, because programmers are told how brilliant they are, while UI professionals’ skills are frequently marginalized as “soft” (and, in my case, “girly”).

Of course, that’s when I point out that Apple and Google rose to massive success and profitability by creating applications that were all about user-friendliness, and an awkward silence descends 😉

r. voltamp August 5, 2009 9:12 AM

Things get even more exasperating when you add regulatory “security” compliance into the mix. Compliance usually has adverse effects on both security and usability, despite good intentions. Think, for instance, of the current fiasco in the energy industry.

aikimark August 5, 2009 9:58 AM

For over 20 years, I’ve been espousing the meme that “security without inconvenience isn’t very secure”.

Brandioch Conner August 5, 2009 10:03 AM

I’m going to have to disagree with the author.

“The side door of the auditorium that led to the secure part of the building and the toilets was propped open with a brick.”

To me, that indicates a failure by MANAGEMENT.

Why was a PUBLIC meeting being held in a SECURED location? There are plenty of public venues for such.

As Bruce has mentioned in the past, it is the EXCEPTIONS that break the security model.

In this case, the previous management made a mistake by not designing the auditorium as a “sandbox” with all the facilities necessary … but without any access to any of the secured buildings.

HPL August 5, 2009 10:16 AM

SCoopFS is an example of where security need not sacrifice usability. From the paper:

‘Most people agree with the statement, “There is an inevitable tension between usability and security.” We don’t, so we set out to build a useful tool to prove our point. Since people in our line of work often share work on documents, such as this paper, we decided to build a file sharing tool.

[snip most of the paper]

The user study supports the claim that we succeeded. Even more telling is the quote from one of our early users. “This tool would be a lot better if it had some security. Is there any way I can turn some on?” While that question shows we achieved our goal, it also indicates that achieving our goal is not enough. While “security reality” is necessary, the “feeling of security” is important, too. We need to find a way to make people feel secure without making security interfere with their work.’

Link: http://www.hpl.hp.com/techreports/2009/HPL-2009-53.html

Maneesh August 5, 2009 10:31 AM

Security should only be a symptomatic treatment of a human problem, yet we spend almost all our resources here. Who is working on addressing the root cause, or have we given up on that? Or perhaps we have accepted the fact that human beings are by design not trustworthy and that this cannot be changed.

Sometimes I wonder: we don’t know where we came from. We don’t know where we will end up. We came empty-handed and will go back empty-handed. So what the heck do we have to secure??
Sorry for being a bit philosophical.

Steven Hoober August 5, 2009 10:45 AM

@Clive Robinson has some good points. But there are two things I want to add.

Usability and interface/interaction design have processes, procedures, and repeatable metrics. We can test, measure, improve, and test again to show the improvement.

But a key failure is step 8: essentially, NOTHING gets published. I don’t have the time and money to spend on publishing beyond blogging. And lots of privacy concerns mean that I am almost never allowed to talk about a solution in sufficient detail to help anyone else.

I have occasionally mentioned my findings here, in an ad hoc manner, when these discussions come up. But only because it’s years later and I don’t work there anymore. Ideally, I could have published at the time some of the clever solutions we came up with, based on user testing and actual real-world trial and error.

Two occur to me. One was a specific set of circumstances and design patterns for non-masked input (mostly during registration). I couldn’t talk about that because it was for a secret product launch, and because of a general fear of exposing systems.

Similarly, a way to avoid bot use of the web-initiated SMS system. We avoided the knee-jerk reaction of an overt (type-the-letters-you-see) CAPTCHA with a simple timer. We gave the technical teams the minimum times that users could plausibly bump into, and they tweaked the timeout under that until the economics of bothering with the wait made the Azerbaijani bots give up on us. I couldn’t talk about that AT ALL, as there was a presumption from on high that all security must be obscured, even though I argued there is no way that knowledge helps you work around it in any significant way.
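A minimal sketch of that timer approach (the threshold and token handling here are invented, not the actual implementation described above): the server notes when it served the form and rejects any submission that comes back faster than a human plausibly could type.

```python
# Minimum-fill-time bot check: reject form submissions that come back
# faster than a human plausibly could. The 3-second threshold and the
# in-memory token store are invented for illustration.
import secrets, time

MIN_FILL_SECONDS = 3.0
issued: dict[str, float] = {}        # form token -> time it was served

def issue_form_token() -> str:
    token = secrets.token_urlsafe(16)
    issued[token] = time.monotonic()
    return token                     # embed in a hidden form field

def accept_submission(token: str) -> bool:
    served_at = issued.pop(token, None)
    if served_at is None:
        return False                 # unknown or reused token
    return time.monotonic() - served_at >= MIN_FILL_SECONDS
```

Tuning the threshold just under plausible human times is exactly the economic trade-off described: the defense need only make waiting more expensive than the bots’ expected payoff.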

Being a big company with (at the time) reasonably lauded products, we could have published, at least on relevant forums or blogs, and in sufficient detail, and maybe have influenced the world for the better. Instead, almost nothing of the sort ever happens.

mat August 5, 2009 1:47 PM

I think security admins should listen to users and not work against them.

I remember my old job. The private network was not connected to the internet.
To go on the internet, you had to:
- either be a manager and get a VNC-like account for internet access,
- or use one of the special (few) computers in the hallway.

That was so annoying that lots of users used WiFi USB dongles to get internet on their computers (and exposed the private network to the internet without a firewall and the like).

And in my new job, passwords must be changed every 2 months and can’t be reused (and must be quite different from the previous one). So instead of using my strong password, I cycle through very weak passwords (like “qwerty”).

Michael Seese August 5, 2009 8:22 PM

Actually, I think it’s a three-factor equation: Security vs. Usability vs. How Important It Is To The Person. To wit: folks will prop open a door to the data center to step outside for a breath of fresh air. They probably won’t prop open the front door of their home when they go to work. (Ignoring that the various hide-the-key methods, all of which the bad guys know, are tantamount to the same thing.) The data center… not that important. My house… VERY important!

Clive Robinson August 6, 2009 4:11 AM

@ Michael Seese,

“Actually, I think it’s a three-factor equation: Security vs. Usability vs. How Important It Is To The Person.”

Sometimes “security personnel”, when “champing at the bit”, forget an important fact:

Racing bridles come with blinkers 8)

‘Argh Gadzooks, them thar (ab)users sure don’t think like what they’s supus’t’

Or as Bruce notes in his Guardian article of today:

“They know what the real risks are at work, and that they all revolve around not getting the job done. Those risks are real and tangible, and employees feel them all the time. The risks of not following security procedures are much less real. Maybe the employee will get caught, but probably not. And even if he does get caught, the penalties aren’t serious.

Given this accurate risk analysis, any rational employee will regularly circumvent security to get his or her job done. That’s what the company rewards, and that’s what the company actually wants.”

So personally I can’t blame them for looking after No. 1: “Mr Security don’t cut the cheque or decide the bonus, the boss man does”, so “what the man wants, the man gets”.

Which brings me back to a point I have been making for some time,

“The security process in an organisation is the same as the quality process”

When talking about “best practice” in security, the simple fact is that we have no idea what works, what does not work, or why.

Those who espouse best practice have simply identified the top ten organisations that are (or appear, in their view, to be) the best at it. They then look at what those organisations do for security and declare that “best practice” is the common subset of the things the ten companies do.

What they usually fail to do is take the advice Pete Checkland of “Soft Systems Analysis” gave 15-20 years ago: take a couple of steps back and look at what the organisation’s culture is like.

If they do, and start to look at the human rather than the technology aspects, they will notice a couple of other things:

1, The CISO speaks “business” and probably has an MBA or other business-management training.

2, The organisation has a strong “quality process” in place.

Back in the mid-1990s it was identified that the “quality process” was an organisational viewpoint shared implicitly by all the employees, from the top to the bottom of the organisation. It was a shared ethic of the workforce, and it was built into all processes implicitly by them, without fear or favour.

Organisations where this was the case reaped considerable rewards from it; however, those that paid lip service to it, or tried to bolt it on, failed to gain any advantage, and often incurred significant cost.

The only question that tickles at the back of my mind is: is security part of the quality process, or is it the other way around?

Another aspect of the “quality process” that appears to have been lost on most “security practitioners” is metrics, their use and their relevance, which is where an MBA comes in handy.

Think of a manufacturing company and its workflow. Simplistically, two workflows might be as follows. When bidding:

1, They approach or are approached to tender for a contract to manufacture an item.
2, They evaluate the specification and determine what is required in the way of resources.
3, They put together a bid based on this.

When they are working on a job,

1, They setup a line and source initial resources.
2, They evaluate the line continuously at all levels to ensure that work stays within specification.
3, They make the finished item available to the customer.

The important point at all levels is the evaluation stage (2).

However, senior management are not interested in tool-wear rates or feedstock rates; these are quantities that they would (probably) have difficulty understanding. If, however, the tool pusher, for whom they are important, gives them to the junior floor manager, he will convert them into workshop norms based around time and resource utilisation. In turn, the junior manager supplies the reworked figures to the project and plant managers; they in turn pull in other information and rework the figures into a different form that gets supplied to senior managers, who then have them in a form they can easily use to make business decisions.

It is this process, of taking metrics and presenting them in the way appropriate to the level you are presenting them at, that is almost entirely missing from the IS sec business.

Ask yourself a serious question: of what possible use to the average accountant is

“Our IDS systems detected 19206 intrusion attempts, likewise our email security system detected 47613 viruses and 84590 unsolicited messages”

Even if set against time scale and rate of change with,

“up 21% on last year”

There is nothing there that enables them to make a financial judgement. They deal in terms of the direct and indirect costs of resources, and in resource utilization.
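As a toy example of that reworking (every triage time and hourly rate below is invented), the same raw counts can be restated in the direct-cost terms an accountant actually works with:

```python
# Translate raw security-event counts into the cost terms an accountant
# actually uses. All triage times and unit costs here are invented.
TRIAGE_MINUTES = {"ids_alert": 2.0, "virus": 0.5, "spam": 0.05}
ANALYST_RATE_PER_HOUR = 60.0          # fully loaded cost, hypothetical

def cost_report(counts: dict) -> dict:
    hours = sum(counts[k] * TRIAGE_MINUTES[k] for k in counts) / 60.0
    return {
        "analyst_hours": round(hours, 1),
        "handling_cost": round(hours * ANALYST_RATE_PER_HOUR, 2),
    }

year = {"ids_alert": 19_206, "virus": 47_613, "spam": 84_590}
print(cost_report(year))
# {'analyst_hours': 1107.5, 'handling_cost': 66448.0}
```

The event counts mean nothing to the accountant; the hours and the cost figure slot straight into the budgeting they already do, which is the whole point of reworking metrics for each level.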

Which brings up the other hidden problem: IS sec is like defence, in that you really only know when you don’t have enough of it; in this respect it is exactly the same as quality.

Anthony Maughan August 6, 2009 9:28 AM

All I can say is “eco congruo”. For way too long, the only options we have had are to enforce unusable technology that scares users, or to tell them the threat isn’t really what it’s made out to be. It’s time for security development and research to come up with new ideas that are both secure and easy to use. I hope I’ll be around to be part of that. It is great that industry experts such as yourself are pushing the industry in that direction.

bob August 7, 2009 7:52 AM

I have maintained for a while that password security peaks at a length of around 10 characters and falls off rapidly once you pass that. And once you have reached the entropy level of the hash, additional characters add zero security.

My problem is not remembering my passwords; I can probably remember every password I’ve ever created. My problem, since they expire so often, is remembering which one is the CURRENT one AND assigned to the account I am trying to unlock at the time. It’s like the old joke about multiple-choice tests: “the answers are: a, b, c, d”.
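A quick sketch of the entropy cap in bob’s first point (the charset size and hash width are assumed): raw password entropy grows linearly with length, but the effective security is bounded by the entropy of the hash it is checked against.

```python
# Password entropy vs. hash entropy: past the hash's own entropy,
# extra password characters buy nothing. Charset size and a 128-bit
# hash are assumptions for illustration.
import math

def password_bits(length: int, charset: int = 94) -> float:
    """Upper bound on entropy of a random printable-ASCII password."""
    return length * math.log2(charset)

def effective_bits(length: int, hash_bits: int = 128) -> float:
    """An attacker only has to collide the hash, so cap at its entropy."""
    return min(password_bits(length), hash_bits)

for n in (8, 10, 20, 40):
    print(f"{n:>2} chars: {password_bits(n):6.1f} raw bits, "
          f"{effective_bits(n):6.1f} effective")
```

Past roughly 20 random printable characters the raw bits exceed a 128-bit hash, and every character after that is pure memorization burden with no security return.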

agentsmith May 17, 2010 12:24 PM

Therefore, the less secure you make it, the more secure it becomes? Why? Because if you don’t care, you aren’t scared. It is not all about logic: there is a higher level of the human being, the mental/emotional layer. What you feel has the same importance as what you think!
