Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and
unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.

Sounds like me.
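
The phishing claim is easy to sanity-check. A minimal back-of-the-envelope sketch in Python, using illustrative figures of my own (not the paper's exact numbers):

    users = 2e8            # assumed: rough US online population
    minutes_per_day = 1    # the abstract's hypothetical URL-reading effort
    hourly_value = 15.0    # assumed: dollar value of an hour of user time

    hours_per_year = users * minutes_per_day * 365 / 60
    print(f"${hours_per_year * hourly_value / 1e9:.1f}B/yr")   # ~$18.3B of user time
    # The abstract says this exceeds all phishing losses by two orders of
    # magnitude, implying annual losses somewhere under ~$200M.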

EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Comments

HJohn November 24, 2009 12:52 PM

Good article.

An unintended consequence of overwhelming users with security advice is that they can’t distinguish the relevant from the irrelevant, so they tune all of it out.

HJohn November 24, 2009 1:06 PM

I like this note on page 11:

“Note: this paper is not to be read as an encouragement to end-users to ignore security policies or advice.”

I would have gone a step further and added the following: “Rather, it is to illustrate that over-promoting security policies and advice is not the most efficient means to improve security.”

In other words, you only have so much resource and time, so use your resources and time to do things that are actually useful.

BH November 24, 2009 1:25 PM

This is exactly the reason I no longer run virus scanning software on my Windows PCs. The cost of this software in terms of lost performance is just too great for the limited benefit it confers.

John Campbell November 24, 2009 1:29 PM

@HJohn:

In other words:

“Choose your battles”

A lot of time is wasted on bull(something).

It’s like the 90/10 rule: 90% of your cycles are being burned up in 10% (or less) of your code… so it is important to concentrate your efforts where they will pay off the most.

In a security context… push security where it matters the most…

… and try to make security less of an obstacle to getting your job done everywhere else.

kangaroo November 24, 2009 1:33 PM

‘Cept that most of the security infrastructure is built around the “customer” and not the “consumer”. The “customer” is the IT department at most locations — for them, the “consumer” costs are almost zero — it’s other people who have to spend inordinate amounts of time rotating passwords, going through firewalls, etc, etc, etc.

Meanwhile, the “1%” is almost completely borne by them: a break-in doesn’t hit the “consumer” but the “customer”. So of course security systems and security advice are intended to advance their interests (since they are the “customer”, aka the source of income) by shifting costs from them to the “consumer” (the benighted end-user).

As usual, it’s a problem of accounting. Most costs aren’t captured across groups, but only within a group, and not all groups are equal.

Or, to respond to HJohn, the goal of IT as a caste is NOT to improve security — it’s to make money, like everyone else. Security as a cost-benefit exercise is only a means to that end, so to analyze security practices one must include the systemic goals — as always, follow the money.

HJohn November 24, 2009 1:37 PM

@kangaroo: “Or, to respond to HJohn, the goal of IT as a caste is NOT to improve security — it’s to make money, like everyone else. Security as a cost-benefit exercise is only a means to that end, so to analyze security practices one must include the systemic goals — as always, follow the money.”


Fair enough. I may add that security, when done effectively, does affect the bottom line (not always by making money, but often by reducing loss).

Aviatrixc November 24, 2009 1:37 PM

BH: Yes! I find it hard to believe that any number of viruses would cause the slowdowns, pop-ups, browser restarts and reboots that maintaining AVG does.

HJohn November 24, 2009 1:43 PM

@BH and Aviatrixc, in regards to AV

I agree that, for skilled users, shutting down AV for performance reasons may be reasonable from a cost/benefit perspective, when one knows how to protect oneself by other means. However, for the average user, I wouldn’t bet on that skill holding up.

A friend a year or so ago asked me to look at their computer because it was slow. It was heavily infected with hundreds of viruses (I wish I were making that up). AV would have helped them, but they (obviously) aren’t pros.

Groa November 24, 2009 1:48 PM

If I were an attacker, I’d enjoy knowing my potential victims think it only happens to others.
For most people that’s true, but from the attacker’s perspective incidents happen 100% of the time.
Not keeping people aware of threats makes the bad guy’s task way easier, IMHO.

RH November 24, 2009 1:49 PM

The article touches on it a few times, but the vast majority of the effects are externalities. What it doesn’t cover is that some of those externalities hurt the attacker, not the user. Take, for example, the URL education they refer to (meant to minimize phishing): they point out that it does more damage than the phishers do, but if you removed such education, I would expect phishing to become more lucrative, and more common.

One cannot merely measure costs vs. benefits. One must measure against opportunity costs, which are often not self-evident in security situations – and they often have externalities. One bank skipping anti-phishing measures might save money; all of them skipping it hurts everyone.

HJohn November 24, 2009 1:58 PM

When I teach a class, I keep in mind that “the brain can only absorb as much as the butt can endure.”

A similar thing can be said for security awareness. If we include too many low risk items with the high risk, it all gets tuned out.

I don’t think the point of the article or any poster is that security advice is bad; it is about picking battles and setting priorities.

Take a look at the security policy of a client I used to consult for:
1. Passwords must be at least 8 characters (meaningless since it is enforced)
2. Passwords must be changed every 35 days (meaningless since it is enforced)
3. You cannot reuse your last 15 passwords (meaningless since it is enforced)
4. You can only change your password once per day to prevent recycling (meaningless since it is enforced)
5. Do not share your password with anyone. Not even an administrator needs your password.

Now, I wouldn’t be surprised if people here didn’t read all of them. The actual policy I’m citing had over 10 points. Most were a waste of space, since the network enforced them. The relevant point is that the only important policy in the above list is number 5, but the user may have gotten bored before reading it, or tuned it out with the rest of the jumble.

Tim November 24, 2009 2:00 PM

The point about SSL certificates is a very good one, and thankfully people are becoming more aware of it. Currently, if you encrypt your site without paying VeriSign, it appears less secure than if left completely unencrypted. Firefox’s stupid solution is to make the user jump through hoops to get to the site.

Hopefully this will be fixed by providing certificates through DNSSEC at some point.

BH November 24, 2009 2:11 PM

@HJohn – Yes, I do agree with you that for less knowledgeable users, AV is probably worth the cost, although they are certainly still susceptible to zero-day infections.

Personally, I browse using firefox with noscript from behind a firewall. As long as I don’t plug an infected flash drive into my computer or open an infected email, I think I’m protected against most attack vectors.

Andrew Suffield November 24, 2009 2:14 PM

The worst thing about “password policies” is that this “8 characters including numbers and letters” nonsense still doesn’t really improve security even if the users follow it.

We have long since reached the point where the complexity needed to deflect a brute-force attack is closer to 80 characters than 8. These short passwords are well within current computational capacity to brute-force. Instead, we use lockouts after N invalid password attempts to provide the security – but when N is small you don’t need all that enforced complexity in the password. Even dictionary words would suffice if N<=3 and you have a policy of changing passwords after each lockout. Such policies are not only annoying and inconvenient, they’re vestigial; they no longer provide any meaningful security.
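
To put rough numbers on this (a minimal sketch; the character-set size, dictionary size, and guess rate are my assumptions, purely illustrative):

    # assumed: 8 characters drawn from a 62-symbol alphanumeric set
    keyspace = 62 ** 8                        # ~2.2e14 candidates
    offline_rate = 1e9                        # assumed offline guesses/second
    print(keyspace / offline_rate / 3600)     # ~61 hours to exhaust offline

    # but with a lockout after N=3 online attempts, even a password drawn
    # from a 100,000-word dictionary is guessed with probability only
    print(3 / 100_000)                        # 0.00003, i.e. 0.003% per account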

HJohn November 24, 2009 2:25 PM

@Andrew Suffield: “Even dictionary words would suffice if N<=3 and you have a policy of changing passwords after each lockout.”


I agree with much of your post, but not this part.

If you are the user protecting your account, or if an attacker is only targeting one user, then locking out will likely defend you even if you have a poor dictionary word.

However, if you are an entity with hundreds or thousands of employees, then lockout will not protect you from dictionary words. That is because they may only get 2 or 3 shots at each account, but they get that many shots for each ID for the entire entity. Chances of hitting a dictionary word are excellent if you get a few thousand attempts a day. (Sure, IT should detect this, but that’s a topic for a different day.)
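
HJohn’s point is easy to quantify (a sketch with hypothetical numbers; real users don’t pick uniformly from a dictionary, which only improves the attacker’s odds):

    # assumed: 5,000 accounts, 3 guesses per account before lockout, users
    # choosing uniformly from a 100,000-word dictionary
    accounts, guesses, words = 5_000, 3, 100_000
    p_account = guesses / words                # 0.003% per account
    p_any = 1 - (1 - p_account) ** accounts    # chance at least one falls
    print(f"{p_any:.1%}")                      # ~13.9% per sweep of the entity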

Lagrandeimage November 24, 2009 2:48 PM

As Bruce has argued before, usually no one gets fired for not respecting security tips and advice. Therefore it is economically rational not to follow them.

Since the problem is the balance of incentives, one solution would be to tip the incentives in favour of having people respect tips and advice. How?

Well, simply by handing out serious enough punishment that people have an economic incentive to follow the tips.

Economically this makes sense, but of course from a social point of view it may be hard to implement.

Lennon November 24, 2009 2:54 PM

@Andrew Suffield

And, to add to HJohn, if an attacker gets hold of the password file, the lockout system is (obviously) of no use for the passwords within.

Angel One November 24, 2009 2:55 PM

Perhaps what we need to realize is that we security practitioners need to design systems in such a way that the users are not the ones on the front lines of the battlefield. Let the security engineers (who are the ones best equipped to fight the battle) do the fighting while the users remain oblivious on the “home front”. Transparency is the key.

AppSec November 24, 2009 2:59 PM

@Lennon:
I’ve always felt that if the password file is obtained, then you’ve got lots of other issues that need to be dealt with, and it doesn’t matter.

And I know the next argument is: it isn’t just your site/system, since most people use the same password for multiple sites.

Well, again, if the site cares about that, the security around your password file should be such that it is nearly impossible (or highly improbable) to get. And if it is gotten, who knows what else was put in there that can capture passwords (i.e. sniffers on a port, keyloggers, etc.).

HJohn November 24, 2009 3:09 PM

Lagrandeimage: “usually no one gets fired for not respecting security tips and advice. Therefore it is economically rational not to follow them.”


And professionally. Imagine two people at the same level, equally skilled, dealing with protected information (classified, sensitive, whatever it is). One takes due professional care: takes the time to encrypt the data, protect it, not transport it more than necessary, if ever. The other skips data protection and does whatever it takes to get as much done as possible, including emailing information home in plain text or walking around with it on a USB drive in his pocket, which enables him to get more work done faster than his competitor.

There are no known security incidents. Which one gets promoted? I think we know, it’s the one who was careless because he gets more done. Just like a speeder–he gets to the destination faster. Unless, of course, he gets pulled over or killed.

Of course, things may change, the entity may end up on the chronology of breaches thanks to Mr. CutCorners. In which case rewards/punishments may line up. But without that, good luck convincing the boss that you deserve the promotion over someone who got more done faster.

Cormac Herley November 24, 2009 3:26 PM

Bruce,

We know your “Microsoft name” is Cormac Herley. You’re just testing us again!

Lagrandeimage November 24, 2009 3:31 PM

Hjohn:”There are no known security incidents. Which one gets promoted?”

You provide me with a great transition to the follow-up post I was going to write after a few minutes of further thought.

A more socially acceptable way of tipping the balance in the “good” direction would be to reward people who follow security advice and tips.

I mean that if a firm is serious about security, it should ensure that following security procedures and tips is considered in the evaluation process just like other criteria, and contributes directly to financial bonuses and/or advancement.

HJohn November 24, 2009 3:46 PM

@Lagrandeimage at November 24, 2009 3:31 PM

I like the idea of positive rewards in principle, but realistically I think it is tough to measure.

Sort of like the “safe driver” discounts given to people who haven’t gotten tickets. Some deserve it; some just never got caught, so they get credit anyway. Overall, the program is a plus, but there are no doubt bad drivers included who never got caught, and good drivers excluded who did something dumb at the worst possible moment.

Glenn November 24, 2009 3:51 PM

I very rarely change my passwords. I know my passwords; if I regularly changed them, I’d have to store them somewhere, which is a cost massively outweighing any benefits of changing passwords.

Requiring that I change passwords regularly is a net security loss, for the same reason. Anyone forcing people to do so is showing a complete lack of basic security sense: the weakest link of passwords is when they’re stored somewhere.

If users are changing passwords as often as monthly, you’re essentially guaranteeing that all of your users (or employees) are saving or writing down their password, and you’ve decimated your security.

“You can only change your password once per day to prevent recycling”

This doesn’t say anything good about this company’s ideas of security.

Brandioch Conner November 24, 2009 4:12 PM

@HJohn
“There are no known security incidents. Which one gets promoted? I think we know, it’s the one who was careless because he gets more done.”

Which is one of the secondary reasons that security needs to be built in from the beginning and CANNOT be tacked on as policies or other crap.

The system should have been designed so that the information could not have been taken so easily in the first place.

Which is similar to your password “policies”. The policies exist in the system so that the user cannot choose to ignore them.

BH November 24, 2009 4:18 PM

@Glenn – I agree with you that forcing people to change passwords regularly is of limited benefit. Particularly because if you pick an expiration time based on the difficulty of brute-force guessing, Moore’s law and logic dictate that the expiration time should decrease by a factor of 2 every 18 months or so… which rather quickly becomes ridiculous.

Sadly, forced password expiration is endemic. For example, the credit card industry dictates in PCI-DSS that all parties involved in credit card processing (except the end user) expire passwords every 45 days.
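
BH’s Moore’s-law point, sketched with an assumed 90-day starting policy (hypothetical numbers, just to show the trend):

    # if expiry is pegged to brute-force feasibility and attacker compute
    # doubles every 18 months, the expiry must halve on the same schedule
    expiry_days = 90.0
    for years in (0, 3, 6):
        print(years, "yrs:", round(expiry_days / 2 ** (years * 12 / 18), 1), "days")
    # prints 90.0, 22.5 and 5.6 days: ridiculous within a few years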

Brian November 24, 2009 4:19 PM

I think it often gets overlooked that many security practices can be made a lot easier. For example, secure websites have traditionally required a user to verify the site they are going to manually, by reading the URL. EV certs more or less remove that requirement, as the security has been offloaded to the issuing authority. We can do better in many ways. One-time passwords for networks, for example, are much easier than extremely complex passwords, but the price per token needs to come down for companies to deploy them. If the advice we give makes their lives easier and more secure, then they will deploy that security. Honestly, I think we can do it too.

HJohn November 24, 2009 4:29 PM

@Brandioch Conner: “Which is one of the secondary reasons that security needs to be built in from the beginning and CANNOT be tacked on as policies or other crap.”

Totally agree. It’s like calling in an architect to make sure a building is sturdy after it’s been built on sand next to a cliff.


@Brandioch Conner: “Which is similar to your password “policies”. The policies exist in the system so that the user cannot choose to ignore them.”

Totally agree again. The point was not about any specific policies; it was about how the entity foolishly cluttered its listing of security policies/tips/guidelines/etc. with a bunch of crap that no one cared to read (and didn’t need to read, since it was mandated by the system anyway), so something they actually might need to be aware of was ignored along with all the other clutter.

Ati November 24, 2009 5:01 PM

I like the article, but it deals with the user base and costs at a macro level. This creates large populations in which not all individuals may be relevant. Also, relatively speaking, there are very few known incidents; i.e. we aren’t sure of exact figures for, say, identity theft, because much of it probably goes unnoticed and unrecorded.

I’d say this likely inflates the populations and underestimates the impact/cost.

Taking a micro perspective, the equations probably become a lot more relevant. E.g. I have 1000 online customers and am a small business worth $1mil. The effort for 1000 customers at $15/hr and an average of 0.5 hrs per year comes to $7,500. A breach might cost me $20,000 in compensation, lost revenue and decreased customer confidence, or I might collapse completely. The figures are completely fictional, but I only use them to show that by looking at it from an individual point of view the resulting ratios can change dramatically.
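
A minimal sketch of Ati’s micro-level comparison (the figures are Ati’s fictional ones; the break-even framing is my addition):

    customers, hourly_rate, hours_per_cust = 1000, 15.0, 0.5
    user_effort = customers * hourly_rate * hours_per_cust   # $7,500/yr imposed
    breach_cost = 20_000                                     # assumed breach loss

    # the advice pays for itself only if the annual breach probability it
    # eliminates exceeds the break-even point:
    print(user_effort / breach_cost)                         # 0.375, i.e. 37.5%/yr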

TruePath November 24, 2009 5:55 PM

Seems to me the underlying problem here is hindsight bias, or something closely related. Anyone offering security advice is under strong pressure to avoid hindsight-bias-induced blame: “if only IT had a policy requiring password changes”, etc.

olympia gnome November 24, 2009 7:23 PM

While trying to retrieve the URL: http://research.microsoft.com/en-us/um/people/cormac/papers/2009/SoLongAndNoThanks.pdf

This raises an interesting perceptual difference between text and software:
Changing a few introductory words does not allow you to invisibly pirate
a paper’s contents as your own –
(and I am emphatically not saying that this author did anything like that;
I know nothing, since I can no longer access the paper) –
but this is because the language [English] is open source and openly maintained,
and fruits of expression are allowed to be compared.

Legal protection extends to expression and design;
architecture and process require patent protection.
Similarly, rewriting the intro does not change the melodic theme of a symphony or a tune,
nor does changing the lyrics, and changing the tune does not change the lyrics.

May we expect pressure from parallel process and cloud based program suppliers
to adopt similar standards?

(Then the reputed microsofting practices of rewriting the meaning of keystroke commands
to block competitors’ program utilities,
or changing the gateway command to the pipeline to claim a change in programming that follows,
or another seeming practice of microsoft’s appropriation protocols,
followed independently of acquisition,
might be at last stopped?
Hmmm.
This latter might be the main reason Redmond was so desperate to pay court penalties
rather than to allow judicial review and programmers’ audits of
its acquired and integrated source code past).

Maybe there could be a Redmond Carol:
the ghosts of programs past, present, and future visit the computer giant to trigger
reparations to those whose companies were killed off by the big guy…What do you think?..

Ctrl-Alt-Del November 24, 2009 7:59 PM

@Lagrandeimage:

Your suggestions fall on the rock of cost. The source article made the point that the costs of “security” can outweigh the benefits. Exaggerating end-user costs (through punishment) or benefits (through rewards) simply enlarges the system and may not improve the cost to benefit ratio.

If forcing employees to jump through hoops costs your company, say, $20,000 a year in lost productivity in order to prevent annual losses to fraud of, say, $10,000, then your security is already a net loss. Additional rewards or punishments will increase your company costs (TANSTAAFL) but may not pay for themselves.

Finding ways to mitigate company losses without increasing the burden on employees may be a more cost effective way to spend that extra money.

Peter November 24, 2009 9:21 PM

@Ati

You are confusing customer costs and business costs. Even if the potential loss to the business is great, users don’t care about that. The paper argues that the benefit to users has to be greater than the cost to users.

Peter November 24, 2009 9:26 PM

@Lagrandeimage

Agree with Ctrl-Alt-Del. If your goal is to increase compliance, then knock yourself out and fire everyone you suspect of using a weak password. But if your goal is to reduce costs, you need to be sure that the cost you are imposing on users is smaller than the benefit it generates.

TristanI November 24, 2009 11:07 PM

People have spent a bunch of time talking about passwords, but the paper seems to offer far more interesting stuff: 100% of certificate errors are false positives; teaching users to read URLs causes 160x more harm than good.

The last section of this paper is a great read:
1. Users understand risks better than we do
2. Worst-case harm and actual harm are not the same
3. User effort is not free
4. Designing security advice is not unconstrained optimization

Lee November 25, 2009 2:15 AM

Security advice for the lowest common denominator must adhere to the KISS principle – if you over-engineer and over-complicate anything, it becomes less valuable (to the majority). The old “this is dangerous, do you want to proceed” with a yes/no box will result in how many clicking on yes? 99%?

Having a padlock in the browser, for example, makes people think that everything is safe – because they’ve become accustomed to the “padlock = good” abstraction. I do remember some shopping sites some years ago with SSL triggered by the submit function – users ran from the sites because they appeared not to be secure.

I think today we have a large number of users out there with a little knowledge and we’re building policies to plug the gaps in technology (after all, aren’t our policies simply covering the bases that we can’t cover with technical solutions?).

Since I assume nobody will read any security policy or documentation, I try to highlight with bullet points in an executive summary what the document will help them do – and then try to bring examples of how they would benefit in their personal life if they change. Changes at home often make it to the workplace… they care about their PC and data at home (in some way), whereas the corporate PC and data isn’t their problem… it’s ours!

Clive Robinson November 25, 2009 2:48 AM

@ HJohn, Brandioch Conner,

You are both starting on the path of making the argument that Security is a Quality process.

In manufacturing “Mr Cutcorners” appeared to be getting “more product to market” than “Mr Quality” which is the position we find ourselves in with the software industry.

However the “return rate” of Mr Cutcorners was about twice that of Mr Quality. Due to the asymmetrical costs of distribution versus return, an increased return rate of just 1% or 2% could wipe out not just any profit but the good name of the company he works for.

Unfortunately the software industry currently avoids this “return for rework” issue by issuing patches, mainly at the customers’ expense.

Manufacturing industry had to “build in” Quality Processes in order to survive. However they are just part of the “value added” “supply chain”. In order to be able to trust earlier parts of the chain you have to have your suppliers implement “Quality Assurance”.

By using “purchasing power” in a non-monopolistic “open market”, those who wanted “quality assurance” upstream could find it.

It became clear that what applied to the “manufacturing” process also applied to the “design” process, and in turn the “administration” process. Finally it became clear to “business gurus” that those organisations that actually embraced “Quality” reaped the rewards, whilst those that paid “lip service” to it did not.

Most business execs accept that although “quality costs”, when done properly it costs less than not doing it. That is, it is in the odd quirky position of being a “sunk cost” with a positive “return on investment” (just like email was at one point).

Currently not many “security gurus” accept that, as far as processes go, Security = Quality; in fact recently I have had Richard B of TaoSecurity argue it cannot be, because of “intelligent adversaries”.

I suspect that in time the point will be moot. But there is the fly in the ointment of a near-monopolistic market without significant “return” costs, although it has started to see significant “rework” costs.

I think Bill Gates woke up to the fact that “rework” was eating up productivity, hence his push to put MS’s security attitude ahead of its marketing attitude.

There is a way to move the tipping point on security, but it is one of those things “where you dare not speak its name”.

However I will say it: “charge for outbound IP traffic”. This will introduce the asymmetric “return rate” costs into software manufacture, and as with tangible goods manufacture, what happened for “Quality Assurance” will happen for “Security Assurance”.

Clive Robinson November 25, 2009 3:43 AM

A further point that needs to be made about security costs.

Risk analysis just does not work.

The Banking industry had some of the best risk analysis going, and look where that got us.

Likewise the insurance industry is waking up to the fact that the traditional actuarial process is not working either.

The reason for this is a little subtle, and it revolves around “force multipliers” and “action at a distance”.

Also there is a difference between tangible and intangible goods that we are just starting to wake up to.

Information is said to be “the new currency” of the economy. However, unlike previous “economy drivers”, it is an intangible, not a tangible, asset.

Which makes a very real difference to the way the market works.

For instance, in the tangible world theft is usually apparent through an audit process of some kind, even if it’s just “has anybody seen my calculator” through to stock taking.

You steal an intangible by “copying it”. In the past, stealing an intangible like IP or “trade secrets” was not easy for an outsider, and had significant costs (film, photocopy paper, etc) and risks (access time with the IP, page counter on the photocopier, etc) for an insider.

That is, cost and audit previously protected IP from insiders, and the need for physical access protected it from outsiders.

Some of these old audit processes are intangible themselves, like people getting a “hinky feeling” about insiders. Networks and computers don’t do “hinky detection” unless it is specifically built in.

In the tangible world, to steal something you have to get physical access to it, which generally means you or an “agent” has to go and get it.

Because information is held on inadequately protected systems that, for “good and proper business reasons”, are connected to the outside world, you don’t actually have to go there to “steal it by copying”.

That is, it is now “action at a distance”, which makes current security concepts and axioms for tangible goods not directly transferable to intangible goods. Thus we have to “make it up” as we go along, which is effectively an “evolutionary” response.

Now we come to “productivity” for the thief. With tangible goods, as I said above, either the thief or an agent has to “go get”. Either way represents significant cost and risk. It also has the effect of “localising” what the thief or agent can do. Thus there are tangible limits on what they can achieve, which is why actuarial methods work for tangibles, and to some extent old-style IP.

Modern society is based on the application of “force multipliers”: a man can only till a tenth of an acre by hand. Add a horse and plough and that goes up to an acre; replace the horse with a tractor and it goes up by considerably more than ten times.

There is still only one man doing the work, but the horse and the tractor are “force multipliers”. They do have significant costs, but the benefit, or return on investment, pays off within a reasonable time frame.

However, the intangible world of information has issues that the physical world usually does not.

As already said, the thief can work from anywhere (“action at a distance”), but more importantly can employ almost as many agents as they wish with little or no cost to themselves, by co-opting the victims’ “non-hinky” equipment for their own use. Thus they have a “force multiplier” without either “localisation” or “cost” constraints.

Therefore one person can harvest the entire information world at only the cost of storing the harvest.

This lack of constraint makes actuarial analysis pointless.

If actuarial analysis does not work, then standard risk analysis techniques don’t work either.

Which means you have to find some way of restoring the tipping point.

Old-style “isolation” is not just seen as a “business inhibitor”; it is also not practical with intangibles that can be copied at near-zero cost or risk. So it will not work for insiders or outsiders where the business process says otherwise.

Systems have unseen flaws, and there is little or no “old-style hinky” detection built in to spot when they are being exploited.

And as noted, there is next to no cost on the “force multipliers” an outsider can employ.

Again, the tipping point on outsider attack can be moved by introducing a cost for moving the intangible goods.

Which brings you back to “charging for data transport”. Interestingly, it need only be for “outbound” data, as this will ensure in most cases that correct audit and rate-limiting processes are put in place.

Kieran November 25, 2009 3:55 AM

I think he’s ignoring the percentage of users that ALREADY read URLs to avoid phishing – if NOBODY did, the losses might well be high enough to justify that massive amount of time for everyone to do so.

Clive Robinson November 25, 2009 4:10 AM

I could give other evidence as to why security fails due to the difference between intangible and tangible goods markets, but I would be stretching the Moderator’s good graces to breaking point.

So to conclude,

All of our current security models are based on two things,

1, A traditional economic market place.

2, Intangible “human hinky detection” audit processes.

As I have said, the intangible world is a market place so different from our tangible world that it is an unknown alien world where our accepted market axioms do not apply.

Thus the intangible world is not subject to,

1, Current economic models.
2, Current actuarial models.
3, Current risk models.
4, Current financial investment models.
5, Current business process models.

We have two choices to make,

1, Evolve our models to the new intangible market.
2, Bring the intangible market into line with the tangible market.

In the long term only option 1 is a viable solution; however, in the short term, bringing the market into line (option 2) is the probable outcome.

However, option 2 has significant costs, and they have a hysteresis effect that will tend to keep the intangible market artificially enslaved to tangible market ideals.

This is significantly worrying, because in the process it will enslave human abilities as well, and this will give rise to unhealthy economic constraint.

Ultimately, like riding the brakes on a runaway motor, this will cause them to overheat and fail catastrophically.

It is easily arguable that it was the intangible-market aspects of banking that caused the “brakes” (monitoring and regulation built on incorrectly chosen market axioms) to fail.

Further, like King Canute, we “cannot stop the tide”: the intangible market is actually a fundamental driver of all human progress.

Option 1 therefore is the only solution to the problem of not just security but monetary policy and market operation.

Moe November 25, 2009 5:31 AM

“fully 100% of certificate error warnings appear to be false positives”

This isn’t true. For some reason, SmartFilter seems to give me a bogus SSL cert (and the accompanying warning) every time I try to visit an SSL site once my network authentication has expired.

Just leave yourself logged into Gmail (using https) for a few hours and come back after your authentication has expired and you have to log in again. It happens ALL the time where I work. I grant that it’s probably harmless, but it IS effectively performing a MITM attack on me, so I deny it every time.

David November 25, 2009 9:54 AM

@HJohn:

Those policies don’t make passwords secure. As we all know, with rules like that people will settle into password patterns, and if you forbid that somehow they’ll be written down, probably somewhere insecure. Mathematically, as the password entropy approaches adequate, the probability that it’s on a sticky note on the monitor approaches one.

Moreover, frequent password changes aren’t going to stop attacks, since an attack is likely to take place between changes, and in any case one pattern password is as easy to guess as another.

So, these rules aren’t going to stop security breaches. What they are doing is giving the security person a raison d’etre, some visibility, and a documented excuse why it isn’t his or her fault if there is a problem. This is not only a negative externality to the business’ customers, but to the business.

RvnPhnx November 25, 2009 10:52 AM

I have to admit that one doesn’t need to be an economist to figure this out. When people encounter complex (and ever more fervent) explanations and arguments for why they should be doing something else, they either ignore those arguments outright or assume that if the presenter can’t make the information make sense to them in a short and sweet sort of way then it must be complete bull$#!t.

This type of thinking also applies to politics, global warming, medicine, and law (for starters).

Remember, most people still can’t get it through their heads that perpetual motion machines don’t work either.

kangaroo November 25, 2009 10:55 AM

@HJohn: “Fair enough. I may add that security, when done effectively, does affect the bottom line (not always by making money, but often by reducing loss).”

Who’s bottom line, and how is it measured? Who is doing the measuring?

You can’t abstract interests away from people. Who can do a cost-benefit analysis of IT — other than an IT person? How can they measure the bottom line, when most of the measures are unknown or uncaptured by any accounting system?

I’m not saying that there are no organizations with IT departments that help the organization — but grab any one at random and some back-of-the-napkin calculations will quickly show that usually IT’s security protocols are a net cost.

How many security systems are designed around ancient proven principles like cellular organization? That’s one that’s about 4 billion years old — yet most (but not all) organizations have very little internal structuring, with most of their effort placed at a single external perimeter, as if they were fighting WWI.

And why? Because the customer is not the consumer. That is the central element of almost all organizational failures at all levels — that the guy who has to put up with the crap isn’t the guy who gives the orders.

TristanI November 25, 2009 10:59 AM

@Kieran

“I think he’s ignoring the percentage of users that ALREADY read URLs to avoid phishing – if NOBODY did, the losses might well be high enough to justify that massive amount of time for everyone to do so.”

Is there evidence that users avoid phishing by reading URLs? We know that the vast majority choose super-weak passwords; we know that they’ll type credentials whether the lock is there or not. If they don’t make the effort at those two basics, why would you imagine they have mastered the art of reading URLs?

The paper claims the effort of reading URLs is 164x greater than the cost of phishing. So even if you could claim that 50% of users are reading URLs, we’d still be off by 82x.

HJohn November 25, 2009 11:51 AM

@David: “Those policies don’t make passwords secure.”


I tend to agree. The quality of the policies was not the point of the post, it was the meaningless clutter of the entity’s “tips” and “advice.” As such, anything important in their (ridiculously long, IMO) list was tuned out with the clutter.

David November 25, 2009 12:12 PM

@Clive: It’s not just that risk analysis is bad, it’s that you have to do something with the results.

From what I read, financial institutions had excellent risk analysis. It wasn’t perfect, but that wasn’t what crashed the economy. There were three other big failures:

First, executives would disregard the risk analysis because it told them things they didn’t want to hear, or things that they didn’t want to have to tell their investors or partners. This was also in defiance of common sense: packaging questionable mortgages into trillions of dollars of securities can’t be a good idea. However, there was a lot of money to be gotten. It was more a case of picking up diamonds in front of a steamroller than nickels.

Second, it was applied at too granular a level. It may be well worth accepting a 1% chance of seriously bad results on a project, but if you’re accepting that on numerous projects and it turns out that the probabilities of massive failure aren’t independent, you can be accepting a small chance of catastrophe. What happened was that a bunch of what were seen as low-odds risks all depended on the same conditions.

Third, in evaluating analysts, there was a common practice of accounting for risk for almost all cases, which means disregarding what would happen 1% of the time. Rational actor analysts then set things up so that things will go pretty well 99% of the time, and the other 1% the business goes bankrupt.

Peter November 25, 2009 1:04 PM

@Clive Robinson

“Risk analysis just does not work.

The Banking industry had some of the best risk analysis going, and look where that got us.”

This doesn’t follow. What happened to the banks is not a failure of risk analysis, but an illustration of the principal-agent problem. Execs figured out how they could make lots of money even if they ran insane risks with shareholder and taxpayer money.

If execs could figure out a way to have their bonuses go up every time phishing losses increased, then I’m sure they would. But that doesn’t happen. The fact that they can keep consumer liability at $50 for CC fraud suggests that risk analysis works pretty well.

Oldster November 25, 2009 1:12 PM

The cost/benefit analysis of security does break down at one point: have you ever read personal accounts of victims of identity theft? The hours and hours of lost time trying to get their credit records corrected and the crushing stress are just part of it. Averaging out the loss over the entire population does not do justice to the painful losses of one individual. Not to mention, as someone already said, the loss of reputation if it is a company whose security is broken. Perhaps security can be likened to insurance (car, house, even life): if you never end up needing it, you vastly overpaid for it. But if you are the one who needs it to prevent a financial catastrophe, it was indeed worth it.

Oldster November 25, 2009 1:21 PM

Relying on the end user to maintain high security standards will never work. A certain percentage of end users take shortcuts whenever they can. Yellow sticky notes under keyboards — containing complex passwords people are forced by some computer systems to use — are just one easy example. It’s not made easier when each place — work, the bank, online groups, whatever — has different requirements for passwords: at least two special characters, no special characters, a mix of letters and numbers, only numbers, case sensitive, not case sensitive. Not to mention the people who will not take security seriously, even if there is a critical reason. Just like some people override safety switches on their power tools because they don’t like them, or some people refuse to wear seatbelts. I wonder: how much security can be built into the system, without relying on the end user?

notMe November 25, 2009 2:43 PM

The main issue I have here is that security folks in IT repeatedly claim that each new attack vector, no matter how arcane it is, is always the end of the world.

The truth is… it simply isn’t true. If security wants to get users better on board, security then needs to understand that the big picture in most IT shops puts them at the bottom of pile.

And… as a project manager, I know that the standard, internal policies we have are effective, safe, and cheap. Changing policies in the name of “security” just to try and change the power balance is not effective, not conducive to collaboration, and annoying as hell to me.

Matter of fact, a failure to understand the big picture of an IT project is the number one reason I fire security analysts from my projects.

Clive Robinson November 25, 2009 2:59 PM

@ David,

“It’s not just that risk analysis is bad, it’s that you have to do something with the results.”

You are correct on your points.

However, the reason that “risk analysis is bad” is partly due to the fact that the results can be (and are) meaningless when dealing with intangibles as opposed to tangibles.

The GIGO principle applies not just to the data in and the data out, but also to the methods.

The reason the methods are no good is that the underlying assumptions (axioms) of the methods are incorrect.

This is before any misguided, underhand or fraudulent behaviour starts.

For instance, one axiom of markets for tangible goods is that the players, and the market itself, can fail.

If a bank is “too big to fail” then the implication is the same as having “infinite resources” at “zero cost”. Which means there is no speculative risk. If you speculate and win, well done; if you speculate and lose, you get bailed out, so it’s the opposite of “Go straight to jail, do not pass Go, do not collect 200 pounds”.

That is not a market by any definition (including a free market), as there are no constraints, natural or legal, and one or more players have effectively unlimited resources and a monopolistic position.

Porlock Junior November 26, 2009 3:41 AM

@kangaroo:
Thanks for the customer/consumer distinction here. It’s a good model for a whole lot of very unsatisfactory situations. I’d mention the whole American health insurance model, but one doesn’t want to get controversial.

TristanI November 26, 2009 11:29 AM

@Adrian

“I think he’s ignoring the percentage of users that ALREADY read URLs to avoid phishing – if NOBODY did, the losses might well be high enough to justify that massive amount of time for everyone to do so.”

You bring up exactly the same point, in exactly the same words, as Kieran (Nov 25, 3:55am).

Is there evidence that users avoid phishing by reading URLs? We know that the vast majority choose super-weak passwords, we know that they’ll type credentials whether the lock is there or not. If they don’t make the effort at those two basics why would you imagine they have mastered the art of reading URLs?

The paper claims the effort of reading URLs is 164x greater than the cost of phishing. So even if you could claim that 50% of users are reading URLs we’d still be off by 82x.

Dean November 26, 2009 6:21 PM

Huh? People actually fall for phishing scams?

Come on, I don’t need to read a URL to know a phish when I see one. Then again, I’m a bit of a pedant when it comes to spelling and grammar, so when I see the botched English that is supposed to be a corporate communication my bs detector is triggered immediately.

When the scammers learn proper English I might have to be on the ball a little more, but since I never give out information unsolicited I’m not terribly worried.

Bacopa November 27, 2009 4:49 PM

I work for a major US retailer (1000+ stores). Intranet access requires strong passwords containing at least one special character and numbers, with no more than six letters in a row. Passwords must be changed four times per year for managers and once a year for everyone else. Damn good security.

Problem is, manager overrides on returns and voids require only a 2-digit ID and 4-digit code, and there are many opportunities for the code to come up on the wrong screen if you punch the wrong key. If a cashier knew the two sets of digits, the old post-void scam would be easy. Of course, there are paperwork barriers to prevent this, but what manager really checks the closing paperwork when you have fewer closers in this tight economy and you have to do the paperwork fast so you can get out on the floor and make sure those kids aren’t talking on their cell phones instead of facing the store?

Roger November 29, 2009 2:46 PM

Herley’s premise is interesting and bears examination, but in the interests of a good article he goes far beyond the support of his facts. (To start, we should note that this is an article, not a paper, since it isn’t peer reviewed.)

A list of examples as they occur to me on reading:
1. “Further, if users spent even a minute a day reading URLs to avoid
phishing, the cost (in terms of user time) would be two
orders of magnitude greater than all phishing losses.”

Unfortunately, if you are going to do a detailed economic analysis to prove your point, you can’t use made-up numbers. I don’t know what the average amount of time is that users spend reading URLs to avoid phishing, but in my personal case, 1 minute per day is a gross exaggeration .. by rather more than two orders of magnitude. My spam filters now keep out all but 3 or 4 spams per week, nearly all of which I manually delete without further ado. Only about once every 3 or 4 months do I find myself needing to “[read] URLs to avoid phishing”, and when I do it takes much less than a full minute.

This item also suffers from the bean-counter fallacy, discussed further in point 9.

  2. “The main response of the security community to these
    threats against the human link has been user education.”

If Herley has any evidence of this, he doesn’t provide it (a link to a rather obscure government website with a list of advice hardly constitutes “user education.”) Based entirely on my personal experience at several employers and numerous clients, I don’t believe it is true. There was a brief fad for this concept in the late 90’s but it never really got off the ground, and today few institutions make any effort at all to educate users of their systems. The most that one might see is an occasional poster or flyer, a method which is very cheap but is well known to be ineffective unless delivering a very simple and compelling message.

On the contrary, the security community — which of course is by no means a monolithic entity — has spent far more effort in finding ways to make the “human link” less vulnerable.

  3. “a study of password habits in 2007 [26] found that users still choose the
    weakest they can get away with, much as they did three decades earlier”

In 2006, on this blog, Bruce published an analysis:
http://www.schneier.com/blog/archives/2006/12/realworld_passw.html
of phished MySpace passwords which completely contradicted this result. On a system with NO strength requirements (or in fact, at one time, a restriction on the maximum strength permitted!), Bruce found that there were still some very weak passwords, but far fewer than in Klein’s and Spafford’s analyses of 1989 and 1992, and a surprisingly high proportion of very strong ones. (As many as 80% were mixed-character-set non-words of 7 or more characters, a pretty good password for a low-value account.)

  4. “The advice offers to shield them from the direct costs
    of attacks, but burdens them with increased indirect
    costs, or externalities.”

A minor quibble, but several times what Herley refers to as indirect costs and externalities are in fact simple, direct costs.

  5. “For example, it makes little sense to
    invest effort in password strength requirements if phishing and keylogging are the main threats.”

This is a non sequitur. The question of whether or not phishing and keylogging are the main threats, has no bearing on whether or not it makes sense — economic or otherwise — to invest effort in password strength requirements.

  6. “(e.g. the US-CERT advice [13] contains 51 tips, each of
    which fans out to at least a full page of text)”

While it may well be true that security advice for internet usage can be complicated, this parenthesis is misleading. The page in question indeed contains 51 sub-sections, but many of them are not “security tips” and quite a few are not relevant to all users. For example it contains sections like “Understanding Bluetooth Technology” and “Avoiding Copyright Infringement”. Also, many of these “full pages” are only 3 to 5 paragraphs. This is a minor point but I think it reflects the tone of the entire article, which is partisan rather than balanced.

  7. “Equally, if an unpatched Windows machine “is infected within 12 mins” [1], then a user may
    wonder what is the point of even basic precautions?”

Or that user might consider that patching is important, especially as most users can now do it automatically at no personal cost.

  8. “Third, the claimed benefits are not based on evidence:
    we have a real scarcity of data on the frequency and
    severity of attacks. So the absolute reduction of risk
    for any attack is speculative”

A fair enough concern — but bizarrely Herley then goes on to make analyses based on guessed values for exactly this sort of missing data!!

  9. “To make this concrete, consider an exploit that affects 1%
    of users annually, and they waste 10 hours clearing up
    when they become victims. Any security advice should
    place a daily burden of no more than 10/(365 × 100)
    hours or 0.98 seconds per user in order to reduce rather
    than increase the amount of user time consumed. This
    generates the profound irony that much security advice,
    not only does more harm than good (and hence is rejected), but does more harm than the attacks it seeks
    to prevent, and fails to do so only because users ignore
    it.”
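
For reference, the quoted break-even arithmetic is easy to verify (a quick sketch using exactly the figures above):

    p_victim = 0.01        # 1% of users hit annually
    cleanup_hours = 10     # hours each victim wastes clearing up
    budget_seconds = cleanup_hours * p_victim / 365 * 3600
    print(f"{budget_seconds:.2f} s/user/day")   # ~0.99 s, matching Herley's 0.98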

This is the main point that Herley seeks to make throughout his paper, and I am afraid it is complete bunk.

It suffers from what I call the “bean-counter fallacy” (for lack of having heard any other name for it!), which occurs again and again when accountants, social-science types and others seek to discuss security issues. The analysis assumes that the (hypothetical) 1% rate of incidence under the “treatment” scenario is a reasonable predictor of the rate of incidence under the “no treatment” scenario. This is fair enough when considering, say, natural disasters, and tends to be only mildly wrong when looking at some kinds of social issues. But in security issues it is complete and utter bunk. Any analysis conducted on this basis — such as most of this paper — will produce nonsense results.

PeterK December 1, 2009 7:26 PM

“The analysis assumes that the (hypothetical) 1% rate of incidence under the “treatment” scenario is a reasonable predictor of the rate of incidence under the “no treatment” scenario.”

The starting point of the paper is that users ignore security advice. The treatment and no-treatment cases that you describe are the same since nobody bothers with any of it.

If you want to make the case that treatment is helping can you produce any evidence? E.g. any evidence that even 1% of users pay any attention to URL reading?

Roger December 2, 2009 4:32 AM

“The starting point of the paper is that users ignore security advice.”

Which is a fairly blatant circular argument.

“If you want to make the case that treatment is helping can you produce any evidence? E.g. any evidence that even 1% of users pay any attention to URL reading?”

In some cases, yes. For example, as I previously mentioned, there is some evidence that typical password strength — even for low value accounts — is much greater than it was 17 years ago.

But you say “If you want to make the case that treatment is helping … ” Who said I did? I don’t have to prove the opposite case to find this paper dubious. In fact I’m not sure if the “treatment” is or is not working — it is certainly true that in many cases we don’t have much hard data (or at least, little publicly accessible hard data.) That is why I remarked “Herley’s premise is interesting and bears examination, … ”

But rather, I do question, very strongly indeed, whether Herley has made his case. In fact I am sceptical if it is even possible to make this case using the methods he suggests.

PeterK December 3, 2009 2:17 PM

@Roger

“The starting point of the paper is that users ignore security advice.”

“Which is a fairly blatant circular argument.”

No it’s not. An argument is circular when the result to be proved is assumed. This article isn’t trying to prove that users ignore security advice, it’s trying to explain why.

“If you want to make the case that treatment is helping can you produce any evidence? E.g. any evidence that even 1% of users pay any attention to URL reading?”

“In some cases, yes. For example, as I previously mentioned, there is some evidence that typical password strength — even for low value accounts — is much greater than it was 17 years ago.”

This is a pretty thin offering of evidence.

“But you say “If you want to make the case that treatment is helping … ” Who said I did? I don’t have to prove the opposite case to find this paper dubious.”

You did say the paper was bunk and nonsense on the basis of the difference between the treatment and no-treatment cases. The paper claims those cases are the same. You don’t have to prove the opposite, but if you’re going to call it bunk you might produce strong evidence that treatment has affected what’s going on.

Darrin March 5, 2010 11:54 AM

For years, security people have been trying to show businesses how to be secure. It is still not working.

Teach security to the user at home. Show them how to secure their home computer so that they can do their business on it, preferably for free. Teach it to their kids in school. Start now. Then show them the exact same tools and techniques and rules and safe practices at work. It won’t be something new and different, but something they are used to and know will work. They will not look at it as something that makes their job difficult and inconvenient. Show them that by using this at work it will keep them safe at home. It will stop spam and protect them from the botnets and malware that threaten to ruin their business of running their own home.

It is what will keep their kids safe online, their money in their bank, and the credit cards in their name and out of the hands of online crooks and scammers. Fraud rates would drop, and then the interest rates on all these credit cards would drop. The chargebacks on all the fraud would stop being charged back to the retailer. And the prices of all of our goods and services would drop too.

Which means the credit card companies would not be making the money they are now. They have set themselves up perfectly. Spam and malware and trojans and botnets and rootkits do not cost the credit card companies a dime.
They have conned the cons.
It’s “The Sting” all over again.

They make big business pay for whatever the credit card companies say they have to have, or it's no dice. The businesses continue to pay pentesters and compliance auditors. They continue to buy the newest big shiny blinky-light box or software because someone told them it will solve their problem. It won't. They continue to pay the chargebacks for any fraud that was committed against them, and the higher rates the credit card companies charge them. The businesses pass all of these extra costs on directly to the consumers. Us. You and me.

All businesses that do credit card business of any kind.
Even Mickey D's.
Hackers and Crackers. Just Say’n.
Your Big Mac costs more because of fraud and the credit card companies.
We pay for the fraud! YOU AND ME!!!

Believe me, brother, if it were costing them anything, they would have fixed it by now; they have more money to throw at this than anybody else on the planet, because they have all of our money already. I would not be surprised to find out that it is the credit card companies who are paying for the malware to begin with. What the thieves and malware writers don't understand is that they're paying the same high prices as everybody else, even if it is on someone else's credit card. They would be able to afford a lot more stuff on their own credit cards if it all just didn't cost so damn much. The "Cost Of Doing Business" is passed down to the working man and woman, and the working children of working men and women, and the working parents of the working man and woman, just to live and raise a family. What a Crock!

And where is the government in all this? Quietly doing whatever the credit card companies tell them to.
The government owes them more money than we do.

The banks, too, profit from all this, and do all they can to force you to use credit cards. They do nothing to defend the hard-earned money in your checking and savings accounts if you fall victim to malware and fraud on your own computer, which they have made no effort to help you protect. They actually make money off of the fraudulent transaction, in transaction fees, or from you, if you catch it in time, in stop-payment charges. Or if the credit card companies decide to garnish your accounts, the banks will give your money to them with no regard to your best interest at all. If it's just sittin' in your account then it costs them money.

Win and Win.
I Call BS!!

BS on the credit card companies, and the banks that represent them, and the governments that let them get away with it. They are the only ones who profit from the whole fiasco. As long as you are working to pay off your credit cards you are also paying taxes, with no money left to put in the bank or save for your retirement. What retirement? You will work until you die as a servant to the credit card companies, the banks, and the government. And then you die. And guess what. Then your estate gets to pay off your credit cards and your bank loans and, if there is any money left, the taxes you still owe. It is doubtful that anything will be left to bury you with or to leave to your family. But maybe they can put the funeral on the old Visa, M/C, Amex.
Win and Win and Win.
They have made it so you can't be your own banker, in control of your own money.
Which is absolutely the last thing in the world they are going to let happen: people in control of their own money.

Blasphemy. End of the World as we know it.

Damn Right!

Put the money back in the hands of the people and the government will listen to what you have to say.

These hackers and malware writers appear to be fairly good coders and would probably work a hell of a lot cheaper than what the credit card companies gouge out of businesses every year. Give them a job. Then start paying the pentesters and auditors to secure your employees at home. Show employees what they stand to lose there, and you will have them in the same mindset when they come in to work to earn that paycheck. That paycheck will now buy so much more stuff, because online fraud and spam have disappeared and prices on everything (and I mean everything, from Quarter Pounders to toilet paper) will be so much lower, because businesses no longer have to worry about chargeback fees and high interest rates (they borrow from the same folks as we do, sister).

When you want a business to see where its weakest link is, it is always gonna be Layer 8. The human element. Ask anyone in security "Where is the weakest link?" and you will get the same answer every time.
Us. We are the weakest link.
You cannot blame the user for being a stupid user if you are not doing everything you can to help them not be one. So why do we not devote our time and effort and money in this war on fraud to us, the users? Spend the money and manpower where it will do the most good. Teach us to be secure at home, where it means the most to us, and we will bring that with us when we come to work.

We can’t allow these machines to get the better of us.

Can you even imagine what this is going to be like when every Layer 8 on the whole bloody planet gets their own IPv6 address? And we all know there is no way to spoof that. Right??? Your credit card will have its own static IPv6 address. Your phone, your watch, your car, your TV and your refrigerator.
Who you gonna call? The scammers' 900 numbers. Which they can now dial for you.
What time is it? Time for Viagra.
Do you want spam coming out of your car? Pay up or no go.
Malware on your flatscreen? Pay up or no HBO.
Scammers in your fridge? Pay up or we'll turn off your freezer or freeze your beer.
Crazy Talk.

This problem must be solved now, while it still can be. Businesses should take a small percentage of the money the credit card companies are costing them and spend it on their employees' security at home. Users will see how it makes the business they do online at home safer, and with cheaper prices and lower interest rates they will still spend their hard-earned money, but now it's a better deal for everyone, not just the credit card companies. We might even have some money left over to save.
Imagine that!
And if our online transactions are safe, we might just use our own money on our own debit card to buy something instead of a credit card.
What a concept!
Take back what is ours:
Secure yourself.
Then secure everyone around you.
