Security Awareness Training

Should companies spend money on security awareness training for their employees? It’s a contentious topic, with respected experts on both sides of the debate. I personally believe that training users in security is generally a waste of time, and that the money can be spent better elsewhere. Moreover, I believe that our industry’s focus on training serves to obscure greater failings in security design.

In order to understand my argument, it’s useful to look at training’s successes and failures. One area where it doesn’t work very well is health. We are forever trying to train people to have healthier lifestyles: eat better, exercise more, whatever. And people are forever ignoring the lessons. One basic reason is psychological: we just aren’t very good at trading off immediate gratification for long-term benefit. A healthier you is an abstract eventuality; sitting in front of the television all afternoon with a McDonald’s Super Monster Meal sounds really good right now. Similarly, computer security is an abstract benefit that gets in the way of enjoying the Internet. Good practices might protect me from a theoretical attack at some time in the future, but they’re a lot of bother right now and I have more fun things to think about. This is the same trick Facebook uses to get people to give away their privacy; no one reads through new privacy policies; it’s much easier to just click "OK" and start chatting with your friends. In short: security is never salient.

Another reason health training works poorly is that it’s hard to link behaviors with benefits. We can train anyone—even laboratory rats—with a simple reward mechanism: push the button, get a food pellet. But with health, the connection is more abstract. If you’re unhealthy, what caused it? It might have been something you did or didn’t do years ago, it might have been one of the dozen things you have been doing and not doing for months, or it might have been the genes you were born with. Computer security is a lot like this, too.

Training laypeople in pharmacology also isn’t very effective. We expect people to make all sorts of medical decisions at the drugstore, and they’re not very good at it. Turns out that it’s hard to teach expertise. We can’t expect every mother to have the knowledge of a doctor or pharmacist or RN, and we certainly can’t expect her to become an expert when most of the advice she’s exposed to comes from manufacturers’ advertising. In computer security, too, a lot of advice comes from companies with products and services to sell.

One area of health that is a training success is HIV prevention. HIV may be very complicated, but the rules for preventing it are pretty simple. And aside from certain sub-Saharan countries, we have taught people a new model of their health, and have dramatically changed their behavior. This is important: most lay medical expertise stems from folk models of health. Similarly, people have folk models of computer security. Maybe they’re right and maybe they’re wrong, but they’re how people organize their thinking. This points to a possible way that computer security training can succeed. We should stop trying to teach expertise, and pick a few simple metaphors of security and train people to make decisions using those metaphors.

On the other hand, we still have trouble teaching people to wash their hands—even though it’s easy, fairly effective, and simple to explain. Notice the difference, though. The risks of catching HIV are huge, and the cause of the security failure is obvious. The risks of not washing your hands are low, and it’s not easy to tie the resultant disease to a particular not-washing decision. Computer security is more like hand washing than HIV.

Another area where training works is driving. We trained, either through formal courses or one-on-one tutoring, and passed a government test, to be allowed to drive a car. One reason that works is that driving is a near-term, really cool, obtainable goal. Another reason is that even though the technology of driving has changed dramatically over the past century, that complexity has been largely hidden behind a fairly static interface. You might have learned to drive thirty years ago, but that knowledge is still relevant today. On the other hand, password advice from ten years ago isn’t relevant today. Can I bank from my browser? Are PDFs safe? Are untrusted networks okay? Is JavaScript good or bad? Are my photos more secure in the cloud or on my own hard drive? The ‘interface’ we use to interact with computers and the Internet changes all the time, along with best practices for computer security. This makes training a lot harder.

Food safety is my final example. We have a bunch of simple rules—cooking temperatures for meat, expiration dates on refrigerated goods, the three-second rule for food being dropped on the floor—that are mostly right, but often ignored. If we can’t get people to follow these rules, what hope do we have for computer security training?

To those who think that training users in security is a good idea, I want to ask: “Have you ever met an actual user?” They’re not experts, and we can’t expect them to become experts. The threats change constantly, the likelihood of failure is low, and there is enough complexity that it’s hard for people to understand how to connect their behavior to eventual outcomes. So they turn to folk remedies that, while simple, don’t really address the threats.

Even if we could invent an effective computer security training program, there’s one last problem. HIV prevention training works because affecting what the average person does is valuable. Even if only half the population practices safe sex, those actions dramatically reduce the spread of HIV. But computer security is often only as strong as the weakest link. If four-fifths of company employees learn to choose better passwords, or not to click on dodgy links, one-fifth still get it wrong and the bad guys still get in. As long as we build systems that are vulnerable to the worst case, raising the average case won’t make them more secure.
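(A back-of-the-envelope illustration of this weakest-link arithmetic, in Python, with made-up numbers and a crude independence assumption: even a big improvement in the average employee barely changes the attacker’s odds of getting at least one foothold.)

```python
# Hypothetical figures: 1,000 employees; training cuts each person's chance
# of falling for a given phishing email from 20% to 5%.
employees = 1000
p_click_before = 0.20
p_click_after = 0.05

for label, p in (("before training", p_click_before), ("after training", p_click_after)):
    # Probability the attacker gets at least one click, assuming independent decisions.
    at_least_one = 1 - (1 - p) ** employees
    print(f"{label}: P(at least one click) = {at_least_one:.6f}")

# Both cases print 1.000000 -- raising the average doesn't change the worst case.
```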

The whole concept of security awareness training demonstrates how the computer industry has failed. We should be designing systems that won’t let users choose lousy passwords and don’t care what links a user clicks on. We should be designing systems that conform to users’ folk beliefs of security, rather than forcing them to learn new ones. Microsoft has a great rule about system messages that require the user to make a decision. They should be NEAT: necessary, explained, actionable, and tested. That’s how we should be designing security interfaces. And we should be spending money on security training for developers. These are people who can be taught expertise in a fast-changing environment, and this is a situation where raising the average behavior increases the security of the overall system.

If we security engineers do our job right, users will get their awareness training informally and organically, from their colleagues and friends. People will learn the correct folk models of security, and be able to make decisions using them. Then maybe an organization can spend an hour a year reminding their employees what good security means at that organization, both on the computer and off. That makes a whole lot more sense.

This essay originally appeared on DarkReading.com.


Posted on March 27, 2013 at 6:47 AM

Comments

notmyopinion March 27, 2013 7:19 AM

And all this time I thought it was a 10 second rule for food!

More seriously, how usable are the “folk beliefs” about security – particularly how usable for designing systems around? Can we use them wholesale – are they even consistent between different folk? Is there good data / research on what these beliefs are?

And what about when they are wrong? Some re-education would seem to be needed – like we tried with “Clicking links in spam is dangerous…” or whatever. How achievable is that?

Zoltan March 27, 2013 7:54 AM

The 3-second rule (called the 5-second rule there) was busted by Mythbusters. They concluded that other properties of food, like moisture etc., have a much greater effect on contamination than time.

David March 27, 2013 7:55 AM

Having read this, it occurs to me that, as he often does, Bruce is taking an extreme position to highlight an underlying truism. Making someone aware of an issue is not going to make them change their behaviours. Make them understand why it matters to them (and why they should care) and you have a fighting chance. Security Education should be the start point and WIFM (what’s in it for me?) has to be behind any conversation. Many Facebook users restrict access to their personal details to Friends only, because they understand sharing ‘those’ pics with the world is a bad idea (a win for education). At the same time many Facebook users are totally promiscuous when deciding who they will accept as friends, negating many of the benefits of the restriction! It’s an ongoing battle, and we seem to be starting to understand what tools we should be using, but giving up and relying on the techies to save us is not the answer.

JeanFG March 27, 2013 7:58 AM

As much as it saddens me, I have to agree with Mr. Schneier’s reasoning: I have seen far too often a “good” Security Awareness Campaign defeated by “because it’s convenient for me”/“because I want it”. The “possible” consequence is unfortunately just that: a possibility. Like being struck by lightning or dying in a donut avalanche. So people tend to think that (a) it won’t happen and (b) if it happens, it will be to someone else.

However, I think this is a tug boat (thug boat?) effort: by pushing in the same direction over and over, things will start to permeate people’s minds.

And maybe evolution will kick in: those not fit to survive …

Simon March 27, 2013 9:07 AM

I don’t think the essay represents an extreme position at all.
Security has to be the least innovative field in all of technology. You’re stepping on some big status quo toes here. But you’re in good company. Over the past two years several cover articles in information security and networking publications have blown the whistle on all the me-too products that companies have blown enormous amounts of money on, only to be told later that the reason they failed is that their users were dumb.

wiredog March 27, 2013 9:19 AM

“We should be designing systems that won’t let users choose lousy passwords”
Yes. And that force users to change them frequently enough so that if one is compromised it won’t be for long. So you end up with rules like “Must be at least 15 characters long, at least one uppercase letter, at least one lowercase letter, at least one number, at least one special character.” Oh, and change it every month.

And of course writing it down is a security violation.

All of the above applied at my last job. Where we had multiple logins on different password change cycles.

AlanS March 27, 2013 9:22 AM

@Bruce

I tend to agree with your commentaries, but not this one.

“One area where it doesn’t work very well is health”.

Your comments on health are simplistic. Changing behavior is usually hard but lots of public health campaigns have had major successes although it’s often taken a lot of time and effort to figure out what works and what doesn’t. People who criticize security training and education appear to dismiss it because it doesn’t pay big dividends immediately. God help us if people working on public health took a similar approach.

“One area of health that is a training success is HIV prevention. HIV may be very complicated, but the rules for preventing it are pretty simple.”

Tell that to some of the people who have worked on these issues. CDC and other health organizations poured a lot of money into research and evaluation to figure out what techniques worked best in different communities and populations to promote condom use, etc. When it comes to changing security-related behaviors the serious work just hasn’t been done. It would pay dividends to study how effective public health programs succeed.

Also, like so many other people who talk about training and education you pose this as a choice between different strategies (“money can be spent better elsewhere”). Education and training aren’t optional. You have to do them along with all the other types of controls you would put in place. Your question can be turned around: For those of you thinking that training users in security is a waste of time, I want to ask: “Have you ever met an actual user?” Try imposing all your PITA controls without an explanation of their purpose and see what happens. Or, maybe someone, somewhere has figured out how to do security that works well and is completely invisible to the end user?

“HIV prevention training works because affecting what the average person does is valuable. Even if only half the population practices safe sex, those actions dramatically reduce the spread of HIV. But computer security is often only as strong as the weakest link.”

But it’s not all or nothing. We might as well give up now if that’s the measure of success. Say you do training to prevent phishing (which can be very effective): you don’t have to reduce the number of effective attacks to zero to reap benefits. In fact, reducing successful attacks by even a modest number might be beneficial. For example, if you reduce the number of people clicking on things they shouldn’t, there’s potential for a substantial benefit because every time an incident happens you have to go through a whole incident response process. It’s a lot of work and it’s expensive. And often it’s dealing with stupid security issues where someone’s e-mail account is compromised for the purpose of sending spam. It’s quickly detected and stopped, but the staff who have to work on the response could be working on detecting and preventing more serious security events.

Petréa Mitchell March 27, 2013 9:26 AM

Great essay. And the main point applies more generally to all human-computer interaction– the approach of “The users are just using it wrong and we’ll fix it by teaching them to use it right” never works.

Tito March 27, 2013 9:36 AM

The key point from this is “We should be designing systems that … don’t care what links a user clicks on”.

Why is visiting certain websites something that can take over my computer? Why is opening an attachment something perilous? Why do I have to trust a video game developer to not steal the banking creds in my browser?

This is flat out ridiculous. The problem is not that a “user did something they shouldn’t have”. That is the pinnacle of victim-blaming.

I’ll even go further and discuss the issue with “developer training”. The solution to buffer overflows wasn’t “more training”… it’s using a language that doesn’t let you make the mistake. The solution to XSS won’t be even more OWASP Top 10 training, it will be web frameworks that make it something you don’t have to think about when you just want to make the next calendar app.
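(A minimal Python sketch of the point about frameworks, with hypothetical function names: when escaping is the default behavior of the rendering layer, as auto-escaping template engines do, the XSS mistake becomes hard to make rather than something every developer has to remember.)

```python
from html import escape

def render_comment_unsafe(user_text: str) -> str:
    # Training-dependent: the developer must remember to sanitize user_text.
    return "<p>" + user_text + "</p>"

def render_comment(user_text: str) -> str:
    # Design-dependent: escaping happens by default, so forgetting isn't possible.
    return "<p>" + escape(user_text) + "</p>"

print(render_comment('<script>alert("x")</script>'))
# <p>&lt;script&gt;alert(&quot;x&quot;)&lt;/script&gt;</p> -- shown as text, never executed
```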

Yes, there are extremely difficult security engineering problems. The people to figure them out are security engineers with lots of review, etc. Then we need to encapsulate that expertise into libraries and tools and languages that make it easy to use. Yes, I’m oversimplifying, and it’s more complicated than that, and we can’t encapsulate everything.

Right now we are building houses that require crossing live wires just to turn on the lights… and then blaming the occupants when it burns down. There is a vast and impressive infrastructure, built with lots of expertise, skill and technical understanding that lies behind it all. But when I am at home, I don’t need to think about phase shifting transformers or resistance losses…

I flip a switch.

Justin March 27, 2013 9:44 AM

While I do not disagree with your examples above, I’d like to point out that you’re grouping awareness and training into the same bucket, something I’ve been trying to push my colleagues to stop doing, as they are very different concepts.

Training is the act of providing specific knowledge about something in order to ensure the person being trained can regurgitate it back. Awareness is building an affinity for a concept at a subconscious level, and is far more useful than training.

So you can “train” someone that they must have an eight character, complex password, and even provide consequences for not complying; but it is through awareness that they will really comprehend why it is in their best interest and self-select (consequences or not) to use complex passwords on their own. The holy grail of awareness is when they take it a step beyond and evangelize to others.

We should stop all but “required because we have too many lawyers” training, and focus entirely on awareness tactics.

Mike B March 27, 2013 9:56 AM

I think that getting people to be careful about clicking on attachments and malicious links is probably one of the areas where education and awareness can make a difference. The model is EXACTLY like HIV where a few poor practices lead to the spread of infections.

I think another good general awareness skill is getting people to realize when something is authentic vs. just posing as authentic. Just because the e-mail says it’s from your bank doesn’t mean it is. Building a healthy skepticism is possible through awareness training, and it’s knowledge that can help people in all aspects of their lives, even outside their organization.

mchopra March 27, 2013 10:04 AM

Bruce points out correctly that when it comes to security, we are only as good as the weakest link.

Therefore, I don’t know if I can also buy the argument that we can become aware of security ‘organically’.

And until the design and security folks can bulletproof the technology exposure (who knows when that will be, if ever) – I think we are throwing in the towel if we say that training is a waste of time.

I think security is more analogous to his driving example. If you want to drive, you have to pass the test. That is not a driver’s choice.

Bruce Kaalund March 27, 2013 10:43 AM

The fact of the matter is, security training is a poor and cheap substitute for investing in robust systems. It is cheaper for a company to create policies and training presentations for everyone to view than to insist on (and pay for) robust products from their industry partners. To many managers, the risk of an event simply does not justify the cost in product research, process development, and time (see TJX).
Please understand, this is not a cynical position, just the cold, hard facts about the business needs. Where the risk of an event can potentially create significant negative impact on ‘shareholder value’, the business takes a whole different view.

AlanS March 27, 2013 10:47 AM

I agree with the concluding points about engineering and design and limited demands on users. Lots of stuff could be fixed with better developer education now. But a lot of what you are proposing is pie in the sky.

For example: “We should be designing systems that won’t let users choose lousy passwords.”

You can enforce strong password requirements on lots of systems now, but if you do it without an explanation of the requirements your users are going to come after you with pitchforks, or more likely they’ll go to management, or, even more likely still, they are management, and they’ll come after you with pitchforks. So you have to educate management and then all your users about strong passwords and why what they are doing at the moment is lousy, otherwise your system requiring secure passwords isn’t going anywhere.

So assume you’ve got everybody on board, they all understand what a strong password is and they think strong passwords are a good idea: how are they going to do it? You’re requiring them to have a strong, unique password on dozens of sites. Most of them have no clue about password managers (your own recommendation). They have no clue how to do what they are being asked to do. This is going nowhere fast unless you train them.

Work is being done on “designing systems” to address the problem. For an example, see http://www.wired.com/wiredenterprise/2013/01/google-password/?cid=5394044. Maybe this or something like it will be very effective and available soon, or maybe it’s like a miracle cure for cancer. Who knows, but it’s not a solution now.

JR Fezziwig March 27, 2013 10:52 AM

Where I work, we have very robust systems that are audited by external parties up to 10 times a year.

We also do company-wide security awareness training semiannually and send out a security and privacy newsletter monthly.

The results are positive and quantifiable. A single example: the number of “actual users” (to borrow Bruce’s term) who report phishing attempts to the Security Office is rising — often from persons who report that they recognized the phishing attempt from the training itself.

Our training and our newsletters are customized for our company. We don’t slap together a couple of canned items and call it good — and I think that this is one reason that for this company, training is actually working to some extent.

Mailman March 27, 2013 11:16 AM

“sitting in front of the television all afternoon with a McDonald’s Super Monster Meal sounds really good right now. ”

It does?

Mailman March 27, 2013 11:22 AM

I think user training should focus less on “awareness” and more on actual training on processes and procedures. So it’s not about “accept these principles because it’s beneficial in the long term” but rather “respect these procedures or else…”

So yes, security systems must be well designed, but as long as humans are required to operate them, those humans will need to be trained to use them.

Yossarian March 27, 2013 11:34 AM

The issue here is a particular instance of a general problem: how do you motivate people to implement a (security) standard?

The key word is motivate: motivation comes from consequences, and ongoing auditing creates consequences.

An unaudited standard is a spotty standard.

In this case an audit would mean creating live situations to test response and then measuring the outcome. Being the subject of an audit review is a marvelously focusing experience. There is plenty of stick involved. Not failing the audit and not being the subject of review is a pretty good carrot.

Of course all this is expensive and that is a good reason why it is not done (very often). It requires buy-in at the top to be successful.

Security education is a good and valuable thing but creating the motivation to act on the education is the sticky bit.

InsanityBit March 27, 2013 12:04 PM

I’ve written a lot of really similar stuff.

http://www.insanitybit.com/2012/12/05/educated-users-are-unicorns/

And I’ve been saying it for years. And so has Microsoft, as I link to in that post.

Good post, anyways, I’m glad someone else out there realizes this. I see so much blame of users every day. “They got a virus, must have clicked on something” as if that’s so wrong.

As Microsoft states, it’s incredibly difficult for users to care about security, because, from a rational cost-benefit view, they draw a conclusion that it’s not worth it.

RH March 27, 2013 12:39 PM

3 second rule, 5 second rule, 10 second rule. While it may have been busted by Mythbusters, the rule still holds true. Of course, the real rule is “don’t eat stuff that’s been on the ground for an unknown period of time.” The fact that they chose an arbitrary number of seconds is just an instance of a NEAT rule that’s provably false being more successful than the more nuanced correct rule. In this case, the false negatives (thrown out food) are deemed acceptable.

Rob March 27, 2013 12:46 PM

It is true that 1% compromise kills even with 99% successful defense. Given this risk, it is logical to eradicate the threat.

“Tomorrow, I will ride out to the Indians. I do not know the wisdom of this thinking but I’ve become a target, and a target makes a poor impression.” ~ Dances with Wolves.

Calvin March 27, 2013 12:58 PM

Heh, Microsoft as the bastion of security – whose products are you most likely to end up with, bogged down by multiple “helpers”? Chrome on Windows? Ubuntu? Nope, it’s the IE-Windows dynamic duo. The same goes for your rant about knowing which links to click on. How long did it take them to get Outlook Express web safe? Just in time for nobody to be using it?

HJohn March 27, 2013 4:18 PM

I talk out of both sides of my mouth on this issue.

On one hand, I do not think security awareness training is worth spending a lot of money or time on.

On the other hand, I do believe it is reasonable to educate users. This is particularly beneficial when the person is being trained for the first time, and simply did not understand before it was explained.

This is another case where there is no silver bullet. Training is just one more layer in security. We should not depend on users being aware, but it is good to increase their awareness in case another layer fails.

I would say a lone security mechanism that fails 5% of the time is a failure. But when that failure is compensated for by another layer that fails 5% of the time, then another and another, we are getting into the realm of acceptable risk.
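(A rough sketch of that layering arithmetic in Python, assuming the layers fail independently; real failures are often correlated, so treat the numbers as illustrative only.)

```python
# Hypothetical: each defensive layer fails 5% of the time, independently.
layer_failure = 0.05
for layers in (1, 2, 3):
    complete_lapse = layer_failure ** layers
    print(f"{layers} layer(s): complete lapse rate = {complete_lapse:.4%}")
# 1 layer(s): complete lapse rate = 5.0000%
# 2 layer(s): complete lapse rate = 0.2500%
# 3 layer(s): complete lapse rate = 0.0125%
```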

I used to work in a hospital kitchen. They taught us to wash hands properly. Some didn’t listen, some did. Given how little time and effort there was involved in the teaching, it was probably overall a benefit. Add that to the fact that we had gloves (that weren’t always worn, but it was easy to spot when someone didn’t), tongs, etc., and the patients were probably pretty safe. No one defense alone did it, but together complete lapses were rare, and consequences even rarer.

Such is security. Systems should be as secure as possible, but it doesn’t hurt for users to be trained… within reason. Wasting a full day of productivity on 10 year employees who aren’t going to be told anything they don’t already know and already either will or will not adhere to isn’t within reason.

Alex B March 27, 2013 4:28 PM

I’m not entirely convinced by the HIV example. Certainly in the UK there has been a very large rise in sexually transmitted infections over the last decade, suggesting that it may not be the case that general education has reduced the incidence of HIV.

tz March 27, 2013 7:47 PM

There is also some basic economics.

Insecurity from an ordinary user is an “externality”. Like pollution, he doesn’t personally bear the cost. Others do.

Much security is about economics. If it will cost $10,000, $10,000,000, or $10 billion to break a password, it won’t be worth it depending on what it gives access to. A safe with several million in gold needs different protection than one with a few identity documents.

I don’t know an easy way of making it economically advantageous. Perhaps a lottery so if users DON’T click bad links (or even report phishing) they share in a monthly pot? Some things work better positive instead of negative.

In any case, you need to change the incentives so that it is in each and every individual’s interest.

Even something like having two parallel networks, one inside and one outside, with two computers on every desk. One faces inward and is completely isolated. One is public. Crossing can happen but is well protected (e.g. you can send and receive mail on the “secure” network, but it is restricted to text and formatting).

But as with “no one gets fired”, or security as an excuse to fire, the actions of the companies and the management and the rest speak louder than any policy.

S. March 27, 2013 8:19 PM

1) Alan is correct about CDC and NIH funding for behavioral research, particularly during the AIDS crisis of the 1980s. However, what’s important to remember is that those interventions only became effective once we focused on listening to the gay community. Or, more accurately, once they began lobbying Congress and the FDA/NIH/CDC and forced us to listen to them.

However, I think it’s extremely important to remember that the behavioral interventions researchers developed have not been transferable to other communities, who were never involved in the research and who have completely different risk factors. HIV/AIDS incidence and prevalence have been rising for well over a decade in the U.S. in African-American communities due to their markedly higher incarceration rates, which is most likely due to a combination of structural racism and, well, just good ol’ racism. While none of the epidemiologists and biostatisticians really expected the interventions to work in Africa or Asia, most of us were surprised that they weren’t somewhat transferable to African-Americans. IIRC, it was the psychologists who thought the interventions would be universal, so they had the wind knocked out of them. If the interventions don’t work within one country for one disease, then I’m not sure you can expect them to work for a completely different field.

I’m not sure if/how you’d go about applying these lessons in computer security to develop better computer security procedures, but that’s because I’m not entirely sure what the lessons are. Use stigma from outing a discrete and insular minority to create political power? Get NGOs to lobby relevant Administrative agencies for funding instead of their Congressmen directly? Ignore the question of government funding entirely and just do your own research via grassroots fundraising? These are the lessons various cancer groups have drawn with greater (breast) and lesser (lung) degrees of success. I don’t know if they can be ported to a non-medical field at all.

2) This is my pet peeve, and is tangential. Obesity is not as simple as CICO (calories in, calories out). (a) Researchers still don’t really know what ultimately causes obesity, nor (b) do we really know how to effectively treat it without surgery. And even then, research is beginning to suggest that an increasing number of surgeries fail in the long term. Behavioral interventions only work if you understand a disease’s cause and/or how it is transmitted, so they’re ineffective if you don’t understand those aspects of the disease you’re dealing with. /tangent

Bill March 27, 2013 9:14 PM

Purely “Security Awareness” training to the general user base, as in “These are the threats” without giving them strategies on how to deal with these threats, is a waste of time. The general user doesn’t know what to do with this information. You need to train them on how to deal with the threats. In fact, training includes practice.

By training your users on how to deal with the threats you will be reducing the number of security incidents that your users cause. Since groups of people tend to fall along a bell curve, you’ll be shifting the curve to the right.

If you believe in the layered security approach, the users are a layer. Improving this layer will be beneficial. Yes, people do click links when they are in a hurry; the point is that if you can train them so that the group as a whole does it fewer times, or, failing that, so that they notify you immediately when they did, you’ve improved this layer.

I strongly agree that the user shouldn’t be able to click a link or visit a site that compromises her/his machine. This is a single layer. Why don’t the user permissions prevent launching executables? Why aren’t there more layers between the user and the system?

I do agree that the developers need training, but the people who write their paychecks need incentive to allow them training and time to implement good security design. Software companies need to be held liable for their bad security designs. If you buy a product like a car and the wheels fall off when you’re driving, you can sue the manufacturer. If you buy an OS and a banking trojan infects your machine through a myriad of OS bugs and steals all of your money, your OS manufacturer is off the hook. Put that OS manufacturer on the hook and you’ll see an improvement in Security design. There will be an incentive.

bill March 27, 2013 9:31 PM

Medical Analogy

I think the medical analogy is flawed in that the degree of belief in non-scientific modalities (acupuncture, chiropractic, etc.) and misinformation about what is healthy is more extreme than misperceptions about computer security. In fact, health can be like politics: the truth can be less important than the individual’s world view. Tell someone that chiropractic is likely to not help and possibly dangerous, you’ll get an emotional response out of many people. Something you’re not likely to see in anything you say with regards to security. Moreover, you can demonstrate good security practices to people and show, in a sandboxed environment, the effects of not following them with, for example, a banking trojan. You can show the harm right now. With health, beyond the misinformation, it is a long-term proposition. So the scope of security from a user perspective and the scope of health are different. One is short term (“I got a computer virus”), one is long term (“by being overweight I reduce my life span statistically”).

PS Not meaning to be a wet blanket. 🙂

itgrrl March 27, 2013 10:06 PM

I think that at least part of the problem is that for the average user, compliance with good security practice involves a buttload of stick, with not a carrot in sight. What do you suppose would happen in an organisation that backed its “we take security very seriously” rhetoric with financial incentives? $50 for the first person to report an instance of a phishing email doing the rounds. $100 for turning in a USB stick found in the parking lot to the security staff rather than plugging it in to “see if I can find out whose it is”. $500 for identifying a significant security risk in an existing process or system, and $1,000 for identifying a serious or critical risk in a process or system.

I suspect that you might be able to turn a corporate security culture from “It’s a PiTA and who really cares so long as I do the bare minimum so as not to be fired?” to “Security bug hunt, woohoo! I’m gonna make some cash!”

Has anyone seen this sort of approach implemented consistently over an extended period? Success or failure?

joequant March 28, 2013 6:24 AM

I have seen effective security awareness training programs, but these tend to be extremely “practical.” I.e. here are some scenarios and this is what you should do in this situation.

“Don’t click on attachments from people that you don’t know about” isn’t what gets taught. “Here is the phone number that you should call if you get a weird attachment” is more useful. “Here is the other phone number to call and what you should do if your manager doesn’t follow proper security procedures.”

Also one thing that a good security program does is to raise awareness and make it clear that you won’t get into trouble if you do the right thing. If you lose your company smartphone in a taxi, and you recover it, then you are required to report this as a possible security breach. Convincing people that they won’t get into serious trouble for reporting things like this is part of good security training.

joequant March 28, 2013 6:34 AM

itgrrl: What do you suppose would happen in an organisation that backed it’s “we take security very seriously” rhetoric with financial incentives?

You’ll end up with perverse incentives. If you just get the organization to avoid bad incentives (i.e. you won’t get fired if you report a security issue that you were involved with), you are doing fine.

One reason that there is a special number to report bad e-mail attachments is that those shouldn’t make it past the firewall, so if you are even seeing them, something broke.

Bill: Put that OS manufacturer on the hook and you’ll see a improvement in Security design. There will be an incentive.

I don’t think so. You’ll see that OS manufacturer spend more money on lawyers than on security and on hiding problems rather than dealing with them. One other thing that could happen is that the OS manufacturers go out of business, and people start writing OS’s in house, which gets you both inefficiency and insecurity.

One reason for using COTS is that commodity software tends to have known failure modes. It’s pretty clear the 1000 ways Windows 7 is insecure. If you sue Microsoft out of business, then people will write their own software and you end up with software with unknown failure modes.

John Ultra March 28, 2013 6:44 AM

behavioural economics suggest that individuals tend to act more on things which will result to a gain or where inaction will result to a lose or penalty. In terms of Internet security for everyone, aside from improving the pedagogy, i believe it is also important to incorporate some kind of reward system in the strategy for increasing security awareness and skills among ordinary technology users. This reward maybe personal in nature, which i think is the ideal case mostly covering personal motivation to practice security (similar to an individual reasoning for practicing safe-sex for example so as not to acquire STDs :P) and secondly, institutional (like how spreading STDs is against the law of most countries, but this covers laws and policies across international boundaries in which I believe is logistically harder to enforce).

A personal reward system could vary from one individual to another, but it is a main motivation for an individual to practice security because the effects of doing so is detrimental immediately to himself, just as when one does not practice safe sex he will for sure reap its harmful effects etc. However, not all individuals in the Internet store highly valuable information on it which are worth protecting. In such cases, these individuals couldn’t care less about the security vulnerabilities and threats lurking around the net, so for them the immediate risk of not doing something is close to nothing. And there is where the institutional form of reward/penalty system ideally would come in. But again, such a system could itself be harder to secure operationally and could result to abuses, prone to wrongful accusation and imposition of penalties and so on.

So I agree, perhaps it’s not really a good idea to let ordinary users do the security jobs for you.

joequant March 28, 2013 6:45 AM

AlanS: So you have to educate management and then all your users about strong passwords and why what they are doing at the moment is lousy, otherwise your system requiring secure passwords isn’t going anywhere.

And at that point you start wondering if you are doing the wrong thing, and maybe you should use two-factor authentication and dongles. If you have to spend too much effort “educating” then maybe the approach is wrong. The problem is that if you have to pull teeth to get people to do something, then you should expect that 10% of them won’t do it, and if that happens then your system isn’t going to be usable anyway.

Also you have to think of compliance rates. For example, if you tell people to call number X if they get a weird attachment, the results are pretty effective even if you have 10% or 5% compliance. All you need are two or three people to call the security desk at which point people realize that there is something that isn’t blocked by the firewall and something that they should fix. There are a lot of things in which you can get improvements with modest compliance rates. On the other hand if you need 99% compliance to get something done, then you aren’t going to get it through a training program.
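(The mirror image of the weakest-link arithmetic, sketched in Python with invented numbers: for detection, the defender only needs any one of the recipients to comply, so even very low compliance rates pay off.)

```python
# Hypothetical: a phishing run reaches 200 employees and only 5% of them
# remember to call the security desk (decisions assumed independent).
recipients = 200
p_report = 0.05
p_any_report = 1 - (1 - p_report) ** recipients
print(f"P(security desk hears about it) = {p_any_report:.5f}")
# Very close to 1 -- terrible compliance is still enough to surface the attack.
```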

John Ultra March 28, 2013 6:55 AM

I also believe that training which could help personal security awareness on a per-individual basis is always better. I guess an Internet community where the model for security is “to each his own” is a better perspective from which to think of this problem. Solving the problem, one guy at a time.

joequant March 28, 2013 6:57 AM

Something else that’s important with financial institutions is that a lot of the interactions involve communications with people that are “outsiders.” One thing that people have figured out is that leaving a group of good friends alone with large sums of money is a bad idea, so you have procedures to make sure that whenever there is a pile of money, that there is someone who is an outsider around monitoring things.

A lot of what goes for training in financial institutions involves explaining social rules and procedures, some of which are non-obvious.

joequant March 28, 2013 7:04 AM

One thing about infectious disease training is that it can be effective because some relatively minor changes in behavior can drastically reduce the growth rate, and reducing the growth rate in an exponential process makes a big difference. Also infectious disease training is useful because you can often narrow things down to “do this or don’t do this” where doing this or that is a relatively minor change in behavior.

paul March 28, 2013 9:09 AM

There’s an implicit assumption in the bit about the dangers of letting a user choose a lousy password, namely that once you have regular-user access to a system, getting administrator access is pretty much a given. That’s probably true, but it’s not really the regular user’s fault.

HJohn March 28, 2013 9:30 AM

@itgrrl
I think that at least part of the problem is that for the average user, compliance with good security practice involves a buttload of stick, with not a carrot in sight. What do you suppose would happen in an organisation that backed it’s “we take security very seriously” rhetoric with financial incentives? $50 for the first person to report an instance of a phishing email doing the rounds. $100 for turning in a USB stick found in the parking lot to the security staff rather than plugging it in to “see if I can find out whose it is”. $500 for identifying a significant security risk in an existing process or system, and $1,000 for identifying a serious or critical risk in a process or system.


I like that you think outside the box, but this would create a lot of dangerous incentives.

You’d end up with phishing scams and dropped USB sticks in the parking lot constantly, probably by the very people that claim the reward.

999999999 March 28, 2013 9:35 AM

Users’ time is too valuable to waste on penetration defense.
If security was important there would be a person (or 3) who does real-time network monitoring. The NoSuchAgency is going to use 5000 people to do exactly that, monitoring everyone all the time and saving all the information in a massive hard drive in Utah.
If your 100-500 user company has a credible threat, maybe employee number 101 can be the “narc”.
(There are so many possible weaknesses in this approach that it boggles the mind, but I can see how it would work better than “training”. It would also be easier for people to go to “Bob the internet gatekeeper” and ask rather than just get a SITE BLOCKED popping up.)

Dewi Morgan March 28, 2013 1:48 PM

I can hear the cries now: “Yes, please don’t train your users. Some of us rely on social engineering tactics, and security-aware procedure-following jobsworths are much harder to play!”

When I worked in network security, our clients would often get third parties to do security audits. These would often have people call us, fishing for information on the company structure and procedures, asking for exceptions to be added to the firewall rules for an urgent video conference, and so forth.

To my knowledge, none ever succeeded: we’d have heard if they had, because we’d have lost a contract. We only ever had positive reports passed to us.

Training is important, for information gatekeepers. Any employee, especially those with a phone or email address, can be a gatekeeper. Without correct procedures and systems, gatekeepers will leak.

Train those who can be trained, disable the remainder both by restricting their access (to information and to systems), and by enforcing procedures that cause probes to be forwarded to central gatekeepers.

This isn’t that hard.

A question about the network? They all go to the network security group.

A request for a change to the network? They must all be authorized by Bob, no matter how urgent. Yes, we have him on 24/7 call.

A request for a copy of the company phone directory from an employee who’s out of the office at the moment, could you send it to my yahoo account because I can’t get to my bloody work email from this hotel? Now that is where that expensive security training pays off!

Lisa March 28, 2013 1:53 PM

First let me say that most security awareness training sessions are done improperly, often by people who are not qualified to be security managers.

Rather than just stupidly reading off a slide deck, it is best to inform the audience on how the material presented directly relates to them and the company.

Typical feedback from people is that they thought cloud storage was secure, not knowing that the 3rd-party providers have access to everything that was not previously encrypted by you, with keys only you know.

The most important part of our company’s security awareness training is to stress that we have a no-fault policy with regard to lost or damaged devices (laptops/tablets/phones, keys, access cards, VPN tokens, ID badges, etc.).

We let them know that the company does not really care about the occasional loss of thousands of dollars’ worth of hardware; what it cares about is the potentially millions of dollars of damage that could occur if sensitive data is leaked. We stress that if employees notify the company immediately, they will not be blamed, and it will give the company time to enact countermeasures (revoking credentials, remote wiping, etc.) to minimize damage to the company. However, if an employee delays reporting, it is grounds for immediate dismissal.

Especially in Asia, where there is a culture of not losing face, Security Awareness Training lets them know that they will lose more face by not reporting and being fired than they would by making an accidental error in losing a sensitive device.

I also let them know about a previous instance where the CEO lost their laptop, to stress that it could happen to anyone in the company, without consequences, when immediately reported. It also shows how full-disk encryption and other countermeasures were able to minimize damage to the company.

Andrew March 28, 2013 6:00 PM

I wish I had time to read the comments right now but I can’t at the moment – will do so tomorrow.

Here’s the reality from my standpoint as an Infosec consultant for SMBs:

1.) Clients (business stakeholders) take for granted they need some security infrastructure (appliances, limited software).
2.) They want to do it for as cheap as possible while spending enough to feel that they did it right
3.) Threats are currently an issue, and not just to the client but also their ability to pay me

Schneier’s stance seems to me to assume 1.) that there are implementable security solutions (which BSD for workstations is not) and 2.) that these solutions are available, or are as effective on a limited budget, as security training.

He’s right, training isn’t perfect, especially due to weakest link issues, but it’s the best we can usually do. I don’t care if staff of a client has to come to a free-to-the-public presentation I give instead of paying me for it – with the systems that are almost everywhere today THEY NEED IT. At least something.

I focus on accountability. Make them understand that having no care for security can cost them their job, and this happens often. Explain that if they don’t rotate their credentials in a responsible way, they are accountable: as far as my auditing is concerned, they committed the fraud. Getting infected isn’t just an inconvenience along the lines of waiting for IT to give you a new laptop.

Yeah, it would be fantastic if we had some dependable solutions. But we don’t yet, and maybe never will. And guess what that “if ever” part depends on … security training. Until developers can get their act together, and that’s another case of weakest link, we’re all exposed to uneducated or uncaring users. And you can’t care for what you don’t understand.

joequant March 28, 2013 10:02 PM

Also for a computer professional, $1000 is a piddling small amount of money. If you want to incentivize people to do things you have to look at $10,000 or $100,000.

The other thing about security is that you have to worry about managers and financial incentives. People worry a lot about low-level workers because they get paid crap, so they are easy to turn. If you offer $5000, you’ll likely find some bank teller who will give you a copy of the corporate directory, but you aren’t going to get a managing director who gets paid $500,000 a year. However, even then you still have to worry about security, because the managing director might think of a clever way of getting themselves a $1 million bonus.

One thing about banks is that they don’t think of computer security as something separate, but rather as a general concept of “operational risk.” The big money security issues are those in which someone violated procedures and the bank loses several billion dollars (which has happened).

Also operational security is thought of in terms of “compliance.” The thing about managing directors is that they can think of a lot of clever legal ways of giving themselves bonuses, and the same governmental regulators and groups that figure out ways of preventing that are involved in computer security, so “security training” in banks is part of an overall effort at “compliance training.”

Something about PDF phishing attachments is that you have to keep them in perspective. Yes, they are bad, but they didn’t destroy the world economy. The people involved in security and compliance in banks are more concerned with stopping the actions and behaviors (almost all of which were legal) that did, and so when the government has a meeting with bankers, phishing might be on the agenda, but it’s a much lower priority than bonus policy.

joequant March 28, 2013 10:18 PM

Morgan: A request for a copy of the company phone directory from an employee who’s out of the office at the moment, could you send it to my yahoo account because I can’t get to my bloody work email from this hotel? Now that is where that expensive security training pays off!

And in banks, part of security training involves getting a script in which lets you handle certain situations. For example, if you have a random person calling and asking for information, there is a 90% chance that they are a phisher, but there is a small chance that they aren’t, and so you have a polite and helpful script for what to do. One thing about phishers is that if you work for a financial institution, you will routinely get phishing calls.

The other thing is that part of security training involves teaching proper procedure. If your manager gives you a direct order and threatens you with getting fired if you don’t e-mail them a document, here is the procedure….. Also it’s setting up a culture. It’s pretty clear where I work that if a manager orders you to violate security procedure, and you refuse, it’s the manager that is going to get disciplined and you are going to get rewarded.

If the CEO of the company orders you to e-mail a document to an outside address, you are supposed to refuse and get guidance from the security people. If he really is the CEO and he really has a legitimate reason, then there is a proper procedure for doing things, and if he doesn’t follow the procedure the answer is no, even if he is the CEO.

The fact that the really, really big bank losses have been due to insiders bypassing security is why people focus on those.

Also one thing about banks is that “security” isn’t something that people want to skimp on. There have been notable cases in the news in which a bank lost billions of dollars because proper security procedure was not followed (i.e. rogue traders), and when you talk about security in a bank, the first thing that people think about is Nick Leeson.

joequant March 28, 2013 10:25 PM

Also being too harsh on security can also cause problems. If you fire people for not changing their passwords, then people will just stop cooperating with the people involved in security. Once people think of security as “the enemy” or as an “annoyance” then you are sunk. People will do the absolute minimum they can to keep from getting fired, which means that you end up with a ton of security theater and no real security.

Some of establishing a security and compliance culture involves actually being able to have difficult conversations. You can avoid people sending attachments to personal accounts only if you have the ability to have people work remotely with a sandbox, and that involves spending $$$$$.

Zooph March 29, 2013 6:35 AM

Computers are possibly the most treacherous objects ever invented. It’s like living in a house with doorframes but no doors and windowframes with no windows, and in a neighbourhood populated by burglars and looters.

Ollie Jones March 29, 2013 11:27 AM

I’d like to point out a type of security training that does in fact work tolerably well: patient-confidentiality training in health care workplaces.

It works for a couple of reasons:

(1) in the US you can get sacked or sued for disclosing private patient information, even if by mistake. There’s a compelling incentive to learn the rules and learn ways of following them.

(2) the wrong caused by disclosing patient data is easy to describe and imagine. “Would you want the world to know about your dad’s prostate trouble?” Personalizing the consequences of a breach seems to help people imagine what breach threats look like.

There certainly is a problem with the kind of training that says “change your password because it’s, abstractly, the right thing to do.” It’s easier to say, “here’s a good way to keep your job and keep your dad’s private business private: change your password once in a while.”

koen April 2, 2013 3:59 AM

I think this comparison is too binary. The goal of awareness campaigns is to reduce the security risk. Even if only one employee applies what (s)he learned, the campaign has booked some success. I agree that effectiveness is low, but how effective are technological security solutions today?
Another awareness campaign that works is road safety. Due to stricter controls and awareness-raising, the number of people caught drinking and driving is steadily going down. It is the combination of a control and a campaign that produces the success. Perhaps information security should be approached by combining both instead of focussing on one.

Danny Moules April 2, 2013 7:28 AM

” Tell someone that chiropractic is likely to not help and possibly dangerous, you’ll get an emotional response out of many people. Something you’re not likely to see in anything you say with regards to security. ”

@bill People are very, very protective of their trust models – because if their trust models are wrong then they have to accept they have been screwing up their rationalisation their entire life. That’s not an easy thing to budge and people prefer to just cling to a trust model they understand and accept even when it’s wrong.

Trying to change someone’s perception of security models, who and what they trust, is similar to trying to change their religion. Sometimes it literally requires changing (or at least undermining) their religion!

“The holy book tells me that we should trust our spouse and keep issues of marriage within the family and church.”
“And that’s why you let yourself be punched by your husband in the middle of a crowded space and actively prevented people from helping you? Have you considered… say… ignoring the book?”

AlanS April 3, 2013 10:30 AM

@S.

“I’m not sure if/how you’d go about applying these lessons [on HIV/AIDS] in computer security to develop better computer security procedures, but that’s because I’m not entirely sure what the lessons are.”

There are a lot of general theoretical models of behavior change that have come out of public health research (see, for example, Karen Glanz et al. Health Behavior and Health Education: Theory, Research, and Practice, 4th Edition). You start by looking at these models and thinking how they might be applied, adapted, extended etc. to design security behavior interventions and you test them. If they don’t work you rethink them and retest. Public health interventions have worked because a lot of people did and continue to do a lot of hard work.

At the moment very few people in the info security field have done any of the hard work on these issues. Instead what we have are panels at RSA populated by people who know very little about security awareness and behavior change but who inform us that awareness training is a waste of time and money.

See for example: Specious Arguments Against Security Awareness at RSA
http://www.cio.com/article/731109/Specious_Arguments_Against_Security_Awareness_at_RSA

“Other panelists admitted that their experience with security awareness is tangential at best.”

“Tim Wilson, the moderator, refused to allow questions from the audience, though that is the norm during RSA sessions. That decision enraged the audience, which began to yell at the panelists when denied the opportunity to speak.”

See also ThreatSIM post on RSA: http://threatsim.com/2013/03/11/security-awareness-training-starts-respect-deserves/

“Note to RSA: Get some awareness professionals for your next panel”

It’s like having a conference on HIV where the only people allowed to speak on behavioral interventions are CEOs from Big Pharma. Funny that Dave Aitel works for a company called Immunity Inc!

AlanS April 3, 2013 11:37 AM

@joequan

2F doesn’t allow you to skip education, as you have to educate management to get buy-in on implementing 2F, or any security controls for that matter. Risk management is a function of the entire organization. If senior management doesn’t understand that and take ultimate responsibility for it, the prerequisite for organizational security doesn’t exist. Of course, in the real world a lot of organizations are dysfunctional on matters of risk management for just this reason.

Coward April 5, 2013 5:02 AM

And this is why it really bothers me when I find a security flaw in a website and inform the admin, only to get the smarmy reply that other websites don’t implement the security either, that they know their users aren’t technical enough to realize why it’s a problem, and so they don’t care.

There need to be real consequences for people who send passwords as plaintext POST data.

…I have a special place in my heart for lazy admins, and it’s right next to the part I reserve for murderers, child molesters, and people who cut in front of me in line…
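Neither half of that complaint requires anything exotic. Here is a minimal sketch of the two sides of it (illustrative names and values only, not anything the commenter specifies): refuse to submit credentials to a non-HTTPS endpoint, and store only a salted, slow hash on the server rather than the plaintext.

    import hashlib
    import os
    from urllib.parse import urlparse

    def safe_to_post_credentials(form_action_url: str) -> bool:
        # Only submit a password to an HTTPS endpoint, so the POST body is
        # encrypted in transit instead of travelling as plaintext.
        return urlparse(form_action_url).scheme == "https"

    def hash_password(password: str) -> tuple[bytes, bytes]:
        # Server side: keep only a salted, slow hash, never the plaintext.
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 600_000)
        return salt, digest

    print(safe_to_post_credentials("http://example.com/login"))   # False
    print(safe_to_post_credentials("https://example.com/login"))  # True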

Max April 15, 2013 10:48 AM

As you allude to, security awareness training may be considered pointless: security is only as strong as the weakest link, and usually that weakest link leaves the back door propped open with a fire extinguisher! You also comment that we should potentially spend more on training for security developers. But surely the point you make still applies? Assuming a developer creates something that an end user will use, the weakest link will still circumvent the developer’s controls by some means. For example, the classic in the health industry is the pharmacist using their smart card to access patient records to fulfil a drugs order: invariably the senior pharmacist puts the smart card in at the start of the day and leaves it there, logged on for everyone to use the terminal all day long, rather than everyone using their own smart cards. A convenience for the senior pharmacist and a gaping hole in the security.

As I think you have also written before, we can make data secure, but the security controls required to access it can make it almost inaccessible, so what’s the point in having the data?

So are we looking at spending money on a single point, or at a levelling and balancing act across all points? Surely the latter. We need to raise the interest and buy-in that the weakest link has in using the security software and hardware controls at our disposal, while getting security developers to discuss ergonomic user interfaces with real end users and getting back-end security professionals to implement sensible, encompassing controls, so that we have a multi-layered security approach. Again, something that I think you have lectured on before? A common-sense approach to a modern tech security problem?

Naturally this debate could loop forever. I do agree that more money spent on security software developers would make sense; however, you might also want to add getting the developer to actually meet the end user, or bringing in someone who can add the end-user perspective to the development process. This rarely happens up front. The end-user interface is often not the starting point, yet in your argument the end user is the weakest link and always will be! Food for thought. Maybe a change in development methodology? After all, development methods have changed over time, from object-oriented to test-driven development and so on. Microsoft Office is a classic example of something we may not like, but we can all see how successful it’s been (even disregarding that darn paperclip :-). It is developed and redeveloped for the end user. The end user only uses about 10% of the functionality, but the core pieces are simple and we can’t help using them. Perhaps software and hardware security should be the same? A blend of simplicity from the end-user perspective?

I think the answer is like the mythical end of the rainbow, always around the next corner, but surely an overall blend of spending that balances user-interface design with training of both developer and end user would be the best balancing act?

Charles Killmer April 17, 2013 8:20 PM

I blame the personal computer for many of our security issues, though I praise it for the wonderful flexibility it provides. If all businesses needed to use purpose-built machines for their job duties (think of most restaurants), we wouldn’t be concerned about the user getting to the malware website. They would only be able to reach what they are supposed to access. Users would be unable to install the latest coupon app.

Office computers are far more functional than they need to be and that is getting us in trouble.

Additionally, that purpose-built system would not consider Password1 a complex password, as popular operating systems still do.
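A small illustration of the gap being described, assuming the naive upper/lower/digit rule that many systems apply: “Password1” passes it, and only an additional blocklist-style check catches it. The list and function names below are illustrative, not taken from any particular operating system.

    import re

    # A few bases that show up constantly in breached-password lists; illustrative only.
    COMMON_BASES = {"password", "welcome", "letmein", "qwerty", "admin"}

    def naive_complexity(pw: str) -> bool:
        # The classic rule: at least 8 characters with upper, lower, and digit.
        return bool(len(pw) >= 8 and re.search(r"[A-Z]", pw)
                    and re.search(r"[a-z]", pw) and re.search(r"\d", pw))

    def stricter_check(pw: str) -> bool:
        # Additionally reject a common word with a digit suffix tacked on.
        base = re.sub(r"\d+$", "", pw).lower()
        return naive_complexity(pw) and base not in COMMON_BASES

    print(naive_complexity("Password1"))  # True: passes the naive rule
    print(stricter_check("Password1"))    # False: caught by the blocklist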

infosec researcher April 20, 2013 3:45 PM

@itgrrl
I like your idea.

@HJohn and @joequant

Why are you so critical of incentives like “$50 for the first person to report an instance of a phishing email doing the rounds. $100 for turning in a USB stick”?

I think that if the incentive is smaller than the cost of faking an incident, this might work. Maybe $10 for a phishing mail would be a good sum. No one would go to the trouble of staging a fake incident for a lousy $10, yet the amount seems high enough that people keep the message in mind, which is the actual goal.

Doesn’t that sound like an approach that one should try?
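The break-even condition in that suggestion is easy to make concrete; every number below is hypothetical, chosen only to show the shape of the trade-off.

    # Toy version of the condition above: the bounty only stays safe to offer
    # if it is smaller than what faking an incident would cost the employee.
    reward = 10.00                # bounty for reporting a phishing mail
    minutes_to_fake = 30          # rough time needed to stage a convincing fake
    hourly_value_of_time = 25.00  # what that time is worth to the employee

    cost_to_fake = minutes_to_fake / 60 * hourly_value_of_time  # 12.50
    print(reward < cost_to_fake)  # True: faking costs more than it pays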

pg April 21, 2013 7:38 AM

“Office computers are far more functional than they need to be and that is getting us in trouble.”

That’s a terrific point.

Outlook, Word, and IE specifically are far more functional than they need to be. There are a lot of capabilities that could be removed from those programs, and the vast majority of users would never notice.

At home I still use Eudora. It doesn’t render pretty HTML, but it also can’t screw up my system, no matter how careless the users. It would be nice if there was some middle ground.

Web browsers at my house are run under “DropMyRights” to keep them from installing software when someone simply clicks on things. It makes installing things a pain, but less painful than fixing a malware invasion.

On the other hand, I am a sophisticated user, and I get sick of Excel trying to disable my computational macros every time I open a file. The policy seems to be a “disable everything by default” approach to macros, when only a few commands have potential for misuse. That tempts me into turning off all macro security by default, which then opens me up to something unexpected.
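The middle ground that last paragraph asks for can be sketched as a policy that only flags macros reaching for risky operations, rather than disabling everything. The call list and policy below are a rough illustration of the idea, not how Excel’s actual macro security works.

    # Flag only macros that call into a short list of risky operations.
    RISKY_CALLS = ("Shell", "CreateObject", "Kill", "SendKeys")

    def macro_needs_prompt(vba_source: str) -> bool:
        return any(call in vba_source for call in RISKY_CALLS)

    computational_macro = 'Sub Recalc()\n  Range("A1").Value = 42\nEnd Sub'
    shelling_macro = 'Sub Pwn()\n  Shell "cmd /c whoami"\nEnd Sub'

    print(macro_needs_prompt(computational_macro))  # False: runs without nagging
    print(macro_needs_prompt(shelling_macro))       # True: warn before enabling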

Derek Dougans April 29, 2013 4:43 AM

Driving seems to be a good example of training failing to be effective over the long term. If everyone drove as instructed, there would be no bad drivers and no accidents. However, over time most people become complacent and careless, and as a result you see broken glass at intersections. Same with security: train people as much as you like, but then one person rushing to leave for the evening loses concentration, does something they shouldn’t, and the machine and/or network is compromised. The (non-)accident record of Google’s self-driving cars is the perfect metaphor for this problem: engineers must build better machines and software to compensate for the variability of human concentration over time.

Jason July 8, 2013 2:29 PM

I partially agree with this, though I would still argue that the responsibility falls into both hands: administration/training and end users. Yes, when given an option, most end users will do the wrong thing. However, as professionals we do need to spend significant resources to protect end users from themselves. This comes in the form of properly configured hardware and software solutions to cover our bases. But fully securing our technology by closing off users’ ability to do wrong would, in many cases, hinder their ability to function or hurt the company’s ability to grow and prosper. That is not acceptable either.

Any solution that depends completely on either end-user training or technology to solve this issue is a problem all by itself. You cannot address this without spending some time on all sides of the problem. You will never ‘fix’ end users or get them to behave in a completely different way; however, you can raise their awareness of the risks and, most importantly, of the signs that something isn’t right. Users of all levels (and yes, this includes techie IT people) are fully capable of clicking a bad link by accident, or visiting a bad website, and infecting their computer. It’s the rate at which this happens that differs.

I have spent a bit of time on user training, education, and awareness in my career. Raising user awareness has not eliminated issues with clicking on bad links or with spyware/malware, but it has helped significantly in reducing both the frequency and the scope of infections. An end user may not recognize the subtlety of a bad link, or the signs of a bogus ad or website, but they do recognize when a window doesn’t look right or there is an unfamiliar icon or popup. Users being aware enough to stop and call IT has saved us time on rebuilding, replacing, and recovery. I could not argue that this combination has not been valuable to productivity and budgets. Combine it with proper IT solutions for firewalls, mail filtering, and user access controls such as password complexity, and you will find yourself with a well-rounded security solution with strong impact and acceptance.

BJ August 13, 2014 10:12 AM

If your point is that you shouldn’t rely on security training, I fully agree. However, in this day and age, everyone needs to know the basics of security, if for no other reason than to help protect themselves at home. Companies should not rely on security training to keep the company secure, but it certainly does no harm to train employees, and if you are able to help just 5% of the company population by training them, then why not? The bank teller may not be able to protect the bank from a sophisticated attack, but who is going to protect the bank teller from a personal attack? Security training should be viewed as one of those things that companies do to help their employees at home (such as health insurance or bonuses for getting in shape), while knowing that it will have some benefit at work as well (less time cleaning up security messes). Can we rely on employees always doing the right thing? Of course not. But we should still do it, and there is still value gained from doing it.

You note that health training isn’t working, that people hear it and ignore it. I suspect that if we didn’t do any health training, things would be much worse than they currently are. We are fighting a prolonged battle. Of course it is not to be won with a 30-minute training session. But if there are incremental gains, we should not ignore them.

DougM December 9, 2014 10:36 AM

I tend to half agree with you on this. I strongly believe all mediums of awareness are helpful. Different people learn in different ways, but most importantly, we tend to treat security at home in much the same way as at work. My training is often interactive, with a lot of links to ‘your home’, which people connect with.

People want to talk about stuff when it is interesting and affects them. So if someone knows that an action could lead to them losing a job, affecting their mortgage, rent, car, family, etc., then they will act appropriately.

The fix is in January 15, 2015 9:26 PM

“Microsoft has a great rule about system messages that require the user to make a decision.”
They also happen to be the poster child for failing at exactly this. This utter and complete failure permeates their system design and their implementations at every level, and it has done so since their genesis.

Deliberately so, in fact, since it has long been (and still is) their marketeering model to go against this: to provide an “intuitive experience” that is exactly not this but “requires no training”, thereby spoiling users’ willingness to acquire knowledge other than by accidental osmosis, thoroughly clogged as it is by their wooly word-salad marketese.

Understanding what you’re really doing is anathema in this model, and that too permeates everything from back of chair to back of box and beyond, down to the back end. It’s practically hardcoded into the software.

We can do better, in fact we already have better systems and even people who have mastered their use. Systems that don’t immediately drop their pants at the slightest provocation, and application programs that don’t seem designed as security bear traps to their users.

So the simple fix is to hire those learned people, give them those better systems, and only allow them to handle sensitive stuff. The rest, the unlearned great unwashed, can play with whatever is left over: processing that doesn’t cause large fines if it leaks to the world, that doesn’t cost the budget when it goes awry, using the broken and inferior software that “everyone” is using now. That would give people an incentive to acquire the requisite skills: a more demanding job and therefore better pay.

That doesn’t benefit the home user? Of course it does. How did you think they ended up with what they have now in the first place?

peter sonnenthal April 14, 2016 4:55 AM

With the adoption of smartphones by age 12 now commonplace, we need age-appropriate awareness training.
