Security Design: Stop Trying to Fix the User

Every few years, a researcher replicates a security study by littering USB sticks around an organization's grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as "teachable moments" for others. "If only everyone was more security aware and had more security training," they say, "the Internet would be a much safer place."

Enough of that. The problem isn't the users: it's that we've designed our computer systems' security so badly that we demand the user do all of these counterintuitive things. Why can't users choose easy-to-remember passwords? Why can't they click on links in emails with wild abandon? Why can't they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we've thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This "either/or" thinking results in systems that are neither usable nor secure.

Our industry is littered with examples. First: security warnings. Despite researchers' good intentions, these warnings just inure people to them. I've read dozens of studies about how to get people to pay attention to security warnings. We can tweak their wording, highlight them in red, and jiggle them on the screen, but nothing works because users know the warnings are invariably meaningless. They don't see "the certificate has expired; are you sure you want to go to this webpage?" They see, "I'm an annoying message preventing you from reading a webpage. Click here to get rid of me."

Next: passwords. It makes no sense to force users to generate passwords for websites they only log in to once or twice a year. Users realize this: they store those passwords in their browsers, or they never even bother trying to remember them, using the "I forgot my password" link as a way to bypass the system completely -- ­effectively falling back on the security of their e-mail account.

And finally: phishing links. Users are free to click around the Web until they encounter a link to a phishing website. Then everyone wants to know how to train the user not to click on suspicious links. But you can't train users not to click on links when you've spent the past two decades teaching them that links are there to be clicked.

We must stop trying to fix the user to achieve security. We'll never get there, and research toward those goals just obscures the real problems. Usable security does not mean "getting people to do what we want." It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users' security goals without­ -- as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it­ -- "stress of mind, or knowledge of a long series of rules."

I've been saying this for years. Security usability guru (and one of the guest editors of this issue) M. Angela Sasse has been saying it even longer. People -- ­and developers -- ­are finally starting to listen. Many security updates happen automatically so users don't have to remember to manually update their systems. Opening a Word or Excel document inside Google Docs isolates it from the user's system so they don't have to worry about embedded malware. And programs can run in sandboxes that don't compromise the entire computer. We've come a long way, but we have a lot further to go.

"Blame the victim" thinking is older than the Internet, of course. But that doesn't make it right. We owe it to our users to make the Information Age a safe place for everyone -- ­not just those with "security awareness."

This essay previously appeared in the Sep/Oct issue of IEEE Security & Privacy.

EDITED TO ADD (10/13): Commentary.

Posted on October 3, 2016 at 6:12 AM • 114 Comments

Comments

Anonymous1 • October 3, 2016 6:19 AM

Just because we can't fix the user doesn't mean it's not their fault; personal responsibility has to start somewhere.

William Chen • October 3, 2016 6:42 AM

I get your points, but I respectfully disagree with the title. Security is everyone's responsibility, including users and developers. Whoever designs those systems, those warning messages, those authentication mechanisms is a developer, and also a user. Developers sometimes face the challenges of time to market and business requirements that conflict with security. Those requirements come from USERS!

Therefore, I agree with you: stop blaming users for all security vulnerabilities. But we cannot stop 'fixing' (aka educating) the users on the assumption that we can give them bulletproof systems...

Thanks,
William

Non-Anon • October 3, 2016 6:47 AM

Personal responsibility starts with the developers who sell people inherently unsecurable software.

Some of the biggest perpetuators of the cycle of exploits are the techies who would rather have fraud and DDoS bring down the internet than the barest whiff of an automatic update, because MY PRIVACIES! Fortunately, more companies are figuring out they're dealing with paranoid schizophrenics, not rational people.

hawk • October 3, 2016 6:48 AM

Thank you! This is one of the best essays you have ever written, badly needed, thanks.

Lee • October 3, 2016 6:54 AM

Users are still one variable in the security equation. Yes, there are devices that can sandbox; yes, auto-update takes the user out of the picture. However, I would like to point out a few issues with this entry:

1) We have not always had sandboxing technologies -- a relatively recent development -- and now that it is here it can be quite expensive.
2) Users are still part of a holistic security picture, something I seem to remember Mr. Schneier has spoken/written of in the past. We simply cannot forget them and let them do what they want -- maybe not as much of a focus now that we have these newer technologies, but we cannot let them roam wild.
3) Newer technologies like sandboxing, micro-VMs, etc. are a positive step forward, but many small to medium businesses simply can't afford them. Regardless of whether their ISP offers it as a service or they purchase the tools themselves, the cost is prohibitive in many instances. This means that security attention on the part of users is more important -- not less.

Thank You

Pinocchio • October 3, 2016 7:01 AM

Fully agree on the need for a usability AND security philosophy in design, but there is a limit: as in the real world, everyone has to be held responsible for his/her actions, and ignorance is not an excuse.

In the real world, people who don't pay attention to warning signs get fined, or injured, or worse.

In the real world, people who trust unknown guys get scammed, robbed, or worse.

In the real world, ignoring laws (civil, criminal, as well as the law of gravity) does not save people from the consequences.

Reversing the responsibility rule will open the way to countless abuse possibilities -- possibly an even worse scenario from both a usability and a security point of view.

Of course we can (and should) try to build the digital world better and safer than the physical one, but experience with any group or society of humans shows that the worst things happen when people are deprived of personal responsibility as a duty (and a right).

c6ba95b3303ce070b9667e837fee4561dc2557900c88670344f5b3f45dcf8e4e • October 3, 2016 7:36 AM

The comments above miss the point. This is about good design principles security professionals should promote. Security professionals are not pastors or preachers, they have a defined objective of maximizing security based on known facts. You can moralize all you want but the facts are as Bruce has laid them out. Therefore the professional responsibility of those whose job it is to make the internet secure is not to issue lectures which are known to be ineffective in order to scratch a moral itch, it is to design systems that are secure despite users' behavior.

Couldn'tPossiblyComment • October 3, 2016 7:51 AM

While I completely agree that too many 'experts' like to sit in the comfort of their superiority and blame people lacking in their decades of domain-specific knowledge (and this is hardly unique to the world of software), aren't there user-based security problems that are, well, pretty much user problems by their very nature? Can we even apply traditional incentives to this domain?

At some point you do have to decide whether or not you're locking your front door on your way out - and, anyway, why do we do that? Insurance is the real answer. There just hasn't been enough of a real financial impact as a consequence of computer-specific crime (despite the allegations of costing millions) to result in insurers saying 'you must do this or we won't pay out'.

In the real world, we have enough risk that we get insurance against it and that insurance has requirements as a result of statistics e.g. 'you must have a door lock'. There isn't as yet an Internet parallel.

Take posting on Facebook. Without some sort of operational security (read: thinking before pressing Enter), deciding whether or not to indicate that they're on holiday this next week to the entire world is pretty much up to them, as are the consequences. It's impossible to tailor software to prevent that as a threat model, so the prevention has to be punitive on the user. What's going to be more effective - trying to default posts to only trusted associates, and then trying to figure out who is trusted and prevent the user from trusting random strangers, and... or an insurance company refusing to pay out for theft when the user posted their whereabouts on public media?

So we do train users all the time. The problem is that we currently try to train them to be computer specialists, when we should be training them to manage their risks and exposure.

Just look at the phrase 'this certificate has expired' to demonstrate that the author of that message is completely and utterly clueless about how to convey risk to a user.

Ewan Marshall • October 3, 2016 7:53 AM

Audio over USB type C,
HDMI over USB type C,
Display Port over USB type C,
Charging over USB type C,
PCI-Express over USB type C (thunderbolt)...

So all we need is one socket on our fancy new MacBook, and there is such a massive attack surface that we can't even safely charge the device...

ollorwi osaro • October 3, 2016 7:58 AM

The user remains an important factor, but that is no reason to load him with security responsibilities that are outside his purview. His duty is to use the system; security provides the safeguard against viruses.

Name (required) • October 3, 2016 8:02 AM

Why is this different from automobiles? Both automobiles and computers are complex and dangerous devices. We receive mandatory safety instruction in driver's education, but for some reason we expect people to use computers with no safety training whatsoever. Actually, the reason is obvious: profit. Sell as many devices to as many people as possible, and forget the consequences. Anyway, at a young age, children should be receiving instruction on the dangers of computers and on safe computing practices. This won't solve all problems, just as driver's education doesn't solve all problems, but if you're still going to drive unsafely or compute unsafely after receiving safety education, you're doing it deliberately instead of out of ignorance, and then we can blame the user. When police start handing out tickets for unsafe computing, maybe people will start to get the message.

Ugo • October 3, 2016 8:02 AM

I think there are different sides to the story.
One part is that too often apps and websites are made with poor security design and poor user interfaces.
Having studied perception and UX, I often have the precise feeling that a wide part of the net is undermined to its foundations by bad design.
And this is only going to get worse as the IoT gains momentum, with this whole new bunch of stuff where security was terribly thought out...

The other part is that we still lack basic internet literacy. The first generation since the web was born has yet to come of age, and while everybody learns at school how to use a pen, still nobody learns how to use a USB drive and what is behind it.
Even more, I'm afraid the situation is getting somewhat worse: those who grew up back in the '80s/'90s, while the internet was built around them, had the chance to see every advancement, every technology, and the reason behind every decision of the new worldwide connection in the making.
Nowadays "internet" is some kind of wrapped and ribboned package nobody really wants to open and look into. Emails? Just messages like Snapchat, what's the difference?

kantbebother'd • October 3, 2016 8:14 AM

I've been saying this for 2 decades, and so have others. Does anyone not remember DJB? https://www.schneier.com/blog/archives/2007/11/thoughts_on_the.html He has written several programs that have withstood security scrutiny and abuse over an extended period of time. While BIND rewrites itself every 3 years, from the ground up, and STILL needs patching weekly, DJBDNS has never had any significant flaw or needed a patch. DJBDNS was one of the few, if not the only, DNS systems that didn't require a patch for Dan Kaminsky's flaw in '08: https://en.wikipedia.org/wiki/Dan_Kaminsky#Flaw_in_DNS DJB's software is superior with respect to its security when compared to its peers. It's not perfect, but by comparison, and with the principles espoused by Bruce, it is possible to write more secure software and leave the user out of the equation.

hawk • October 3, 2016 8:43 AM

@Ewan Marshall

Explain why a single excellent interface and standard is worse than a bunch of different incompatible ones.

Why not argue the problem is copper, or aluminum? If only they didn't put connectors on stuff.

jbmartin6 • October 3, 2016 8:47 AM

A lot of the analogies posted here miss an important point. With car safety, the driver can look at a warning sign and know there is a school nearby or something else that requires caution. Contrast that with the author's example of malicious link clicking. In order to spot malicious links, I need to understand something about the underlying technology. There is a huge difference between 'do not drive your car on the ice' and 'do not click on a malicious link'. Everyone knows what ice is. What is a 'suspicious' link, or email? Now the user has to look at the underlying characteristics and make a judgment call. Hover over the link and examine the URL for x, y, or z? That's not going to happen, as the author points out. In other words, I don't need to know how a car works to operate it safely. With technology security, too many experts expect the user to learn about the underlying technology to make good decisions.
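[Editor's note: a minimal Python sketch of why "examine the URL" is a specialist skill. The link text and phishing domain below are invented for illustration.]

```python
from urllib.parse import urlparse

# The visible link text says one thing; the actual href says another.
# (Both values are made up for illustration.)
link_text = "paypal.com/signin"
href = "http://paypal.com.account-verify.example.ru/signin"

# Extract the hostname the browser would actually connect to.
host = urlparse(href).hostname
print(host)  # paypal.com.account-verify.example.ru

# The registrable domain is read from the RIGHT end of the hostname --
# exactly the kind of detail non-specialists can't be expected to know.
is_really_paypal = host == "paypal.com" or host.endswith(".paypal.com")
print(is_really_paypal)  # False
```

The hostname *starts* with "paypal.com", which is precisely what makes the judgment call so hard for an untrained user.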

ALex • October 3, 2016 8:50 AM

Any time I see "We have to make this safe" I think that's code for loss of freedom/rights.

The safer something is, the less freedom/liberty/rights you have.
Nice touch throwing in "victim blaming is bad" though, that's a quick way to get people to agree with you.

John Andrew • October 3, 2016 9:04 AM

Completely agree with the vision of having driverless cars that require no driver education... and Bruce's vision of having computer systems that one day require no awareness on the part of the users...

But - until we reach a point of 6 9's (99.9999%) computer security automated by well-designed systems - I don't see how we will get around the awareness issue. If/when we do get there - social engineering might become more prevalent, still requiring awareness training.

I blogged about this in the past when there seemed to be an "anti-awareness" effort from a number of security industry experts.

https://securingthehuman.sans.org/blog/2013/07/08/guest-post-ive-seen-the-future-we-still-need-awareness/

Let's look at a real world example: We have ransomware that is absolutely ripping off companies worldwide. Do we NOT warn our employees about this, and about how vishing/phishing is being leveraged to misappropriate funds?

If I'm a smart business person - I recognize that 'ideally' Bruce's thesis is spot on. I would LOVE for computer security to protect my employees. In reality, I'm on the horn with my people saying 'hey, you're going to see emails, voice calls trying to tell you that the CEO, CFO said wire the money - don't fall for it!'

This is a provocative subject. I appreciate you bringing it to the forefront again Bruce. Enjoy your blog, books, commentary, and leadership in the security space!

E.M.H. • October 3, 2016 9:15 AM

I normally agree with Bruce here, but on this article I must disagree: There is indeed a burden of responsibility upon the user for security. After all, it's their accounts, it's their computers, it's their passwords, and in the end, it's their information that's at stake.

Yes, it's possible for IT people to go too far in blaming users, but there's too much pie-in-the-sky stuff written here in this article. How do you solve the problem of nullifying malicious links in spam email, for example? You have to educate the user to some extent. Also, while certificate warnings are hideously bad (there are times I don't think any of us in tech should ever be allowed to write warning or error messages like that; we all need plain english editors), isn't the point of those to let the user know "Hey, speaking as your browser, I think you need to be careful here, I don't know if I can trust this site"? The whole point is to inform that user's behavior. You can't choose what to click on for them.

After reading this post, I couldn't help but think of the old saying attributed to so many different people I can't tell where it comes from: "The problem with building something idiot proof is that someone will come along and build a bigger idiot". Saying "We must stop trying to fix the user to achieve security" both wrongly absolves the user of his/her responsibilities, and posits a utopian ideal of being able to control the experience so well the user doesn't have to worry about such things. What's more impossible, fixing the user? Or fixing the systems? Just because it's supposedly under our control doesn't mean the latter can be achieved to that extent. And just because they're independently minded, have a vast range of abilities from expert to "Grandma, this is called a mouse..." doesn't mean we can't modify their behaviors.

Yes, I know, the reality is somewhere in between all that. We can get users to act better as time goes by and experience levels rise. And we can be better about how we design system behaviors (get everyone responsible for keeping their damn certs up to date and configured with the **right** information, and we can safely have browsers deny connections when confronted with cert problems). But this post by Bruce is way too idealistic. We do owe it to users to make the Information Age experience safe for them. But a large part of that involves giving them the knowledge to take care of themselves. It's simply not possible to do it any other way.

Christopher • October 3, 2016 9:22 AM

@Non-anon

The premise behind being able to toggle off automatic updates is not a matter of privacy, but of control. That you don't understand that while spouting very trollish rhetoric is telling.


@Couldn'tPossiblyComment

Careful on the OpSec route. It runs counter to freedom of expression and is a slippery slope as far as security advice goes.

Couldn'tPossiblyComment • October 3, 2016 9:25 AM

@ALex Any time I see "We have to make this safe" I think that's code for loss of freedom/rights

Isn't that obvious? That's the nature of safety; it usually prevents something.

In today's modern societies, one usually does not automatically have the right to:
* Yell 'Fire!' in a crowded theater
* Discharge a firearm towards a crowd of people
* Drive a vehicle while under the influence of a variety of drugs
* Build a bridge below tolerance and then drive a bus full of people over it

Safety is inherently about both individuals and society preventing something, whether by action or inaction.

I understand that particularly in the US, rights is something of a touchy issue, but would it be completely unreasonable and a violation of rights to add into that list:
* Leave an office router with a known default password?
* Leave a server with SSL 2 enabled

The only reason we haven't codified these things in safety standards so far is that nobody has been seriously injured or killed -- that's how safety standards come to be.

Safety's sibling, risk, is where things get more interesting. There's usually a tradeoff to be had. Bruce's article's point appears to be that so far, we push all the risk onto the user, expecting competence where none can be reasonably expected.

At the very least, we need the equivalent effort to ensure that systems are secure by default (equivalent: a firearm's safety) and don't endanger by default. Seems a reasonable & laudable goal - whether it's feasible is an entirely separate and much harder question, but right now it's a Wild West (pun intended).

My earlier post pointed out that we do as societies train everyone in certain basics such as locking one's door, and we can do so with operating technology as well if we so choose, but this is a case where users and industry must meet in the middle, not one expect the other to perform miracles (either direction).

David Leppik • October 3, 2016 9:55 AM

To those of you who think user education is the key, what would you teach the users, especially given that:

  1. Most users will only have time for basic instruction, more like how to drive a car, and less like how an internal combustion engine works.
  2. In many professions, e.g. health care, the computer risks pale in comparison to the real-world risks of not having urgent information. In nearly all professions, the real-world risks feel more urgent.
  3. A lot of the advice coming from professionals (e.g. change ALL your passwords EVERY month, don't use consecutive digits) is counterproductive.
  4. Computer security is a moving target, and attackers adapt to the skill level of their targets.
  5. Several recent high-profile disclosures involve people who were advised by the best security experts and were doing their best, but slipped up once or twice.

Medo • October 3, 2016 9:58 AM

One issue with the annoying red warnings you mentioned is that they often *are* useless and even counterproductive. Example: Say I would like to enhance the security of my small forum community a bit and enable https. I could just start out by using a self-signed certificate. This would be a big security win in my opinion, because now an attacker could no longer get at my users' passwords by just passively listening in. The lack of a trusted signature still allows a man-in-the-middle attack, but the bar for attackers has been raised a bit.

But if I do that, the users of my site will suddenly face big red warnings because my certificate is not signed and the connection is therefore not *completely* secure. Why is this now a problem? Before, the connection was *not secure at all* and no browser showed a warning. The only reason I can think of is that the "https" protocol might inspire a false sense of security, but that is already being fought by modern browsers, which very obviously strike out the "https" in the address bar.

Often when I visit an encrypted site and get a certificate warning, I do simply ignore it because I do not plan to interact with that site in any way that would require me to trust its authenticity or even the confidentiality of encryption. However, that somewhat trains me to not take these warnings seriously.
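[Editor's note: for what it's worth, the self-signed starting point described above really is a one-liner -- a sketch assuming the `openssl` CLI is installed; the domain name is hypothetical.]

```shell
# Generate a private key and a self-signed certificate in one step,
# valid for one year, with no passphrase on the key (-nodes).
openssl req -x509 -newkey rsa:2048 \
  -keyout key.pem -out cert.pem \
  -days 365 -nodes \
  -subj "/CN=forum.example.com"

# Inspect the result. Browsers will still show the big red warning,
# because no trusted CA vouches for it -- the asymmetry complained
# about above: plain http gets no warning at all.
openssl x509 -in cert.pem -noout -subject -dates
```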

Daniel • October 3, 2016 10:03 AM

I'd like to offer an alternative view to both Bruce and the commenters above. Bruce is correct to the extent that it is a mistake to blame the users. At the same time, security professionals overvalue security. IF the choice for the future of the internet is between a mosaic of feudal states, each secure in their walled garden defending their precious castle, OR a rerun of the wild wild West with hackers taking the role of gunslingers and the Five Eyes as the local sheriffs, THEN I will choose the wild wild West. Why? The default is permit. Always has been, always will be.

Stop blaming the users; stop expecting security professional to build perfect security.

Bob Robertson • October 3, 2016 10:17 AM

The fact that after more than 20 years there is still an "autorun" is mind boggling.

AJWM • October 3, 2016 10:25 AM

@Lee: 1) We have not always had sandboxing technologies, a relatively recent development and now that it is here it can be quite expensive.

It goes back a long way, further than Windows:

From the Wikipedia entry for "chroot":


"The chroot system call was introduced during development of Version 7 Unix in 1979, and added to BSD by Bill Joy on 18 March 1982 – 17 months before 4.2BSD was released – in order to test its installation and build system. An early use of the term "jail" as applied to chroot comes from Bill Cheswick creating a honeypot to monitor a cracker in 1991."

AJWM • October 3, 2016 10:39 AM

Adding to above: VM technology is even older than chroot. VM/370 was released in 1972, about the same time as Intel's 8008 (not the 8080, that was 1974).

Burroughs' tagged-word architecture (which prevents a lot of the low level issues that things like buffer and stack overflow exploits are based on) goes back to 1961.

The history of the personal computer industry in terms of security has been one of massive fail, throwing away all the lessons learned on multi-user mainframes about keeping users and processes from interfering with each other (whether deliberately or accidentally) because personal computers were, well, personal. Well guess what, Sun had it right back in the early 1990s: "the network is the computer".

(The whole IoT fiasco is rediscovering that yet again ... and it even has "internet" right in the name.)

IanashA_titocIh • October 3, 2016 10:59 AM

@Ewan Marshall, @Hawk

Ewan, above, wrote:

"Audio over USB type C,
HDMI over USB type C,
Display Port over USB type C,
Charging over USB type C,
PCI-Express over USB type C (thunderbolt)...

So all we need is one socket on our fancy new Mac book and there is such a massive attack surface, we can't even safely charge the device..."


Yes, I get the point that it seems you can't put super glue in the USB port(s) and still charge, etc., ...
FWIW, perhaps people more knowledgeable than me, including Apple, may want to chime in here.

Questions:

1) Will Apple hardware prevent an evil maid attack, for example?

2) Regardless, might installing Qubes-OS.org as host, if possible, on a Macbook and running macOS, Windows, Linux, Openbsd, Tails, Knoppix, etc., in vms, if possible, mitigate some of these risks?

3) What USB type C risks above don't require access to hardware?

4) Is dual booting from hdd still considered unsafe? How about dual booting from a DVD or usb thumb drive (ie. Tails, or Knoppix live DVDs)?

5) How about using Virtualbox w/wo Guest Additions for VMs?

It seems to me that running live ISO DVDs could be an important option to help protect users from themselves and from malware in general. What would be some pros and cons of trying to use a non-persistent OS? What might 'best practices' be with a non-persistent OS? What might 'best practices' be with a non-persistent OS with productivity in mind (both at the individual and the group level)?

@All

From Snowden's 30 Sep twitter feed:

'On @Subgraph vs. @QubesOS: Both projects matter, but most face user-auth'd "click me pls" vectors, not NSA. PaX can't block the "OK" button.'

btw you might want to see Oliver Stone's Snowden movie

z • October 3, 2016 11:31 AM

@Medo

I have the same complaint. I also think that the trend of making every single site TLS-enabled is a bit dumb. It would not be a bad idea at all if TLS was not so fragile, but TLS being TLS, it inevitably fails for some (usually) benign reason, which trains people to click through the warning. I do it too, depending on the site. I really don't care if the NSA can MITM my connection to a blog entry about proper dress shoe sizing.

The encrypt-everything idea is great if it can be done in a reliable way, but TLS is not the answer.

The Phisher KIng • October 3, 2016 11:36 AM

I used to be mostly on the side of those who claim the users have some level of personal responsibility in their actions.
To validate that I used the analogy of driving a car on the road - it's not the fault of the car manufacturer or the roadmaker that you don't know what you are doing, like to drive drunk or are just inattentive.
Well, in the age of self-driving cars the analogy no longer holds.
So if the stupidity that people perform while driving cars can be overcome purely by technology then there is no excuse for the purveyors of internet-related stuff to not do the same.
Frankly, the amount of poorly secured garbage that spews out to the public for them to misuse is nothing short of criminally negligent.

z • October 3, 2016 11:50 AM

@The Phisher KIng

I agree. But it is perfectly reasonable to expect the average person to understand how to drive a car without hitting things. It is unreasonable to expect that same person to understand cryptography enough to "just use GPG", as the mantra went some years ago, or to understand why their browser is saying something about issuer chains not being provided. We're demanding that users develop expertise in a very niche field, and criticizing them when they don't.

mark • October 3, 2016 12:00 PM

Thank you, Bruce. I've been saying it for over 20 years.

However... I disagree with the distinction of usability vs security. I'd like to add another distinction: useful, vs. some idiot marketdroid's "bright idea".

I've been on the 'Net for a long time, and I remember when it was a classic prank on newbies to usenet to warn them that reading an email could give them a virus... until Bill the Gates cranked out the hairball that made it possible to run something just by viewing the email. And most of what's come since (not all - DDoS is different, for example) all descends from that.

My take is that there are too many marketdroids and managers who want to be Producing a tv show or movie, and so everything has to sing and dance, so you'll buy.

And then there are the web pages with 15-20 linked-in sites (try some media articles, especially if they have a video, and you'll see 5-10 links to doubleclick, gigya, and addthisthatortheother...), and then folks complain the page loads slowly, because it's linking to all those sites for content.

Which, of course, is part of what destroyed America (all coming down from the MBA degree) - outsource everything, don't be responsible for anything.

Bleah.

I read my email in *plain* *text*, so the IRS email from Vietnam, or Russia, or Brazil or gmail, for that matter, is obviously something to laugh at.

mark

SJS • October 3, 2016 12:25 PM

E.M.H. asserts that the burden lies with users because it's their computer, their accounts, etc.

Except it isn't.

When you can force me to run your code on my computer in order to accomplish something, it isn't my computer anymore. (Javascript, ActiveX, Java applets, etc.)

When you bundle UX changes with security updates, then it's not my computer anymore. (Microsoft and Apple love to do this, and just about any major application.)

When you bundle internal functionality change with the OS, then it's not my computer anymore. (RedHat/Debian/Ubuntu and systemd spring to mind.)

When you give me a 30-page EULA for my account, with the provision that you can update it at any time, without notice, then it isn't my account anymore. Very few EULAs grant me any rights whatsoever.

So training users to care about systems that aren't their own is going to be an uphill battle. You want users to care? Let them own stuff.

My personal favorite is the security email from someone I don't know, unsigned, containing a link I'm supposed to click on to go to a training website, with the warning that if I don't, my access will be revoked... and when I don't click on the link, my access is revoked, and when I do click on the link, the training tells me not to click on links that threaten me with dire consequences if I don't click on them....

We should expect users to just give up.


Anonymish • October 3, 2016 12:31 PM

I couldn't agree more. However, we're not living in that world.

Can you provide examples of _why_ things are the way they are, along with examples of how some of them can be fixed?

For example, say I'm developing a web application. I'm relying on protocols such as HTTP, TLS, TCP/IP, and SMTP, and on programming languages such as JavaScript, PHP, and Java. There is no way a developer can, or should need to, worry about protocol or language specs.

Taking it to the next level: why should programmers need to worry about writing secure code?

Perhaps I just answered my own question. We can only build secure things on top of other secure things. If TCP/IP and the other protocols were not designed with security in mind... and most, if not all, of them weren't... then game over, period.

Shachar • October 3, 2016 12:42 PM

Many years ago, a group leader at Check Point would say this about the security/connectivity trade-off: Check Point sells connectivity. After all, everyone knows you cannot connect to the Internet unless you have a firewall....

The way I like to present it, which I think resolves the conflict, is this: there is no usability/security trade-off. You are merely trading off security against security. At one far end of the spectrum you have a system that isn't secure because it does not protect the user from anything. At the other end you have a system that isn't secure because the user has turned it off and bypassed its protective features, or else they couldn't do their job.

Once presented like that, the fight isn't between two conflicting aims. Framing the problem this way forces the designer to take users' actual needs (the things that will get them fired if they don't happen, as Bruce puts it) into consideration.

This way, whenever you introduce a new security feature (forcing users to change their password once a day, say), you have to ask yourself: "Will this cause the user to write their password on a piece of paper?"

Shachar

Fun fact: In every place I ever worked, the first thing I'd do is write down a random string of characters on a sticky note and stick it to the bottom of my keyboard. It never had anything to do with any of my passwords. I just like to give people the warm fuzzy feeling that they caught me.

Kent • October 3, 2016 1:03 PM

YES! YES! YES! YES! THANK YOU!!

I have always thought that the only people responsible for links to pages that exploit your computer are the developers who wrote the browser software. Downloading and viewing a text document should be completely and 100% safe, and it is shameful that it is not.

Frankly, I think the web has gone off the rails with the rise of WHATWG and people pushing HTML5, which is simply HTML 4.01 with client-side scripting integrated into the entire pipeline. Client-side scripting is a bane and should never have been allowed. Forcing my computer to execute arbitrary code in order to view what is 99.9% of the time simply static text and images is disgusting. Browsing with scripting disabled should be the norm, not so exceptional that people are shocked when you say you do it.

Christopher • October 3, 2016 1:23 PM

@Z

TLS applied everywhere is hardly a bad thing when done right. It has tangible security benefits and is invisible to the end user when implemented correctly, which it is the vast majority of the time. Addressing misimplementation issues doesn't mean "tear it down": the reduction in worthless error messages isn't worth putting everyone at risk of passive, bulk analysis by nation states. And I don't even mean the NSA here; plenty of nation states hostile to Western interests do bulk, passive SIGINT on their respective parts of the Internet backbone. It's grossly irresponsible to put everyone at elevated risk from those nation states just so the U.S. and U.K. can have an easier time gathering SIGINT on hostile entities.

Robert • October 3, 2016 1:56 PM

These comments are full of finger pointers. And now you know why we still have a problem.

Andrew • October 3, 2016 3:35 PM

People should understand that systems WERE DESIGNED this way so that some other people can steal their work and their lives and make them vulnerable.

Sancho_P • October 3, 2016 3:37 PM

“Stop trying to fix the user” is easy to agree with.

Unfortunately, the essay's usual appeal ("what we need...") is disingenuous; it's sad to hear @Bruce singing the corporate song in the conclusion.

“People -- ­and developers -- ­are finally starting to listen. Many security updates happen automatically so users don't have to remember to manually update their systems.”

Oh what an irony.
a) In the wake of Win 7/8/10:
Is there still any serious Mi$o user out there who hasn't disabled auto-update?
b) We can’t get software right, why should we get the autoupdate right?
One single “unintentional” autoupdate could fry half a continent or more.
Oh yes, set up autoupdate on teller machines, or your corporate servers, good idea!
(All that FOSS software stolen for big business should stop today!)

"Opening a Word or Excel document inside Google Docs..." is insane. Did it ever happen to you that the Net was down when you needed a document? Do you want corps to vet your documents for "advertising"?

And "… sandboxes that don't compromise the entire computer.”
Um, easy to use and extremely “secure” software. Not.
Didn’t we learn that software isn’t perfect?

No, it’s not the user.
Think of your neighbor's (not to insult you personally!) teenagers' responsibility.
Do you see any?
The user is not responsible for the device, HW or SW, but for the use of it.

Keep the responsibility where it belongs: with the device manufacturer.

Sancho_P • October 3, 2016 3:50 PM

@Couldn'tPossiblyComment (09:25 AM)

“I understand that particularly in the US, rights is something of a touchy issue, but would it be completely unreasonable and a violation of rights to add into that list [of user responsibility]:

* Leave an office router with a known default password? …”

Yes, it would be completely unreasonable.
As it is completely unreasonable to sell devices with default passwords.
Imagine a safety-lock manufacturer selling all locks with a default key and delegating the key change after installation to the customer.

The smarter idea would be to supply a piece of paper with the individual key to each router. If the key is lost by the customer (but needed), I’d happily sell them a new router (or two, one as a spare, just in case).

It’s not the customer, it’s the manufacturer.

James Mac • October 3, 2016 4:26 PM

So by analogy, you and Frau Sasse would be happy to let a blind 13-year old drive two tons of car on the highway and when the inevitable crash happened, you would blame the lack of adequate collision avoidance devices on other vehicles?

pepper • October 3, 2016 5:53 PM

Funny thing is that everyone disagrees on what to actually do. People like Mrs. Sasse, and also Bruce in the above post, don't give many specifics; they only repeat what we already knew. People can't remember passwords, warnings are bad, duh...

The only people I know who had some useful/concrete answers were DJB and pgut001.

Sancho_P • October 3, 2016 6:39 PM

@James Mac

Good shot, my dear, but a better analogy might be that a blind 13-year-old is qualified to operate a computer, but a brainless 13-year-old is not.
Oh boy!

tyr • October 3, 2016 7:11 PM


It seems to me that breaking the physical connections that expose all of this to risk makes better sense. Instead they are busy wiring everything into the Internet because it is possible, with no thought for the consequences. If movies were made in the old way they wouldn't require any of the elaborate nuttiness that surrounds the digital world's mad schemes. By restricting the users to those with physical access to the hardware, you eliminate a horde of Nigerian phishmongers, along with the giant corpseboys looking over everyone's shoulders 24/7.

So the first question to ask yourself is: "does this have to be connected to the Net?" If the answer is no, don't hook it to the Net.

The same goes for the users: if they have no reason to be on the Net, disconnect them into a walled garden so they can swap recipes and jokes and grandma's favourite pictures without being able to impact business.

Of course this breaks the mad scheme that has all of everyone's eggs in one badly designed basket, groaning under the strain of carrying that load. In the good olde dayes, you weren't allowed to touch a computer, or even sweep up around one, unless you had a PhD and a lab smock. The first passwords were there to keep the holy machines sacred, lest the unworthy touch them and ruin things. Now every member of the sub-genius class of folk carries one in their pocket, and you want to blame them for being irresponsible. That's a lot easier than getting smarter about security as a collective action of the comp types.

Any lock system only has to be good enough to make honest people stay honest; after that it is a waste of time and money.

Clive Robinson • October 3, 2016 9:20 PM

@ Bruce,

"Traditionally, we've thought about security and usability as a trade-off"

You forgot to add "because we had to".

This is because "security costs", not just in programmer time but in CPU and thus user time, increased memory, etc., etc. And for most of that time it was a miracle that things actually worked at all. Thus security has almost always been a very distant thought, well behind "let's get it working", "let's make it usable within resources", etc.

We glibly talk about "Security -v- Usability" and don't go further; thus we are in effect lying to ourselves.

The simple fact is the Internet was built without security in mind, and it's really too late to fix it now. People are not going to let you tear it all up and start again from scratch, nor are they going to allow you to take the draconian incremental steps to fix it in anything less than several generations of infrastructure equipment.

Even though we now arguably have the CPU cycles, memory, and programmer time for user devices, the programmers and engineers can only work with what we currently know or have, and that is a problem...

One fundamental issue is that there is really no such thing as a "secure network". Many brains have worked on it and not much has happened, because it is a very hard problem[1]. In the past there has been the illusion of security by preventing users from having unmediated access to a network, but that did not deal with other issues. One major unsolved issue is the "insider threat".

Oddly, perhaps, it was the insider threat that at the end of the day drove an undertaker sufficiently mad to do something about the feckless girl who was stealing his livelihood, and as a result gave us the automatic switch, which is the fundamental component of all multiuser networks[2].

If you want a secure network on which you can build the rest of the security for users, you need to solve the "trust issues". Not just securely but impartially, under assumed hostile and untrusted third parties, and that may just be impossible to do. Certainly Strowger failed to eliminate the "insider threat"; he just made it harder and moved the problem along, just as CAs have in more modern times. Each time the problem is moved along and not solved, attackers just come up with new attacks and security becomes compromised yet again.

So whilst it can be argued it's not the users' fault things are not secure, passing the buck to those who build the infrastructure doesn't work either: it's likewise not their fault. Thus you have to pass the buck again... If, as some suspect, foolproof security is simply not possible, where does that leave you?

In almost exactly the same situation as with physical security against burglary. Do we blame the victim there? Yes, of course we do, if we think they've been negligent. But we temper it knowing that we ourselves are vulnerable to crime, especially when we hear about gangs of as many as ten people breaking into houses just to steal "car keys", and that they are armed, etc. Even in places like Cape Town, South Africa, where security firms lock people into their own homes that are more secure than prisons, breaking-and-entering crime still happens.

It boils down to the old defence argument about how much money you spend. The reality is that the only thing you can ever learn for certain is when you did not spend enough, usually when you find you have a gun in your face...

[1] To get even remotely close to "secure" we have to solve the "secure distribution" of encryption keys, which is just one small part of Key Management (KeyMan). Many would argue that we are not even close to having an idea of how to start, let alone solve it. Our first attempt, with CAs, has not been a great success security-wise.

[2] The undertaker was Almon Strowger, who in 1888, some forty years after the relay was invented, came up with what became known as the "uni-selector switch", which allowed a customer to tap out a number rather than "ask an operator". The system was advertised as the "girl-less, cuss-less, out-of-order-less, wait-less telephone", and many saw its advantages in the way of commerce. The history of the telephone network offers many lessons we should have learnt from when building what is now its replacement technology.
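Clive's footnote [1] on key distribution can be made concrete with a toy sketch. Below is a minimal Diffie-Hellman exchange in Python (the numbers are hypothetical demo values, far too small for real use): both parties derive the same secret over an open channel, yet the math alone never tells either party who is actually on the other end. That authentication gap is exactly the "trust issue" CAs were bolted on to patch.

```python
# Toy Diffie-Hellman key exchange: demo-sized, hypothetical parameters.
# Real systems use primes of 2048+ bits and authenticated identities.
p, g = 23, 5                    # public prime modulus and generator

alice_secret = 6                # private values, never transmitted
bob_secret = 15

alice_public = pow(g, alice_secret, p)   # sent over the open channel
bob_public = pow(g, bob_secret, p)       # sent over the open channel

# Each side combines its own secret with the other's public value.
alice_key = pow(bob_public, alice_secret, p)
bob_key = pow(alice_public, bob_secret, p)

assert alice_key == bob_key     # same shared secret on both ends
# Nothing above proves WHO sent each public value: a man-in-the-middle
# can run one exchange with Alice and another with Bob, unnoticed.
```

The shared secret falls out of the arithmetic, but the impostor problem survives untouched, which is the point of the footnote.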

Ouch • October 3, 2016 9:38 PM

"The same goes for the users, if they have no reason to be on the Net disconnect them into a walled garden so they can swap recipes and jokes and grandmas favourite pictures without being able to impact business."

Back in my day, we had gates that could be opened to share the company of acquaintances, as well as gardening tips and fresh produce and new recipes. The fences weren't tall enough to keep out the hungry, but then, they didn't appear to impact business as much when they were well fed.

ab praeceptis • October 3, 2016 10:01 PM

Clive Robinson

Well, yes and no. For a start, we should finally clearly differentiate between "safety" and "security": between "not building cardboard houses" and "storing or transmitting sensitive data in a way that even the NSA can't easily breach".

Funny thing is that it's virtually always the former that gets successfully attacked. It's not the crypto that makes us victims; it's the implementation.

"Traditionally, we've thought about security and usability as a trade-off" - looking at that in terms of an observation that's true. The interesting question (and there I think I'm closer to your position) however is: Is that trade-off indeed necessary and unavoidable?

No, it very often is not. We have the tools, mechanisms, and knowledge to produce next-to-perfect implementations. We have the means to drive the costs for an attacker extremely up and the chances extremely down.

What is it that keeps us from doing it? That's an issue for lots of discussions. My take is that it's mostly due to ultra-greed and to a generally very high tolerance for lies and tricks (gross case: marketing) in our societies.

Well noted, this goes way deeper than the well-known surface (companies thinking in sellable features rather than in quality). It's visible, for instance, in education. Paying tens or even hundreds of thousands of dollars for an education, at universities far more influenced by corps and money than one should like, is another example. That situation leads to teaching what is probably in demand (e.g. Java) rather than what is important, and to an education that ignores literacy and formation. Or, more pragmatically, it leads to the production of usable drones (who reflect rather little) rather than of well-educated thinkers.

Nick P • October 3, 2016 10:32 PM

@ Bruce

It's well-intentioned and people might even do good thinking on some of it. Let's revisit what I told you back when you brought up the USB sticks story. Here it was:

"Why does this problem (USB stick auto-run) exist? Because manufacturers don't focus on building secure systems. Why don't they build secure systems? >>BECAUSE USERS DON'T BUY THEM!

Most users want the risk management paradigm where they buy insecure systems that are fast, pretty and cheap, then occasionally deal with a data loss or system fix. The segment of people willing to pay significantly more for quality is always very small and there are vendors that target that market (e.g. TIS, GD, Boeing and Integrity Global Security come to mind).

So, if users demand the opposite of security, aren't capitalist system producers supposed to give them what they want? It's basic economics Bruce. They do what's good for the bottom line. The only time they started building secure PC's en masse was when the government mandated them. Some corporations, part of the quality segment, even ordered them to protect I.P. at incubation firms and reduce insider risks at banks. When the government killed that policy & demand went low again, they all started producing insecure systems again. So, if user demand is required and they don't demand it, who is at fault again? The user. They always were and always will be. "

So, it's 2016. I've learned quite a bit since then. I learned about the following events in terms of developers/businesses doing what you asked:

1. Burroughs builds the first mainframe (1961) that's immune to most code injection, performs well, supports high-level code for long-term benefit, and so on. Most buy IBM's System/360 for backward compatibility with IBM garbage plus the raw performance benefit. Burroughs survives in the form of Unisys but eliminates the hardware protections and focuses on price/performance/compatibility. Pay attention, as you'll see that again and again. ;)

2. During the minicomputer era, quite a few companies show up with extra security at the hardware or software level. They're all also-rans except for System/38 and OpenVMS. System/38, per user demand, eliminates hardware-level security in favor of POWER compatibility with increased price/performance. OpenVMS is solid enough that it gets retired from DEFCON. Many get off it for machines with more features or speed at a lower price; many refuse to get on it, for backward compatibility with insecure systems. UNIX flourishes for performance/capabilities on cheaper hardware while eliminating the security and maintainability benefits of the MULTICS project that preceded it. And simpler ones. Those dominating markets are whoever crams the most features and speed into the cheapest boxes with backward compatibility. Intel tries with the i432 and BiiN projects to change things, at a loss of $1-2 billion, when nobody buys them because they weren't backward compatible and were slower.

3. The microcomputer era happens. Security is ignored entirely by users so they can squeeze the most performance and features out of boxes at the lowest prices. Newcomers are welcome for a while, as systems are too simple to really demand backward compatibility. IBM, Apple, Microsoft, and Amiga have strong offerings, with IBM's the most robust and Amiga's the most powerful. Apple's is cool, insecure, and affordable; Microsoft's has business apps, and is insecure and affordable. Apple and Microsoft win.

4. Compartmented Mode Workstations plus high-assurance kernels, VPNs, thin clients, databases, and so on are developed. Virtually nobody buys them. DOD policy changes to allow insecure COTS. Almost everyone outside some high-security installations starts buying Microsoft, Solaris, Oracle, etc. Only a few companies survive, in what's close to life-support mode compared to others' market share.

5. Next, the PC market. We have NeXTSTEP, the WinNT project, BeOS, and some more secure desktop attempts. The secure desktop attempts fizzle out since nobody buys them. NeXTSTEP is insecure UNIX mixed with a productive, beautiful UI. It sells well. BeOS redefines the core of the OS for *insane* levels of concurrency and responsiveness on that era's hardware, with a microkernel too. WinNT mixes new hardware (performance), a good core (VMS), an insecure implementation (time to market), and backward compatibility with kernel and user-mode code from insecure platforms. Apple buys NeXT, Windows NT makes Microsoft billions, and BeOS dies.

6. The mobile market. Lots of so-so OSes with various interfaces. In a rare win, the one aiming for some kind of security gets dominant, since it aimed at business productivity. The truly secure ones stay at around $3,000 due to low volume; almost nobody cares, but there's still enough of a market to sustain them. Apple puts a mini-Mac on a phone. Then almost nobody cares about security and Bill Gates starts looking poor. A surveillance company then buys Android, keeps mixing insecure platforms for the ecosystem, and becomes biggest in terms of sales. Blackberry, which rebuilds on a more secure and reliable OS (QNX), dies since nobody wants to buy it or build apps for it. The others already died. The cryptophones, due to user demand, begin porting their stuff to insecure Android so they can also run Android's surveillance-oriented apps, but in a "hardened" way. Something similar happens with tablets, where the insecure one dominates for convenience and app ecosystem, whereas the QNX-based PlayBook was extremely impressive technologically but had too few apps and users.

7. Server apps. The problem that things were too hard to do or configure securely was well recognized. Companies built on things like OpenBSD and hardened Linux to make appliances that were easy to use, more secure, and relatively affordable. Consumers bought garbage instead. Companies like DefenseWall and Sandboxie made brainless solutions for Windows security at low prices. Consumers ignored them. Solutions showed up for big companies that even pentesters didn't breach, for DNS (OpenBSD BIND or Secure64), web (Hydra), email (qmail), and so on. Most companies didn't use them. Even the most plug-and-play systems, with five minutes' configuration, selling for less than the big dogs, had tiny market share.

8. Consumer apps. Signal vs. Facebook Messenger. SpiderOak vs. Dropbox. Easy encryption apps vs. storing plain files. Simple terms with a FOSS license and usability vs. a long EULA full of bullshit. Private and cheap vs. surveillance-oriented and free. It can come down to a free or $1 messaging program that's just as easy to use as the insecure one. They still won't use it.

Conclusion

Computer security started accidentally with a well-designed mainframe. Then there were well-designed minicomputers. Then there were more secure desktops. Then there were robust desktops. Then there were mobile and tablet offerings that were more secure or robust. At each step, user demand for things other than security forced suppliers to weaken the properties of their systems to remain competitive. Not just to maintain profit: they had to eliminate security to even *exist*, given that projects by big companies and startups alike that did INFOSEC both easy and right mostly disappeared when nobody bought them. If users don't buy quality or security, then it's their fault when the supply side produces neither. Matter of fact, our economic system in the U.S. even expects companies to deliver what users want, no matter how much bullshit or damage is involved, outside a few things that are illegal.

So, the problem is the users. That's why I don't expect a "unicorn" startup making an easy-to-use messenger that's actually efficient, reliable, and secure. Many companies have tried to market these as you'd advise them to. Instead, unencrypted email, very insecure IM, Facebook, unencrypted text, WhatsApp, and Slack dominated that area at various times while the secure ones disappeared or made almost nothing. The insecure winners are still on top now that vetted, secure alternatives are easy, cheap, or free. I rest my case.

Gerard van Vooren • October 3, 2016 10:32 PM

@ Bob Robertson,

"The fact that after more than 20 years there is still an "autorun" is mind boggling."

No, it isn't. There is no incentive to remove this "feature". If there were accountability in the software world, it would be quite different. Just think about the McDonald's coffee lawsuit. It's that simple.
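For anyone who never looked at the mechanism: the "feature" in question was driven by a plain text file at the root of the removable medium. A sketch of the classic autorun.inf (the file names here are hypothetical), which older Windows versions would read on insertion and act on automatically:

```ini
; autorun.inf at the root of a USB stick or CD-ROM.
; "payload.exe" is a hypothetical name standing in for any binary
; the stick's author wants launched when the medium is inserted.
[autorun]
open=payload.exe
icon=payload.exe,0
label=Totally Safe Drive
```

That two-keyword mechanism, plus users' willingness to plug in found media, is essentially the entire attack the USB-stick studies rely on.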

Nick P • October 3, 2016 11:13 PM

@ Gerard

"If there was accountability in the software world, it would have been quite different. "

If there were accountability on the user side, it would be quite different. Users notice there's an autorun feature that can cause problems; they intentionally keep buying computers with it, or don't try to turn it off. Suppose the malware that infects their PCs got them disconnected from the Internet by carriers whose policy was to disconnect negligent or foolish people: many would demand autorun be removed.

Whereas removing features like that can have negative consequences for vendors in today's market, given some chunk of people will consider ditching the product. Bruce and another commenter mentioned USB as a risk vector. There previously were all kinds of lower-risk connectors on desktop computers. Users complained about them until manufacturers made a dirt-cheap, universal one that supported arbitrary devices and hacks. They kept buying those so much that a series of them evolved into what USB is today. Same with mobile, where data and power connectors were separate until users preferred them combined, and cheap, for the convenience. Now they get data and hacks in the same cable.

Accountability should work both ways. Supply and demand already does. Both sides caused current situation with demand side bankrupting about anyone that tries to do better. So, I don't encourage companies to do that if they want financial success.

Dill Baits • October 3, 2016 11:14 PM

It's the user's responsibility to have an IQ of at least 140+, parents and teachers well versed in IT, access to computers from an early age, and the freedom to make mistakes, learn from those mistakes, and then successfully repair them.

Now, if all that goes to plan, place them in an environment where everyone else meets those requirements and where security, and the funding of security, is a priority.

I can't get quite intelligent people to even listen to basic security information without their eyes glazing over. And for everybody else: when you crash your car into a power pole at 140 MPH, you probably only take out local power.

Design of a system is important. Look at Steve Jobs and Bill Gates, and tell me the design of Microsoft Word has nothing to do with Bill. I don't use Apple, and can't say I'm fond of it, but it has a great user interface and design. Windows is awful, but is really easy to break into if you have to fix it (no one remembers to supply their weak passwords). Though you can get through OS X file permissions if need be; like anything, if one bug is fixed today there's another tomorrow.

Windows is as messy as Bill Gates' logic: the awful boot system, the awful file system, the terrible permissions system, and the eleventy billion other security flaws that I can't tell you about due to NDA. All the security problems start with the terrible design and Bill's attitude. You'd try to tell the guy something and he'd flip his lid if it wasn't convenient.

Microsoft Word is Windows in a nutshell; tell me that is good design and simple to use? Every other OS is a slightly less bad one, no matter how god-like I might feel with my magic kernel and the ultimate compile, dude. I'm biased, as I prefer Linux, but I have to use Windows a lot of the time so I can fix everyone's problems. I have to know how to fix it when some genius breaks it good and proper in new and unusual ways.

Road rules = stop, give way to people depending on what side of the road you drive on. You don't have to be able to read in order to drive a car.

Computers = Try explaining protocols to people, try explaining binary, try explaining a transistor; give up before explaining how data moves through hardware, and instead try telling someone the hard drive is not the big metal box.

It won't be DDoS and Fraud that bring down the internet. It will more likely be some stupid email doing the rounds.

Now I got this strange email with a weird attachment. It looks a bit suspicious so I've emailed to you {insert name here} will you please take a look at it for me? &or Hey check out this exciting gossip! ;)

Jim D • October 3, 2016 11:22 PM

@ Nick P

Now explain your argument in layman's terms to a user without using any complicated jargon or abbreviations, including detailed video instructions of where the power switch is located. It would be really handy for Introduction to Computers.

ab praeceptis • October 3, 2016 11:35 PM

Jim D

You are somewhat old-style and simple-minded, it seems. A power switch? How uncool! What's needed is an iPhone/Android app that powers up the computer over NFC.

Btw., for an extra of just $79 one will get a stereo "bling" sound when the computer is activated, plus WEP security for an optimized "switch-on experience".

Version 2 will add powerpoint capability and automagic twitter messaging ("xyz just turned on his/her computer").

Nick P • October 4, 2016 12:13 AM

@ Jim D

Haha. The lay version is: you download Signal after a news source you trust talks about how secure it is, fill out basic info, get your friends to do the same, and now you're significantly more private than you were before. If a business has the money, it gets one of the cryptophones. Back in the day, you'd buy DefenseWall and/or Sandboxie, for cheaper than anti-virus, to contain most web threats before they reach your actual system. A business uses OpenBSD or hardened BSD/Linux with the same number of IT professionals, still using cross-platform apps. That's it for a lot of threats. And that's too much for 99% of consumer or business users to put work into.

They'll learn and play their iPhone games for hours, though. Don't get me started on how much time they'll put into Pokemon Go but not basic security of same device. Also, they skip the interesting EULA on that one, too, which also compromises data on that device. ;)

Anon10 • October 4, 2016 12:55 AM

@Clive
"In almost exactly the same situation as physical security against burglary. Do we blame the victim there? Yes of course we do, if we think they've been negligent."

At least in the US, I think most people solve the home burglary problem through insurance. To the extent that people invest in physical security for their homes, it's usually either the minimum required by their insurance companies or else driven more by a fear of physical harm than theft.

Wolf Baginski • October 4, 2016 1:34 AM

It is an old joke that the most dangerous part of an automobile is the nut behind the wheel, but have you seen how many differences there were between the early vehicles? It was a while before the steering wheel became the direction control of choice. I think we're past that stage now, but we still see the Model T Ford of the computer age coming off the production line with an epicyclic gearbox.

And we can only use what they build for us.

The con men keep trying; the crooked snake-oil salesmen still try to sell to us, and yet the last year has been full of Windows 10 and its troubles. Users need to be a bit more careful, but the way Microsoft has carried on with getting people to update has gone from annoying to abusive. I don't use their software any more, and I have given up counting the crooks who ring me up and tell me they need to fix a problem with Windows. Even the pensioners on the bus are starting to notice.

At the same time, a year after switching to Linux, I am seeing companies going back into the Microsoft-or-nothing mode. They have flirted with Linux for years, but some key library gets bought up by Microsoft and that hope seems to die.

I only name Microsoft because that is what I am familiar with. Apple has its own clutch of problems, and Google runs a minefield of obsolescent, unrepairable, Android gadgetry. But Microsoft is the Great Old One, lying there dreaming in its dead city of Seattle.

Well, motor cars did change, but the change didn't come from the automobile industry. It started in a hotel room in Lincoln. The first use of the rude mechanicals came barely a hundred years ago, built by a company that made farm machinery. They almost couldn't work, right at the bleeding edge of engineering, when they advanced at Flers-Courcelette.

Well, the world of the computer needs something like the tank, some alternative to treating the users like the Accrington Pals. And, just like they did a century ago, will anyone ever tell the next generation what really happened?

Birger KraegelinOctober 4, 2016 1:50 AM

For years I have been talking about security. And I try to convince people that the job of IT security is to protect users. It's not our job to protect infrastructure from users.

It is our responsibility to use the best techniques and products available for this aim. But in my daily job I always see people training their users just to save money.

Do we ever learn?

JonKnowsNothingOctober 4, 2016 8:12 AM

The problem is quite simple to understand:

  • WE will FIX IT in THE NEXT RELEASE

Why we have to "Fix it in the next release" is pretty easy to understand too.

  • It doesn't work
Why it doesn't work is also pretty easy to understand:
  • Don't worry: someone else is doing that part
  • Don't worry: we need to demo this tomorrow
  • Don't worry: you don't need to know about that
  • Don't worry: THAT DUDE said we didn't need to because: (fill in the crap answer of your choice):
    • too hard
    • too time consuming
    • NIH (not invented here)
    • Don't argue with the designer, engineer, VP, CEO
    • We have to get it out the door NOW!

What to do about this is also easy to understand but not easy to resolve.

  • IT'S BROKEN

There isn't a computer project anywhere that doesn't have a bug tracker system -- hardware or software, with loads of QA entries about what is broken. If it's a big project, these databases contain decades of bugs.

So, where do these bugs come from? You might very well ask..

  • Do they drop from heaven?
  • Do they autospawn?
  • Do they jump off the white board into the code base?

Hmmm.. clearly not.

And what happens to the 500,000 bugs in even a smallish application?

  • Database Purge: 10 years and older
  • Database Purge: 5 years and older
  • Database Purge: today and older

Those 500,000 bugs didn't just happen by accident. And the database purges and selective fixes don't recognize that the bugs happened because of Code and Design Flaws from the start. And these flaws are perpetuated down the process from one product to another, from one keyboard to another.

Programmers don't write code, they write the bugs they know how to write. The claim that developers don't know this is easily proved by every QA report ever posted to a fix it system.

Until corporations AND the computer industry are WILLING to change this model: There is no Reasonable Expectation of User Safety regardless of how much you would like this to happen.

As for auto-magically loading anything into a system on the presumption that it Makes Things Easier for the User ...

Ahhhhh No Thanks.

Why not? You might very well ask...

  • IT'S BROKEN
  • IT'S NOT TRUSTWORTHY
  • IT'S COMPROMISED
  • IT'S COMPROMISES OTHER STUFF
  • IT'S FAKE

There is no way for a User to know that it isn't cracked, hacked or forged.

The system is not sustainable. The methodology is faulty and cascades design flaws forward forever.

There isn't any reason for this to continue. "Why can't a user click a link safely?" really needs to be reworded:

  • Why can't a programmer implement a method safely?
  • Why can't a designer have confidence that the device inheritance is stable?
  • Why can't the device code be trusted?

Why are we still:

  • FIXING IT in THE NEXT RELEASE

DivyaOctober 4, 2016 8:31 AM

I like the tone of this article asking security professionals to work on strengthening the systems and technologies, rather than "fixing people".
If people are indeed the "weakest links" in the system, then I think security plans should rely minimally on them to keep the systems secure.

And it is usually simpler to work with technology and code, than on personalities!

IanashA_titocIhOctober 4, 2016 9:07 AM

Consumer Reports' (CR) current print issue, November 2016, has written about some things a user might do regarding security and privacy:
https://www.consumerreports.org/privacy/the-consumer-reports-10-minute-digital-privacy-tuneup/
https://www.consumerreports.org/privacy/66-ways-to-protect-your-privacy-right-now/
https://www.consumerreports.org/privacy/protecting-your-digital-privacy-is-not-as-hard-as-you-might-think/ ; by Julia Angwin

https://www.consumerreports.org/cro/index.htm ; CR home page

Questions:

1) Does anyone have any thoughts about best practices, or things to consider, for using or not using 2FA? For example, if using 2FA, might it be preferable to use a second SIM card for 2FA purposes?

2) Although CR mentions VPNs, CR doesn't mention Tor or Tails. Now that around 2 million people might be using Tor, might Tor be ready, or about ready, for mainstream use, at least in some countries?


Antonio CostaOctober 4, 2016 9:07 AM

Excellent essay!

Leave the user alone.

Industry should focus on better software/system practices for architects, developers, reviewers, testers, managers, admins, etc. Let insurance companies validate, or not, software and systems...

Sancho_POctober 4, 2016 10:35 AM

@Nick P (”So, the problem is the users.”)

Honestly, I’m shocked by your comment. It wasn't a joke?
You got all the facts, you correctly use the terms capitalism, government, and economy, but you finally conclude that the users must control the system.
Sure, you didn’t mention voluntary self-control.
Time to meet the real users (grocery, railway station, stadium, …) to encounter your future.

But wait, in a couple of years we, the people(s), will have changed the system anyway.

Tommy DuhnOctober 4, 2016 11:05 AM

> "Blame the victim" thinking is older than the Internet, of course.

Reminds me of Tesla and Autopilot. It's always the user's fault, never their system for not being good enough yet being marketed as if it were - with footnotes*** (might as well tell them to read the full Terms of Service).

JaneOctober 4, 2016 12:54 PM

You make wonderful, valid points; however, both sides must be addressed.

I came here from some reddit AMA and don't know much about you. I hope you are not one of those people who try to please the user by sticking up for them. The security industry has enough putzes that keep doing that.

YES, the user must be blamed when they do silly things they have been warned about many times. AND YES, the developers must stay on top of things so as not to rely on users behaving correctly and not falling for new and old tricks.

I work for a company outside the US. We have a list of 4 things that, if the user does them, they get fired on the spot; all sign this as part of their employee agreement.

1. If they open any file other than a PDF, Word or Excel file, they get terminated on the spot, and we fire people almost weekly. YES, we know PDFs etc. can contain viruses; we have systems in place to alert us, and yes, sometimes the user messes up. I won't go into the entire protocol now.

2. If they log into any personal email accounts while at work using our network.

3. If they download anything off the internet without approval from security, this includes images or anything.

4. If they have social media accounts that have any hint of their place of employment.

I know in the USA this would not fly, however I am glad in this country it does.

We have a list of rules for developers that is even more elaborate. I can tell you that developers don't pull the junk they sometimes pull elsewhere. They check and recheck things before announcing their product is ready to go live.

Stop appeasing the users, THEY MUST KNOW that yes, they must be careful and their actions can put a business out of business.

When employees learn they get fired on the spot for blatant stupid stuff, just like leaving the doors open and leaving the keys and access badges in the street, they might wake up.

Nick POctober 4, 2016 1:14 PM

@ Sancho_P

Where's the joke? All the top incumbents and startups in each category produce insecure products. The ones with easy-to-use, low-cost, and secure systems barely sell anything. My conclusion sort of speaks for itself from what users actually do. Although, I did listen to and survey them in a public-facing job to the tune of 22,000+ a week across many demographics. They consistently wanted more stuff & convenience over quality. Could be American culture, but similar software dominates overseas, too.

Of course, if you doubt this, please link to one high-security, usable product in each category that's used by at least a million people. They steadily appear out of academia or startups. There should be a pile of them with at least 1% of WhatsApp, Gmail, or Facebook users if the demand side isn't the problem.

Gerard van VoorenOctober 4, 2016 1:15 PM

@ Nick P,

If there was accountability on user side, it would be quite different.

Yes, quite so. On the other hand, it's also a couple of orders of magnitude harder to achieve. And it's the world upside down.

Let's say that I build and maintain a highway that you have to pay for when driving on it. If I don't fix that hidden pothole for years, no, decades, and people break their tires and cars because of that, repeatedly, I have a problem. In the software world they solve that with the "we are not responsible" clause. But we can all agree that it's a ridiculous clause, isn't it? No, accountability belongs to the manufacturers of the product because they are the guys who make money with that product. It doesn't matter whether you are a user or a paying customer.

Whereas, removing features like that can have negative consequences for vendors in today's market given some chunk of people will consider ditching the product.

I am not saying remove features, but rethink features, and if after careful rethinking a particular feature just can't be made secure, then indeed you have to ditch it. It's all part of accountability. So yes, it will cost more. I don't care. But if there is no accountability, then autorun will still exist 50 years from now, because MS values backwards compatibility more than security. You especially should know better, Nick.

@ Sancho_P,

But wait, in a couple of years we, the people(s), will have changed the system anyway.

Explain yourself to the skeptical person that I am ;-)

Jon CamfieldOctober 4, 2016 1:42 PM

This is the specific problem we're trying to tackle with USABLE (https://usable.tools) - if we want to scale secure tools for the most at-risk users, they have to first be usable and valuable to those using them, not another burden or barrier.

BillOctober 4, 2016 1:57 PM

@Jane

"3. If they download anything off the internet without approval from security, this includes images or anything."

I hope they get approval for EVERY SINGLE INDIVIDUAL WEB PAGE THEY VISIT AND EACH IMAGE ON SAID PAGES... because their browsers do download images as they browse the web. Your red tape must be pretty long, with people lining up around the building to get permission before visiting each web page. You might as well just shut off the internet in your country; more work will get done at that rate.

RajOctober 4, 2016 2:03 PM

I think that it's always a two-fold approach; and both parts must happen to have any chance of success:

1. We owe it to end users to educate them accurately on the risks they face, and how to spot them. We can also tell them about the limits of the protections we have in place.

2. Simultaneously we must design systems that limit the damage that an individual can do - whether deliberately or accidentally - and we must implement technical controls to protect the end user and the systems they use.

They're both hard things to do but that's why we're here.

We can't give up on the user: when it comes time to ask for money/resources/time we have a better chance of explaining the cost/benefit if the users already have an understanding of the risk the business faces.

Conversely we can't expect the users to be experts on spotting/blocking phishing emails or malicious downloads. That's our job.

PantsWearing701October 4, 2016 3:25 PM

We all know that users don't like to have long passwords and all that stuff.

But just telling us the obvious won't make the problem vanish.

I miss some practical information inside this article. For example, what exactly could we do to prevent the need to fix the user?

Lock the user in, like macOS does?

rOctober 4, 2016 3:51 PM

@Gerard,

In all fairness, if you ignore me because you don't want to respond to some wind-bag, I understand. But I'm not sure it's fair to paint the backwards-compatibility thing solely on Microsoft; it's more of a market thing. Businesses don't want to upgrade their grandfathered-in industrial equipment and mom-and-pop-shop warez. They expect to be able to run something for 10 years no matter what; that's why when they buy software they want their extras on CD or DVD - not pre-installed on some Dell. Warez excluded of course, but let's look at this further - Microsoft knows that a lot of people aided their starter businesses with "free warez!"; it's really not in their (or Linux's) business model to alienate their clientele.

Now, for the salt water: the 32-bit Linux compatibility layer. Is your distro a pure64, or does it even support some form of **cough** "backwards" compatibility layer in the face of the GNU World?

Nobody wants to compile their own shit, they want things pre-built pre-bugged and pre-cocious.

We, (even us non-contributors) are making motions in the right direction. Inch by inch. And, when the time comes for the big move it will move quickly - as the plans are slowly solidifying.

That's just my take, at the table of "cheap software solutions".

Sancho_POctober 4, 2016 5:47 PM

@Nick P

Nah, it’s not just that the users want cheap (read: free) and convenient things, now, not tomorrow. Yes, they don't give a shit about security, instructions, how-tos and warning labels.
They are not interested; they're lazy, simple-minded and eager for fun.
Like my kittens.

But I love them!
I’d never blame them for things they don’t understand.
It is my responsibility to care for them if I realize and can.

I blame the people (the 1% of the population) who are able to think and do not realize that they are responsible for the well-being of our society.
They are to blame for not governing but instead taking short-sighted advantage of the ordinary people.
I do not expect my neighbor to make decisions for mankind.

Mind you: The lower you go in society the better friends you’ll find.


@Gerard van Vooren

Seriously? [because of the ;-) ]
Are you watching the news?
I mean, our economy / life is built on never-ending growth, but our environment is limited. It is like being obsessed with gaining weight.
Out of control, although we have all the data at our fingertips.
In the age of perfect communication technology our "sovereigns" can’t talk to each other, let alone know what to talk about.
Rest assured, nature / we the people(s) will take control.
It may not bode well.

Nick POctober 4, 2016 9:16 PM

@ Gerard

"Let's say that I build and maintain a highway that you have to pay for when driving on it. If I don't fix that hidden pothole for years, no, decades, and people break their tires and cars because of that, repeatedly, I have a problem."

It's not how software construction and purchasing works in most places. It's more like this. Several providers build highways. Some are robust, safe to drive on, and take a little longer to get there due to the path they were laid on. The people who build these maintain old highways more than they build new connections. The other companies build highways that go straight, sideways, and in circles. Careful use can get you there faster, but people crash into each other. These highways are poorly maintained, with new paths constantly made. The customers reject the safe, robust highways for the unsafe, faster, downtrodden ones, citing specific benefits. They continue to pour money into the expansion of these highways and their development style. They build their own side roads, companies, and houses almost exclusively on these. Most companies making good highways go out of business, with the few remaining charging higher tolls due to low traffic, with fewer connection points, businesses, etc. using them.

Now, at this point, do we blame the drivers for pouring money into shitty highways, or the highway providers for not continuing to build better highways nobody drives on? I'm for blaming the drivers for almost exclusively using and rewarding shitty highways. You're blaming the builders for not choosing to scrape by or go bankrupt building what nobody uses.

"No, accountability belongs to the manufacturers of the product because they are the guys who make money with that product. It doesn't matter whether you are a user or a paying customer."

The consumers benefit from the crappy products (and highways) too. They build whole lifestyles or businesses on their benefits. They continue using them even when damage happens, because the better stuff does not provide what they are accustomed to. So, accountability should either go both ways, with each required to do better, or to the consumers for punishing anyone who does something better. Suppliers that build what consumers don't want will already be punished by the market with bankruptcy. Unfortunately, that's mostly people making high-quality or secure systems. Like my model predicts. :P

Anon10October 4, 2016 11:03 PM

For once, I agree 100% with a blog post. 95% of users have no idea what a certificate is and certainly no clue what it means for one to be expired. Even if you put a plain English warning: "You might not be able to trust this Web site", you train users to ignore these warnings when they have to go to sites with expired certificates to accomplish their jobs.

rOctober 4, 2016 11:16 PM

@Mod, that should likely be in the squid pro quo but eh, shoot me.

Edit: ewe, but I think yew can be made an offering too.

ab praeceptisOctober 4, 2016 11:22 PM

Gerard van Vooren, Nick P (and maybe others)

I think the problem goes much deeper, and depending on what level one looks at it from, all of you (and Bruce) are right.

To show what I mean: Nick P is right insofar as users really are stupid(ized). That, however, is not (or at least only to a degree) in their genes; it's not simply that they are born stupid. It's rather the result of a long process and their experience.
Moreover, and more intricately, it's a premise problem: (most) users operate on the assumption that proper engineering is a given and that the set of choices available to them sits on top of proper engineering; that set contains features, glitz, and stereo blings.

The reason for that being that humans tend to think in simpler analogies when confronted with complicated things. Additionally, most legal/state systems either actually ensure, or at least project a warm fuzzy feeling, that certain basic requirements are taken care of by law anyway. All together this leads to users looking at software like they look at a bridge or a car, assuming that the basics, in particular a reasonable level of security, can be considered taken care of properly. A car has working brakes and blinking lights, a bridge will carry normal load under normal circumstances (no hurricane, etc.), and consumers/users are free to choose from the features on top of the basics (features, design, performance, etc.).

To make the situation much worse, computers and related devices (e.g. networks) are critically and dimensionally more complicated than other common engineering fields. Politicians (who ask their secretary to print out and file the internet) may justifiably feel they have a sufficient understanding of what a "good" bridge is, or even of what a reasonably well specified and built fighter jet is - but they certainly have no grasp of the IT field. Which leads to the situation where the state is simply incapable of providing the necessary structure, frame, etc.; and those agencies that do have a good understanding of at least some part of IT tend to be far away from parliaments and people.

Finally, we must also look at ourselves, at the professionals in the field. Frankly, it's a sad picture; the vast majority of software developers have never formally specified, let alone modelled and verified, code. In other words, they are "well trained typists" rather than engineers.
To make it worse, there is still somewhat of a "gold rush" paradigm in many companies. I know more than one director, CEO, or shareholder who told me in one way or another that IT is a great business because one can "make lots of money out of next to nothing".
And that shows. Just look at all those "security" related companies (AV, etc.). They usually don't sell what they seem to offer; rather, they sell a feeling of security. But then, unlike Mercedes or Caterpillar or a large construction company, they are in a market where very few customers actually understand what they need and want, and where expensive marketing is often a more valuable tool for the companies than a good engineering department.

It's not the users *or* the developers and engineers. It's *both* - plus corps, insurances, incapable state agencies.

That's, btw., also a reason for me to look somewhat more kindly at Microsoft than I used to. For quite a while they have been on their way to becoming a mature company by investing heavily in formal methods and tools. Or, in other words, they are getting closer to the bridge and car builders, to proper engineering.

FigureitoutOctober 4, 2016 11:42 PM

Nick P
Most companies making good highways go out of business
--Actually, from what I've heard, it's gov't-approved contractors abusing a gov't system. Citizens (customers) don't have a say in that; we just get crap shoved on us with no real means to do anything about it. I have to remember all the roads around me and dodge potholes; I do constant situational awareness anyway, but it's annoying. Then I think it's that the gov't doesn't hold contractors' feet to the fire if they do a crap job; it lets them get away with it. Make sh*t roads that have to constantly be repaved, and the contractors have never-ending business. Where I'm at, new bridges need rework immediately, and there are structural cracks in them already.

There is a difference between contractors. From what I've observed, the "public works" people patching potholes do a truly terrible job; it looks like zero training and they hire anyone (they don't use liquid, but dry rocks that just crumble and rattle on your car, maybe fling up on your windshield). But there appears to be a separate small contractor (seen their truck just 1 time, probably charges more) that patches potholes like they actually studied them. The patches hold better b/c they fill them to just BELOW the hole, and with more liquid tar the smaller rocks sink into crevices better and at least don't cause more damage, especially when a snow plow destroys the crap patch job.

JaneOctober 5, 2016 2:23 AM

@Bill

We have a whitelist of sites they can go to, we update often, add some remove some. Within our sector we are known for showing extreme respect for the security of our clients, hence the company keeps growing.

Yes, it does get annoying sometimes, however, nothing is more annoying than getting hacked.
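[Editor's note: for readers curious what a policy like Jane's looks like mechanically, here is a minimal sketch of a domain allowlist check. The domain names and function are hypothetical; real deployments enforce this at the proxy or firewall, not in application code.]

```python
from urllib.parse import urlparse

# Hypothetical allowlist; in a real deployment this lives in the
# proxy/firewall configuration and is updated by the security team.
ALLOWED_DOMAINS = {"example-client.com", "example-partner.org"}

def is_allowed(url: str) -> bool:
    """Return True if the URL's host is an allowed domain or a subdomain of one."""
    host = (urlparse(url).hostname or "").lower()
    # Match the domain exactly, or as a dot-separated suffix (subdomain).
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)
```

Note the suffix check requires a leading dot, so `notexample-client.com` does not slip through as a fake "subdomain" of `example-client.com`.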

Erlend Andreas GjæreOctober 5, 2016 3:00 AM

Absolutely true that we need to design systems to be secure by default. Yet, in the face of reality, we still need security awareness and training, because systems are obviously not there just yet. People can still be tricked into going around the technical systems.

Secure practices among users add to our defense in depth. People can even discover threats and breaches that bypass our technical barriers, contributing greatly to our incident-response abilities. But this requires us to team up with people, and not blame them because we are unable to 'fix' them (or the technology).

This is all about combining technology and people, in my experience - to create good user experiences. Which is possible if we strive to take the pain out of the security technology (and policies) we use. Not letting security controls be perceived as barriers to productivity that the users cannot understand or agree with - periodic password changes, just to name a popular family member. And, as a consequence, start listening to people. Eye contact can be surprisingly effective, even (especially) if they have some security grudge to shake off at first. In which case, they wouldn't have taken our awareness and training efforts very seriously anyway.

Usable security starts - unsurprisingly - with the users. But it's not only a matter for researchers and technology companies - it is just as well a matter for the IT department.

Clive RobinsonOctober 5, 2016 4:32 AM

@ Erlend Andreas Gjære,

Absolutely true that we need to design systems to be secure by default. Yet, in the face of reality, we still need security awareness and training, because systems are obviously not there just yet.

You are more optimistic than I am about "secure by default". Realistically, I don't think it will ever happen in the consumer OS / app space (and I've yet to see it in any general connected system[1]).

Thus the bottom line is we are always going to "need security awareness and training" in users --and more importantly in management-- because systems will never be sufficiently secure.

[1] That is although certain areas have made great strides in the past few years there will always be faults in design at some point on the computing stack. And as with crime in the tangible world we won't be able to stamp out the various forms of "cyber" crime.

Sancho_POctober 5, 2016 10:31 AM

@ab praeceptis
” [users] … it's not simply that they are born stupid.”

This is a small misconception in your post I can’t ignore, because it’s a basic distinction between humans and animals. It is interesting insofar as it also relates to computers and machinery (mainly RAM- or ROM-type “OS”).

Humans are born as basically stupid, helpless individuals, ready to die, while (most) animals are ready to go when born. They know nearly everything needed to survive and reproduce, even without schools, mentors, or role models.
For humans, the place of birth plus the society is crucial for development and life. So whether one develops a Catholic (or more or less free) or Muslim mindset depends on the environment at birth and in early life; the same goes for wealth and education (e.g. the ability for abstract thinking).
- Granted, there are exceptions, but I’m talking about the 99%.

So there is a huge difference between humans born in a hut in Ghana or in the center of Stockholm, Sweden. This beautiful world is full of fascinating differences.
We have to realize that huge scope of nature. Not everything is America.
Sorry for going off topic.

Users have to be considered stupid. They are. Warnings don’t change that.
This is not an offense.
Ever seen a kitchen knife with a warning label? No? A knife has to cut, period.
But when it starts to cut things in secret, without the user’s knowledge,
we can’t blame the user.

But now we see that there is no difference between two computer operators, the guy in Kenya or the President of the US, when suddenly faced with the request “Enter your password to proceed”.

A device sold to connect to the Internet has to be ready for the Internet, be it a computer or IoT (= machinery).
If it is not, we cannot blame the user.

Sancho_POctober 5, 2016 10:41 AM

@Clive Robinson

”Thus the bottom line is we are always going to "need security awareness and training" in users --and more importantly in management-- because systems will never be sufficiently secure.”

It would be a slippery slope for me to fully agree here. Part of my business was cable car safety, and there was absolutely no “security awareness and training” for the users involved. They simply want fun in the snow, not security training, and it would not help.
Yet you couldn’t open the cabin door from inside when the car was out of station.
And if something happens (yes it does) there are laws to protect the user, not to blame them.

This is the Bill G. saga: Let’s rewrite the law and make money from it.
The American wet dream.

ab praeceptisOctober 5, 2016 11:39 AM

Sancho_P

That may well be the case, but that wasn't my point. My point was that (at least western) societies, arguably "market value" driven, are being stupidized. Evil corp's Windows, for instance, stupidizes people; it is *not* simply a "friendly, easy" layer: that "friendly, easy" layer a) dumbs users down and b) prevents them from learning many things by blocking access to the deeper layers. A truly friendly layer would be optional and could be bypassed.

Also note that a healthy society strives to educate its citizens. Our field, however, at least to a large degree *wants* dumbed-down people, because dumbed-down people are good customers.
Looking at it somewhat brutally, hackers are the most valuable friends of whole industry segments, because they create problems for which companies can then sell "solutions". Looking closer, yes, that also means that at least some companies in the snake-oil field of IT do *not want* good products. Selling shitty software is double-good: first, it keeps development costs low, and second, it makes sure those products will be hacked or create trouble, which restarts the cycle.

There are other signs, too, for instance the many titles used in IT that propagandize and subtly (or less subtly) suggest that we are experts, gurus, etc. and the dumb users need us.

I don't want to go deeper into discussing that rather political matter here, but I for myself am convinced that brutal profitism, shareholder-value-driven corps, etc. are very close to the core of the lack of security in our field. To put it bluntly, I'm convinced that "value for the customer" or even "safety and/or security of the customer" rank *very low* on the priority scale of the IT industry. More than a few actually live, and live well, on the customers being ignored and f*cked.

YvesOctober 5, 2016 2:43 PM

I agree that we definitely should endeavour to build secure, mistake-tolerant systems. But as far as I can tell, defense in depth teaches us not to rely on one single security mechanism. To me, user awareness is as necessary as technical security. These are just two different, complementary ways to achieve one goal, and we should leverage them both.

Bowser BrowserOctober 5, 2016 3:22 PM

Why can't they click on links in emails with wild abandon?

Perhaps it is because other people profit from the insecurity. My first reaction to this is that "proper trust escalation" is perhaps the answer. What does a 'link' demand as far as impacting the user's security? From an email viewer, it sounds reasonable to me that a link clicked with wild abandon ought to be able to display formatted / variably colored/sized/fonted text and .jpg images. If it wants to do something more complex, like open a PDF viewer or an image format that isn't widely used (= .jpg; perhaps use 'existing default standard' verbiage), then give the user a generic 'security complexity' escalation dialog, with details hidden by a standard 'click here for tech details you are not expected to understand' button. Also have a button that explains the _general_ security-complexity escalation theory involved, at a succinctness appropriate for that context, and a link/button letting the curious dive for details while not giving the impression that any user needs to, or is expected to, know that level of detail.

And to those who think I've just made the point that that is all too complicated and clearly not a viable solution- BWAHAHA

Gerard van VoorenOctober 5, 2016 3:59 PM

@ r,

In all fairness, if you ignore me because you don't want to respond to some wind-bag I understand.

Well, don't fill in the dots by yourself. The problem is that my time is a bit limited these days and I don't read and reply to everything. I still have in my mind what you said about OpenBSD roughly a month ago.

But I'm not sure it's fair to pin the backwards-compatibility thing solely on Microsoft; it's more of a market thing.

But I do blame MS for this, and for good reasons. I am seeing too many people and companies these days that are victims of ransomware.

That said, I don't only blame MS.

Businesses don't want to upgrade their grandfathered-in industrial equipment and mom-and-pop-shop warez. They expect to be able to run something for 10 years no matter what; that's why when they buy software they buy their extras on CD or DVD - not pre-installed on some Dell. Warez excluded of course, but let's look at this further - Microsoft knows that a lot of people aided their starter businesses with "free warez!"; it's really not in their (or Linux's) business model to alienate their clientele.

I am not saying you have to break compatibility. I am saying that there needs to be accountability. THAT is the problem and the only problem.

Now, for the salt water: the 32-bit Linux compatibility layer. Is your distro a pure64, or does it even support some form of **cough** "backwards" compatibility layer in the face of the GNU world?

FOSS isn't a product. A product is something you can sell. It becomes a product when people are making money with it. THEN there needs to be accountability as well.

We, (even us non-contributors) are making motions in the right direction. Inch by inch. And, when the time comes for the big move it will move quickly - as the plans are slowly solidifying.

Like what I said to Sancho_P, I am skeptical (this time without the wink). We'll see.

AnonOctober 5, 2016 6:04 PM

@Jane

What country are you from? What company do you work for and what do they do?

MilkusOctober 5, 2016 11:48 PM

The problem is with users, developers, experts, execs, software, hardware, etc.
It is not a matter of fixing one or the other.
We should strive to address security from all angles.
If the training methods are wrong and not hitting the mark, change them.

The idea that security lies solely in the hands of good design is unrealistic.
Highlighting the failings of training humans can be done just as easily as highlighting the continual flaws of well-developed security products.

Just like car safety, the responsibility lies with the driver, road engineers, the car manufacturer, road rule makers, etc. We don't say that training the driver is futile, give it up.
We know people can learn new habits, follow safe directives, and have a desire to stay safe and secure. Yes, a good driver is not made in a week or even a year, but with continued practice most people improve and their behaviour becomes safer.

That said, it's the manufacturer's role to design a car with safety features that minimise the risk of harm. They need to make sure components don't fail, and safety is at the front line of technology advancements.

"We must stop trying to fix the user to achieve security. We'll never get there" is an odd quote... we don't advocate stopping building security products just because we will never reach 100% security.

I find it very odd to think that excluding the 'human factor' from a heavily user-reliant activity is possible. Driverless cars may become the norm because driving will no longer be a human activity; I look forward to seeing how one can implement userless personal computing.

2 cents


Who's Asking?October 6, 2016 5:26 AM

"The problem is with users, developers, experts, execs, software, hardware, etc.
It is not a matter of fixing one or the other.
We should strive to address security from all angles.
If the training methods are wrong and not hitting the mark, change them."

There's no mark to hit. Consider the comic of "In this corner, we have Dave." As the comedian once said, you can't fix stupid. No amount of training can fix the truly imbecilic because it'll just go in one ear and out the other. Problem is, some of them are in EXECUTIVE positions.

"We know people can learn new habits, follow safe directives, and have a desire to stay safe and secure. Yes, a good driver is not made in a week or even a year, but with continued practice most people improve and their behaviour becomes safer."

No, we DON'T really know people can learn new habits. It's like the old saying, you can't teach an old dog new tricks. That's why you have so many habitual DUI offenders and so many serious accidents with obvious fault.

"'We must stop trying to fix the user to achieve security. We'll never get there' is an odd quote... we don't advocate stopping building security products just because we will never reach 100% security."

But again, as the comedian said, you can't fix stupid. You can only educate so much, and then things don't stick. And we're not even talking 100% security. We're just talking acceptable security; we're saying you can't even get that far with user education. A world where any link, any e-mail can hit you is essentially a Sword of Damocles world: a world where hyper-awareness is a base requirement. Your average person doesn't have the mental capacity to live in that kind of world. The computing world and the Internet weren't built on distrust by default, and there's no way to fix that in situ. You pretty much have to start from scratch, but that also runs smack into the user demand for usefulness, and "The Customer Is Always Right."

Eugene October 6, 2016 12:49 PM

When will our email environments, for one, be smart enough to screen a link for its legitimacy? It seems like we are just going around in circles. How can a link trace be made and verified as safe before it slips through to the inbox? What would it take to get a flag on a suspect email link, and what more to get the flag passed or cleared without human intervention? Why are we talking about Mars when we can't even get these types of issues solved?

rOctober 6, 2016 4:29 PM

@Eugene,

When will our eyes be smart enough?

Mars is a moonshot - absolute security is too. A lot of what you're asking for is already done - just not 100% accurately or effectively. We scan incoming emails all the time, we challenge server-to-server exchanges, we have Spamhaus and preloading scanners... but do you know what the perps have? Randomized exploitation, persistent WAITs (# of docs, mouse, etc.), single-fire exploitation, bot detection, IP blacklists, etc.
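The blocklist side of the scanning described above reduces to a very small core. The domains and the in-memory set here are made up for illustration; a real filter would query a live feed (e.g. a Spamhaus-style DNSBL) and combine many heuristics, not just list membership:

```python
from urllib.parse import urlparse

# Toy blocklist of known-bad domains (invented names).
BLOCKLIST = {"evil.example", "phish.example"}

def flag_link(url):
    """Return True if a link in an incoming email should be flagged
    before the message is delivered to the inbox."""
    host = urlparse(url).hostname or ""
    # Flag the domain itself and any subdomain of it.
    return any(host == d or host.endswith("." + d) for d in BLOCKLIST)

print(flag_link("http://evil.example/login"))        # True
print(flag_link("https://mail.phish.example/reset")) # True
print(flag_link("https://example.com/"))             # False
```

Which is exactly why it isn't enough: the perps rotate domains faster than lists update, which is where the single-fire and wait-for-human tricks r lists come in.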

It's a complex environment. We used to have office space (which we could lock, monitor, etc.) - now for some of us, our office travels around in our pocket or our truck. The shape of the world and of our office spaces is changing; it's a lot to keep up with when it comes to invaders and fraudsters.

It's the reality of miniaturization: it enables a lot of beneficial things but excludes a lot of others we take for granted - like the ability to install a CCTV - or tangible receipts.

@Gerard,

The windbag comment was less directed at you and more directed at others :P I thank you.

Ransomware scares me for the public's sake. My solution to mitigate scareware for small home-office contractors has been to deal with the dependency on legacy Windows XP software by pushing Linux and VirtualBox and then training them in proper sandboxing. It's really not that hard; thankfully most of the spammers and miscreants out there aren't of the type to possess the more dangerous classes of exploits.

Yes, being dismissive is a vulnerability too, but it's not like I'm capable of offering Fort Knox to people, yanno? :)
I make sure to convey that level of military-esque understanding to the few people I still work with.

JaneOctober 7, 2016 4:08 AM

@Anon

I cannot share info on the country or company I work for, sorry. We do cater to clients worldwide and because of what we do we are always targeted for hacks.

In the US I worked for a similar company. Two others and I were tasked with making sure the company did not get hacked.

We have our limits from a technical standpoint, and never once did we get hacked because of something we could have prevented. We had rolling bounties for hackers who found holes, and outside teams as well, running pen tests all the time.

EVERY hack was because a user was a complete idiot, sometimes not following simple instructions they had been given just days prior to us getting hacked.

The other two in security and I made a list of things that could be done to curb some of the idiotic moves these reckless users make. We presented it to the board. They rejected most of it on the grounds that they would open themselves up to employee lawsuits if certain rules were implemented. I had no reason to doubt them, and legal is not my department. It was that same day that I gave my two weeks' notice; there was no way I could work for a company I was convinced would continue getting hacked thanks to user stupidity. Keep in mind, these hacks result in serious business loss, and in one case they settled with a client because of a hack - a big settlement, too.

I had been pestered for a while by a headhunter who had a great job offer in another country for me. The company is similar to the one I worked for, and I told them I would join if they agreed to implement some of the suggestions I had. They agreed to everything in writing, and a little while later I relocated and have been very, very happy there.

Hacks don't happen here; employees know the drill: if they put the company at risk, even if there is no hack, they are terminated.

The decision of the security team overrules the CEO! Recently he complained about the firewall system we have and how every time he calls in from home to add his IP it takes us 5 minutes to do so (we do extensive verification, including live video). He demanded we remove the firewall for a week, as his IP would be changing often. We said no; he said no problem and that he appreciates that we stood up to him - it gives him confidence! Try pulling this stunt on an American CEO.

I won't go into fine detail; however, here you are allowed to hack and destroy hackers. The honeypot system here involves keyloggers and encrypters; in one case we extracted the data as to who hired the hacker, which was a significant win.

If you cannot train the users correctly you will never win this battle. For every patch and solution there is a technical method to override it, ESPECIALLY when you have idiotic users running around.

We live in a time where everyone wants to be nice to the fools, thugs, and terrorists. After all, they have a big presence on social media, and maybe if we are nice to them we will eke out a few more followers or likes, plus they might change.

I cannot afford those gambles; political correctness creates hacks and holes. Here, people don't talk business on their lunch break; here, you must put your security badge in your pocket when you leave, not let it hang out for everyone to see, if not worse. And as of last week, if anyone here Periscopes or Facebook Lives in the office, their broadcast will feature them getting fired LIVE.

As long as companies keep up with their nonsense that the user is some innocent victim, the hacking will just get more common.

Having said all that, I am confident that we are still hackable, and the day I think otherwise is the day I will quit this industry. I simply know one thing: if I feel there is something more we can do today, I will do it.

Clive RobinsonOctober 7, 2016 5:10 AM

@ Jane,

Hacks don't happen here...

Oh dear, that is an ill-advised statement. Others more battle-scarred will hear it and think, "What about the ones she and her colleagues did not see?" That is, the more subtle forms of APT.

Which brings me onto this,

The honeypot system here involves keyloggers and encrypters...

Odd, that suggests that your Honeypot is for "insider threats" not outside attackers.

Like you, "I won't go into fine detail," but I will just say that some years ago I thought up and prototyped a way to detect honeypots of the sort used by the HoneyNet people. When informed, they exhibited "Not Invented Here" symptoms. More recently our host has blogged about something very, very similar: malware that detects its runtime environment, so we know the way the malware developers are going. My method of detecting honeypot environments did not involve getting code onto the host it was enumerating.

You might want to consider your game a bit.

As for "hacking back," it does not matter if it is legal or not in your jurisdiction; your chance of actually attacking anything other than an innocent individual's computer is slight at best, because experienced hackers - the ones you really want to defend against - use multiple "cut-outs" as SOP.

Further, if you are not ultra careful you could find that you have crossed out of your jurisdiction into another jurisdiction where it is a crime, and thus face the possibility of sanctions in various forms. From what you say, you appear not to be a national of the country you are in; you need to be aware of the legal implications of that, otherwise you might find the sanctions are not what you might mistakenly believe.

Anyway, it's up to you what you do and say, no matter how ill-advised.

OrvOctober 7, 2016 11:59 AM

@Name (required): The automotive analogy actually doesn't help that much, because cars have been engineered to avoid a lot of user mistakes. In the Model T era, you could set the spark advance wrong and the car would break your arm when you tried to crank start it. But instead of pointing at drivers and saying, "you need to stop doing it wrong," we invented electric starters and automatic spark advance.

A modern car is full of systems that protect both the car and the driver from the driver's mistakes; everything from fuel filler restrictors (prevent putting in the wrong fuel) to stability control. Many will automatically shut down or go into a limp-home mode if the driver abuses them in a way that overheats the engine.

The two biggest drivers of these technologies have been government mandates and a desire to reduce warranty costs. If you stop the owner from abusing the car, you don't have to fight them over who's responsible for repairs.


@Jane: There are places in the US with those kinds of restrictions on employees, including the social media ones, but they're mostly limited to contractors for high-security government installations. (In some of these cases it's not "if you do this you're fired," it's "if you do this you'll go to prison.")

It makes getting a new job a little tough, though. "Where did you last work?" 'I can't tell you.' "Can you give me any references?" 'No.' "How about a LinkedIn account?" 'I can't have one.'

Sancho_POctober 7, 2016 5:56 PM

@Orv

Actually the automotive analogy is completely insane.
Required by law to operate a car are age and a license (knowledge + ability).
In some countries (cough) the requirements to operate a gun are less.
The reason for a law is liability.
This is the missing part with computers, in both directions, manufacturer and user.

AnonOctober 7, 2016 8:40 PM

@Jane

Your story reeks of total BS. There are very few cases where stating the country and what your company does would give away which company you worked for. Even if you did name the company, you've neither said anything derogatory about your employer nor given away anything remotely approaching trade secrets. And I can't think of anywhere you could legally "hack and destroy hackers" in your home jurisdiction, never mind multiple third-party jurisdictions, as @Clive pointed out.

DhavalOctober 9, 2016 6:41 PM

I think this is a very interesting article.

You can't, and don't have to, either fully agree or disagree. I think the article makes some good points, and so do the commenters here. I think most of the anger or opposition comes from developers who feel they are indirectly being blamed for incorrectly pointing the finger at users.

Here is what I think and it probably is a good middle ground.

1. The writer does have a point, to a certain degree. Furthering the writer's point in a different direction, I would say that we have to make security a bit easier on the user's part. Yes, users do need education, but the biggest excuse you hear from users is "oh, I don't know computers," because the perception is that you have to be a geek or nerd to know about computers these days. If security procedures and protocols were made easier for normal users to understand, it would make a big difference. The problem with people who design security is that they inevitably, and through no fault of their own, assume a certain level of knowledge from the user. While it is quite reasonable to do so, you have to understand that if users had to put more effort into learning security, they wouldn't then be called "normal users".

The other thing the writer is trying to say is that we need to change the perception of what "reasonable security" is. So many times in the last few years we have seen software programming and design errors that led to security issues. I know a 100% safe and secure system is not possible even with an unlimited amount of time, but sometimes security issues originate from the most basic and laughable errors in coding and design. There is no reason why those can't be avoided.

So it comes down to the definition of "reasonable security". Everyone has a different definition of what's reasonable, but we can learn from the car analogy. No matter what country you go to - and by that I am covering all kinds of people and their perceptions, cultures, etc. - if you want a driving license, you have to pass a test. And for the most part (remember, nothing is 100%), that test requires you to have a certain level of skill to drive a car. Without that, you don't pass and don't get your license. Now, THIS HAS HAPPENED. Humans have achieved this. So, however improbable, at least it is possible to do the same for technology. If the future is a life full of technology, it isn't that crazy to think of such a task.

2. Now let's look at the users themselves. Commenters and some software designers are also right in saying users need to take responsibility. If we were to take the article literally, it would mean that "we should design a car that even kids can drive and that is safe enough not to crash." Just like anything else, the "users" are "consuming" technology in which "security" is required, and where there is consumption, the "consumer" has to bear some responsibility. It is quite lazy, irrespective of anything else, not to try to learn the very basics of how something works, especially if you are using it THIS MUCH every day. I would even go so far as to say it is arrogant. So there is no excuse on the user's part for not making an effort to understand the consequences of not using technology properly, just as we all know the consequences of not using a car, the lawn mower, or a knife properly. We can use the car analogy here as well: a user shouldn't be allowed to use a computer until he has been tested for a certain skill level. While it is totally impractical to test every user for every piece of tech or software they will ever use, it is certainly possible to make it part of life by introducing it into the education system from an early age. This way, making an effort to understand security is just normal, not a burden.

Even with the two action points I have described, there will always be a gap between the users' education and increasingly secure systems. However, that gap would be small enough to be acceptable, and we all know that we can only do our best as humans. Right now, though, there are two sides that are not doing their best.

So there you go folks.

Anon10October 9, 2016 10:54 PM

@Dhavel

If we are to take the article literally, it would mean that "we should design a car that even kids can drive and that it should be safe to not crash".

From the standpoint of a driver - and since manual transmissions are almost obsolete except in high-end sports cars - a car is a fairly simple black box with a go-faster pedal, a go-slower pedal, three gear selections (park, reverse, and drive), and a steering wheel. Rear-view cameras, which will soon be mandatory on new cars in the US, make backing up and parallel parking much easier. If most cars were designed so that kids could see over the steering wheel, I think you could teach them to drive at ages far younger than 16, which seems to be the US standard for a permit.

Sancho_POctober 10, 2016 5:07 PM

@Dhaval
”A user shouldn't be allowed to use a computer until he has been tested for certain skill level.”

Are you kidding?
Who would define this test? You?
Or John Brennan (how to use Yahoo in private and office)? Justin Gray Liverman? The admins from OPM? Sony? Yahoo? Hillary (how to log in in plain from all over the world, + sharing pwds) or her advisers? The DNC? Belgacom? French ministry of Finance (Paris G20 Summit)? The German parliament? Target or Home Depot? Lockheed Martin? HB Gary? The Pope?

And the kids must take it before they are allowed to operate a computer, right?
Oh, and to hang on to the stupid car analogy: for driving the new cars, will there be an additional license to operate their several computers and networks? And for a driverless car, would you suggest an extra cert for the mandatory skill of getting to the wheel, obtaining oversight, and reacting correctly within 7 seconds?

Wake up, get real!
Really intelligent, educated people and even experts fail completely to handle that mess in motion (I mean technology).
And you want to educate the user? Try it. Good luck!
But to meet the real user you better leave your ivory tower and go to the streets!

alloOctober 12, 2016 3:48 PM

This does not account for one thing: malicious people working both against your security and against your users.
Malicious? This does not necessarily mean bad intent in the sense of "Hey, I know I am a criminal; that's what I do for a living," but often just tricking the user into doing what's best for yourself.

Prominent example: websites trying to invade the user's privacy.
Do you have Gmail with no mobile phone number registered? About every 20 logins it asks you for the phone number, with a big "Add number" button and a small "later" link - but no "Do not ask me again" button.
Another example: Twitter. Twitter e-mails you each time you log in from a new device. New device? No - every time you cleared the cookies. They want you to keep their cookies.
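The Twitter example amounts to treating a missing cookie as a "new device." A minimal sketch of that pattern (the names and data structures here are invented for illustration, not Twitter's actual implementation):

```python
from typing import Optional, Tuple
import secrets

# Invented in-memory store: user -> set of device tokens we've issued.
known_devices = {}

def login(user: str, device_cookie: Optional[str]) -> Tuple[str, bool]:
    """Return (cookie_to_set, sent_new_device_email)."""
    seen = known_devices.setdefault(user, set())
    if device_cookie in seen:
        return device_cookie, False  # cookie recognized: "same device"
    # No cookie (or a cleared one): issue a fresh token and alert.
    token = secrets.token_hex(16)
    seen.add(token)
    return token, True  # same physical device, but the site can't tell

cookie, emailed = login("alice", None)
print(emailed)                       # True: first login, alert sent
_, emailed = login("alice", cookie)
print(emailed)                       # False: cookie kept, no alert
_, emailed = login("alice", None)
print(emailed)                       # True: cookie cleared, alert again
```

Which illustrates allo's point: the "security" alert fires on cookie deletion, so the incentive it creates is to keep the tracking cookie.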

Both could argue they're doing it for security (which would be trying to fix the user), but actually they are working against privacy.

If you look at tracking cookies, you find actors you could classify as malicious, if you say that acting against the user's interests is "malicious".

These are just a few examples, all of them from companies that do not see themselves as shady - even the tracking companies.
When you add hackers, it gets even worse.

ctznOctober 19, 2016 6:26 AM

Amen, Bruce! In fact, this is true of ALL design, not just security design. Users are humans. Human tendencies can't be miraculously altered. Good product design works with the user, not against them.

coderaptorOctober 20, 2016 4:33 PM

This post reminds me of Idea #5 ("Educating Users") from "The Six Dumbest Ideas in Computer Security" [1]. Technology changes more rapidly than our ability to cope, so educating users ain't going to work - ever! Of course, it is a good way to make a s***load of money, though (which is what much of the security industry is cashing in on).

[1] http://www.ranum.com/security/computer_security/index.html

DjangoNovember 21, 2016 4:07 AM

Analogy -

The wallet should protect itself from getting picked. The owner should be free to flaunt it, throw it anywhere, forget it in the bus and still nothing should be lost!

ZakhariasJanuary 5, 2017 8:40 AM

How many logins are really security-relevant?
Login to download a paper free of charge?
Login to post a then publicly viewable comment?
etc.

Kostadin KostadinovJanuary 23, 2017 8:51 AM

I totally agree with you. The security process should be seamless; it shouldn't be annoying.
When security solutions are easy to use, they will reach a broader audience, and hence there is a chance to stop and turn back the rising trend of cyber attacks.
I have seen security solutions that were heavy to implement and not easy to use. Users tried everything to bypass them, so security was not improved and money was wasted.
Security countermeasures shouldn't have to change human behaviour and company business processes.

In my work experience I found that putting a sentence like "Our first priority is to serve the business" in my email signature made a great positive impact on the perception of security practices in the company where I work. There are many pieces in the security puzzle, but I think security solutions designed with humans in mind play a significant role in reducing the impact of cyber attacks.

I believe that if data is protected from its birth to its death with a friendly, easy-to-use security shield (including the human firewall - for that, training should be fun and gamified), we can witness a slowing of cyber-attack growth.

Bob CrowterApril 8, 2017 8:28 PM

I personally go to sites that are sometimes unsafe to visit. Security annoys me. I save everything to a computer not connected to the internet; the only way to get it is to break in here & shoot me. I found your blog through another blog about big data: http://glinden.blogspot.com/ While reading it, I thought I'd add this.
I'm looking for ANY data on people that I can find, download, and put into a database, if it is not already there. I'm not into genealogy, but I do have a few team members who are into it & I let them know when I find something. I'm a FREE (not paid) searcher; I solve adoptions & reunite families & a few friends now & then. (I check out those looking for someone, to make sure they are not a stalker, if I think something is not right with them.)
My collecting of data is for personal use, never sold or given to anyone other than those I've worked with for years, including a doctor, a few PIs & others who are professionals (one works for a state government office & does adoptions as a hobby). I've been at it for 20 years now, & I'm 70 years old.
My team & I use it to find people - mothers & fathers who gave up a kid over 18 years ago. The women have changed their names more times than their underwear, in some cases. I once solved an adoption where the guy looking was born in 1937 in Chicago to a woman named Smith. He paid $11,000 over 15+ years looking to find her, & they took his money & never found anything, stopped accepting his calls & never gave back a nickel. My team & I found her alive in Florida, having retired & moved there from Colorado, where she had moved right after he was born. She was born in 1916. (This all happened about 10 years ago.) So now you can see why I need this data; even though you can use a pay wall, like Ancestry, VitalSearch, PublicData, etc., etc., to do some of it, you still need things that aren't on those sites. Besides, I do this for FREE, which means I make no money at it, so I can't afford to pay blackmail for public records. I own most of the data I use to solve these cases. I use old junk computers that someone gave me, just to help other people. (Again, the FREE thing; I can't buy computers, & I have many friends who give me things to help, including free WiFi or internet service.) You might think I'm cheap, but I live on Social Security & don't have a lot of money for anything. I started this stuff when my stepson gave me an old Windows 95 computer & I asked him what the hell I needed it for. He told me I could look at things & send emails. (I used to be a radio tech, fixing CBs, ham radios & other radio stuff. Plus, I fixed watches, the old windup kind; for years I did both.) Anyway, I played with it & got bored to death 'looking' at things, till I found a site that had people looking for other people & I learned how to find them. Through that site, I met a few other people doing the same thing. That is how this 'team' stuff started. What I could not figure out, they could, & vice versa, so we 'teamed' up.
A lot of them were looking to solve an adoption, so I taught myself how to solve one. I helped put those thieves who charged people money to find someone & didn't complete the job out of business & off the internet. No more PI/detective wannabes stealing their money. They can't beat my price: FREE.
Anyhow, the reason I'm on here, this site, is that I'm looking for things I can use. I've enjoyed reading most of the stuff on here. I like the messages connecting to other sites & about things happening, especially the books. I found this site while looking at something on Archives.org, the Wayback Machine (your old postings from years ago). I'm looking for many things, like old CDs of CSRA data. I have most of them, but want a few I'm missing. They are mentioned on this old site: http://smarttrace.tripod.com/ (they are 20 years old now). Also mentioned here: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC61238/ I collect them, plus any others like them, including voter registrations, phone discs, consumer data, etc., etc. I don't believe it should be posted on the internet, but to get it, that is where I have to find it somehow. SSDI files (the Social Security Death Index) - I have 6 of them from different years. They are huge files. Some guy posted these & they work:

SSDM files CSV format

2010-03-09:
https://www.dropbox.com/sh/urxs2ifssb9oq78/AACSHOilKwsV8xwGVVpX1-nEa?dl=0

2010-11-17:
https://www.dropbox.com/sh/pneyuzakntq8fxa/AABqJCKJ6N-qDo9X4AFDcQxda?dl=0

2011-11-13:
https://www.dropbox.com/sh/hb95kjo3qlnn682/AAAS9UT1ckKukLkIbXI2CcNla?dl=0

2013-05-31:
https://www.dropbox.com/sh/naiq7dqgha8svn0/AACH2RFiu4ZY6oA884NiErnZa?dl=0

If you ever have a need to find anyone, use these sites: http://www.gsadoptionregistry.com/ , http://dnaadoption.com/emla/ (medical emergencies) , http://dnaadoption.com/ , http://www.the-seeker.com/relative.htm , http://www.the-seeker.com/general.htm

If you have something I can use to help them, I'm easy to find on G's Adoption - especially any voter registrations I don't already have. Old, new, it doesn't matter, I want them all. City directories, yearbooks, you name it; I collect & use them to help those in need of solving an adoption. Thanks for taking the time to read my long-winded message.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.