Updating the Traditional Security Model

On the Firewall Wizards mailing list last year, Dave Piscitello made a fascinating observation. Commenting on the traditional four-step security model:

Authentication (who are you)
Authorization (what are you allowed to do)
Availability (is the data accessible)
Authenticity (is the data intact)

Piscitello said:

This model is no longer sufficient because it does not include asserting the trustworthiness of the endpoint device from which a (remote) user will authenticate and subsequently access data. Network admission and endpoint control are needed to determine that the device is free of malware (esp. key loggers) before you even accept a keystroke from a user. So let’s prepend “admissibility” to your list, and come up with a 5-legged stool, or call it the Pentagon of Trust.

He’s 100% right.
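To make the five legs concrete, here's a minimal sketch of an access decision with "admissibility" prepended. Every name and check in it is a hypothetical placeholder, standing in for real mechanisms (NAC posture scans, password verification, ACL lookups, integrity checks); it illustrates the ordering, not any particular product.

```python
# Minimal sketch of the five-step model with "admissibility" prepended.
# All data and check functions are hypothetical placeholders.

TRUSTED_DEVICES = {"laptop-0042"}             # passed a posture scan
KNOWN_USERS = {"alice"}
ACL = {"alice": {("read", "ledger")}}
ONLINE_RESOURCES = {"ledger"}
INTEGRITY_OK = {"ledger": True}

def admissibility(device: str) -> bool:       # is the endpoint clean?
    return device in TRUSTED_DEVICES

def authentication(user: str) -> bool:        # who are you?
    return user in KNOWN_USERS

def authorization(user: str, action: str, resource: str) -> bool:
    return (action, resource) in ACL.get(user, set())  # what may you do?

def availability(resource: str) -> bool:      # is the data accessible?
    return resource in ONLINE_RESOURCES

def authenticity(resource: str) -> bool:      # is the data intact?
    return INTEGRITY_OK.get(resource, False)

def admit(device: str, user: str, action: str, resource: str) -> bool:
    # Admissibility comes first: judge the device before accepting
    # a single keystroke from the user.
    return (admissibility(device)
            and authentication(user)
            and authorization(user, action, resource)
            and availability(resource)
            and authenticity(resource))

print(admit("laptop-0042", "alice", "read", "ledger"))  # True
```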

Posted on August 1, 2006 at 2:03 PM

Comments

Gnu Tzu August 1, 2006 2:39 PM

This seemed like a perfect opportunity to take a poke at you know what. But thinking about what the correct approach would be brought to mind the complexity of the subject. It seems that this fifth leg faces a level of complexity similar to that of the other legs. That is, different environments will need to approach this fifth leg in different ways. The solution to fixing “you know what” is to stop using “you know what” in every situation.

Also, the suggested name for this, “admissibility,” isn’t all that intuitive, and security terminology is daunting enough. Could we possibly find a better name before this gets set in stone (or written about on Wikipedia)?

Locke August 1, 2006 2:46 PM

I thought “authenticity” covered that. “Authentic” data from “authorised” people is necessarily “admissible,” n’est-ce pas?

Xyz August 1, 2006 2:53 PM

“Also, can we please refer to it as the ‘pentagram of trust’…”

Indeed, “Pentagram” has a slightly less negative connotation than “Pentagon”. Just slightly 😉

Joe August 1, 2006 3:07 PM

Actually the Pentagon is very good at keeping secrets when it needs to.

And people still have some idiotic connotation of the devil with the pentagram.

As for admissibility, it’s valid. Requiring endpoints to run software x, y, and z to scan and protect is a must. I know of several large banks that do this. Without the various software running, you will be denied no matter how authentic you are.

mpd August 1, 2006 3:15 PM

“Without the various software running, you will be denied no matter how authentic you are.”

But then you have to know that x, y, and z were used. You must also trust whatever it is that is telling you that x, y, and z were used.

Of course there’s a term for this: Pedigree Management.
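To make the problem concrete, here's a hedged sketch (all names hypothetical) of a posture report protected by a shared-key MAC. The verification math works, but it only proves the report was produced by something holding the key, and that key lives on the very endpoint being judged, so a compromised machine can produce a perfectly valid lie.

```python
# Sketch of the trust gap: the endpoint's posture report is MAC'd with
# a shared key, but the key sits on the untrusted endpoint itself.

import hashlib
import hmac
import json
from typing import Optional

SHARED_KEY = b"provisioned-at-enrollment"  # stored on the endpoint!

def make_report(av_running: bool, patched: bool) -> bytes:
    """What the endpoint sends: report plus authentication tag."""
    report = json.dumps({"av": av_running, "patched": patched}).encode()
    tag = hmac.new(SHARED_KEY, report, hashlib.sha256).hexdigest().encode()
    return report + b"|" + tag

def verify_report(blob: bytes) -> Optional[dict]:
    """Server side: accept only reports with a valid tag."""
    report, _, tag = blob.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, report, hashlib.sha256).hexdigest().encode()
    if hmac.compare_digest(expected, tag):
        return json.loads(report)
    return None  # forged or corrupted

# An honest endpoint reports its real state; a compromised one can
# simply call make_report(True, True) and pass verification anyway.
print(verify_report(make_report(av_running=True, patched=True)))
```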

Venkat August 1, 2006 3:20 PM

I think the admissibility problem is very hard to solve, if not impossible, without invasion of privacy at the end point.

For end points that are under the administrative control of the server, this is fine — as the bank example states. For end points that are desktop machines, this requires “trusted computing”!

Benny August 1, 2006 3:31 PM

The “admissibility” requirement involves both software and hardware. For instance, even the best anti-virus/spyware/malware software wouldn’t be able to tell you if someone snuck into your house and installed a hardware keystroke logger. Might the simplest way to meet the admissibility requirement be to sell user computers as single tamper-resistant units?

Zwack August 1, 2006 3:40 PM

You could argue that the same is true not just of the end-point device, but of the network between the end-point and the “data provider.”

There are mechanisms that will improve security at the network layer (SSL, for example) but that don’t say anything about the trustworthiness of the end-point. But a trustworthy end-point without any network-level security, connected via an untrusted network, is just as bad.

I think I would prefer “Access method” to “Admissibility” but it’s not my list…

Z.

bsewall August 1, 2006 3:42 PM

I like the CIA approach – Confidentiality, Integrity and Availability – instead of the Four A’s. But the Integrity component has always left me cold, because it seemed limited to whether the data is accurate. Dave Piscitello’s point is a good one and it adds a necessary dimension. However, I also agree that “Admissibility” is not an intuitive designation. I’d opt for “Trust.”

derf August 1, 2006 3:43 PM

If we’re nitpicking: Identification comes first – you tell the system who you are. “Hi, I’m Bruce Schneier”. Authentication is actually validation of identification – it verifies that based on some criteria in our conversation, our system thinks that you are who you say you are. “Thank you for your password Mr. Schneier, it matches the one we have on file in our database for you.”

Another leg needed is “are you safe?” You could have a perfectly safe client that is identified, authenticated, authorized, available, authentic, and admissible, but the user has a gun to his/her head (like at an ATM). I don’t think we’re ready for that in software yet, but I’m sure some bio measurements would need to be included.

The Heretic August 1, 2006 3:52 PM

Just to nitpick: how about “Acceptability” (is the device acceptable/in an acceptable state)? I find it clearer than “Admissibility.”

Benny August 1, 2006 3:53 PM

@derf:

We could just steal a page from home alarm system design, and allow users to enter a “duress” password, which would quietly notify the system that the user has a gun to their head so that appropriate actions can be taken (send police to the ATM location, etc.).
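A minimal sketch of what I mean, with hypothetical names: both codes unlock the session so the attacker sees nothing unusual, but the duress code also raises a silent alarm. (Real systems would compare hashed PINs; plaintext comparison here is purely for illustration.)

```python
# Sketch of a duress-code check. Both codes succeed, so the attacker
# sees a normal session; the duress code additionally raises a silent
# alarm. Names are hypothetical.

import hmac

def notify_security() -> None:
    # Placeholder: page the monitoring desk, dispatch police to the
    # ATM location, flag the session for limited withdrawals, etc.
    pass

def check_pin(entered: str, normal_pin: str, duress_pin: str) -> bool:
    if hmac.compare_digest(entered, normal_pin):
        return True               # ordinary login
    if hmac.compare_digest(entered, duress_pin):
        notify_security()         # silent: no on-screen difference
        return True               # let the session proceed normally
    return False                  # wrong code entirely

print(check_pin("1234", normal_pin="1234", duress_pin="4321"))  # True
```

Of course, the whole scheme depends on users knowing which code is which, as the story below illustrates.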

gary August 1, 2006 4:04 PM

I don’t think we need a new step as much as we need to successfully implement the existing four steps on all components of the information infrastructure, not just servers and applications. That means desktops, phones, PDAs, routers, DNS servers, and everything else that is just one piece of the puzzle.

If there is malware on the desktop, authorization and authenticity have already both failed.

Qian Wang August 1, 2006 4:05 PM

I hope I don’t sound spammy, but this topic relates directly to what my company is doing. This is interesting timing, because just over the weekend we launched an anti-keylogger tool called KeyScrambler which encrypts keystrokes at the kernel driver level and then decrypts them within the browser, thereby defeating most common kinds of keyloggers without the need for signature-based detection. There aren’t very many anti-keylogging products out there. One we are aware of is coming from StrikeForce, but it’s not yet launched. Other anti-spyware apps suffer from the defect of only being able to remove keyloggers after the fact, which gives the malware a window to do its damage before it’s removed.

Of course our software and any software of its type cannot entirely solve the admissibility problem, given the current security architecture of Windows. But we do think that our product will provide the average user some much needed protection against keyloggers (including zero-day ones). With Microsoft requiring more kernel code to be signed for Vista (actually for 64-bit Vista, everything that runs in kernel space will need to be signed), our approach should become even more effective.
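In rough conceptual terms (this is an illustration, not our actual design, and the SHA-256 counter keystream is illustration-grade crypto only), the idea is to encrypt each keystroke where it is captured and decrypt it where it is consumed, so a hook sitting in between records only ciphertext:

```python
# Conceptual sketch: encrypt keystrokes at capture, decrypt at the
# consuming application; an interposed keylogger sees ciphertext.

import hashlib

SESSION_KEY = b"negotiated-between-driver-and-browser"  # hypothetical

def _pad(counter: int) -> int:
    digest = hashlib.sha256(SESSION_KEY + counter.to_bytes(8, "big")).digest()
    return digest[0]

def driver_encrypt(keystroke: str, counter: int) -> int:
    return ord(keystroke) ^ _pad(counter)   # what the keylogger sees

def browser_decrypt(cipher: int, counter: int) -> str:
    return chr(cipher ^ _pad(counter))

for i, ch in enumerate("hunter2"):
    c = driver_encrypt(ch, i)
    assert browser_decrypt(c, i) == ch      # round-trips correctly
```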

See http://www.qfxsoftware.com if anyone’s interested in trying KeyScrambler out. The personal version is free and protects all login pages while the pro version protects everything one types into any page. (e.g. SSN, credit card numbers, etc.) If Bruce or any fellow readers have any comments for me directly, my email is qzwang at qfxsoftware.com.

Prohias August 1, 2006 4:17 PM

What is the difference between legs 2 and 3, Authorization and Availability? In all systems I have worked with, the data you are allowed to access is enveloped by what you are authorized to do. Code-wise, your “role” maps to a database connection with appropriate credentials for that “role.”

Frank McGowan August 1, 2006 4:26 PM

I like the idea of a duress code, but they are not foolproof, as demonstrated by a coworker.

When I was a night-shift computer operator, the facility management guys had a new security system installed. The old system had a single password for everyone which was changed weekly. The new system had a separate password for each authorized person and, though I didn’t know it until later, a duress code.

One night, many months after the installation of the new system, I was speaking with one of the guards at the main guard station when the alarm started going nuts. When I asked what the emergency was, the guards laughed and said “Mac just entered the duress code instead of his password; he does it every time…” When I asked what the duress code was, they wouldn’t tell me. Nor would they tell “Mac” that he was routinely using the duress code or what his correct password was. They just assumed he knew all that and was being difficult; he had that reputation.

The short version: the system administrators may not divulge the duress code, thereby discouraging its abuse… Oh, and making it useless in the bargain.

Glenn August 1, 2006 4:35 PM

Maybe the Pentateuch of trust? Or the pentathlon of trust? Or put it in verse and make it the pentameter of trust?

Stiennon August 1, 2006 4:45 PM

Au contraire. He is 100% wrong. You cannot trust an end device to report its own health. Spoofing the state of an end device is easier than spoofing an identity. Network admission control (NAC) is bad security.

See my CioUpdate.com article posted yesterday.

-Stiennon

Matthew Skala August 1, 2006 5:14 PM

This may be the “you know what” that Gnu Tzu was going to avoid taking a cheap shot at, but a lot of people who are interested in “trustworthiness of the endpoint device” aren’t interested in protecting it from classic malware, but from anything that could allow the nominally authorized user to have full control – like, say, a fast-forward button.

Jim Liedeka August 1, 2006 5:16 PM

I vote for pentacle of trust. It’s nicely esoteric without the satanic/heavy metal baggage of pentagram.

Anonymous August 1, 2006 8:29 PM

@ Gary: Authorization and authenticity might not have failed. Example: you have one department setting up the secure servers and a larger, less-trusted PC maintenance department setting up their desktops. As far as I am concerned, the system is already compromised in this situation, but because of the political structure of the company there is nothing that can be done about it.

jjj August 1, 2006 9:33 PM

Why should this fifth dimension be considered anything new?

O.K., key loggers are relatively new, but how is that effectively different from someone standing over your shoulder, or videotaping what you’re typing on your laptop in a semi-public place?

This dimension has always been problematic, and there’s no technological way to solve it.

Theuns August 1, 2006 10:21 PM

Having learnt from the mistakes of the past, there’s always the octagram of trust – that way, there are three spare legs available for future expansion…

Davi Ottenheimer August 1, 2006 11:36 PM

@ Stiennon

“You cannot trust an end device to report its own health. Spoofing the state of an end device is easier than spoofing an identity.”

Not sure how/why you distinguish an “identity” from an end device, since an end device can have an identity, no?

A spoofed identity from a user is certainly a big problem, often because of the weak link/connection between them and their end device. In comparison the identity of an end device is not foolproof, but it seems like there are some cryptographic systems that make it reasonable to try and secure.

For example, would you find it easier to trust/verify a person holding a token or an end device with a stored token? A token holder’s identity is usually based simply on possession and a simple secret — a PIN — both of which are at risk of a user simply giving them away (OK, never mind biometrics, since they’re still reasonably uncommon), whereas an end device’s stored token can be protected with a far more tightly/centrally controlled number of certification keys — protected “in depth.”

On a different note, I tend to agree with much of your article, but I see host-based Network Admission Controls as entirely compatible with network flow control (or whatever you call it) rather than in competition. You can use a default deny rule and frequent network assessments, but eventually bad traffic will find ways to flow over your good tubes (as Sen. Stevens would say) and you’re right back to the question of how to trust end devices. Why leave them exposed when you can try to establish at least a basic form of trust?

Roger August 2, 2006 12:55 AM

The problem here is that in order to determine admissibility, you have to prove that you are communicating through a trusted device. This could to some degree be established with special purpose hardware (e.g. cellphones try to do this), but on a general purpose computer it means — Palladium. Yech.

By the way, “a 5-legged stool” should be a pentapod, but all these names sound silly. We could go for “five pillars of security” but I suspect some people would find that offensive. How about this coinage: Pentathyr (fivefold door) or Pentapule (fivefold gate).

D V Henkel-Wallace August 2, 2006 1:00 AM

Important issue, but doesn’t this go back to the 1960s? Wasn’t this the whole “Break key” and “trusted path” discussion in the rainbow books?

My memory is fuzzy but I think I remember using BREAK with Multics so I would know I really was talking to the exec…

Sorry that this is so vague, but it was a long time ago.

And of course it all died with the advent of the network terminal anyway.

Joe Auricchio August 2, 2006 1:16 AM

When you reduce “admissibility” to a simpler and more general form, it becomes the question “What is going to happen to the data after it changes hands from the provider to the user?”

The intention of a good secure system is, at the end of the day, to provide the right data to the right code and eyeballs. With that, necessarily, is the risk that other code and eyeballs will steal that data from the intended recipient – and that is a problem that can only be solved on the client side. The parameters of the problem for the client are the same “Four As” (or “CIA”) we already know. The security of the entire system scales like a fractal, to smaller and smaller scopes, until at last we have CRT to eyeball and fingers to keyboard (which crosses into the realm of physical security and becomes a different sort of problem).

Ultimately, the admissibility question reduces to “What are you going to do with this data after I give it to you?” This is not a new question, nor is it an easy one to ask and answer. Some of its unhappier answers include users taking laptops home after work where they can be burgled, or users deliberately giving or selling data to third parties. These are all completely outside the design of the security system itself.

Until the day when data itself grows teeth and can defend itself against misuse, we have to carefully limit who we authorize, do our best to teach how to secure their own environments, and finally, trust those authorized users and their environments. That’s the way we do things now, and it’s not always a very fun way (especially that second part), but it’s the only way.

I hope somebody’s working on that teeth thing, though.

A longer version appears on my blog (click the linky)

Pedro Soria-Rodriguez August 2, 2006 1:29 AM

I agree with several of the comments already posted here: the trustworthiness of the end device is covered in the traditional four-property security model. Trustworthiness is a characteristic of a system, and it depends on its integrity and authenticity. That is: integrity and authenticity should not be thought of as properties of “data” only, but also of systems. In establishing a connection with a system or service “XYZ”, I am assuming that I trust it to be the XYZ I am expecting. If the service is infected or it has been tampered with, this means its ‘integrity’ has been compromised, and it is no longer the ‘authentic’ service I expected it to be.
Therefore I would not add trustworthiness as a fifth element of the traditional security model.

Michail August 2, 2006 2:55 AM

@Pedro Soria-Rodriguez

That’s the crux of it.

Admissibility is a subset of authentication. But mutual authentication!

And the discussion is over.

gal_sec August 2, 2006 3:14 AM

It’s a bright idea, and some vendors are already trying to implement it (e.g., Juniper in its SSL-VPN solution).
Still I think proper and “bulletproof” implementation of this principle is a tough problem.

Steve August 2, 2006 5:41 AM

What’s the technical difference between the trustworthiness of the end device, and the trustworthiness of the end user?

Isn’t the danger of a user being keylogged equivalent to the danger of the user typing their password while on CCTV, writing it down and losing it, mumbling it in their sleep, or selling it to criminals for cash? For secret information that you send to them, is their spyware any different from the danger that they might lose their laptop, or sell your data to your competitors?

Sure, you’ll want to use different methods to measure it – we have some idea how to trust people (and plenty of legal precedent for what happens when they breach trust), but in general can’t currently trust a PC belonging to a random member of the public. But is there a difference of type between the fact that your customers’ PCs contain spyware, and the rather less common possibility that at any time some attacker might be trying to trick them into revealing information which your security model considers secret, stealing their tokens, or faking their biometrics in order to spoof identification and hence authentication?

As such, is “admissibility” just a subset of “authentication”? Granted, it may be becoming so important that professionals should be reminded of it, and naming it as an independent category might achieve that, but is the 4-step model really incorrect (“no longer sufficient”), or is it just that ideas of what it takes to achieve “authentication” need a rethink?

M. Mester August 2, 2006 5:43 AM

“And people still have some idiotic connotation of the devil with the pentagram.”

Well, I have a connotation of even more evil devils with the Pentagon. And I don’t think it’s idiotic at all.

Adam August 2, 2006 7:30 AM

We always considered this part of Authentication. If somebody else secretly installed software, the “you” that authentication refers to includes that other person. Didn’t MLS researchers encounter difficulties springing from the same source?

Anonymous August 2, 2006 8:47 AM

I understand why organizations wish they could secure the workstations of their employees and clients. But the concept of “admissibility” is not practical beyond a certain point.

Controlled software environments will not help against hardware keyloggers, people and (tele)cameras looking at keyboards and screens, social engineering and other surveillance techniques. To get at root passwords of some company, you could just bribe regular staff to install sneaky hardware keyloggers at administrator workstations.

The best organizations can do is run a reasonably good standard software image on intranet workstations, possibly utilizing TPMs (Trusted Platform Modules), in physically secured rooms with TEMPEST shielding and no windows in the walls.

Organizations or people running servers on the internet cannot even do that. They have no control whatsoever over the machines of their clients (and this is a good thing for various reasons, including the privacy and security of the clients).

And there is no way to stop users from breaching security, be it deliberately or due to naivety, stupidity, or an exceptionally strong attack against them.

Authorizing users means trusting the whole endpoint they represent: their good intentions and social behaviour as well as their ability and care to set up and operate their equipment and environment in a secure way.

On a more abstract level, systems and users are subscribers to contracts that involve security. If one subscriber violates the contract, the other(s) should either be compensated, have accepted the risk in the first place, or not have subscribed at all.

Economics suggests trusting any person or organization only as far as their ability and motivation (violation risk) are in sound relation to the value of the data and any prearranged compensation.

Of course you should only agree to terms that you can both understand and adhere to. This means the security of systems must be simple to understand, verify (!) and use even for ordinary consumers.

Oh, and I don’t see how it helps to reject a valid password if it was entered into an “inadmissible” terminal. Someone who logged it (it’s “inadmissible” because that can happen) can now authenticate at an “admissible” terminal. Unless it’s a one-time password.
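To illustrate that last point: a time-based one-time password limits the damage, because a code logged on an “inadmissible” terminal expires within one time step and can’t be replayed later. A bare-bones sketch (a time-based variant of the HOTP construction of RFC 4226, standard library only):

```python
# Minimal time-based one-time password generator. A logged code is
# useless once the 30-second time step has passed.

import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period           # current time step
    msg = struct.pack(">Q", counter)               # 8-byte big-endian
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # changes every 30 seconds
```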

Alan Porter August 2, 2006 9:01 AM

@ “x, y and z” Joe:

I find it funny the way Windows users treat security. They assume that the machine has more viruses than an Amazonian monkey, and then they install patches to work around the malware.

If I used that same security model for my home, I would leave the door wide open, and then train a battalion of poodles to go room-to-room, counting how many burglars they found.

@ “spammy” Qian Wang:

Good luck with your company’s band-aid software. We need more band-aids.

Alan

antibozo August 2, 2006 10:45 AM

  1. Admissibility is not a good word for this; it is not clearly distinguishable from authentication. Arguably authentication is covered by admissibility. A more specific word is called for, and it doesn’t really have to start with the letter A, unless we are children.

  2. The metaphor is broken. People use tripods as a metaphor because three legs is the minimum required for the stool to stand on its own. Once you go beyond three, the additional legs are redundant (unless three are collinear), so the metaphor no longer denotes something where every component is necessary.

Pat Cahalan August 2, 2006 12:02 PM

Piscitello is right, and he’s wrong.

As many people have already pointed out, “admissibility” should be considered to be part of “authorization” – what you are allowed to do can be quantified both in terms of who you are, and what method you are using to access whatever you’re trying to access.

Without reading Piscitello’s original mail (Bruce, can you post a link to a web archive?) I’m going to imagine that what he was trying to point out is that the current “big problem” isn’t trusting the user, but the device or the communication channel. In that sense, I agree with Bruce – he’s totally on the ball.

The four-step security model is useful, but all too often when people discuss it they gloss over the fact that they are implicitly trusting their platform, and talk about covering the other aspects of the four steps. I think Piscitello was just pointing out that the emperor has no clothes.

arl August 2, 2006 12:37 PM

Authentication should include the trustworthiness of the endpoint. For example, “who you are,” “what you know,” and “where you are” should be considered all at once. Even Microsoft Windows allows you to set permitted workstations for logon.

zencoder August 2, 2006 2:23 PM

As a mentor has said to me time and again: true authentication includes “Every Device,” “Every Network,” and “Every User.” A bit grandiose, but it’s on the right track. Really glad to see this mentioned, Bruce!

roperipe August 2, 2006 2:46 PM

Determining “admissibility” feels like it should be part of the “authorization” process. One of the conditions for being authorized to access a given piece of information is that the endpoint meets certain standards. Why break it out into its own “A”?

Glenn Marshall August 3, 2006 9:44 AM

I’m surprised no one brought up the Parkerian Hexad. Donn was discussing these issues years ago.

miw August 4, 2006 4:12 AM

To me, admissibility reflects the trustworthiness of a computing device, whereas authorization has more to do with trusting the person I am communicating with. You can authorize people to have access to sensitive information because you know they won’t use the information in a bad way. Yet such people might use some incredibly compromised PC platform.

Steve August 4, 2006 5:04 AM

@miw: “such people might use some incredibly compromised PC platform.”

Well, then they’re using the information in a bad way, aren’t they? Just as if they lost an unencrypted backup tape with the data on it, or moved their lips while reading it in the same room as someone who can lip-read.

The only difference is that if someone sells the information, that’s criminal; if they lose it, we might call that negligent. If they let someone see them typing in their password, then we’d probably blame them for any resulting abuse. But if they use an insecure endpoint they’re, what, just unlucky? Sometimes yes, there’s nothing they can do because they’re at the mercy of Microsoft or Apple or some Linux distro. Other times, they’ve clicked something they shouldn’t have and installed a trojan.

I see that the endpoint device needs to be part of any security model. But the network route needs to be part of any security model too (either “it’s physically secure”, or “we’re layering secure encryption over it”), and there’s no need for an extra A to cover that.

Qian Wang August 4, 2006 8:16 AM

@Alan Porter – “Good luck with your company’s band-aid software. We need more band-aids.”

Alan, thanks for the good wishes! 😉 I happen to not consider being labeled “band-aid” a particularly bad thing. Almost by definition, most security products are band-aids. If the original designers of the system had anticipated the security flaw, there’d be no flaw. If we were Microsoft, we could integrate our software into the operating system, provide an API, and maybe even tie it to some hardware. But you could still call that a band-aid solution.

I think we can discuss the concept of admissibility and whether it’s covered by authorization or not ’til the cows come home. But if we’re talking about consumer-facing apps such as online banking, we’re faced with the fact that there are users running Windows who don’t know (and don’t want to know) anything about security. They have keyloggers on their computers right now stealing their passwords. If I have a cut on my hand right now, I’ll take the band-aid rather than wait for someone to rethink the security implications of this whole endoskeleton-covered-by-soft-tissue design.

Only the paranoid are secure. And most ordinary users are just not that paranoid. So every security product is just a little paranoia expressed in code. And paranoia, like security, is a never-ending process, and I can only hope that my company’s product will become popular enough that we will have to deal with malware actively trying to subvert it. In the meantime, we have something that will defeat the vast majority of keyloggers in the wild and provide protection that complements anti-virus and anti-spyware programs (which are even more like band-aids). It’s not perfect, but it’s a lot more effective than waiting for Microsoft to fix all their security problems and for all the users to finally start caring about security. I’ll probably have grown some carbon-nanotube-cum-Kevlar skin before that happens.

solumbra August 10, 2006 12:10 AM

I know of a bank that is currently scanning the remote endpoints of its employees before allowing them access to the VPN.

One thing I’m wondering about is how to implement it. The OpenBSD packet filter pf has a program called authpf that can be used as a login shell; when invoked, it changes firewall rules to (usually) allow that IP to access the network. Something similar could be used here: perhaps a web applet (signed, natch) that checks for common problems and, if it finds none, sends back a message that it’s okay to proceed.

In that case, some people or malware will probably try to reverse-engineer the mechanism by which it sends back the okay. So how can we make this as difficult as possible? It sounds a lot like the anti-software-cracking and anti-reverse-engineering problem to me; that is, the playing field is even and neither the malware nor the checker has a positive advantage, which means it will be a never-ending arms race.

By the way, this problem is known as “remote attestation”.

I think the best way to handle it would be to avoid it altogether: give employees a live CD that they can use to boot up and access the server resources. No malware can stay resident on it, and it would have just enough functionality for them to do their job. Another alternative is to give them systems with virtual machines: they do their fun/play stuff in one guest VM, do their work stuff in a separate guest VM, and never get access to the host, so that it remains secure.
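To make the arms race concrete, here's a hedged sketch of the challenge-response flow I have in mind (all names hypothetical). The server's verification is sound and the fresh nonce stops replay; the problem is visible in the first constant, because the key and the scan both live on the untrusted endpoint, so malware that extracts the key can forge the reply:

```python
# Sketch of remote attestation by challenge-response. The checker and
# its key both run on the endpoint being judged; that's the arms race.

import hashlib
import hmac
import os

CHECKER_KEY = b"embedded-in-the-client-checker"  # the weak point

def local_scan_passed() -> bool:
    return True  # placeholder for the actual endpoint checks

def client_respond(nonce: bytes) -> bytes:
    verdict = b"OK" if local_scan_passed() else b"FAIL"
    tag = hmac.new(CHECKER_KEY, nonce + verdict, hashlib.sha256).digest()
    return verdict + b"|" + tag

def server_admit(nonce: bytes, reply: bytes) -> bool:
    verdict, _, tag = reply.partition(b"|")
    expected = hmac.new(CHECKER_KEY, nonce + verdict, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag) and verdict == b"OK"

nonce = os.urandom(16)            # fresh per connection: stops replay
print(server_admit(nonce, client_respond(nonce)))  # True
```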

Mark Walker August 16, 2006 9:37 AM

It seems like there is still something missing from even the proposed 5-step security model.

What about the trustworthiness of the host?

It seems like the same concerns about host system integrity and admission to the network are valid from the endpoint device and remote user’s perspectives.

The mere presence of a listening host on the network does not demonstrate its trustworthiness or assert that the host is the genuine article.

Public-key crypto techniques can help deal with the genuine-article (imposter) problem, but I’m at a loss as to how the admissibility problem could be tackled. If the endpoint can’t tell it is compromised, how will the host determine the admissibility of the endpoint?

jonhaug August 20, 2006 7:59 AM

This is really old news. The much-criticized “Trusted Computer Security Evaluation Criteria,” aka “the Orange Book,” contains the following passage:

3.2.2.1.1 Trusted Path
The TCB shall support a trusted communication path between itself and user for initial login and authentication. Communications via this path shall be initiated exclusively by a user.

Rediscovery is also a kind of discovery.


Jon Haugsand
Dept. of Informatics, Univ. of Oslo, Norway, mailto:jonhaug@ifi.uio.no
http://www.ifi.uio.no/~jonhaug/, Phone: +47 45 00 39 94

Anne April 20, 2008 9:36 AM

I have a question that maybe you can help with. Your August 1, 2006 blog entry was about updating security models, and I am researching the CIA Triad and other, newer models (the McCumber Infosec Model of 1991, and the Parkerian Hexad) that continue to include the original principles of the CIA Triad, while additions of new principles to the model have been proposed.
Do you know if any of the standards institutes have picked up this controversial topic? Can you direct me to information on it?
Best regards, Anne

Jero Guo October 22, 2008 10:01 AM

I think restricting data sources is just an operational policy used to assure that the message/data is OK, and it exists only because of technology or business shortcomings. It is not a real factor in deciding the security of the data/message. For example, a man known for mendacity may tell you the truth, and you won’t believe it even though it’s really true.
