Malcolm Gladwell on Competing Security Models

In this essay/review of a book on UK intelligence officer and Soviet spy Kim Philby, Malcolm Gladwell makes this interesting observation:

Here we have two very different security models. The Philby-era model erred on the side of trust. I was asked about him, and I said I knew his people. The "cost" of the high-trust model was Burgess, Maclean, and Philby. To put it another way, the Philbyian secret service was prone to false-negative errors. Its mistake was to label as loyal people who were actually traitors.

The Wright model erred on the side of suspicion. The manufacture of raincoats is a well-known cover for Soviet intelligence operations. But that model also has a cost. If you start a security system with the aim of catching the likes of Burgess, Maclean, and Philby, you have a tendency to make false-positive errors: you label as suspicious people and events that are actually perfectly normal.

Posted on July 21, 2015 at 6:51 AM • 37 Comments

Comments

Slime Mold with Mustard • July 21, 2015 7:15 AM

You forgot Blunt and Cairncross.

James Jesus Angleton said at the time: "The difference is that they catch their spies".

Also: there are no 'raincoats' in the UK. They're "macs."

Steve Marsh • July 21, 2015 7:21 AM

This is interesting, and it exhibits something about trust that is worth reiterating. The reason I think trust has power is its inherent flexibility and acceptance of failure. Properly considered, trust allows for verification, adaptation, and contingency in a way I am not sure that security (as control) can (I bow to more experienced security commentators than I; I just 'do' trust).

To be honest, I'm not sure one can 'err on the side of trust.' One can give too much of it, or too little, but it's always there. Trust is a measure, not a binary value. The real mistake comes from seeing it as the latter and behaving accordingly.

Tualha • July 21, 2015 7:25 AM

Thought that sounded familiar. You do realize that was published nearly a year ago?

Slime Mold with Mustard • July 21, 2015 7:41 AM

Bruce is right about the trust model (aka the "old school tie").

When discussing the possibility of a mole in MI5, Eric Roberts told Guy Liddell that anyone who belonged to two of the right London clubs could pass completely without suspicion. He did not, however, finger the Apostles at Cambridge.

I favor the Wright model, but I have prejudices.

ramriot • July 21, 2015 8:16 AM

Even without all the old-school-tie dressing, simple statistics point out the key issues of Philby vs. Wright.

In 2012-13, MI5 had around 3,900 employees. Let's assume 2 of them are double agents.

If you err on the side of trust, let's say you would not spot the 2 in 3,900, so your false-negative rate is ~0.05%.

If you err on the side of mistrust by, say, marking 0.5% as potential double agents, then you would flag ~20 employees as such, and unjustly destroy perhaps 20 people's lives.

Even if the 20 included the 2, your minimum false-positive rate would be ~0.46%.

But mostly you would end up spending a small fortune chasing these 20 around, with a vanishingly small probability of any given one of them actually being one of the two. Meanwhile, it is almost certain that your "men" are hiding among the other 3,880.
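ramriot's arithmetic can be sanity-checked in a few lines (the 3,900 headcount, the 2 moles, and the 0.5% flag rate are the comment's assumptions, not real MI5 figures):

```python
# Back-of-the-envelope version of ramriot's numbers (hypothetical figures).
staff = 3900          # total employees
moles = 2             # assumed double agents
flag_rate = 0.005     # "err on the side of mistrust": flag 0.5% of staff

flagged = round(staff * flag_rate)        # ~20 people under suspicion
innocents = staff - moles                 # 3,898 loyal employees

# Best case for the suspicious model: both moles are among the flagged.
false_positives = flagged - moles         # 18 loyal people flagged anyway
fp_rate = false_positives / innocents     # ~0.46%

# Trust model: miss both moles.
fn_rate = moles / staff                   # ~0.05%

print(f"flagged: {flagged}, FP rate: {fp_rate:.2%}, FN rate: {fn_rate:.2%}")
```

Even in the best case for the suspicious model, nine out of ten people flagged are innocent.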

Winter • July 21, 2015 8:46 AM

"If you start a security system with the aim of catching the likes of Burgess, Maclean, and Philby, you have a tendency to make false-positive errors: you label as suspicious people and events that are actually perfectly normal."

@Ramriot
"Even if the 20 included the 2, your minimum false positive rate would be ~0.46%."

This whole problem has no solution: the false positive paradox.
https://en.wikipedia.org/wiki/False_positive_paradox

With a low prevalence of traitors, your specificity and sensitivity must be astronomical to not be swamped with false positives.
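To make the paradox concrete, here is a Bayes'-theorem sketch using illustrative numbers: even a screening test that is 99% accurate in both directions (far better than any real vetting process) is swamped at a prevalence of 2 traitors in 3,900:

```python
def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(actually a traitor | flagged), via Bayes' theorem."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# Illustrative numbers: 2 traitors among 3,900 staff, 99%/99% test.
ppv = positive_predictive_value(2 / 3900, 0.99, 0.99)
print(f"{ppv:.1%}")  # ~4.8%: more than 95% of the people flagged are loyal
```

Push specificity to 99.9% and the PPV still only reaches about a third: at low prevalence, the false positives always dominate.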

AlanS • July 21, 2015 8:47 AM

Gladwell: "It was the late nineteen-thirties, when the British class system was still firmly in place..."
 
The UK is still run by people educated at public schools such as Eton, who then attend Oxbridge. For the numbers and discussion see here and here. The elite still trusts its own and suspects everyone else. The Soviets, ISIS, or whoever are hardly the threats the elites worry about most.

Here's the elite celebrating an event that happened 200 years ago. Back then the 'enemy' was French. And here's the reality then (doubt this event will be celebrated in 2018) and now.

rgaff • July 21, 2015 8:53 AM

@ramriot

Or you could do like we do in the USA now.... just treat every single person in the entire universe as a suspected terrorist... at least then you know you didn't "miss" them... We need some bigger prisons don't we...

Tim#3 • July 21, 2015 9:09 AM

As an aside to this topic, people on here might be interested to look into Micky Burn MC: a Commando on the legendary St Nazaire raid of WW2, Times journalist, communist, lover of Guy Burgess, and a man who even met Hitler during his remarkable life. His autobiography, Turned Towards the Sun, was published a few years back and copies are remarkably expensive; however, a DVD version has been released in the last couple of weeks. I'm awaiting delivery of a copy, so I cannot give any more info yet.

Slime Mold with Mustard • July 21, 2015 9:30 AM

@ Alan S
RE: "Old School Tie" in the US

During the 2004 presidential race, some liked to point out that John Kerry and George W. Bush had both belonged to the Skull and Bones Society at Yale.
Try searching "Harvard Obama Administration" or "Yale Bush Administration" or vice versa. You'll also get a fair number of hits for Princeton and MIT. Peggy Noonan claimed this attitude was prevalent in the Reagan White House (of course, Reagan himself was an alumnus of Eureka College in Illinois).


paul • July 21, 2015 9:53 AM

Maybe it's a developmental thing. One thing I've noticed about (some) kids in the under-10 age group is that they attribute any action that hurts them to malice -- and then seek "justice". It takes a lot of work to get them to believe that someone elbowed them in the hallway by accident because the hallway was crowded, or ran into them on the playground because they weren't looking where they were going. Or that gravity isn't a dangerous force personally aimed at them by the universe.

This kind of paranoia/vindictiveness may be particularly common in cultures of scarcity, where people really are after you. Less so in cultures of abundance and security (which pretty much describes the upper-class Brit).

Coyne Tibbets • July 21, 2015 11:23 AM

The model has application outside of the vetting of spies. Today, it's also being used for surveillance decisions. The whole grounds for "capture everything" is the presumption that everyone is guilty until proven innocent.

So we have, for example, the do not fly list that catches millions in its net in order to protect us from...how many real threats? As far as I can tell, none, but it sure catches lots of non-threats like Laura Poitras. Since there is no way for someone falsely caught in the net to prove their innocence, the damage must be ongoing.

Arclight • July 21, 2015 11:58 AM

Both of these models are correct, in a way. Class issues aside, the "human trust" model described in the article sounds a lot like "Hire smart security people and give them proper training and discretion." We often praise Israel for doing exactly this in aviation security.

Humans who are empowered to do the right thing and properly trained are adaptable and capable of noticing subtlety. Algorithmic and prescribed security is not. The "trust no one" model seems to fundamentally give up a lot of the advantages we get from trusting other humans, and is costly in resources.

The reality is that "trust inherently" and "trust no one" are both deeply flawed ideals. What I think we need are fewer secrets, and organizations that function despite there always being a fixed percentage of insiders working toward interests other than the org's.

This is where "resilience" and "attack surface" come into play. A group whose function is security has an optimal size - large enough to cover the most important needs but as small as possible, to lessen the threat surface. Lines of communication, org charts, numbers of subcontractors and lots of other things increase exponentially as a project doubles in size.
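Arclight's point about lines of communication has a classic quantification (quadratic rather than exponential, but the same moral): Fred Brooks observed that pairwise communication paths in a team grow as n(n-1)/2, so doubling the team roughly quadruples them. A quick sketch:

```python
def comm_paths(n: int) -> int:
    """Pairwise lines of communication in a team of n people: n(n-1)/2."""
    return n * (n - 1) // 2

for n in (5, 10, 20, 40):
    print(n, comm_paths(n))  # 10, 45, 190, 780: each doubling ~quadruples
```

The same count applies to any "attack surface" made of pairwise relationships: trust links, shared secrets, subcontractor interfaces.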

Key functions should be separated and entrusted to different groups whose interests may compete in some ways but ultimately work toward the same end goal.

Examples of this would be data storage and key material escrow being separated, or offensive and defensive work being split up.

And fewer secrets means fewer things to protect, fewer places to control damage and the ability to take those secrets that are important much more seriously.

The U.S. security mission post 9-11 has become much more brittle and, I would argue, less effective due to its rapid scale-up, mission creep, and absolute dependence on the keeping of a vastly larger set of secrets.

Arclight

rgaff • July 21, 2015 12:03 PM

@Coyne Tibbets

And since Laura Poitras is so obviously not a terrorist threat to the travel industry, then it must be done for retribution because they don't like her reporting. I wonder how many others on that list are clear examples of abuse of power too (not even mistakes, deliberate misuse)!

rgaff • July 21, 2015 12:08 PM

"No, we're the 'good guys' so all of our 'misuse of power' is just 'for your own good'"

Slime Mold for Mustard • July 21, 2015 2:22 PM

I hope no one mistook me to mean that I advocate applying the Wright model to the general population. I was speaking to practices in organizations that handle materials of high value, or whose disclosure would represent extraordinary loss.

As for "ruining peoples' lives" or ruining morale - this can be somewhat mitigated by:
1. Making security reviews routine, and exempting no one.
2. Keeping the very fact of a suspicion-based internal investigation tightly controlled.
3. Never allowing mere suspicion to determine someone's fate. You must develop proof or overwhelming statistical evidence (that a judge and/or jury can understand). This also limits lawsuits.
4. When possible, allowing the guilty to stay at their desks for a week or two after you have cut off access and opportunities for sabotage - so they can look for work. This suppresses office gossip. It also allows the organization to do what is possible to ameliorate damage, anticipating that they will soon be with a competitor.
5. Agreeing not to seek legal remedy in exchange for cooperation (they rarely confess their full duplicity, but this makes them feel better and less likely to sell the stuff in their heads - besides, who likes court?).
6. Depending on the nature of the crime and their attitude toward it, you may offer a decent reference.

@Paul
Our own dear @Clive Robinson occasionally likes to remind us:
Once is accident
Twice is happenstance
Thrice is enemy action

@Coyne Tibbets
The actual use of "collect it all" is less that everyone is a suspect, than that upon identifying a suspect, a substantial history can be amassed and reviewed for further evidence, suspects or lessons. I oppose it anyway.

The No-Fly list, at its inception, was an example of a phenomenon best described by Bruce as: "Something must be done! This is something, therefore we must do it!"

albert • July 21, 2015 2:24 PM

OK, first of all, why do we have to dichotomize everything?

Yes, there should be a spectrum of degrees of trust. No, it's not 'everyone is guilty...'. It's CYA. It's 'we can't trust our system to spot the bad guys'. It's a good way to punish folks we don't like*. No system is perfect, yet 'threat' propaganda generates public demand for perfection. Even folks who are motivated by firm religious, moral, or psychotic convictions can be swayed by temptation, and become traitors. But then we get into metaphysics. No one has figured out that part yet. 'Trust models' aren't going to work; this stuff can't be quantified.

@Arclight is quite correct. Too much secrecy is the problem. Read the FAS secrecy blog. The more secrets you have, the harder it is to keep them. If those secrets hide immoral or illegal activities, they are even more vulnerable. I don't recall the numbers, but there's a lot of classified stuff that doesn't need to be classified. It's a waste of time and resources. It may not be physically possible to declassify fast enough to keep up with the exponential generation of information. Expense is often cited as an excuse. The IC apparatchiks may be starting to believe their own propaganda. Can they prevent terrorism? Either they are very effective, or there's a helluva lot less threat than they'd like us to believe. One thing is sure: mass surveillance is a very effective adjunct to population control.

What are the root causes of organized terrorism? One is revenge or payback, for some real or imagined wrongdoing. This brings to mind the parable of the Boy and the Dog. Despite being warned again and again, the Boy continued to tease the Dog, until one day the Dog broke his chain and bit the Boy. It was a minor injury, but the Boy insisted on killing the Dog anyway. And he did.
.
...
* formally known as 'investigative reporters'. Laura Poitras was specifically targeted because of her activities. 'No fly' lists are useful for punishment. Security? What's the point of keeping someone off the plane, if they are searched? Unless you arrest them, there is no point. It's harassment.

RandomDr • July 21, 2015 2:58 PM

@ramriot:

What if the tests themselves create terrorists? And if so, can you quantify it?

Gordis's Epidemiology has the best explanation of how physicians and medical researchers use sequential and/or simultaneous tests to handle false positives and false negatives. It's a gigantic issue in medicine because the tests and treatments themselves are so dangerous.

To extend your analogy, it'd be as if 30% of the 20 people with unjustly ruined lives decide to actually sell information after their lives are ruined. 30% of 20 = 6, right? In other words, the investigation created a total of 6 enemies of the state, as opposed to the original 2 double agents you were looking for. Much of the intelligence they had would have expired and/or been changed, which wouldn't happen with a double agent scenario. However, a lot of the information -- office politics, public/private relationships, and so on -- would still be the same.

So is handing that information over to the adversary really worth it? Can the adversary get the same information from LinkedIn/GPS/etc.?

Or is it less about CBAs and more about CCCs (conform, cower, and comply)?

Here's where I'm going with this. What I always found fascinating is that psychologists and psychiatrists have always followed very basic, virtually freshman-level procedures for test development, compared to researchers in other specialties. When I was in graduate school, I thought that it was because they took their stats classes separately from other clinicians and medical researchers. For whatever reason, the "hidden curriculum" of their stats courses trained them to view statistics as a rhetorical device that's ultimately irrelevant to clinical care, instead of as a tool that could actually answer a question, settle a debate, or develop better clinical practices.

I wonder if something similar is going on here. It also explains a lot that's in the news with the APA.

albert • July 21, 2015 3:35 PM

@Slime Mold for Mustard, (are you 'Slime Mold with Mustard'?)
.
IANAL. In the US, employers are not limited by law in what _factual_ information they can release about ex-employees (except information which is explicitly covered by law, like medical data). Employment dates, salary, and position are safe. Big companies have very specific guidelines, and err on the minimum side. This helps to reduce lawsuits, which serve neither party. Agreements not to prosecute are useless unless they are bulletproof. If the employee violated company policy, that's probably doable, but violation of state or federal laws could result in prosecution by those gov't entities, regardless of the employer's wishes*. If 'national security' is involved, all bets are off. What's done in secret, between individuals, could work, but I wouldn't want to be on either side of one of those situations. Small companies can run into big trouble with this sort of thing. Always consult an attorney about termination decisions.

Your points are well taken. IIRC, employers aren't _required_ to say _anything_ about you. It would be an unusual company policy:)
.
...
* an employer is unlikely to risk being prosecuted for a coverup.

rgaff • July 21, 2015 4:56 PM

@Slime Mold for Mustard

"The actual use of "collect it all" is less that everyone is a suspect, than that upon identifying a suspect, a substantial history can be amassed and reviewed for further evidence, suspects or lessons. I oppose it anyway."

In the good old days, when our government actually followed the constitution, they were not allowed to invade a person's private stuff unless they got a warrant first, based upon pre-existing suspicion. Meaning: they could only do that to suspects. Nowadays, they just "collect it all" from EVERYONE in order to go on fishing expeditions looking for the real suspects, so everyone is equivalent to a suspect by the old constitutional way of looking at it. If I am innocent, why are they scooping up all my data and searching it? That is not what innocence means under the constitution; it is only what it means in an authoritarian country. This is the new Democratic People's Republic of America.

milkshaken • July 21, 2015 4:59 PM

Gladwell is a compelling writer but too glib for my liking, and he did not do his homework on Soviet nuclear espionage. Otherwise he would not have used it as a supporting argument (that the damage from a witch-hunt is worse than the actual betrayal).
If the USSR had not detonated its first fission bomb in the summer of 1949, Cold War history could have been vastly different. (The Soviets could have purged and executed their best physicists as "saboteurs", Oppenheimer and his friends could have succeeded in derailing thermonuclear weapon research, the US would probably have used the Bomb in North Korea, and so on.)

tyr • July 21, 2015 5:21 PM


All it takes to put you on the radar of mistrust and harassment is a working brain.

https://www.muckrock.com/news/archives/2015/jul/21/nothing-indicate-nothing-indicate-subject-had-any-/

The best formula to use is Trust but Verify. The next thing is to start believing your verifications. The endless repetition of non-productive actions is defined as insanity. If you think there is something wrong with everyone else, your problem is inside your own skull. When that escapes into the world, it creates a lousy world that you have to live in too.

Slime Mold WITH Mustard • July 21, 2015 5:53 PM

@albert

Yeah, that was me. I read something about the symptoms of cognitive decline, but I can't remember them ; )

I have extensive training in employment law, at federal level and the states where I work. Employers are required to provide dates of employment, position, and salary when contacted for a reference. However, if that is all a former employer will give you, you should see red flags like a Soviet May Day parade. I have never heard of a suit over a positive reference, but suits over negative references are legendary - even when the employer can prove the truth of their statements. Providing an untruthful positive reference can even benefit a former employer if the 'problem' has been foisted on a competitor and the former employee is not collecting unemployment.


RE: Prosecution

In something approaching 99.9% of cases, there is no chance of anyone outside the executive suite ever finding out about it. For a long time I advocated prosecution in the truly large cases, but discovered that corporate America is very reluctant to publicly air its dirty laundry. Also, sometimes the data/funds are actually the property of clients, and the CFO wants time to slip them back to the clients without alerting them, or the COO wants to alter operations to limit damage. If the malfeasance were exposed, the company would be in the position of: a) having its reputation sullied, b) losing that client and probably others, and c) owing immediate restitution. You would be surprised at how seldom outside auditors are sent, and at their incompetence when they are. I have spoken to peers at other firms, both inside and outside my industry, and find this to be true elsewhere.

In short, lacking a copy of the internal investigative report or a confession, prosecutors have no chance of showing the company was aware of the issue. In the few cases we have prosecuted, the prosecutors have demonstrated a distinct lack of enthusiasm.

Slime Mold with Mustard • July 21, 2015 6:14 PM

@rgaff

I agree. I was not trying to say how they should see it, but rather how they do.

gordo • July 21, 2015 11:16 PM

Slightly Off Topic.

On the Iran nuclear deal, trust, and suspicion: Implementation of the compliance and enforcement mechanism is key. This deal seems to be more an example of a suspicion-oriented security model than one of trust.

Transcript: Secretary Of State John Kerry On Cuba, Nuclear Deal With Iran
Sec. State John Kerry spoke with NPR's Steve Inskeep at the State Department.
July 20, 2015

Inskeep: Two quick questions and I'm going to let you go. The people have raised the question of trust, quite often, and said, "You can't trust Iran."

Kerry: That's right, you can't trust Iran — and nothing in this deal is based on trust.

Inskeep: The administration has responded, "Don't have to trust them, we're going to inspect them." But you were there negotiating. On some level, don't you have to trust that somebody in the room is serious? And did you, in the end, trust that you were dealing with serious people who really wanted an agreement that would last?

Kerry: What you have to trust are the words that you get on a piece of paper that allow you to do something or don't allow you to do something. And you have to trust that those words are going to be implementable by you — yourself.

That's what we trust. We trust that we have the ability to enforce this deal; we trust that the deal, if implemented, will do the job. And if it's not implemented, we trust that we have every option available to us that we need.

http://www.npr.org/2015/07/20/424769835/transcript-secretary-of-state-john-kerry-on-cuba-nuclear-deal-with-iran

Gweihir • July 22, 2015 1:29 AM

There is also another cost to the restrictive/suspicious model: you will be unable to hire or retain certain people, because they will be unwilling to work in such an atmosphere. The cost of that may well be extreme, as a too-narrow-minded staff can exhibit both selective blindness and other unfortunate tendencies.

Ends vs. Means • July 22, 2015 7:44 AM

Thoughts/opinions about this:

When A asks and expects B to keep A safe, what happens? What kinds of relations and dynamics form, and how do they act and play out? Any expansion of the thought, in any direction, would be interesting as well.

albert • July 22, 2015 10:30 AM

@Slime Mold WITH Mustard,

I stand corrected. I was a manager at a huge multinational corporation. Our policy was minimum disclosure. That should _not_ be a red flag to prospective employers. Everyone needs to be on the same page on this issue. The law should _prevent_ former employers from releasing any information beyond the minimum. A glowing report (usually from a competitor) means nothing to me. There are other, better ways to evaluate an individual.
.
Prosecution: I've heard stories about employers covering up actual crimes by employees. I won't repeat them, but I believe they're true. It is done. It's risky. It's MAD. Not a pleasant way to live.
.
...

Ends vs. Means (Follow-up thoughts) • July 22, 2015 12:22 PM

Let us say that "When A asks, expects, and relies on B to keep A safe and secure, B becomes the actual Master of A."

Does the king become a prisoner, the castle a jail?

A castle or a jail: it's who holds the key. OK.

Why does the king ask, expect, and rely on security and protection to begin with?

Internal and external threats

Thoughts anyone...

Ends vs. Means (Follow-up thoughts) • July 22, 2015 6:08 PM

My thoughts are my own- I don't actually expect any thoughts or opinions:)

A few other thoughts with relevance perhaps:

Basic needs - Maslow

(The farmer, the policeman)

Circular dependency

Learned helplessness

Mutiny

Coup

Directed influence

Coyne Tibbets • July 23, 2015 9:24 AM

@Slime Mold for Mustard - "The actual use of "collect it all" is less that everyone is a suspect, than that upon identifying a suspect, a substantial history can be amassed and reviewed for further evidence, suspects or lessons. I oppose it anyway."

But that is suspicion.

Consider two sets of persons:

(1) those persons we trust;
(2) those persons that we don't trust, that we suspect now or might suspect someday.

It should be clear that, for the intelligence agencies, the former set is the null set; there are no trusted persons.

Therefore, there are only persons we suspect or that we suspect we might suspect later: everyone is a suspect.

"Collect it all" cannot be justified without that basic assumption.

albert • July 23, 2015 12:21 PM

@gordo,

Thanks for the comment.

Excellent answers by Kerry*.

Nothing like a little truth and logic to completely throw off the MSM. What a bunch of idiots.

.
...
* never thought I'd say that:)

Gavin • July 23, 2015 10:36 PM

>Does the king become a prisoner, the castle a jail?

It is for this reason that shrewd kings build secret tunnels to their castles, not only to fulfill their fantasies but for practical purposes. As all things are relative, modern malls can be prisons. The building matters less than how it is used, as we learn when political doublespeak runs amok. For example, "Free" and "Freedom" Acts often do the exact opposite of what their names promise, as do self-proclaimed champions of humanity.

Coyne Tibbets • July 25, 2015 10:47 AM

@rgaff - "Not only is it wrong to define it that way, additionally there's nothing to keep it from expanding in all sorts of ways. Don't think that it's not your color or nationality or religion so it doesn't affect you."

@albert - "Propagandists love these kinds of labels. They are 'flexible'. They are emotionally charged. They can influence public opinion in serious ways."

Very true. Right now, the propaganda is against Muslims; using it as propaganda is wrong. And, yes, it could change.

Worse, by using such useless definitions, our security actually suffers. The government spends all its time looking at Muslims; who knows what other group is cooking something up as we speak?

-----

Personally, I think terrorism is useless as a term.

Consider "terrorism" as a process for change. It must fail: all such acts only ever harden the resolve of the people and/or government attacked. War, even rebellion, can lead to real change; public process (democracy) can lead to real change; "terrorism", never.

So-called "terrorists" are therefore pretenders; their acts cannot have the result upon which the act is predicated. Therefore, the "terrorist" is either insane (unable to weigh acts versus consequence) or else is a simple criminal (committing violent acts knowing those will not have the desired consequence).

Whether they are insane or not, we call those who commit violent acts criminals and restrain them for the good of society. Granting an extraordinarily violent criminal the title of "terrorist" is literally to offer an honorific that is undeserved.

The term "terrorism", as noted by Ben Saul in the wiki article, "lacks [...] precision, objectivity and certainty." It is therefore useless except for exciting public opinion; useless except for propaganda.

On that basis, the terms should never be used at all.

JMC • August 26, 2015 3:49 PM

Think Canary Traps...

Erik Erikson on trust: basic trust is formed during the first 18 months of life.

Let's put that aside and assume we have a "Baseline of Trust" (BoT) that everyone is measured by (i.e., not gut feeling/intuition).

LET:

U = Untrusted
T = Trusted
E = Evidence
P = Probability
BoT = A priori (can be deduced from the variables used to establish the BoT)

-----------------------
EXAMPLE Baseline of Trust (BoT)

In this example we have 5 measurements, each with its own weight, and you have established a scoring system for each. You have decided that a score greater than .55 is considered Trusted and a score less than .60 is considered suspect (scores in the overlap are borderline). Every person has a BoT score generated and kept on file, updated yearly or per event.

{
  Credit      .8  * .2 = .16
  Marriage    .7  * .2 = .14
  Vices       .35 * .2 = .07
  Psych_Eval  .8  * .2 = .16
  HR_History  .2  * .2 = .04
}
TOTAL TRUST SCORE = .57, or 57%

Note:
ATOMICITY: There are 5 items in the BoT set, so each carries weight 1/5 = .2 (20%). Think of it as averaging.


-----------------------


HYPOTHETICAL SCENARIO:

Corporation AXIS places a Canary Trap on a File server in a R&D Center.

E1: Evidence for Mistrust: John copies the Canary Trap to his workstation.
SCORE=.8

E2: Evidence for Trust: John does NOT exfil the Canary Trap in any way.
SCORE=.4


Applying Bayesian (formal):

        P(T|E1) * P(U|E2)
P(T) = -------------------
             P(BoT)

OR

        .8 * .4     .32
P(T) = --------- = ----- = .561
          .57       .57

TOTAL TRUST SCORE = .561

John is Borderline Trusted and placed on a potential suspect list. More monitoring and evidence is required.
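JMC's scheme can be sketched in a few lines of Python. The component weights, scores, and thresholds are the comment's hypotheticals, and note that the "Bayesian" step is really a heuristic ratio rather than a textbook application of Bayes' theorem:

```python
# Weights and component scores from JMC's hypothetical Baseline of Trust.
components = {
    "Credit": 0.8,
    "Marriage": 0.7,
    "Vices": 0.35,
    "Psych_Eval": 0.8,
    "HR_History": 0.2,
}
weight = 1 / len(components)   # equal weighting ("atomicity"): .2 each

bot = sum(score * weight for score in components.values())   # baseline: .57

# Canary-trap evidence, scored as in the comment:
e1 = 0.8   # evidence for mistrust: John copied the canary file
e2 = 0.4   # evidence for trust: John never exfiltrated it

# The comment's update rule (a heuristic, not strict Bayes):
trust = (e1 * e2) / bot        # ~.561: borderline trusted

print(f"BoT = {bot:.2f}, updated trust = {trust:.3f}")
```

The update drops John just below his baseline, which is why the comment leaves him "Borderline Trusted" and subject to more monitoring.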




Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.