Lessons From the FBI's Insider Threat Program

This article is worth reading. One bit:

For a time the FBI put its back into coming up with predictive analytics to help predict insider behavior prior to malicious activity. Rather than coming up with a powerful tool to stop criminals before they did damage, the FBI ended up with a system that was statistically worse than random at ferreting out bad behavior. Compared to the predictive capabilities of Punxsutawney Phil, the groundhog of Groundhog Day, that system did a worse job of predicting malicious insider activity, Reidy says.

“We would have done better hiring Punxsutawney Phil and waving him in front of someone and saying, ‘Is this an insider or not an insider?'” he says.

Rather than getting wrapped up in prediction or detection, he believes organizations should start first with deterrence.

Posted on March 20, 2013 at 11:51 AM • 39 Comments


Clive Robinson March 20, 2013 1:31 PM


Rather than getting wrapped up in prediction or detection, he believes organizations should start first with deterrence

And he still does not get it…

As far as stopping an insider attack the only thing that works is “catching them at it”, or “catching them afterwards”.

For which you need “detection”. Banks and other places have tried “deterrence” for several centuries, and people are still robbing such places with insider information/assistance.

And as for deterrence, what is going to work?

We know that exec level and up want BYOD, and this is going to filter down. Accountants and the like want to reduce overheads, so it’s “hot desking”, “StarSucks meetings” and “home officing” to make savings.

The only tool that appears of any use, if people can use it properly, is “auditing”, but only post event…

In all honesty they would probably be better off buying insurance.

Lucas Pudding March 20, 2013 1:45 PM

Senator McCarthy, where are you now? Hoover, …anyone out there? Oh, hello Janet!

Dinosaur March 20, 2013 1:49 PM

No wonder: it is easier to influence a chaotic system than to predict its evolution.
Shaping public opinion is by far a more effective means to drive the masses towards desirable behaviours than predicting undesirable behaviours in single persons – whether you are a police dictatorship or a venture capitalist.

Nick P March 20, 2013 1:53 PM

I agree with Clive about deterrence. I’ve essentially been pushing deterrence or prevention-based designs on this blog for years. Using deterrence successfully would require mandatory access controls on all significant operations with access, integrity and temporal properties. Additionally, the security policy must be very flexible and provably able to stop security breaches. This isn’t doable in general for most organizations.
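Nick P’s “mandatory access controls on all significant operations” can be sketched minimally in the Bell-LaPadula style (no read up, no write down). Everything below — the level names and both functions — is purely illustrative, not any specific product’s API:

```python
# Minimal sketch of a mandatory access control (MAC) check in the
# Bell-LaPadula style. Level names and functions are illustrative only.

LEVELS = {"public": 0, "internal": 1, "secret": 2}

def can_read(subject_level: str, object_level: str) -> bool:
    # "No read up": a subject may only read objects at or below its level.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # "No write down": a subject may only write at or above its level,
    # so high-level data cannot leak into low-level objects.
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "internal"))   # True: reading down is allowed
print(can_write("secret", "internal"))  # False: writing down is blocked
```

The point of making the policy mandatory is that these checks are enforced by the system on every operation, not left to the discretion of the file’s owner.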

I’d also note that the article used almost all its supporting points to justify a detection strategy. So saying that deterrence is the first thing to do, or even important, rather contradicts their own evidence/claims.

Petréa Mitchell March 20, 2013 2:00 PM

The level to which it’s paraphrased makes it hard to figure out what they’re really trying to say in places, but the parts I can make some sense of sound like cutting-edge psychological research for the 1960s.

Clive Robinson March 20, 2013 2:11 PM

If you have problems reading the page on your smart phone or other mobile device, try sticking,


on the end. Darkreading is known to have a broken mobile interface, so why they don’t just sort the issue out I don’t know (maybe they are all “Fanbois” when it comes to testing 😉

Clive Robinson March 20, 2013 2:30 PM

@ Bruce,

One thing made me smile,

The article says,

He believes the FBI and other organizations should be looking for ways to “automate out of this problem set” by focusing on better user education.

And lo and behold, there is also a piece on the same site from you saying “user security training” is not really working….


I liked your line,

… sitting in front of the television all afternoon with a McDonald’s Super Monster Meal sounds really good right now.

Yup, I can second that; the thought sort of crosses your mind when sitting in a UK hospital bed. UK hospital food is really yuck, even in comparison to a Mucky D’s “grease trap special” (luckily I got out of hospital before lunch today and made a nice steak, lettuce and tomato sarnie for lunch, with a nice freshly home-made fruit salad 🙂

Out of curiosity, in the US do Mucky D’s really have a “Super Monster Meal”? In the UK all we really get is a few extra chips for the equivalent of about 0.5 USD.

Clive Robinson March 20, 2013 2:56 PM

Oh something that might amuse,

In section three on deterrence you will find,

Additionally, the agency has found ways to create “rumble strips” in the road to let users know that the agency has these types of policies in place and that their interaction with data is being used

What you call “rumble strips” in the US are known in the UK as “Sleeping Policemen”, on reading the article it suddenly seems a whole lot more appropriate 😉

Oh and on a more serious note, he says,

We have to create an environment in which it is really difficult or not comfortable to be an insider

That might work in quasi-military or Gov bureaucracies that think “compartmentalisation” is the way to go, but beating up on employees in a company is just going to make them resentful and even more likely to become an “insider threat”.

As Bruce has pointed out in the past, one way to make employees unhappy is to put rocks in their path when they are trying to meet this month’s ever-increasing work target so that they can earn the money to keep a roof over their heads and food in their children’s stomachs…

I get the feeling that these Feebies have not actually had to work in a market driven environment.

Oh, and the article is helpful to those who are actually planning an “insider attack”, because of what he has said about insiders in general not using hacker tools etc.

So rule number one: as an insider you know the weaknesses of the system, so make like an “outsider hacker”. That is, use tools that get you past the firewall in some way, and in the process load a bit of “botware” onto one of your colleagues’ computers and make it look like APT etc…

Petréa Mitchell March 20, 2013 3:05 PM


What you call a “sleeping policeman” is actually what we call a “speed bump”. “Rumble strips” are things at the side of the road that make noise and shake the car when you drive over them so that you know you’re starting to drift off course. I don’t know what the equivalent UK term is.

Nick P March 20, 2013 3:18 PM

Rumble strips


@ Clive

“That might work in quasi-military or Gov bureaucracies that think “compartmentalisation” is the way to go, but beating up on employees in a company is just going to make them resentful and even more likely to become an “insider threat””

Yeah, I totally agree. Users in certain types of companies will tolerate a certain amount of compliance, esp. if it makes sense to them. Just making things hard for no reason will only make them mad. They will then start acting like a malicious insider for a LEGITIMATE reason: circumventing nonsense security to get actual work done.

The last thing an insider threat mitigation program needs to do is make insiders want to scheme more against the company. This article confirms my suspicions long ago that working for the FBI might not be such a great job.

paul March 20, 2013 4:21 PM

I think perhaps instead of “deterrence” the phrase should be “defense in depth”. The goal isn’t ultimately to reduce the number of bank robberies with insider help to zero, it’s to reduce the number enough.

I’ve been reading a lot of old mystery novels, and one of the fascinating things is seeing how many single points of failure were apparently considered normal, e.g. one key to the vault, held by a single trusted manager, or checks of any size payable on the word of a trusted cashier, or inventories performed by the same person responsible for authorizing withdrawals…

And always the stories turn on how unfortunate it is that the wrong person has been put in a position of trust, never that a few simple changes in procedure would make all these defalcations impossible.

Simon March 20, 2013 4:48 PM

@paul – yes, BUT adding layers and making it complicated can introduce multiple points of failure that are hard to find. This is another problem. A chain is only as strong as…

Clive Robinson March 20, 2013 5:07 PM


What you call a “sleeping policeman” is actually what we call a “speed bump”. “Rumble strips” are things at the side of the road that make noise and shake the car when you drive over them so that you know you’re starting to drift off course.

Err no, what you call a “speed bump” we call a “speed hump”.

Sleeping policemen are thick painted lines that are often graded in width and appear on the road in various places.

The first place I remember them, which was a good twenty years before speed humps, was on an approach to a roundabout that had a corner that also narrowed down the number of lanes.

As you drove over them they made the car rattle and, back in those days, the steering wheel shake slightly. The effect of the grading, from wide and far apart down to narrow and close together, subconsciously slowed a driver down.

Although not now common across car lanes, they are still around at road intersections and where roads narrow, to wake up sleepy or inattentive drivers. You see similar with certain types of slip lanes etc.

Clive Robinson March 20, 2013 5:45 PM

@ Petréa Mitchell,

Sorry, my above was to you; I had a call in the middle of typing it and forgot to cut and paste your name (to ensure I get the “é”, which is otherwise a right pain to type on this “not so” smart phone).

@ Nick P,

The type of rumble strip you showed was tried experimentally for a while in one or two places in the UK because it was quicker to make, but our “maritime climate” of repeated rain-and-freeze on a daily basis in late fall/early spring tended to break up that type of surface too quickly, so it was back to the old slow triple-thick layer of road-marking bitumen-based paint. As a one-time keen cyclist I hate all “traffic passivation” systems, because cyclists tend to get forced into them by inconsiderate motorists and they quickly wreck racing wheels.

It’s interesting to see how they do it on some race tracks, where they use what are in effect cement bricks, with alternate ones raised about an inch or so and painted alternately red and white. You’d think they would wreck a car, but apparently at speed you more hear than feel them.

@ Paul,

The goal isn’t ultimately to reduce the number of bank robberies with insider help to zero, it’s to reduce the number enough

That should be the goal of all security. I know in information theory we have the promise of absolute secrecy/security, but in practice it is just too problematic to do, and usually relies on something that cannot be guaranteed, such as there being “only two unique copies” of a One Time Pad.

@ Simon,

BUT adding layers and making it complicated can introduce multiple points of failure that are hard to find.

This was actually found to be the case when a certain old, venerable and well known UK bank moved over to computers. They found that something like six hundred of the procedures in the manual methods were redundant. But more interestingly, some of the systems analysts discovered that some procedures, when combined with others, either effectively negated each other or, worse, covered up fraudulent transactions.

A classic example of “why do we do that” is double entry bookkeeping. The original idea was to stop insider fraud/theft, as the two halves were filled in by separate clerks from different ledgers. But these days one person fills the entries in on one computer program…
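The control rests on a simple invariant: every transaction’s debits and credits sum to zero, and historically two separate clerks maintained the two sides independently. A minimal sketch with made-up account names (and note the limitation: someone who controls both sides can still post balanced fake entries, which is exactly the problem with one person on one program):

```python
# Each transaction is a list of (account, amount) postings; debits are
# positive, credits negative. The double-entry invariant is that the
# postings of every transaction sum to zero. Account names are made up.

def is_balanced(transaction):
    return sum(amount for _account, amount in transaction) == 0

payroll = [("cash", -5000), ("wages expense", 5000)]
fraud = [("cash", -5000), ("wages expense", 4000)]  # 1000 quietly skimmed

print(is_balanced(payroll))  # True
print(is_balanced(fraud))    # False: the books no longer balance
```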

Dirk Praet March 20, 2013 8:18 PM

On deterrence, these folks may benefit from paying some attention to Dan Pink’s TED talk on rethinking the traditional sticks and carrots management ideology. See http://www.ted.com/talks/dan_pink_on_motivation.html .

One of the best resources that your security program has is the collaboration of the HR department.

That is, if they’re actually doing a good job interacting with employees instead of just being “the guys in charge of hiring and firing”. Over the years, I’ve met very few who were actually helpful in any way, either in getting your job done or in helping out with work and non-work related issues. In the immortal words of one of my former HR managers: “Being popular is not part of my job description”.

JD March 21, 2013 1:42 AM

The program itself is a little horrific, but I’m aghast at his grasp of statistics and reasoning. If you have a system that performs significantly differently from random in either direction, then you have a meaningful prediction. If the system is wrong significantly more often than random, then just do the opposite of what it says.

So far as information content goes, random is the worst case scenario. Anything meaningfully different gives a positive amount of information.
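That inversion trick can be checked with a toy simulation — synthetic balanced data, nothing to do with the FBI’s actual system:

```python
import random

# A binary classifier that is wrong ~70% of the time on balanced classes
# becomes ~70% *right* once you negate its output.

random.seed(1)
truth = [random.choice([0, 1]) for _ in range(10_000)]
predicted = [t if random.random() < 0.3 else 1 - t for t in truth]  # ~30% correct

def accuracy(preds, labels):
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

print(accuracy(predicted, truth))                   # ~0.30, worse than chance
print(accuracy([1 - p for p in predicted], truth))  # ~0.70, better than chance
```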

joequant March 21, 2013 2:31 AM

The thing about attacks is that there is no easy metric for the attack that doesn’t happen because people don’t bother with it. Banks use deterrence and it does work. Yes, there are bank robberies and theft, but not so much that banks go out of business or even lose a large amount of money.

There are some really simple things that can work. One is to not leave anyone alone in the bank vault either figuratively or literally.

Praet: In the immortal words of one of my former HR managers: “Being popular is not part of my job description”.

HR has many roles which vary from company to company. One of them is “corporate scapegoat.” They also are involved in compliance and policy enforcement.

One China-related note is that the term for human resources in China is “political engineering” and you often have people whose job title is “political engineer” or even “senior political engineer”.

joequant March 21, 2013 2:38 AM

Detection and deterrence aren’t separate. Knowing that you will get caught if you do something makes it less likely that you will try to do it.

One interesting anti-fraud measure which turns out to be highly popular in financial firms is mandatory vacation. Each year you are required to spend two weeks out of touch with the office, and this makes it easier to catch “something odd”.

One other question that I have is: “are the police really the best people to give advice about insider fraud?” Something that I’ve found people in the military and police often “don’t get” is that in business people routinely “switch sides” if your competitor offers more money. If you make work a living hell, then you are actually encouraging people to work for your rivals, taking their inside knowledge with them.

Autolykos March 21, 2013 4:20 AM

@bf skinner:
My thoughts exactly. If your detection method performs worse than chance, you just invert the criterion and you’re better again.
So (just a wild guess, I don’t know their method) it seems that insiders actively avoid doing suspicious things, and are good enough at it that this alone makes them stand out.

Clive Robinson March 21, 2013 5:26 AM

@ JD, Autolykos,

If your detection method performs worse than chance, you just invert the criterion and you’re better again

Err not always…

Take the case where the output is always “sinner”; then inverting it produces always “saint”. In either case it’s just as useless.

What you have to check is that your rules are actually meaningful to the problem.

For instance, if you see a few hardened criminals in jail, or photographs of executed criminals’ faces, and you notice they all have some combination of “sinister” facial features, you might consider “detached ear lobes” or “heavy brow ridges” to be an indicator, then some other indicator, etc.

Simply inverting from “has detached ear lobes” to “does not have detached ear lobes” actually makes your detection rate worse, because a smaller percentage of the population has detached ear lobes than does not, so you simply move from 10% of the population falsely accused to 90% of the population falsely accused…
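The base-rate point can be put in back-of-envelope numbers — all figures below are made up for illustration:

```python
# A rare physical feature versus a much rarer crime rate.

population = 1_000_000
criminals = 100                          # tiny base rate
with_feature = population // 10          # 10% have "detached ear lobes"

# Rule A flags everyone with the feature; rule B, its inversion, flags
# everyone without it. Even granting the (false) premise that every
# criminal has the feature:
false_accused_A = with_feature - criminals   # ~10% of the population
false_accused_B = population - with_feature  # 90%, and no criminals caught at all

print(false_accused_A / population)  # 0.0999
print(false_accused_B / population)  # 0.9
```

Either way the rule buries the hundred actual criminals under an enormous pile of false accusations; inverting it only changes which pile.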

The result is a tool, such as the long since discredited phrenology and physiognomy, that can be quite useless but sufficiently captures people’s minds to linger on apparently indefinitely. One reason given for this is that it somehow becomes accepted in popular culture (crime stories in this case), and thus people continue to say in (supposedly) factual reporting “the accused had heavy brow ridges and a guilty demeanor”.

Unfortunately many other edge cases exist, such as the famous “MO”, which says a criminal will always commit a crime the same way, and thus if you find a crime scene which shows this MO it must have been done by this criminal. It ignores two salient facts and probability,

1, copycats.
2, Sometimes there is only one way to do a job that is efficient.
3, A combination of other factors can often give the same effect.

It’s one of the reasons that some individuals get pressured into either making false confessions or having a whole load of other crimes (not done by them) taken into consideration during sentencing.

The last point, as I’ve remarked before, is why forensic science, although often involving science, is actually not science.

Toby Speight March 21, 2013 6:19 AM

A bit OT, but I can’t help wading into the US/UK terminology discussion.

Where I grew up[1] in Yorkshire (UK side of the water), ‘sleeping policeman’ meant a speed hump (or bump), designed to physically deter speeding. Generally a hand or so in height and perpendicular to the direction of travel. Back then, you only saw them on private driveways, but they seem to have spread like weeds onto the public roads.

‘Rumble strips’ were (and are) made out of paint, and just a centimetre or two in height – enough to give a haptic warning to the driver, but unlikely to damage the vehicle even at illegal speeds. They are found as crenellations on the (longitudinal) outer lane markings of motorways and major roads (according to diagrams 1012.2 and 1012.3 in the Traffic Signs Regulations and General Directions), or as lines across the full width of the road prior to a hazard or change in speed limit (often in conjunction with anti-skid surfacing, and usually only where there’s a perceived need to reduce speed, not at every roundabout or village).

[1] Insofar as I’ve ever done any growing up…

Gweihir March 21, 2013 6:19 AM

@Autolykos: Indeed. Any classifier worse than random can be turned into a detector better than random by just inverting it. It may still be pretty bad.

The real lesson here is that those defining the criteria are worse than incompetent. My guess is that personal prejudices played a big part. If the scientists are actually to blame here, then they are worse than useless at science. Not that it would be the first time in fields like psychology, politics, economics or the social sciences. These people typically do not understand the scientific method.

Autolykos March 21, 2013 8:40 AM

@ Clive Robinson: I’m not 100% sure I understand you correctly, but we’re probably not talking about the same thing, or making different assumptions.
I concede that a method and its inversion can both be useless (like your physiognomy example), but they can’t both be worse than useless (in the signal detection sense – damage to society is another thing entirely).
My point was more of a technicality than an actual suggestion anyway. Inverting a detection method that is (significantly) worse than random will usually not turn it into a good detection method. But it will be better than random.
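The AUC framing in this thread can be checked in a few self-contained lines (no ML library needed): for any scoring classifier, negating the scores maps an AUC of a to 1 − a, so anything below 0.5 inverts to something above it. The data here are made up:

```python
# A tiny AUC implementation: the probability that a randomly chosen
# positive outscores a randomly chosen negative (ties count half).

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.1, 0.2, 0.4, 0.3, 0.8, 0.9]  # positives mostly scored *lower*

print(auc(scores, labels))                # 1/9: much worse than random
print(auc([-s for s in scores], labels))  # 8/9: negating gives 1 - AUC
```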

paul March 21, 2013 8:48 AM

Making a good predictor out of “worse than random” is only straightforward if you have a binary choice and can normalize your results to somewhere near 50% of each choice. Otherwise either Bayes will bite you, or “inverting” won’t tell you anything useful, or both.

(For the “both” case, imagine a card picker that has good specificity but really lousy sensitivity, so that every time it claims a card is the ace of spades it is, but it still misses 95% of the aces of spades and calls them as some other card. Inverting those results is not going to give you what you might hope for.)
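That card picker can be simulated directly — all numbers below are a made-up sketch of the scenario, not real data:

```python
import random

# A picker with perfect precision ("every time it claims ace of spades,
# it is") but only 5% sensitivity, against a 1-in-52 base rate.

random.seed(0)
N = 52_000
deck = [random.randrange(52) for _ in range(N)]  # card 0 = ace of spades

def picker(card):
    return card == 0 and random.random() < 0.05  # flags ~5% of the aces it sees

calls = [picker(c) for c in deck]
aces = [c == 0 for c in deck]

hits = sum(p and a for p, a in zip(calls, aces))
print(hits / sum(aces))   # ~0.05: it catches almost no aces

# "Inverting" it flags everything it didn't call: nearly the whole deck,
# so the inverted rule is no detector at all.
inverted = [not p for p in calls]
print(sum(inverted) / N)  # ~0.999 of all cards flagged
```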

Autolykos March 21, 2013 9:03 AM

@paul: Ah, so that’s the point. I assumed the detection threshold to be adjustable and “worse than random” to be an AUC < 0.5. Your card picker would probably have an AUC > 0.5 if it was adjustable, so inverting it will clearly make it worse.


John Campbell March 21, 2013 9:32 AM

Sadly, the insiders that should be considered FIRST are those at the top of the food-chain. Once you’ve exposed– and hopefully excised– the culture of corruption at the top, you can start working your way down the food-chain.

Remember, cultural corruption spreads from the TOP of the septic tank, NOT the bottom and, I suspect, there will be lines of rot all through the organizational tree.

Autolykos March 21, 2013 9:37 AM

@ John Campbell: Good point. That probably explains the poor performance of the detection algorithm. Scientist: “Every time I run this test, it detects half of the bosses as moles. Need to fix that.”

Clive Robinson March 21, 2013 10:22 AM

@ Autolykos,

One of the points I was making was that of false correlation assumptions.

If I examine a very small subset of something I will find correlations between the samples (looks shifty / is a criminal). The question then becomes: is it a valid correlation or not? There are a number of outcomes,

1, All people who look shifty are criminals.
2, All criminals look shifty.
3, Some people who look shifty are criminals.
4, Some criminals look shifty.

The first two cases can be used to filter for criminals, but they are different. The second case allows for the possibility of looking shifty while not being a criminal.

Although the second rule cannot be used as a test for a criminal on its own, it can be used as a filter to rule out those who don’t look shifty. However, it requires that no person who does not look shifty be a criminal.

When you get to the third and fourth rules you start to have problems, because they allow for non-criminals to look or not look shifty, and likewise for criminals to look shifty or not.

Thus these rules may be valid or invalid; if valid, the closer their outcome is to a 0.5 probability the less use they are. However, they may be invalid, with no correlation at all between looking shifty and being a criminal.

The problem is that sometimes, when you test one or more sub-samples, an invalid rule can show up as being valid, and thus you head off into a world of false assumptions.

Now we get into the fun of bounded and unbounded sample populations. If your rule is false but appears otherwise on a sub-sample of a bounded population, then when you invert the rule and test it on the so far untested remaining samples of the population, you would expect it also to show as not false…

Now, as we know from logic, a rule that is true for both its inverted and non-inverted states is not possible, therefore the rule can only be false…

Thus the question is the competence of the (presumably) FBI-employed persons who thought up the rules used.

Ollie Jones March 21, 2013 12:50 PM

This problem of detecting and mitigating insider threats before they can do harm is well known to the church and education communities. How do we detect bad actors: the ones who will mistreat children or elders?

Because of the horrendous revelations of the past decade or so, there’s plenty of tolerable training on the subject.

It’s not hard to learn to spot a potential threat. It’s not hard to assess how serious the threat is. We’re learning to handle false positives: to investigate these kinds of things in a way that balances personal reputations and child safety.

It’s all about relationships. Handling insider threats can be done well. It’s easier to do when we realize it’s based on fallible human wisdom rather than whizzy algorithms.

John Campbell March 21, 2013 1:03 PM

@Ollie Jones: yes, it is all about relationships… and transparency.

However, no matter how good a system you have– and no matter how thorough your checks and balances are– it all comes apart because ALL systems have meat somewhere in the loop.

Never forget, also, that a system is vulnerable to the person with authority. Control Fraud isn’t going to go away until human beings are extinct… which would render all systems moot anyway. The most dangerous person in any organization is the one with the most authority over information or finances.

And, never forget, it’s not always who you know, or what you know, often it all comes down to what you know about who.

Ralph Hitchens March 21, 2013 3:38 PM

The insider threat is a cost of doing business. Of course, it depends on the business. The national security & intelligence business, which I’m in, sees a lot of paranoid whining about this threat, but they keep harping on the same handful of names: Ames, Hanssen, the Walkers, Pollard, et al. Go back to World War II & the Cold War, & you find it’s an insignificant fractional percentage of all the people who held security clearances. For the sake of those few we all jump through a lot more hoops than we should have to. Cost of doing business, I say.

BW March 21, 2013 6:07 PM

It sounds like it’s actually working. If it’s worse than random sampling, use random sampling on the algorithm’s rejects.

Nobody March 22, 2013 8:03 AM

A key issue he mentions here is “in twenty years of espionage cases”. Let’s see, the FBI does counterintelligence in the US and has been victim to some of the worst moles in global history. They also investigate very serious incidents of espionage here in the states and to a degree, abroad.

So, that is an obvious, very critical strategy there for the FBI. What I see happening in this article is a sort of smoothing and combining of very different situations: insider threats of the Joe Blow type and insider threats of the mole variety in the FBI, defense contractors, and elsewhere.

They are two similar, but entirely different, animals.

It is way out of line to try and “get normal data patterns” in corporate America (or anywhere) to watch and see if one employee might go bad.

That is insane.

It is not out of line to consider such rigorous programs in the FBI, because counterintelligence threats and other FBI corruption situations which are similar are that serious.

You can not merge those situations together to come up with one easy push-button solution. Surveillance of employees and strict paranoia like that deeply degrades workforces. It strongly impacts productivity. With defense and policing agencies, maybe it would not.

But, as far as I can tell, these actions absolutely would not have caught Robert Hanssen or Aldrich Ames. [It definitely would not have caught Kim Philby and that sort.]

Just on consideration, from the dozens of other cases I have read of, it would not have caught them either. I could be wrong; the FBI guy going public with such information should, above all, have more facts available.

But never mind the obvious problem: there could always be all sorts of moles in policing and intel agencies, and many very well could go undetected.

It is extremely difficult to trend.

A spy’s business is to be a spy and be undetected. When detected, they have failed at their business. One should not assume, therefore, that every spy is detected or has been detected.

Robert Hanssen is probably what the FBI most smarts from, and there were many indicators that not all was right with him. A huge reason no one suspected him was that the FBI wanted to believe The Mole could not have been in their organization.

Major fault number one. People believe what they want to believe and if you want to be good at such things you have to fight that every waking moment of every waking day.

Hanssen was by no means a slick Kim Philby. He was accessing data he should not have had access to. He was making trips to Asian countries. He was a sexual deviant who played himself up as a super-religious nut job.

Above all, like Aldrich Ames and Kim Philby and many others, he was a counterintelligence go getter.

There is your pool of suspects right there. Why? Because by becoming heads of counterintelligence or otherwise moving up those ranks they gained control over what scared them the most – what threatened their lives the most – moles in the country they were working for.

“It takes a mole to catch a mole”, the saying goes. Wisely.

These sorts have a lot of motives and capabilities for rising in counterintelligence. It keeps them from being caught and killed, it increases their pleasure in their fantasies of power and prestige, and it makes them much more powerful double agents. The adversary nation, being the adversary, is capable of giving them winning cases to advance in that cause. And what the adversary wants above all is data on their own moles.

But even these characters can not give you definite, definable statistics because of the unknown quality of unknown double agents and their unknown methods of operation.

You can not, that is, say “oh, we caught X number of spies in the past thirty years and this is Y pattern of behavior” with absolute certainty, because those are just the ones that were caught.

There are definitely different species of moles. There are the types that are sloppy, try to do a few jobs, and get caught. Then there are the long-term ones like Ames, ones other countries have run, and ones like Walker or Hanssen.

Then there is the problem of retrospective analysis. “Oh, I always knew something was funny about him.” Sure: after he was caught, he is no longer in your mind as someone who was definitely not a spy, and is now in your mind as someone who definitely was all along.

Fact is you have to secure data, have routine and rigorous assessments on key employees, surveil key employees, keep track of unusual finances and not ignore them, watch their travel patterns, check up on their excuses for having money they should not have, and above all watch the money.

If they are “true believers” like Philby, many of those indicators would not work, unfortunately. So pattern “who a true believer might be”. Sadly, this may mean ethnic and religious profiling.

Nobody March 22, 2013 8:42 AM

… on various comments above, agreeing with some, disagreeing with others…

I do think the religious pedophile problem is a great example here.

There you have a religion which teaches “there is such a thing as an appearance of goodness which people can cultivate to hide wrongs”, and yet you have a history of this very thing.

In fact, it gives people ideas. And they can expertly cloak themselves in all the indicators of goodness and yet be involved in some really serious wrongs.

In practice: this is mind bendingly tough for people to get around.

Inside the delusional bubble of belief culture it is next to impossible to see this, either with the organization or with especially malignant individuals.

Historically, it is outsiders to those delusion bubbles who see the situation clearly and realistically. Critics. Those who are objectively not involved and have no reason to believe the lie, and those who have negative involvement and have every reason to hate the lie.

Ironically, the mole or malignant insider (or, more precisely, the most malignant one) also tends to be an outsider.

For instance, did the Branch Davidians comprehend David Koresh’s misdeeds? Or the people of Jim Jones? But outside critics and objective observers did, and above all those who had been burned by them and woken up to the truth.

While it may be easy for a Scientologist to say the Jehovah Witnesses are a cult, or vice versa, can they say their own group is a cult?

For more “normal” people, can they objectively step outside their own group and say their own group is cultic so they can detect what is right and wrong about it?

Any human being who believes one or other group is deluded is well aware of this.

They just have a really, really hard time applying the same standard they use against enemies of their own group to their own group and to their own selves.

So they are blinded to problems in their group and malicious insiders work on this serious vulnerability with ease.

By nature human beings use double standards in judgment, and this affects everyone. Those who think they themselves are free of this vulnerability are the ones with the worst degree of it.

Cults operate in very similar ways to mainstream religions, to families, to friend networks, to radical and moderate political and religious groups, to corporations, and to policing and intel networks.

John Campbell March 22, 2013 9:09 AM


Realize that memetic groups tend to be insular and xenophobic… and those who can fake the necessary level of sincerity can wear sheep’s clothing.

It is easier for a wolf to wear sheep’s clothing (memetic camouflage) than for a sheep to wear wolf’s clothing.

Also… remember that, in most such memetic herds, the idea of questioning– showing doubt– is one of the first heresies.

You may also have noticed that sexual repression/suppression is present in almost all such groups, simply because, in our male-inheritance culture, respectability is inversely proportional to sexuality. (Ask yourself: why are sex scandals so much juicier than financial scandals?)

Honesty is seldom rewarded.

Clothes make the man… and are very useful camouflage.

I sometimes wonder if naturists might have an advantage, here, in evaluating people, but, then, in our textile-centered culture, they’d be considered deviants and outsiders.

We are all SOOOO f**ked; All of our systems seem DESIGNED to be suborned because of a dependence upon “trust”.

Fast Fud March 22, 2013 9:27 PM

We used to have Super-sized at Mickey D’s, but I think they took it away. You can get the large combo with more fries, err, chips, if you like. About $0.35 USD.
