Insider Threats

CERT (at Carnegie Mellon) just released a study on insider threats. It analyzes 49 insider attacks between 1996 and 2002, and draws some conclusions about the attacks and attackers. It says nothing about the prevalence of these attacks; it's more about their particulars.

The report is mostly obvious, and isn't worth more than a skim. But its particular methodology only tells part of the story.

Because the study focuses on insider attacks on information systems rather than attacks using information systems, it’s primarily about destructive acts. Of course the major motive is going to be revenge against the employer.

Near as I can tell, the report ignores attacks that use information systems to otherwise benefit the attacker. These would include embezzlement, which at a guess is much more common than revenge.

The report also doesn’t seem to acknowledge that the researchers are only looking at attacks that were noticed. I’m not impressed by the fact that most of the attackers got caught, since those are the ones that were noticed. This reinforces the same bias: network disruption is far more noticeable than theft.

These are worrisome threats, but I’d be more concerned about insider attacks that aren’t nearly so obvious.

Still, there are some interesting statistics about those who use information systems to get back at their employers.

As an example of the latter, the study's "executive summary" notes that in 62 percent of the cases, "a negative work-related event triggered most of the insiders' actions." The study also found that 82 percent of the time the people who hacked their company "exhibited unusual behavior in the workplace prior to carrying out their activities." The survey surmises that's probably because the insiders were angry at someone they worked with or for: 84 percent of attacks were motivated by a desire to seek revenge, and in 85 percent of the cases the insider had a documented grievance against their employer or a co-worker….

Some other interesting (although not particularly surprising) tidbits: Almost all — 96 percent — of the insiders were men, and 30 percent of them had previously been arrested, including arrests for violent offenses (18 percent), alcohol or drug-related offenses (11 percent), and non-financial-fraud related theft offenses (11 percent).

Posted on May 18, 2005 at 9:28 AM • 15 Comments


Brian Stanko May 18, 2005 11:07 AM

The element of the report that I find disturbing is the subtext that strange behaviour in the workplace is more than likely a sign of sabotage. And even more disturbing is the suggestion that employees should be watching each other for the telltale "signs" and reporting said signs to the powers that be.

From the report: “Developing a formal process for the reporting of such behavior in the workplace is important, including the consideration of whether a mechanism for anonymous reporting should be provided. Employees should be informed of the process and encouraged to avail themselves of the opportunity to report suspicious or inappropriate behavior.”

Smacks of 1984 thought-crime logic and fear mongering: saboteurs act strangely; you are acting strangely; therefore you are a saboteur.

Keith Schwalm May 18, 2005 11:36 AM

The study was an addition to the first Insider Threat Study. This was actually a cooperative effort between USSS and SEI. The goal was to address the “physical and online behaviors and communications that insiders engaged in” pre-event. The work was focused on events that had happened and had some type of data behind it for review.

The first effort was focused on the banking and finance sector — this expands that to include more sectors. The methodology stems from similar studies out of NTAC.


piglet May 18, 2005 12:53 PM

Good point, Brian.
Also: “30 percent of them had previously been arrested, including arrests for violent offenses (18 percent), alcohol or drug-related offenses (11 percent), and non-financial-fraud related theft offenses (11 percent).”
Whether they have been arrested shouldn’t even be reported in a study. So how many have been convicted?

Anonymous May 18, 2005 5:35 PM

We're all too paranoid. No point looking behind your back; we are already here.

Filias Cupio May 18, 2005 6:38 PM

Some years ago, I worked at a medium-sized software company, and had root access to their main servers. Purely as an intellectual exercise, I thought about how I could most maliciously use that access. This is what I came up with:

Step 1: Hack the backup system: all backups are secretly encrypted as they are made, and decrypted when read back (so that checks of the backups show nothing).
Step 2: Wait a year or more.
Step 3: Wipe all the disks on the servers, including the hacked backup encryption/decryption software.
Step 4: Send extortion demand for the encryption key to the backups. Unless they pay, they’ve lost years of work.

Of course, I didn't actually try it, so I don't know if it would work. For example, if they tried to read the backups on a machine other than the ones hacked to silently decrypt, they'd realize something was up.
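The core trick in step 1 can be sketched in a few lines. This is a hypothetical illustration, not real backup software: the compromised backup tool XORs data with a key-derived stream on write and undoes it on read, so any verification done on the same machine sees the original bytes, while the medium itself holds only ciphertext.

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (illustration only)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def backup_write(data: bytes, key: bytes) -> bytes:
    # What actually lands on the backup medium: ciphertext.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

def backup_read(stored: bytes, key: bytes) -> bytes:
    # XOR is self-inverse, so reading back through the hacked
    # tool transparently restores the plaintext.
    return backup_write(stored, key)

data = b"payroll database contents"
key = b"attacker-held secret"   # made-up key for illustration
tape = backup_write(data, key)

assert backup_read(tape, key) == data  # verification on the hacked host passes
assert tape != data                    # a clean machine sees only ciphertext
```

This is also why reading the tapes on an unhacked machine gives the game away: without the resident decrypt hook, `backup_read` never runs and the ciphertext is exposed.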

Thomas Sprinkmeier May 18, 2005 6:44 PM

I love statistics. They are so utterly meaningless unless the full context is given.

How do the statistics of arrests of those caught compare to those not caught?

Does a previous arrest mean you’re more likely to do something, or more likely to get caught?
If the latter, then one may be well advised to hire people with records; at least you can be reasonably sure you’ll catch them!
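A back-of-the-envelope calculation makes the point. All of these numbers are invented purely for illustration: even if prior arrests had nothing to do with the decision to attack, they would still be over-represented among *caught* attackers if they merely correlate with getting caught.

```python
# All numbers here are invented, purely to illustrate the base-rate point.
attackers = 1000                 # hypothetical population of insider attackers
with_record = 150                # assume 15% have a prior arrest record
without_record = attackers - with_record

p_caught_with_record = 0.60      # assume a record correlates with getting caught
p_caught_without_record = 0.20

caught_with = with_record * p_caught_with_record           # 90
caught_without = without_record * p_caught_without_record  # 170
caught_total = caught_with + caught_without                # 260

share = caught_with / caught_total
print(f"{share:.0%} of caught attackers have a record")  # ~35%, vs. 15% base rate
```

Under these made-up assumptions, 35 percent of caught attackers have a record even though only 15 percent of all attackers do: selection on being caught inflates the statistic without records causing attacks.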

Silicium May 19, 2005 6:31 AM

In the early '90s, in computer science at the University of Hamburg, we ran an interesting role-play about compromising and attacking a computing center (Rechenzentrum). Two teams made plans for the attack and the countermeasures, then played them out, purely as a simulation. Our first attempt (we were the attackers) was a mostly romantic one, with Kalashnikovs, demolitions, and all the other things terrorists would have used in the '70s, '80s, and '90s. We even planned an aircraft crash into a backup building.
The attempt was almost useless, because within 24 hours the defenders had their hardware back up via mobile backup centers and the like.

Our second attempt was a disaster for the defenders. We used blue-collar insiders, infiltrated them, and used weak-password attacks, code analysis, and what you would today call exploits (stack attacks), trojans, viruses, and worms (we had coded proofs of concept in our lessons and reverse engineered incoming viruses and worms). And we took our time, encrypted the databases with strong encryption, and after that we put the company out of operation forever, without harming any of us attackers or any of their personnel.

So the true danger for a computing center is not the guy with a Kalashnikov. It is indeed the attack from inside. "Inside" does not always mean the guy working in the next cubicle; it also means the personnel maintaining the building, working in service roles, and so on.

Our attack lasted almost a year (in simulation time) and wiped out that company and all its data completely. Serious harm could be done in less time, I think, but you always need more than a month because of distributed backup data, plus additional time for planning and infiltrating.

The first virus using data-corruption techniques that we discovered by reverse engineering was one on the Amiga platform called Byte Bandit (if I remember the name right). That virus linked itself into the block-write system code and XORed the stream with a codeword (I'm not quite sure of the exact method, because I'm working from memory). While the virus was active the system behaved almost normally, while slowly "encrypting" all data on the disk or hard disk block by block (if I remember right). When the virus was disabled, the disks showed only junk as content. I'm not quite sure which method Byte Bandit used to distinguish corrupted from original blocks, but I think it was something like a datestamp or unused extra bits in the file header; really simple.
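The self-inverting property of XOR that makes this trick work can be shown in a few lines. This is a sketch of the general technique, not the actual virus code, and the codeword is made up:

```python
def xor_with_codeword(block: bytes, codeword: bytes) -> bytes:
    """XOR a disk block against a repeating codeword (the Byte Bandit-style trick)."""
    return bytes(b ^ codeword[i % len(codeword)] for i, b in enumerate(block))

block = b"original disk block contents"
codeword = b"SECRET"  # made-up codeword for illustration

scrambled = xor_with_codeword(block, codeword)

# While the virus is resident, it applies the same XOR on every read,
# so the system transparently sees the original data:
assert xor_with_codeword(scrambled, codeword) == block

# With the virus removed, only the scrambled bytes remain on disk:
assert scrambled != block
```

Because XOR with a fixed codeword is its own inverse, the same hook handles both the slow "encryption" on write and the transparent decryption on read; removing the hook is what turns the disk into junk.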

The same technique used with strong encryption could do serious harm. Suppose a worm (like the recently discussed ssh-worm concept) spreads between systems through weak passwords (today's weakest link, together with stack-overflow bugs, but a more generalized attack). The codeword for encryption/decryption could be shredded after a certain time, and after that, without the key, no one has any access to the stored data. If linked into a basic I/O stream of the operating system, this could harm almost all data stored on the system.

A certain problem for such worms today is the diversity of modern computer systems and the distributed backup structure. If the worm simply hooks in too deep, at the hard-disk level, the backup system would read the original (decrypted) data, and you can't predict at which level the disk contents will be accessed: low-level, like "dd" or other image backups, or higher-level file access. A generalized attack today is much more complicated than 10 or 15 years ago, and that could be the reason we still do not suffer from such attacks. The worm/virus would have to track every kind of stream, and which ones it has corrupted and which not. That is not as easy as it sounds; consider access to a file both from a random-access point of view and as a "file" like "dd if=/dev/hda1". Looping a filesystem would not be enough for corruption. Maybe on Windows systems that is easier to handle, but on *ix systems I have no idea how it could be done. The worm would have to concentrate on specific applications to reduce the complexity of the problem, so that is the main risk we have to face: specialized worm/virus attacks.

Only my two cents.

Alex Krupp May 19, 2005 11:05 PM

What percentage of the general public (men) has been arrested? I would suspect that it would be fairly high, based on how many people seem to have an intense hatred for the police for seemingly no reason.

Davi Ottenheimer May 20, 2005 9:26 PM

Although I agree in principle with your assessment, I think there are many more facets to the issue of breach reporting that you might actually support. For example, if your personal identity information is stored on a company system, then you would most certainly want employees watching that information to detect unauthorized access.

The risks are related to the value of the assets being protected.

“The report also doesn’t seem to acknowledge that the researchers are only looking at attacks that were noticed.”

Funny. I agree completely with your observations, but the above sentence makes me think that you want someone to release a report looking at attacks that have not been noticed, which leads to the attacks being noticed…or can attacks be reported and unnoticed at the same time? A Schroedinger-like dilemma!

This also reminds me of the oft-quoted example of the Allied study of downed bombers during WWII. After all the bullet holes had been meticulously examined on every returned bomber, the Air Force announced they would be adding a very precise amount of extra armor to minimize weight and also help bring more planes home. A junior officer then made a rather poignant observation that the planes included in the lengthy study were the ones that had already made it home…

Anonymous May 21, 2005 1:01 AM

"'The report also doesn't seem to acknowledge that the researchers are only looking at attacks that were noticed.' Funny. I agree completely with your observations, but the above sentence makes me think that you want someone to release a report looking at attacks that have not been noticed, which leads to the attacks being noticed…or can attacks be reported and unnoticed at the same time? A Schroedinger-like dilemma!"

No. What I want is for the researchers to accept the bias, and realize that their conclusions might not generalize.

piglet May 21, 2005 1:12 PM

Bruce is right, Davi. That's the difference between good and bad science: good science takes into account and discusses its methodological limits.

Davi Ottenheimer May 23, 2005 12:34 AM

I am not defending the authors or the study, but I think that in your rush to skim the story and avoid any bias you might have unintentionally overlooked some key information. For example:

"This report examines insider incidents across critical infrastructure sectors in which the insider's primary goal was to sabotage some aspect of the organization (for example, business operations, information/data files, system/network, and/or reputation) or direct specific harm toward an individual."

So they’re being pretty straightforward about the limits of the study. They even mention a few other studies that handle other types of attacks/motive. But perhaps even more importantly, when you tell us that they should “realize that their conclusions might not generalize”, I think you might want to reread this section:

"This report and others from the study will articulate only what we found among these known cases. This limits the ability to generalize the study findings and underscores the difficulty other researchers have faced in trying to better understand the insider threat."

So my take on the study is that it is little more than an early attempt at dumping data into the public view for further analysis. Nothing very ground-breaking, but not totally useless or naive either.

Personally, my main objection to the study is that it seems to focus far too much on survivability through understanding attacker motive rather than seeking better early/efficient detection systems to avert consequences.

Alan May 25, 2005 3:35 PM

I agree that the report isn't terribly useful. For one thing, it heavily favors very small organizations, which typically wouldn't even have security staff. So is it valid that most events weren't identified by security departments?

It’s just not broad enough to be valid. For example, more incidents came from West Virginia than from Pennsylvania. Working with such a small sample base surely can’t be considered statistically valid.

Stephen Taylor February 22, 2006 9:53 AM

Has there been any real progress in determining the prevalence of insider attacks? The industry axiom is that insider attacks are the greatest threat. It seems to make sense. Retail stores reportedly lose more in stolen goods to employees than to customers. But there has never been a scientifically sound survey of the insider threat.



