Software as Evidence

Increasingly, chains of evidence include software steps. It’s not just the RIAA suing people—and getting it wrong—based on automatic systems to detect and identify file sharers. It’s forensic programs used to collect and analyze data from computers and smart phones. It’s audit logs saved and stored by ISPs and websites. It’s location data from cell phones. It’s e-mails and IMs and comments posted to social networking sites. It’s tallies from digital voting machines. It’s images and meta-data from surveillance cameras. The list goes on and on. We in the security field know the risks associated with trusting digital data, but this evidence is routinely assumed by courts to be accurate.

Sergey Bratus is starting to look at this problem. His paper, written with Ashlyn Lembree and Anna Shubina, is “Software on the Witness Stand: What Should it Take for Us to Trust it?”

We discuss the growing trend of electronic evidence, created automatically by autonomously running software, being used in both civil and criminal court cases. We discuss trustworthiness requirements that we believe should be applied to such software and platforms it runs on. We show that courts tend to regard computer-generated materials as inherently trustworthy evidence, ignoring many software and platform trustworthiness problems well known to computer security researchers. We outline the technical challenges in making evidence-generating software trustworthy and the role Trusted Computing can play in addressing them.

From a presentation he gave on the subject:

Constitutionally, criminal defendants have the right to confront accusers. If software is the accusing agent, what should the defendant be entitled to under the Confrontation Clause?


Witnesses are sworn in and cross-examined to expose biases & conflicts—what about software as a witness?

Posted on April 19, 2011 at 6:47 AM • 52 Comments


BF Skinner April 19, 2011 7:29 AM

“courts tend to regard computer-generated materials as inherently trustworthy evidence”

People do have a tendency to trust output. It’s like the computer is wearing a white lab coat. (and while a white lab coat won’t help you score with the women in a bar it will get people to trust you.)

“software as a witness?”
Not sure I agree with the model. Evidence can be questioned.
Forensic document examination questions the sample document with enough rigor for the courts.

If the point is that a program is detecting and making record of events then how is it any different from an automated CCTV?

Of course the integrity of the monitoring software should be challenged (just like the validity of evidence from a dog smelling contraband). But it’s not a “witness”
unless it gets a whole lot smarter.

Personally I’d like to see Granick opine on the topic.

Rich April 19, 2011 7:44 AM

The defense bar pushes this issue when it fights for access to the software in breathalyzers, in order to have DUI charges dismissed. Manufacturers claim ‘trade secret’ and refuse to hand it over. Eventually state legislatures have to get involved one way or the other. It’s an ongoing battle.

Tom April 19, 2011 7:48 AM

There’s a case of someone challenging a breathalyzer test and reviewing the source code. The math used was faulty. One item averaged the readings as:

((((a+b)/2+c)/2+d)/2+e)/2 instead of (a+b+c+d+e)/5
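A short Python sketch (with invented readings, not from any real case) shows why this matters: the pairwise running average described above weights later readings far more heavily than earlier ones, while a true mean weights all readings equally.

```python
# Illustration only: the readings below are made up for demonstration.

def running_pairwise_average(readings):
    """The reported buggy scheme: average the running result with each new reading."""
    avg = readings[0]
    for r in readings[1:]:
        avg = (avg + r) / 2
    return avg

def true_average(readings):
    """The correct mean of all readings."""
    return sum(readings) / len(readings)

readings = [0.08, 0.10, 0.12, 0.09, 0.11]
print(running_pairwise_average(readings))  # the last reading carries half the weight
print(true_average(readings))              # every reading carries 1/5 of the weight
```

With five readings, the first one contributes only 1/16 of the buggy result while the last contributes 1/2, so the two formulas can diverge enough to matter near a legal threshold.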

Ross Patterson April 19, 2011 8:08 AM

IANAL, but I recall reading reports about a brouhaha several months ago over whether or not lawyers could subpoena forensic lab workers. The gist of the issue was that evidence does not walk in the door by itself, it must be introduced via testimony, and testimony happens from a human being. Whether the human is accorded status as an expert witness, or is merely yet another witness is a matter the courts know how to address, and do often.

This is really no different than the RADAR gun issue in speeding cases. There was a period when the devices were new, and considered experimental and untrusted, after which the courts began to give them more credibility than they deserved. That period was followed by defense attorneys challenging their manner of use (calibration logs, etc.), and they went back to being merely a tool that required an officer to testify about in court, and which could be argued.

deepcover April 19, 2011 8:29 AM

Just a heads-up to my fellow paranoiacs, this site’s certificate has changed! And the CA too!

There probably is an innocuous reason but I prefer to indulge in wild-eyed speculation.

So, Bruce is probably in some TLA agency’s sound-proof basement as we type, being made to cough up our IP’s.

WL April 19, 2011 8:32 AM

NSA / CIA / FBI etc should learn from RIAA to track down the hackers and cyber intruders and bring them to justice 😀

Dirk Praet April 19, 2011 8:44 AM

To the best of my knowledge, current legislation is that any such evidence can be thrown out as hearsay unless properly argued for by an expert witness and challenged by the defense.

@ Deepcover: no reason to be paranoid. My browser’s CertPatrol picked up the change too and there is probably no reason to worry as the old certificate was going to expire in less than a month from now.

Russell Coker April 19, 2011 8:49 AM

One issue that wasn’t directly addressed in this paper is the reliability of evidence. In many cases of forensic tests it’s not that easy to modify the evidence, and it’s probably quite rare for modifications to be substantially faster and easier than analysis that would detect whatever is of interest.

When a house is raided and computers are seized it’s probably quite common for 10TB of storage to be taken, for a raid on a business it’s probably quite common to seize 100TB or more. Obviously such quantities of data can’t be examined in a small amount of time, and therefore it may be possible for a hostile person to contaminate the data before analysis.

As a single disk can be read in about 6 hours, it seems that there should be a requirement that for each disk seized the owner should be provided with a cryptographically secure hash of its contents within 24 hours, except in situations where a really large number of disks are seized at once. There’s no technical reason why you couldn’t be dumping the contents of 10 disks to a medium-spec RAID array simultaneously while generating SHA-512 hashes and at the same time making tape backups of the previous 10 disks. That should allow getting 30 disks backed up with hashes generated within 24 hours.

Getting hashes to the owner within 24 hours won’t prevent the data being modified, but it will make the window a lot smaller and avoid the temptation to create evidence after none has been found.
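As a sketch of the hashing step Russell describes, Python’s standard hashlib can stream a disk image and produce the digest handed to the owner. The file path and chunk size here are illustrative assumptions; a real pipeline would read the raw block device with a write blocker in place.

```python
import hashlib

def image_digest(path, chunk_size=1 << 20):
    """Stream a disk image in 1 MiB chunks and return its SHA-512 hex digest."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()
```

Hashing at this layer is I/O-bound rather than CPU-bound, which is why it can run alongside the simultaneous tape backups described above without slowing them down much.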

DZG April 19, 2011 9:04 AM

To those pointing out that this is like radar gun cases or other previous technological introductions to court evidence; I think you’re missing the point. Other technologies have had to face the rigor of being challenged and proven. The point being made here is that no such challenge is (for the most part) being made for computer evidence. It’s going unquestioned and the default mode for it in court seems to be acceptance as trusted and accurate. That mode of thinking needs to change. It needs to be questioned just like any other piece of evidence. Where did it come from? How was it made? Who brought it to us? etc.

kingsnake April 19, 2011 9:15 AM

I always thought it rather scary that all it takes to convict someone of child pornography is to find it on his (usually it is a he) computer, when it is a trivial matter to sabotage a target’s computer by planting those perverted pictures on it. If I were on a jury, it would take a lot more evidence than just the computer to get me to convict. (Photos, videos, history of sex crime, etc.)

paul April 19, 2011 9:27 AM

Ultimately what we’re talking about is something very like the halting problem: going through all the code that comprises a particular chain of evidence and making sure it hasn’t been compromised. Are forensic programs even written with security in mind, or do they assume an environment where the people using them are honest and no one is trying to subvert them?

I don’t think this is solvable in a technical sense. As soon as you admit the possibility of steganography, whatever information you want to find can be found somewhere on somebody’s disks.

What we need are prosecutors, defense lawyers and judges who are much smarter about these things. But that’s going to cost money, and in the US at least that’s pretty much a nonstarter.

todd glassey April 19, 2011 10:27 AM

This is a really important topic but the issue is two fold. The first is the “Output of the Code” and its certification as competent evidence and the second is the code itself and its certification as competent to perform.

As to other cases, in California there is California v. Khaled, which shut off many states’ red-light cameras since the data is neither reliable nor contained within a proper chain of evidence.


Trichinosis USA April 19, 2011 10:42 AM

One of the first things I looked at while messing around with Trusted Solaris was whether audit trail files could be edited after the fact by someone who had root on the machine. Of course they could.

The same was true of Broadsoft VOIP software in 2006. Want to railroad someone? Edit the audit trail file with an appropriate entry, then copy a file in the appropriate place with the appropriate date-time-stamp consisting of a contrived audio conversation. In both cases, some internal utilities might reject that sort of tampering, but if the person doing the editing is the person supposed to be running the utilities, it’s easy to get around that too.
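A minimal sketch of why a plain file-based audit trail proves so little: anyone with write access can append a forged entry and then put the file’s timestamps back, so the metadata shows nothing. The log name and entries below are invented for illustration.

```python
import os

LOG = "audit.log"  # hypothetical log file, created here so the sketch is self-contained

with open(LOG, "w") as f:
    f.write("2006-03-01 11:59:59 user=admin action=login\n")

before = os.stat(LOG)                          # remember the original timestamps
with open(LOG, "a") as f:
    f.write("2006-03-01 12:00:00 user=victim action=transfer\n")  # forged entry
os.utime(LOG, ns=(before.st_atime_ns, before.st_mtime_ns))        # restore timestamps

# The forged line is now present, but the modification time is unchanged.
assert os.stat(LOG).st_mtime_ns == before.st_mtime_ns
```

The usual countermeasure is to get log entries off the box as they are written (remote, append-only storage), so that a root compromise on the audited machine cannot rewrite history.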

In fact, these days morphing technology is so advanced that it is even possible to contrive faked audio and video “evidence”.

pdf23ds April 19, 2011 12:04 PM


Framing someone has never been all that hard for people in the right places–especially the police investigating. I don’t think software changes that all that much.

Bob April 19, 2011 12:37 PM

I think the problem is still one of human testimony. There’s a syllogism going on, typically where somebody says:
a. “The machine said X”
b. “Whenever the machine says X, that means Y”
c. “So Y must be true”

Statement “a” is usually inarguable; what you need to determine is whether the person asserting “b” is correct.

One well-known example is drug tests for opiates. Chemists have known for decades many analytical techniques will give the same results if you’ve shot up heroin, snorted oxycontin, eaten a poppy seed muffin, or taken Robitussin DM. In fact, the Mythbusters recently “confirmed” that. So it’s fine for a prosecutor to say, “We did a GC/MS test and got these results”, but to assert, “That proves the defendant was using smack” is bogus.

In the referenced cases, some human is claiming:
a. “The machine printed out X”
b. “The machine is never wrong.”
c. “So X must be true.”

And it’s the human testimony that “b” is true that’s the weak link in the chain and subject to disproof.

Tony H. April 19, 2011 1:36 PM

It’s now 15 years since John Munden was acquitted:

That case clearly turned on the defence’s ability to test the prosecution’s evidence in court by having their expert (Ross Anderson) examine the bank’s internal systems rather than just accept the truth of various printouts.

It always offended me that the prosecution was allowed to simply drop the case, and was not forced to grant access to the bank’s systems.

Richard Steven Hack April 19, 2011 1:50 PM

Paul: “Are forensic programs even written with security in mind, or do they assume an environment where the people using them are honest and no one is trying to subvert them?”

If they do, that’s changing, because anti-forensics is a big deal these days. Basically, good hackers aren’t concerned about being detected or tracked back because they know they’ve modified enough stuff on the box to ensure they can’t be PROVEN to have done anything. Computer forensics guys these days have to be maximally paranoid when approaching a compromised machine to ensure that whatever they capture really is what’s there, because the hackers have software on the box waiting for the forensics software to make an appearance, or have already erased their tracks thoroughly.

Also, law enforcement forensics examiners have such a backlog due to the difficulty of doing a comprehensive forensics examination that in many cases they have just a couple days to do the investigation and if the hacker can throw enough roadblocks in their way to exceed that time limit, the case is dropped and the hacker walks.

I have heard that some anti-forensics software writers are considering writing versions that not only erase the tracks of the actual hacker involved but also divert suspicion to some other user. That alone can really mess up an investigation, a la the FBI anthrax investigation mess.

It’s much like the AV companies are screwed these days because in many virus attacks the virus has already compromised most of the well-known AVs that might be installed or disabled them altogether. The only way to defeat the virus is a boot CD that runs AVs in an entirely separate OS.

Bryan Feir April 19, 2011 3:08 PM

@Richard Steven Hack:

On that, see ‘The Case of the Sysinternals-Blocking Malware’ by Mark Russinovich (who’s responsible for the Sysinternals debugging tools), where he talks about dealing with some malware that actively tried to prevent him from running his usual tools:

But yes, generally in that sort of situation the only way to truly be sure is to boot off a CD and run a known-good OS. Had to do that once myself to clear off an IRC ‘bot kit that was running on a Solaris machine, which had replaced ‘ls’ to hide its ‘…’ directory.

Stuke April 19, 2011 3:23 PM

I’m in full agreement. Courts are too willing to accept computer data as infallible evidence. As another example, I’ve seen far too many computer related prosecutions all based on the idea that the user has complete control over the contents of his hard drive. This is not always true.

BTW, the idea of a computer as the accuser was dramatized, or rather over-dramatized, in the Classic Star Trek episode “Court Martial”.

RobS April 19, 2011 4:32 PM

I think that some of these comments are conflating the admissibility of computer evidence with the weight of the evidence at trial.
At the moment, computer evidence is automatically trusted as admissible unless one side (typically the defence legal team) can show evidence that it has been tampered with AND that the tampering is relevant to the case.
It is down to the lawyers on both sides to provide the information required by the jury to provide the proper weight (trustworthiness) to the evidence.
That is the theory and I doubt very much that computer evidence is automatically trusted at trial (given strong weight). It used to be the case (1970s) that every week brought out a new “look how dumb computers are” story and I doubt that computer evidence had much weight. Perhaps juries are more trusting now.

DG April 19, 2011 6:07 PM

Software is not a person. It can’t qualify as an expert witness. There’s got to be an expert presenting the ‘evidence’, in which case you can have dueling expert witnesses. There’s also the issue of proving the methods used are scientifically valid (the Daubert standard, after Daubert v. Merrell Dow Pharmaceuticals).

I’d imagine particular software forensic tools could be as susceptible to ‘cult of personality’ issues (junk science) as lead alloys in bullets purportedly demonstrating the bullets came from the same batch or box. Right up there with SCO’s MIT rocket scientists.

Dirk Praet April 19, 2011 6:10 PM

@ Trichinosis USA

“One of the first things I looked at while messing around with Trusted Solaris was whether audit trail files could be edited after the fact by someone who had root on the machine. Of course they could. ”

In which case whoever set up that machine got MAC/RBAC completely wrong. If ever you’re in need of someone to help out with Trusted Solaris Extensions, feel free to drop me a note and I can refer you to a couple of current and former Sun/Oracle subject matter experts in this field.

@ Brian Feir

“But yes, generally in that sort of situation the only way to truly be sure is to boot off a CD and run a known-good OS”

Yes when merely trying to salvage a machine, but a no-go if part of a digital forensics examination. First thing you learn in a DF class is to take out the hard drive and make one or more exact copies/images of it. You then examine these on a dedicated stand-alone machine. I can recommend EC-Council’s CHFI course as a good start.

Trichinosis USA April 19, 2011 6:19 PM

@Dirk, if the person setting up the machine sets up the MAC/RBAC wrong intentionally, or if someone coming after them can reconfigure that aspect of the system intentionally, the files can be compromised. That was my point. MAC/RBAC is still susceptible to a breach of trust at the root level. Someone has to install, configure and maintain it, and if that person is compromised and/or there is no oversight over their actions during that phase of the process, the whole thing is compromised.

Dirk Praet April 19, 2011 6:50 PM

@Trichinosis USA

I see what you mean. There is little defense against systems that have been misconfigured on purpose. Applying the principle of least privilege and root as a role (instead of a superuser) is however very helpful in tightening auditing processes.

Clive Robinson April 19, 2011 7:24 PM

@ Dirk,

“First thing you learn in a DF class is to take out the hard drive and make one or more exact copies/ images of it.”

Sadly doing that destroys much digital evidence hiding away in RAM and other places with fully mutable memory in the rest of the suspect computer.

Which, although it did not matter too much a few years ago, does these days in a networked environment, where a RAM-based nasty can almost certainly be guaranteed that at least one RAM-infected machine stays up until the others come back on to be re-infected again. That is, modern malware does not need to go on the hard drive at all, so it won’t be visible to a lot of AV software or, for that matter, forensic software. Further, there are ways the nasty can lock pages of RAM such that they won’t get swapped to the hard drive, nor will they get written to the hard drive in the event of a “suspend” or equivalent.

Also there is an issue with modern solid state drives: in quite a few cases it is not possible to exactly “image” the drive because you can’t actually get at the flash memory.

Further, and worse for DFs, unless you are 100% up on the ins and outs of the firmware in some flash-based devices, it is possible to hide quite large chunks of information away such that no forensic software will find it, or even be aware there is a large hole in which the information is hiding.

There is however a small, very distant glimmer at the end of a very long dark tunnel for not-entirely-RAM-based nasties. Sometimes the nasties need to temporarily put a file on the hard drive whilst they install themselves. If this occurs whilst other file activity is in progress on the machine, it is sometimes possible, using the file metadata on the drive image, to go through a rebuild exercise to show where files have been deleted but subsequently overwritten.

However it is quite an involved process and very, very sensitive, so it can give a false impression very easily; thus quite a few people think that currently it’s a futile exercise.

Judith_IP April 19, 2011 8:34 PM

Physical evidence is generally presented with a witness, usually an expert. Software is likely to be treated the same as bullet evidence, fingerprint evidence, GPS evidence, DNA evidence, etc. I don’t see the disconnect, or what would make software particularly different.

Jay April 19, 2011 8:48 PM

@Russell Coker

We’re technical people. There are (at least) two failure modes in using a hash to certify a disk hasn’t been changed:
1) SHA collisions. With 2TB to play with, there’s a lot of room for someone to tweak blocks looking for a match. And a sufficiently motivated attacker can also ensure chosen plaintexts – just send your victim an email first…
2) Change the hash. Then assert to the court it’s just an error in the reading software, an accidentally updated timestamp, or a bad sector. Knowing the hash changed doesn’t help a defendant prove what was changed, nor who by.

In any case – technical systems have technical holes, human systems have human holes. Claiming either as infallible would be unjust…

Davi Ottenheimer April 19, 2011 9:19 PM

Trustworthiness requirements of software? This reminds me of traffic court cases from many years ago.

Radar was used to accuse a car of speeding. The driver of the car showed up to court to defend himself.

He pleaded his case to the judge based on interference from fences and traffic. The signal was likely to have been dispersed and reflected, producing inaccurate readings. He explained it with simple geometry and common terms.

He was asked if he was an expert and he said no. He was asked what he did for a living and he said he studied dust in space. The judge then threw out his testimony because he considered him an expert and said the prosecution was not properly briefed or represented with a counter-expert.

More to the point (pun not intended) law enforcement use of laser speed traps in Wisconsin showed that a Judge’s record on them was 100% — he considered them infallible and incontestable.

A driver accused of speeding and caught with laser was able to get the ticket dismissed by avoiding direct confrontation with the evidence — he hired a lawyer who was an old law-school friend of the Judge.

There must be an aphorism about the law in there somewhere. A Judge will trust his friend more than software when….

Dirk Praet April 20, 2011 10:00 AM

@ Clive

In the case of this type of malware, I’m afraid you’re absolutely correct. Freezing RAM would supposedly help in retaining what’s there even after a complete shutdown, but I have never tried this technique myself.

Jim A April 20, 2011 10:11 AM

Certainly the operator of the software can be cross-examined just as the operator of any other tool used in an investigation. When I was on a jury in a drunk driving case, I recall that we spent more than an hour hearing about the exact model of breathalyzer used, how it was adjusted, and where and when the samples used to standardize it were obtained.

Wade April 20, 2011 12:34 PM

BF Skinner asked: “If the point is that a program is detecting and making record of events then how is it any different from an automated CCTV?”

That was my first thought as I read the article, then I realized the difference. The CCTV recording can be viewed and interpreted by the judge and jury without any help. The audit trails and TCP/IP packet logs require an expert to interpret it before it makes sense to a layperson. Even if the audit logs are correct, the court must depend on the expert interpretation of those logs, and that is where bias and incompetence may come in.

I think it is more like showing a CCTV recording that doesn’t actually show the crime, and then explaining what “must be” happening off camera by pointing at shadows and reflections that are visible in the recording.

Clive Robinson April 20, 2011 1:14 PM

@ Dirk Praet,

“Freezing RAM would supposedly help… …but I have never tried this technique myself”

Let me put it this way: the “theory” as described is simple. You approach the computer and squirt a large amount of a cryo-gas such as liquid nitrogen through an appropriate opening, then remove the power and squirt a lot more cryo-gas in…

Apparently the technique, if done this way, can prevent “booby traps” activating…

The “practice” is, I have been told, slightly different due to “health and safety” concerns. Apparently blow-back and other problems can occur while injecting. Also it has not been unknown for someone to screw the vent down on the cryostat by mistake, thus causing a build-up of pressure and a blow-out of safety disks, sometimes occasioning the cryo-gas to vent out as a jet. Then there are the transportation problems; cryostats are not exactly something you just chuck in the back of the car and forget about. And when you have frozen a PC box down, you then have the fun of what to do with it to get it out of the building and off to a lab, all whilst still giving it cryo-gas to keep the temperature way down…

Now I’ve played with liquid nitrogen in small quantities in the past to explode plastic drinks bottles, make instant ice cream and remove the occasional stubborn wart. All of which is good fun in its own way.

But you have to remember the oldest tricks with liquid nitrogen involve freezing bananas and then smashing them with a hammer, and immersing rubber squash balls and watching them shatter when they try to bounce.

So remembering that, let me put it this way: when I heard about the practical difficulties I thought “Uh huh, not one to try at home boys and girls, unless you are not in any way attached to your various appendages”…

Dirk Praet April 20, 2011 1:37 PM

@ Clive

Exactly my point. Ever since a former classmate of mine blew himself up experimenting with a recipe from the infamous Anarchist Cookbook, apparently involving red phosphorus, I have grown a wee bit apprehensive of toying about with chemicals I don’t know squat about. If ever I get hired by Bin Laden, he is totally doomed.

Bob Gezelter April 21, 2011 1:12 AM

Wade’s comment notes an important point. A still or CCTV image can be directly viewed, but may be tampered with. Logs and other files reflect operations; they are not the primary operations themselves. Consider the difference between a canceled check and a transcript of account. A canceled check represents two sides of a transaction; the transaction should show on both banks’ books. In most cases, the banks are uninvolved third parties.

The issues become more interesting when data is generated and processed by non-mass market applications, or even rapidly evolving open source applications. Stored data reflects the actual transaction, and the proclivities of the programming used to maintain it. Different versions of software can produce similar, if not identical data artifacts through different paths. In the context of operating systems, one example of such behavioral quirks is the propagation (or non-propagation; as the case may be) of file creation dates on files in various contexts.

These questions produce interesting consequences when litigation ensues. I have seen it first hand as a consultant on several litigation matters.

I discussed some of the consequences of this in a recent blog post, “Electronic Discovery and Digital Forensics: The Applications Front” at

z April 21, 2011 1:32 AM

@Jay It is for this reason that forensic evidence has multiple hashes taken with different algorithms. However, if the forensic examiner is on the take they can just let whatever changes be made and generate new hashes. This is why courts usually appoint special masters for this work, who are typically experts known and trusted within the legal community. (Though that really just pushes the problem event horizon off somewhere else a bit, as anyone prosecuted in Mississippi in the last few decades will attest.)
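The multiple-hash practice z mentions can be sketched in a few lines; the algorithm set here is an illustrative choice, not a legal standard.

```python
import hashlib

def multi_digest(path, algorithms=("md5", "sha1", "sha256")):
    """Return {algorithm: hex digest} for a file, computed in a single read pass."""
    hashes = {name: hashlib.new(name) for name in algorithms}
    with open(path, "rb") as f:
        while chunk := f.read(1 << 20):
            for h in hashes.values():
                h.update(chunk)
    return {name: h.hexdigest() for name, h in hashes.items()}
```

The design rationale: forging a file that simultaneously collides under several independent algorithms is vastly harder than finding a collision in any single one, so altered evidence is much more likely to be caught.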

Nick P April 21, 2011 2:44 AM

@ Richard Steven Hack

“I have heard that some anti-forensics software writers are considering writing versions that not only erase the tracks of the actual hacker involved but also divert suspicion to some other user. That alone can really mess up an investigation, a la the FBI anthrax investigation mess.”

We’ve been doing that for a long time, my friend. You’re starting to see what an “Advanced [Category] Threat” REALLY means. I recall one benefit of using a proper one-time pad scheme was that it could be decrypted to anything under duress and nobody could prove it wasn’t authentic. You create the evidence for the frame, create a new pad that produces that “evidence” from the ciphertext, hide/protect it like it’s key evidence, and then turn it over when coerced. They think it’s legit because it decrypts and the defendant obviously went through a lot of trouble to protect the “key.” But it’s not. Just one trick among many. I could write books on deniability or blame-shifting in computing, but it’s safer to keep that stuff in my head for a rainy day. 😉

@ Dirk Praet

“First thing you learn in a DF class is to take out the hard drive and make one or more exact copies/ images of it.”

A recent book I read said to use some stock tools to image the RAM first, as many crooks use Windows and the tools exist. Tools for Linux et al. aren’t hard to find or build. If they have FireWire, one doesn’t need liquid nitrogen. 😉 I’d also add to visually inspect the box. Anyone who didn’t do that to a critical system of mine would be foolish. The best (and quite honest) defense for evidence retrieval is to have a reliable tamper or intrusion detection system automatically wipe key memory and/or destroy the drive using the thermite approach we discussed in a different blog article. DF people are fine using the basics on basic or careless crooks. More serious crooks require more serious approaches.

And, btw, Trusted Solaris isn’t worth crap against a sophisticated attacker. Unlike XTS-400/STOP or LOCK/ix, Trusted Solaris only achieved an EAL4/B1 rating. It wasn’t capable of doing better due to its design, implementation and verification approaches. I’ve said before that the government describes EAL4 as protecting against “casual or inadvertent attempts” to breach security. “Well-funded,” “sophisticated,” hostile attackers require high robustness or something approaching it.

That bolt-on security Trusted Solaris offers doesn’t work unless you trust the administrator and the users not to use sophisticated, hostile methods to breach it. Doubt me? It’s in the Security Target. It tells you what assumptions the product makes, what it’s designed to counter, how, and the evaluated configuration. I’ve read so many, and products rarely meet the level of robustness of air gap approaches. The last semi-commodity, UNIX-like OS in the medium to high assurance range is BAE’s XTS-400/500 (read its security target). has a few papers on LOCK, an EAL7/A1 class system, and LOCK/ix (or ux), a UNIX layer on LOCK. The rest of the “trusted” OS’s are child’s play for determined attackers, and the only reason they work in practice is because the defense networks depend mostly on high assurance guards to move data between classification levels.

RH April 21, 2011 6:25 PM

@Jay: Matching a plaintext with SHA-256 is not a walk in the park. Has it even been done yet, in a lab?

I would also assume the prosecution coming in with “the first hash was wrong, this is the right hash” would be slaughtered by any defense attorney worth their salt.

Dirk Praet April 21, 2011 7:00 PM

@ Nick P.

While I agree on the RAM imaging part, some caution may be advised on booby-trapping machines to kill the hard drive, so as to avoid simple DoS attacks. It reminds me of permanent lock-outs when a password-attempt threshold has been exceeded. Wiping key memory however is a good idea. I know of several live distributions like T.A.I.L.S. that do this by default.

I equally agree that TSOL definitely does not offer the highest protection levels against very sophisticated attackers. Then again, I guess there are trade-offs to be made between security and usability and I can’t imagine stuff like XTS-400/500 and LOCK/ix being usable for anything else than very specific purposes/applications as opposed to being used as a general purpose OS.

I’m also hearing the phrase “very sophisticated attack” way too often these days. The Anonymous DDoS attacks on PayPal and the like definitely weren’t. Neither were their HBGary and WBC stunts. Oak Ridge National Laboratory was classified as an APT while being nothing more than a targeted phishing attack combined with an MSIE ZD exploit. Bradley Manning doesn’t really strike me as an advanced IT security geek either. Although I do believe that there really are highly skilled and resourceful attackers out there that won’t be stopped by anything less than EAL7 stuff, the reality today still is that most mischief could probably be avoided if companies and ordinary users alike would start getting the basics right instead of being happy and complacent when everything just works.

RobertT April 21, 2011 11:24 PM

@Nick P

I’m confused by the use of the APT term. In my mind, APT attacks are rarely sophisticated new zero-day attacks; rather, they are specific, employee-targeted attacks. The attacks often start at the lowest level: ditsy, “click happy” secretaries and personal assistants. These people often find ways to enable Facebook and similar social networking sites. The first step is to compromise their email account and work forward from there. A high-level PA’s email contains a good map of the critical people within the organization. The second step is to achieve privilege escalation, usually through infected attachments (from a trusted source, the PA).

Now the attacker can start installing back doors and discovering how the network is divided and where the really worthwhile goodies are kept. They then build a custom exploit (script file) that enables data transfer from the high-security to the low-security network sections.

Almost nothing that I’ve outlined can be prevented by an EAL7 vs EAL4 system, because the primary exploited weakness is human.

The thing that makes APT so insidious is the patience and thoroughness of the attacker. APT is anything but a mindless botnet DoS attack.

If the attacker is any good, once the system is compromised you have no idea how the high-security data is being transferred to the outside, because they will almost certainly use a side-channel backhaul comms method. There are just way too many ways to build covert comms channels into any network.
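As a toy illustration of how little a covert channel needs, here is a hedged Python sketch that encodes data purely in the gaps between otherwise-innocent messages. The delay values and threshold are invented for the example; a real channel would add framing, error correction, and noise tolerance:

```python
# Toy timing covert channel: bits ride in inter-message delays.
# SHORT/LONG values are illustrative assumptions, not a real protocol.
SHORT, LONG = 0.05, 0.20  # seconds: short gap = 0 bit, long gap = 1 bit

def encode(data: bytes) -> list[float]:
    """Turn each byte into eight gap durations."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return [LONG if bit == "1" else SHORT for bit in bits]

def decode(gaps: list[float], threshold: float = 0.125) -> bytes:
    """Recover bytes by thresholding observed gap durations."""
    bits = "".join("1" if g > threshold else "0" for g in gaps)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

gaps = encode(b"key")          # a sniffer sees only innocuous traffic timing
assert decode(gaps) == b"key"  # the receiver recovers the payload
```

The point of the sketch: nothing in the packet contents betrays the channel, which is why RobertT is right that you may never learn how the data left.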

Another human-nature aspect often used by APTs is to directly rely on the company’s leaders to transfer out the required files. The reason is that the CEO’s actions are rarely questioned by (if even visible to) some lowly IT contract grunt. Even if he wonders why the CEO is transferring gigabytes of data to random machines (a botnet), he’ll usually shut up because his job is on the line (the network he is supposed to be managing is so infected that even the CEO’s computer is not safe). Not a good idea to tell the CEO about this, especially not until next month’s consulting check clears…

In my mind, most APTs exploit human weakness rather than software or network weakness. The inability to “patch” this weakness is what makes the attack persistent.

Clive Robinson April 22, 2011 9:46 AM

@ RobertT,

“The thing that makes APT so insidious is the patience and thoroughness of the attacker. APT is anything but a mindless botnet DoS attack”

Actually, in its own way, it’s both.

A lot of the attacks start off as “fire and forget” worms that spread wherever they can, looking for certain document types (this was seen with a modified version of Zeus against .mil/.gov, which got noticed in much the same way the original Morris worm did).

The better versions have a lower replication rate and a better targeting system, and unlike some that have been found, don’t just open up non-rate-limited back channels for bulk download.

[It is now highly likely that the latest and brightest versions also include “air gap crossing” code and are headless, in that they use a different style of command and control which does not suffer if DNS/IP takedown activities take place. (If I can write “proof of concept” code to do this, so can many, many others.)]

Thus the “fire and forget” worm acts as millions of low-level “intel agents”; almost like web crawlers, they find their way in but only report back if there is something of interest.

Those that do report back get sent further automated commands to sniff out more info, etc. Only when it looks promising does a human actually step into the loop to finesse the final stages that you have detailed.

The scary part is “air gap” crossing, because this takes advantage of “chance opportunity”: when someone uses the wrong computer or memory stick, etc. It works on the idea that you might know the principals, and they may be well trained and cautious, but there are others, such as family and friends, or maintenance staff / security guards / cleaners / temps / other non-employee personnel, who are not cautious and do silly things (such as the son of a very senior CIA person plugging Daddy’s “work laptop” into the Internet).

At first sight you would think it has not got a snowball’s chance in Hades, but just as with humans (six degrees of separation between you and anybody else), the routes may not be known, but they are there, and once found they are going to get travelled. Worse, the fan-like spread of “fire and forget” usually discovers several routes, giving a useful degree of both redundancy and camouflage.

Oddly, few people have commented on “botnets as intel agents”, and you have to ask yourself why, as the concept is fairly obvious and actually quite low risk compared to directed attacks.

Jeff Asselin April 22, 2011 5:51 PM

Well, at its base, all software is akin to a mathematical equation. If you reviewed a CPA’s work in court, you would look at the calculations and actuarial tables and such, right?

Likewise, if someone put forward mathematical equations or, say, statistical data as proof in court, you could have a mathematician look at the equations or statistics used and show whether they are faulty or correct.

So you should have the right to have the source code for the software analyzed and its ability to actually accuse you put in question the same way.
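Jeff’s point can be illustrated with a tiny, uncontroversial Python example of the kind of hidden assumption a source-code review would expose: binary floating point does not represent 0.1 exactly, so an equation that is correct on paper can still misbehave in software that tests it naively.

```python
# The math on paper says 0.1 + 0.2 == 0.3, but IEEE 754 doubles disagree:
# neither 0.1 nor 0.2 is exactly representable in binary.
total = 0.1 + 0.2
assert total != 0.3                 # the naive equality check fails
assert abs(total - 0.3) < 1e-9      # a tolerance-aware check is what's needed
print(repr(total))                  # prints 0.30000000000000004
```

A reviewer who can only see the output would conclude the “equation” is fine; only access to the code reveals where such comparisons are made and whether they were handled correctly.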

Robert in San Diego April 22, 2011 11:19 PM

I could have sworn there was an original series Star Trek episode, “Court Martial,” where Kirk was being prosecuted by an old girlfriend for a dastardly murder plot, and his defense attorney pointed out the computer’s evidence was being treated as unassailable….

Imperfect Citizen April 23, 2011 7:22 AM

Great article! An interesting can of worms to open up for data-mining software like Verint’s, and for phone security software too.

Stephen Mason April 26, 2011 7:49 AM

I read the article by Professors Lembree and Bratus and Dr Shubina with interest, together with the previous posts.

I have a number of observations:

First, for the purposes of legal analysis, digital data can be a mix of evidence, which I analyse at 5.32 in my text, ‘Electronic Evidence’ (2nd edn, LexisNexis Butterworths, 2010).

Second, it is correct that few lawyers or judges understand that software is far from being error free, which I explain in detail in chapter 5.

Third, in legal proceedings digital data is sometimes assumed to be unimpeachable, as the case of Julie Amero illustrates (for which see the Introduction to ‘International Electronic Evidence’ (British Institute of International and Comparative Law, 2008)).

In essence, the problem for clients is the lack of knowledge of technical matters by lawyers, for which see the editorial to Volume 7 (2010) of the Digital Evidence and Electronic Signature Law Review. The IP address is merely one part of the evidence with which to identify a machine connected to the internet, as the Danish IPR case illustrates, for which see Per Overbeck, ‘The burden of proof in the matter of alleged illegal downloading of music in Denmark’ 7 (2010) Digital Evidence and Electronic Signature Law Review.

In relation to the authentication of digital evidence for legal proceedings, I formulated a 5-part test in conjunction with a number of technicians. The test is set out in Chapter of my text.

For those readers who wish to be more fully informed of these matters in relation to the United States of America, the text by George L. Paul, ‘Foundations of Digital Evidence’ (American Bar Association, 2008), is invaluable.

Stephen Mason

Stephen Mason April 26, 2011 7:56 AM

Apologies for failing to complete the information in this sentence:

In relation to the authentication of digital evidence for legal proceedings, I formulated a 5-part test in conjunction with a number of technicians. The test is set out in Chapter 4 of my text.

Stephen Mason

Dredd May 4, 2011 10:53 AM

Excellent post Mr. Schneier. The results of not questioning software performance in various situations could be catastrophic in some cases.
