History and Ethics of Military Robots

This article gives an overview of U.S. military robots and discusses some of the issues surrounding their use in war:

As military robots gain more and more autonomy, the ethical questions involved will become even more complex. The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties. Autonomous robots could, in theory, follow the rules of engagement; they could be programmed with a list of criteria for determining appropriate targets and when shooting is permissible. The robot might be programmed to require human input if any civilians were detected. An example of such a list at work might go as follows: “Is the target a Soviet-made T-80 tank? Identification confirmed. Is the target located in an authorized free-fire zone? Location confirmed. Are there any friendly units within a 200-meter radius? No friendlies detected. Are there any civilians within a 200-meter radius? No civilians detected. Weapons release authorized. No human command authority required.”
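As a purely illustrative aside (not something taken from the article or the book), the kind of screening list quoted above could be expressed in code roughly as follows; the class, the field names, and the target list are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    """Hypothetical sensor contact; all fields are illustrative, not real doctrine."""
    vehicle_type: str           # e.g. "T-80"
    in_free_fire_zone: bool     # location check against an authorized zone
    friendlies_within_200m: int
    civilians_within_200m: int

AUTHORIZED_TARGET_TYPES = {"T-80"}  # stand-in for a real target list

def screen(contact: Contact) -> str:
    """Walk the quoted screening questions in order and return a decision."""
    if contact.vehicle_type not in AUTHORIZED_TARGET_TYPES:
        return "do not engage: target type not authorized"
    if not contact.in_free_fire_zone:
        return "do not engage: outside authorized free-fire zone"
    if contact.friendlies_within_200m > 0:
        return "do not engage: friendly units detected"
    if contact.civilians_within_200m > 0:
        return "human input required: civilians detected"
    return "weapons release authorized"

# The quoted scenario, encoded as data:
print(screen(Contact("T-80", True, 0, 0)))   # -> weapons release authorized
```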

Such an “ethical” killing machine, though, may not prove so simple in the reality of war. Even if a robot has software that follows all the various rules of engagement, and even if it were somehow absolutely free of software bugs and hardware failures (a big assumption), the very question of figuring out who an enemy is in the first place—that is, whether a target should even be considered for the list of screening questions—is extremely complicated in modern war. It essentially is a judgment call. It becomes further complicated as the enemy adapts, changes his conduct, and even hides among civilians. If an enemy is hiding behind a child, is it okay to shoot or not? Or what if an enemy is plotting an attack but has not yet carried it out? Politicians, pundits, and lawyers can fill pages arguing these points. It is unreasonable to expect robots to find them any easier.

The legal questions related to autonomous systems are also extremely sticky. In 2002, for example, an Air National Guard pilot in an F-16 saw flashing lights underneath him while flying over Afghanistan at twenty-three thousand feet and thought he was under fire from insurgents. Without getting required permission from his commanders, he dropped a 500-pound bomb on the lights. They instead turned out to be troops from Canada on a night training mission. Four were killed and eight wounded. In the hearings that followed, the pilot blamed the ubiquitous “fog of war” for his mistake. It didn’t matter and he was found guilty of dereliction of duty.

Change this scenario to an unmanned system and military lawyers aren’t sure what to do. Asks a Navy officer, “If these same Canadian forces had been attacked by an autonomous UCAV, determining who is accountable proves difficult. Would accountability lie with the civilian software programmers who wrote the faulty target identification software, the UCAV squadron’s Commanding Officer, or the Combatant Commander who authorized the operational use of the UCAV? Or are they collectively held responsible and accountable?”

The article was adapted from P.W. Singer's book Wired for War: The Robotics Revolution and Conflict in the 21st Century, published this year. I bought the book, but I have not read it yet.

Related is this paper on the ethics of autonomous military robots.

Posted on March 9, 2009 at 6:59 AM

Comments

Mat March 9, 2009 7:51 AM

Maybe these autonomous robots shouldn’t be totally autonomous until they’ve proven their ability in the field.

The article appears to assume that robots are being fielded as-is, with little to no field testing, or that they acquire and utilize irrational behavior because humans do so (as though programming irrationality were easily accomplished).

wiredog March 9, 2009 7:54 AM

A couple of disconnected, semi-off topic thoughts on killbots.

First. I seem to recall an SF/Fantasy short from several years back where the Final Battle of Armageddon was being fought. The Forces of Good used robots to do the fighting and, after the battle was won, all the robots were taken up to heaven. Much to the dismay of the Leaders of the Forces of Good.

Second. Given that iRobot, maker of the Roomba, has a military contract, and given that they have also developed a way of controlling a Roomba with a hamster, are we about to see the use of Militarized Hamsters in combat? Will our heroic soldiers be replaced by Rodent Guided Robots?

And, finally, is “Rodent Guided Robots” a Great Name For A Rock Band, or what?

Mike B March 9, 2009 8:07 AM

Of all people, Bruce Schneier should realize the power of autonomous systems to improve certain types of tactical decision making on the battlefield. Time and time again, the weakest link in the security chain is the human. If the security is even moderately robust, one can just social-engineer their way past the minimum-wage rent-a-cop or call center employee.

Non-human systems deserve a chance to prove themselves on the battlefield given the generally lackluster job humans have been doing distinguishing Afghan weddings from terrorist training camps. For those who say “if it ain’t broke, don’t fix it” well, it IS broke and we need to fix it.

The best part about autonomous systems is that it completely removes the specter of war crimes. One mishap is just a glitch; many similar mishaps can be handled under existing product recall frameworks. 😀

Kieran March 9, 2009 8:30 AM

Giving the software real-life testing should be entirely doable – just feed it the input from soldier-mounted sensors, for instance, and see what it decides to shoot at, or run UAVs for a couple of months with “bombs” which bristle with nothing deadlier than cameras to detect what they’re actually targeting.

The software is “good enough” when it performs, on average, better than humans. There will no doubt still be accidents, but so long as there are fewer accidents you know you’re on the right track.
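To make Kieran's yardstick concrete, here is a minimal sketch of that kind of shadow-mode evaluation, assuming you already have logged sensor frames hand-labelled with whether firing would have been justified; every name here is hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Frame:
    """One logged sensor frame, hand-labelled after the fact."""
    sensor_data: bytes
    valid_target: bool  # ground truth: would firing have been justified?

def error_rate(decide: Callable[[bytes], bool], frames: List[Frame]) -> float:
    """Fraction of frames where a decision function disagrees with the label."""
    wrong = sum(1 for f in frames if decide(f.sensor_data) != f.valid_target)
    return wrong / len(frames)

def good_enough(software_decide: Callable[[bytes], bool],
                human_error_rate: float,
                frames: List[Frame]) -> bool:
    """Kieran's criterion: on average, fewer mistakes than the human baseline."""
    return error_rate(software_decide, frames) < human_error_rate
```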

noah March 9, 2009 8:48 AM

@Kieran Unfortunately, better than humans isn’t good enough for people to “trust a machine”. An accident involving a human is unremarkable, so we don’t fear it, but one accident involving a computer (which is supposed to be perfect) and people start saying “a human should’ve been involved” even if the machine had a better track record.

Of course, you can sometimes introduce them without most people realizing it, like auto-take-off and landing features on commercial airliners.

David March 9, 2009 8:48 AM

I don’t see the problem with rules of engagement. Doesn’t a soldier with a rifle have guidance on whether to shoot a terrorist using a child as a shield? Apply those to the robot.

Nor do I see it as a dereliction of responsibility, since somebody’s going to be responsible for using it. What’s the moral difference between this and a minefield, the other autonomous killing machine?

The interesting problem is how this can go wrong, and what countermeasures can be taken.

Does this remind anybody else of the autonomous robot in Robocop? “You have five seconds to put down the gun.”
Test subject puts gun on table, steps away. “You have four seconds to put down the gun….”

Stephane March 9, 2009 9:00 AM

@mike B:

"Time and time again the weakest link in the security chain is the human."

Unfortunately, humans are still by far the best semantic analysis system we know of. They are the only one we know of that can reach the correct conclusion from a partial set of data better than a purely statistical approach would.

"Non-human systems deserve a chance to prove themselves on the battlefield given the generally lackluster job humans have been doing distinguishing Afghan weddings from terrorist training camps."

I have not seen any indication that a system capable of doing better than a human at the task you just described exists, or will exist within the foreseeable future, and the consequences of failure here are simply too high to "give it a try".

Stephane March 9, 2009 9:02 AM

I hate that commenting system that reformats your messages, doesn't give you any way to preview or edit them, and fails to provide a simple quoting mechanism… Here are the two quotes from Mike B that I wanted to insert in the above:

- "Time and time again the weakest link in the security chain is the human."

- "Non-human systems deserve a chance to prove themselves on the battlefield given the generally lackluster job humans have been doing distinguishing Afghan weddings from terrorist training camps."

Stefan March 9, 2009 9:35 AM

Easy fix: let the autonomous systems acquire targets using the rules of engagement, and have personnel "manning" several of these systems, or a network of them. The system reports the target info back to them, and the people determine whether the system should take action on the target.
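A rough sketch of the workflow Stefan describes, where the systems only nominate targets and a human operator is the sole path to weapons release; all the names and the console prompt are made up for illustration:

```python
import queue

# Each autonomous unit pushes nominated targets here instead of firing on its own.
pending: "queue.Queue[dict]" = queue.Queue()

def nominate(unit_id: str, target_info: str) -> None:
    """Called by a unit when its rules-of-engagement screen matches a target."""
    pending.put({"unit": unit_id, "target": target_info})

def operator_review() -> None:
    """A human works through the queue; this is the only path to weapons release."""
    while not pending.empty():
        item = pending.get()
        answer = input(f"{item['unit']} reports {item['target']} - engage? [y/N] ")
        if answer.strip().lower() == "y":
            print("engage order sent to", item["unit"])   # stand-in for a real command link
        else:
            print("stand-down sent to", item["unit"])

nominate("uav-7", "vehicle matching T-80 profile")
operator_review()
```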

A nonny bunny March 9, 2009 9:54 AM

One of the advantages of killbots is that they can afford to make more reserved judgments. If they don’t take action, it’s just a robot at stake, and not a soldier’s life. So they never need to shoot in self-defense in a better-safe-than-sorry situation.
On the other hand, military hardware does tend to be expensive, so it remains to be seen whether they would be tuned with this in mind.

@Stephane:
"I hate that commenting system that reformats your messages, doesn't give you any way to preview or edit them, and fails to provide a simple quoting mechanism."

There is a “preview” button to the left of the “post” button.

Rob Mayfield March 9, 2009 9:55 AM

… in the article – are the ethics the real concern, or is it the allocation of blame when things go wrong?

Marc March 9, 2009 10:10 AM

Autonomous weapons-systems must be banned. Globally, by the UN. Their production and use must be crimes against humanity.

A nation that owns and uses autonomous weapons-systems does not risk anything by going to war. It will go to war in cases where its leaders would not even think of sending their own sons. And its adversaries could not fight that nation on the battlefield anymore. No bravery on the battlefield will achieve anything. Therefore the adversary must carry the war off the battlefield.

This is usually called terrorism. The only possible answer to a nation that owns and uses autonomous weapons-systems is terrorism against its home soil.

Mike March 9, 2009 10:29 AM

(1) The identification problem is AI complete. A human will recognize another human even when they lack legs or arms. For a robot that’s a lot harder.

(2) AI complete is similar to NP complete in that both are mathematically intractable problems. We are still missing the major breakthroughs that need to occur. They may not occur for centuries, if ever…

(3) Humans are far more robust than robots. Humans can cover far more severe terrain. There are anti-robot weapons that won’t affect humans. Robots have refueling issues. Humans can live off the land. Robots don’t repair themselves, or accommodate damage gracefully. Etc.

(4) Given the present state of our cyber-infrastructure security, giving computers (robots) guns may yet turn into the Americanized version of Russian roulette. (Sure, they could be secured. But they're also built by the lowest bidder.)

Guy March 9, 2009 10:40 AM

I think that autonomous robotic systems should never be used in combat. Ever. Plain and simple. All the ethical issues about robotics and their use were addressed in Asimov’s robot novels and stories and the three laws of robotics. Take robots out of the combat equation all together and the three laws provide a solid, manageable, ethical use of autonomous robotic systems. Leave robots in the combat equation and it becomes an unmanageable problem with no end game in sight and no rational, ethical solution.

Warfare is a human activity, and an activity in which humans must accept whatever responsibility and risk is involved. Machines cannot make a rational decision on acceptance of risk or the risk of collateral damage. The only decision points available to a robotic system are the value criteria programmed into the system.

Also, we as humans err. We are the only ones who can accept the responsibility and consequences of our error, not machines. We cannot transfer the responsibility for our error to a machine. Especially when that error can cause disastrous loss of innocent life. An autonomous robotic system cannot be held responsible for error, because the decision matrix that may have caused the error was designed and programmed by humans.

Just a fine point here: an autonomous robotics/weapons system requires no human input or intervention. That is the whole point of the ethical question. If, as suggested, the target information is reviewed and approved by a human, the weapon system is no longer autonomous, but human-directed or remotely controlled.

Clive Robinson March 9, 2009 11:16 AM

There is a significant problem which needs to be resolved first.

It was highlighted on Saturday evening in N.I., where the "Real IRA" have claimed they shot four soldiers and two civilians.

The four soldiers were unarmed, because civilian security guards who are (supposed to be) armed are responsible for security at the army base.

The civilian guards did not do anything to prevent the attack, and even though the attackers walked over and killed two of the soldiers execution style, the guards still did nothing.

Apparently the reason is that the guards have "rules of engagement" that only allow them to draw their weapons in self-defence.

This has given rise to the death of two defenceless people and the serious wounding of others.

Rules that appear reasonable just do not survive the knowledgeable behaviour of an attacker.

I expect any "literal" interpretation of rules will always fail and thus be of advantage to an enemy one way or another.

Only systems that can adapt in an appropriate way as a situation develops are going to be in a position to respond appropriately.

Our current state of understanding of these issues in humans is woefully inadequate, so how do we expect to put it into a device that cannot reason, only follow rules?

Matt March 9, 2009 11:17 AM

I would truly enjoy a battlefield "dominated" by semi-autonomous/remotely controlled battle-bots. Then all I would have to do is kill the system operators/manufacturers. A much easier problem, in my opinion. Jamming the signal would be an option to confuse the systems as well.

If the battlefield is filled with truly autonomous, decision making battle-bots, I just have to avoid them, or lead them off and then go kill the people/society that sent them.

On an ethical question, would using battle-bots make the decision to go to war easier or harder?

Roboticus March 9, 2009 11:23 AM

One major advantage of combat robots (autonomous or not) is that in those situations where a human may have less than half a second to decide whether that is a grenade or a cellphone, whether this person is running towards me to bomb me or to try to get help, or any other split-second shoot/no-shoot situation, a robot does not have the biological panic response that we do. If a robot is destroyed because it made a more conservative judgement, it doesn't really matter.

However, machine intelligence is very fragile at this point and will likely remain so for several more decades, and a combat robot may have an exploitable vulnerability such that its programmed responses can be predicted in advance. An example might be to put on a uniform similar to our own, with the IR patch and everything (I remember they were supposedly being sold online several months ago), so that the robot decides you are a friendly and allows you to do whatever you intended. Another possibility is that, because of a damaged IR patch, it decides a friendly is an enemy.

Combat robots are useful, but for now remotely operated ones and ones with a very specific pre-programmed target are the only ones that should be armed. At this point autonomous robots cannot be trusted due to the wide variety of failure modes for the AI.

Jason March 9, 2009 11:25 AM

Would the best war be fought with robots on both sides and no human beings at risk?

Or does that just shift the burden from skill and resolve to GDP? If you can afford more and better robots, you win.

What if the AI for the battle bots was configured to kill everything that was living? Would that be better? No ethical dilemmas there. If it has a body temp and exhales CO2, it dies: friend, foe, or neutral.

For those suggesting kill bots be treated as something worthy of war crimes, what do you think the chances of that happening actually are? Has the issue been brought up to the UN?

A nonny bunny March 9, 2009 11:28 AM

@Marc
“A nation that owns and uses autonomous weapons-systems does not risk anything by going to war.”

Except a lot of very, very expensive equipment. And public and international opinion, if, say, their weapon systems commit breaches of the rules of war.

@mike
“(3) Humans are far more robust than robots. Humans can cover far more severe terrain. There are anti-robot weapons that won’t affect humans. Robots have refueling issues. Humans can live off the land. Robots don’t repair themselves, or accommodate damage gracefully. Etc.”

That’s an issue of how you design and build your robot. There isn’t an a priori reason a robot can’t be made to be robust, can’t be made to cover more severe terrain than humans, can’t be made to be affected less by many kinds of weapons, ‘live off the land’, repair itself, etc. Most of those are a long way off. But some issues are currently a tradeoff (e.g. robots are not affected by fatigue, or biological weapons).

@Guy
“Take robots out of the combat equation all together and the three laws provide a solid, manageable, ethical use of autonomous robotic systems.”

I'm afraid not. In fact, the three laws don't even address the problem. The whole problem of ethics is hidden from view, and only brought back to light by asking "what is harm?" The framework doesn't actually give you anything concrete to work with.

"The only decision points available to a robotic system are the value criteria programmed into the system."

Unless it can learn. Which is a rather good idea if it has a lifespan of more than a few decades and has to function in social settings (rather than, say, on a factory assembly line). Social rules have changed a lot over the last few decades, and we can expect such change to continue in one way or another.
If they could have built robots in the 16th century, and those were unable to learn and adapt to new moral rules, then they'd still be trying and burning witches today 😛

Paul Crowley March 9, 2009 11:52 AM

The suggestion that programmers be responsible is obviously crazy. The purchasers must be responsible for ensuring that the machine has been built in a fail-safe way – suing individual programmers, who don’t have the power to ensure the project is run the right way, would not help.

AppSec March 9, 2009 12:02 PM

It's clear the answer to all of these ethical issues is not to use robotic soldiers, but to clone the Master Chief from Halo. Or you could have the ED-209 from RoboCop.

AppSec March 9, 2009 12:04 PM

@Paul Crowley:

I think the point would be to sue the manufacturer who hired the programmers, not the programmers on the individual level.

And to be honest, I don't have a problem with that. Take them to civil court, just as is done with medical liability, and prove liability.

Maybe it will be determined that the purchaser shouldn’t have used the technology in that situation and all liability would fall on them.

Obligatory Futurama Reference March 9, 2009 12:14 PM

Leela: They say Zapp Brannigan single handedly saved the Octillion system from a horde of rampaging kill bots.

Fry: Wow!

Bender: A grim day for robotkind, eh, but we can always build more kill bots.

. . .

Zapp Brannigan: How did I defeat the killbots? Simple: all killbots are set to a preset kill limit. So, knowing this, I simply sent wave after wave of my own men, knowing full well that eventually the killbots would reach their limit and shut down. Genius.

paul March 9, 2009 12:28 PM

We've got enough trouble keeping non-autonomous systems from shooting at friendlies. (And no, the rules of engagement for an autonomous system shouldn't be different from those complying with the laws of war for humans, except that the self-preservation exception should no longer hold.)

What concerns me is the possibility, raised implicitly in the article, of diffusing responsibility for dereliction of duty or violation of the rules of war by claiming that it’s really the fault of the general staff or the military contractor. When a bunch of soldiers under someone’s command commit an atrocity, the commander doesn’t skate by calling it the responsibility of the recruiting officers who signed them up, or even the drill sergeants who oversaw them in basic training. This is particularly important because rules of engagement differ from mission to mission, and it’s the commanders in charge of the weapons who are going to have to specify those rules.

(Of course, we have a variant of this problem now, where in general culpability seems to get pushed down to the lowest level of the hierarchy, while the officers who impose unworkable schedules or demand results without asking how they are achieved do not bear the brunt of the punishment when atrocities or disasters inevitably occur.)

Todd Knarr March 9, 2009 1:04 PM

@Guy:
“Take robots out of the combat equation all together and the three laws provide a solid, manageable, ethical use of autonomous robotic systems.”

One problem with this theory: many of Asimov’s robot stories were about how the 3 Laws break down or are otherwise unsuitable in the real world. I’d be wary of basing any real-world system on something with so many known and discussed problems.

Clive Robinson March 9, 2009 1:25 PM

Aside from the ethics, which let's face it are a difficult problem for humans…

There is the question of what form the robots would come in.

Arguably we already have autonomous robots, in terms of reconnaissance drones and smart guided weapons.

And I don't hear anybody talking about the ethics of either, except where the wrong target is attacked.

We could simply tie the drones and smart weapons together and let somebody remotely select targets (ah yes, I'd forgotten, we've already done that; they're called cruise missiles).

Basically the next step is to take the pilot out of the equation in multi-target launch platforms (bombers etc). The price of suitable electronics to do the job is easily offset by designing such vehicles "not" to take a human and keep them alive.

Then at the other end of the scale, the robots are likely to evolve from "smart mines" sown across large tracts of land to effectively "salt it" against the perceived foe.

Then there would be "smart dust" for intel gathering.

None of these weapons systems need ethics or anything like it.

RH March 9, 2009 3:36 PM

Ahh, so many interesting statements.
@Marc:
"A nation that owns and uses autonomous weapons-systems does not risk anything by going to war. It will go to war in cases where its leaders would not even think of sending their own sons. And its adversaries could not fight that nation on the battlefield anymore. No bravery on the battlefield will achieve anything. Therefore the adversary must carry the war off the battlefield."

The interesting bit about this, to me, is that there is an implication of a "purpose" to war. Not that I disagree. I do, however, wonder if we are aware what that purpose actually is. It does prove to be a curiously effective evolutionary trait, despite its tendency to lay waste to entire civilizations.

b March 9, 2009 4:16 PM

C'mon people, we've all seen this movie. Skynet destroys the world and sends out robots to finish off the survivors. Battlefield robots are a bad idea.

Hendrick March 9, 2009 5:51 PM

… basic ethical & legal questions with military robots are no different than those with conventional land mines.

Land mines are just simpler automatic killing devices.

Matthew March 9, 2009 7:15 PM

“Politicians, pundits, and lawyers can fill pages arguing these points”

And the better class of general will be thinking about it too. In the modern era of ‘hearts and minds’ and asymmetric whatchamacallit, a machine that wanders around blasting the wrong things is not of positive benefit. In a large-scale tank-battle-type war maybe it’s fine, but the US is going to win those ones anyway.

I think the appropriate film metaphor is less “terminator” and more “robocop”.

Sarcastic aside – at least drones won’t get whacked on speed and bomb your allies.

Brian March 9, 2009 8:41 PM

“The U.S. military bends over backwards to figure out when it is appropriate to engage the enemy and how to limit civilian casualties.”

I read this line eight times before giving up. I hope the rest of the article made a little more sense.

Small Reminder March 9, 2009 11:03 PM

“I read this line eight times before giving up. I hope the rest of the article made a little more sense.”

The reality is that if the US wanted to be particularly bloody-minded about it, they would just level everything in Afghanistan that looks sideways at them.

It certainly worked for Genghis Khan, who never had any real trouble out of the Afghan people, and the Russians and British also tried similar solutions on a smaller scale.

You may not like the US approach, but claiming they aren't trying to minimize civilian casualties to the degree possible is more than a bit disingenuous.

It's in their interest for no other reason than running a good counterinsurgency campaign. The civilians you don't hit in a firefight with the Taliban are more likely not to join the Taliban, and more likely to give you information as to where the local Taliban are located and where they have hidden any weapons caches.

averros March 10, 2009 4:50 AM

Don’t forget to pay your taxes.

That's all there is to the whole battle robot brouhaha. Defense contractors want your money.

Politicos lap it up because the pool of idiots (sorry, "our troops") willing to sacrifice their hides for the sake of somebody's geopolitical ambitions – and for the profits of the defense contractors – is rapidly drying up.

Price/performance-wise, combat robots are total losers in the unstructured environment of the battlefield. They will be for a long, long time. So are we – for letting the scoundrels in charge run away with our money under the cover of this and similar inanities.

Buck March 10, 2009 5:23 AM

The buck stops where?

If a court of law finds a military unit committed murder, let the commander in chief hang. It's a war crime – while merely carrying out orders doesn't remove culpability, the supreme responsibility resides with the supreme commander.

Granted, this is hard on the CinC. But that’s where the responsibility lies.

Buck

Buck March 10, 2009 5:30 AM

Addendum: Unless it can be proven, beyond a reasonable doubt, that the soldier(s) involved were deliberately disobeying orders that they had received and understood.

And also keep in mind that situations concerning them should have orders. Lack of orders is a crime of omission, just as bad orders are a crime of commission. Both are equally criminal, and lack of suitable orders should be just as punishable.

Buck

PasswordCracker March 10, 2009 7:21 AM

I just hope they secure the robots' control software and communications channels with strong passwords; otherwise our enemies will merely hack the robots and turn them against us.

Oh yeah, what was the other story about….

Brian March 10, 2009 10:03 AM

"You may not like the US approach, but claiming they aren't trying to minimize civilian casualties to the degree possible is more than a bit disingenuous."

I claimed no such thing.

And it doesn't follow that because they've not "level[ed] everything in Afghanistan that looks sideways at them" they're "bending over backwards to limit civilian casualties".

vince March 10, 2009 10:32 AM

The only way to win this game is not to play it. If you limit the use of autonomous military robots to scenarios where the ethical questions would not pertain to the robot, viz. where you would be ready to nuke the area as your other option, then the problem goes away.

The trick is to consider the robot as a weapon, not as a soldier, and to use it appropriately. As a berserker. A juggernaut that can love neither its country nor its fellows. Like landmines, bombs, and chemical weapons, the ethical questions pertain to the use, assembly, and stockpiling of the ordnance — not to the design or function of the ordnance itself, however advanced it may be.

James March 10, 2009 12:00 PM

This ethical discussion was also covered in a Japanese anime called Gundam Wing. Overall a pretty good anime, but it has an overarching theme discussing war and war by robots, with the conclusion that when autonomous robots are the only things at risk during a war, it leads to more war.

There are lots of movies, tv shows, and games about this sort of thing that come out of Japan actually.

Guy March 10, 2009 1:26 PM

@Todd Knarr
You are correct, and that would be the point. Any logical structure of this sort can break down in the face of unpredictable reality. Take robots out of the combat equation all together. There have been a lot of comments about rules of engagement and the logical structures surrounding them. They break down, even when used by humans. What makes anyone think they would hold up when limited to machines? Lessee, what was that old maxim … To err is human, to really screw something up requires a computer.

Bub March 10, 2009 5:04 PM

The difference between autonomous, unmanned, and (the neologistic) manned is important. We actually probably have (or could have, anyway) some pretty good data about the firing decisions made by "unmanned" (i.e. remotely operated) and manned systems between the operations in Iraq, Afghanistan, and nearby. The unmanned situations are probably inherently messier, but who wants to take a bet that there are zero weapons releases by unmanned systems that aren't fully blessed by the chain of command? (It's a different question whether a manned system would make better decisions or not.)

Now, autonomous is a different beast altogether. Who wants to be the QA officer who signs the report that proclaims the system works as intended (and won't work as not intended)?

Steve Davies March 10, 2009 6:50 PM

Huge costs involved, and as for the chance of them working… Wasn't the Sheffield sunk in the Falklands through the mistake of its systems seeing the incoming Exocets, recognising them as non-"enemy" and so ignoring the event?

Multiply that by a lot (one for each type and version of robot), and the code line count by a large multiplier. YIKES!

Best chance of making it work better: put the defense contractors and the people who commissioned this nightmare on the front line. But don't do it at all. Of course there will be no way at all that these could or would be deployed in areas of civil unrest at home, so that's a relief… and older or cut-down versions won't be sold across the world to be used by all and sundry, so that's another huge relief. Sleep well.

Clive Robinson March 11, 2009 2:16 AM

@ Bub,

"Now, autonomous is a different beast altogether. Who wants to be the QA officer who signs the report that proclaims the system works as intended (and won't work as not intended)?"

I suspect that most QA & other officers would sign off on it without much question, as they currently do.

The reason is simple: both sets of officers (customer and supplier) know they cannot test any moderately complex system fully, as a matter of certainty.

What is signed off is often called the Final (or Factory) Acceptance Test (FAT).

The FAT is a document that is prepared by the supplier or customer from the customer requirements that make up the product design specification. It is usually a series of deterministic tests of functionality that a unit will either pass or fail clearly.

Importantly, it is signed by the customer to confirm that the product meets their requirements, and it is by this that they agree to pay.

In essence, a product is designed to meet the tests, not the specification from which the tests are derived.

Obviously, for simple mechanical items a series of tests to show that an item meets spec is not difficult to visualise or draw up, because in most cases the specification will have given the requirements to a sufficient level.

However, as a mechanical item becomes more complicated and has more "freedoms of movement", both the specification and the tests become more complicated. And in some cases things are not tested, or the tests are not appropriate (think of high-sided cars cornering at speed in a significant cross wind).

Obviously the number of potential test cases goes up with the level of complexity. However, less obviously, it tends to go up not linearly but as a high-order power law.

And there is the problem by which things go wrong in the field.

If you make the very simplistic assumption that the number of cases goes up with the number of relationships between "atomic" components or actions (i.e. as in "atomic operations", not as in atoms), then you get the equation:

Relationships = 0.5(n^2 - n)

However, the relationships of even simple components are usually quite complex. (Think of water flowing down a simple pipe: at certain flow rates the pipe can resonate like a pipe organ and can easily damage itself to the point of failure. Then what about the temperature, etc, etc.)

So even a simplistic view has an n^2 growth but in reality is going to be n^(2^x).
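As a quick, purely illustrative calculation of that growth, using only the simple pairwise formula above (the n^(2^x) figure is a rougher guess and isn't modelled here):

```python
def pairwise_relationships(n: int) -> int:
    """Number of distinct pairs among n components: 0.5 * (n^2 - n)."""
    return n * (n - 1) // 2

for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} components -> {pairwise_relationships(n):>12,} pairwise relationships")
# 10 -> 45, 100 -> 4,950, 1,000 -> 499,500, 10,000 -> 49,995,000
```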

It can quickly be seen that for even moderately complex items such as a car, finding all the test cases and testing them all is going to take longer than the expected lifetime of the product.

Now look at simple software with highly deterministic properties, such as an accounting program with around 20,000 lines of code.

Potentially each line affects the others in quite complex ways; are you going to be able to spot them all, let alone check them?

Now think about the underlying operating system (10,000,000 lines of code) and the tool chain used to turn the source code into an executable (another 1,000,000 or so lines).

Oh, and what about the underlying hardware? If you remember, Intel had a bit of a problem with a batch of CPUs where the FPU did not quite get its simple maths right…

At the end of the day, as a "signing off" officer you know you cannot fully test, so you do the obvious things around "known issues" and cross your fingers for the rest; such is the reality of life.
