IT Security and the Normalization of Deviance

Professional pilot Ron Rapp has written a fascinating article about a 2014 Gulfstream plane that crashed on takeoff. The accident was 100% human error and entirely preventable—the pilots ignored procedures and checklists and warning signs again and again. Rapp uses it as an example of what systems theorists call the “normalization of deviance,” a term coined by sociologist Diane Vaughan:

Social normalization of deviance means that people within the organization become so much accustomed to a deviant behaviour that they don’t consider it as deviant, despite the fact that they far exceed their own rules for the elementary safety. But it is a complex process with some kind of organizational acceptance. The people outside see the situation as deviant whereas the people inside get accustomed to it and do not. The more they do it, the more they get accustomed. For instance in the Challenger case there were design flaws in the famous “O-rings,” although they considered that by design the O-rings would not be damaged. In fact it happened that they suffered some recurrent damage. The first time the O-rings were damaged the engineers found a solution and decided the space transportation system to be flying with “acceptable risk.” The second time damage occurred, they thought the trouble came from something else. Because in their mind they believed they fixed the newest trouble, they again defined it as an acceptable risk and just kept monitoring the problem. And as they recurrently observed the problem with no consequence they got to the point that flying with the flaw was normal and acceptable. Of course, after the accident, they were shocked and horrified as they saw what they had done.

The point is that normalization of deviance is a gradual process that leads to a situation where unacceptable practices or standards become acceptable, and flagrant violations of procedure become normal—despite the fact that everyone involved knows better.

I think this is a useful term for IT security professionals. I have long said that the fundamental problems in computer security are not about technology; instead, they’re about using technology. We have lots of technical tools at our disposal, and if technology alone could secure networks we’d all be in great shape. But, of course, it can’t. Security is fundamentally a human problem, and there are people involved in security every step of the way. We know that people are regularly the weakest link. We have trouble getting people to follow good security practices and not undermine them as soon as they’re inconvenient. Rules are ignored.

As long as the organizational culture turns a blind eye to these practices, the predictable result is insecurity.

None of this is unique to IT. Looking at the healthcare field, John Banja identifies seven factors that contribute to the normalization of deviance:

  • The rules are stupid and inefficient!
  • Knowledge is imperfect and uneven.
  • The work itself, along with new technology, can disrupt work behaviors and rule compliance.
  • I’m breaking the rule for the good of my patient!
  • The rules don’t apply to me/you can trust me.
  • Workers are afraid to speak up.
  • Leadership withholding or diluting findings on system problems.

Dan Luu has written about this, too.

I see these same factors again and again in IT, especially in large organizations. We constantly battle this culture, and we’re regularly cleaning up the aftermath of people getting things wrong. The culture of IT relies on single expert individuals, with all the problems that come along with that. And false positives can wear down a team’s diligence, bringing about complacency.

I don’t have any magic solutions here. Banja’s suggestions are good, but general:

  • Pay attention to weak signals.
  • Resist the urge to be unreasonably optimistic.
  • Teach employees how to conduct emotionally uncomfortable conversations.
  • System operators need to feel safe in speaking up.
  • Realize that oversight and monitoring are never-ending.

The normalization of deviance is something we have to face, especially in areas like incident response where we can’t get people out of the loop. People believe they know better and deliberately ignore procedure, and invariably forget things. Recognizing the problem is the first step toward solving it.

This essay previously appeared on the Resilient Systems blog.

Posted on January 11, 2016 at 6:45 AM • 28 Comments

Comments

Bluescreenofdebt January 11, 2016 7:27 AM

This is the ‘drive it until it breaks’ mentality. Cars with a weird noise can cost hundreds of dollars for something that could have been fixed by using fuel cleaner (a recent experience). Next time I hear a weird noise I’m going to check the oil, give it a dose of fuel cleaner, and drive it for another 1000 miles to see what happens. Of course with a car the worst case scenario is usually that you pay a few thousand dollars for a new one so the calculation is fundamentally different.

IC Rules January 11, 2016 7:59 AM

IC rules by extension:

1) The laws / regulations / constitutional barriers are irrelevant (we are the new untouchables!)
2) Our digital knowledge is imperfect / incomplete (somebody is communicating somewhere without our knowledge!)
3) We are doing God’s work with wizardly tools (we may disrupt society as we see fit – rules are made for lesser mortals!)
4) I’m breaking the rules for the good of my country (somebody give me a Purple Heart!)
5) Rules don’t apply to 3 (4)-letter agencies (you can trust me!)
6) Whistleblowers are afraid to speak up (we ‘shlonged’ some folks!)
7) The hierarchical, top-down, ‘need-to-know-basis’ leadership is withholding their capacities and purposefully hiding systemic problems placing everyone at risk (what government hack?)

The psychiatrists would have a field day with this mob who internalise NCIS memes as reality. Megalomania is a good place to start:

Megalomania is a psychopathological condition characterized by fantasies of power, relevance, omnipotence, and by inflated self-esteem.

casey January 11, 2016 8:13 AM

This analysis is missing the impact of the normalization of compliance. It appears that the premise is that no negative outcomes result from following procedure to the letter. In fact, I believe it was this blog that had a link to a document called “How Complex Systems Fail”, which points out a common mistake in forensics that it labels “Post-accident attribution accident to a ‘root cause’ is fundamentally wrong”. The point is that there is a social drive to simplify the system to a level where we can point a finger and say ‘This was the problem’, when a proper understanding has multiple factors contributing to the outcome.

paul January 11, 2016 8:52 AM

Casey’s second sentence. “By the book” is seldom a term of praise, and “work to rule” is well known as a technique for deliberately slowing operations. In most organizations you have to know which rules you can break safely and when — which is something people are really lousy at. And by the nature of promotion in most organizations, “getting things done” earns the promotion, with only minimal attention to how things got done. So as you go up the management ladder you have a population of more and more people who have ignored rules in the course of their rise. (And the ones who ignored rules and failed horribly don’t rise, but that’s attributed to them, not to the strategy.)

Sometimes people get rewarded for ruthless adherence to security rules, but not often. (The only story that immediately comes to mind is one that was told of the encrypted radio link between Churchill and FDR, and of a private guarding the equipment room who refused to let an uncleared general have a look inside; when the general tried to get the private disciplined, a commendation was forthcoming instead.)

JdL January 11, 2016 8:54 AM

The rules are stupid and inefficient!

Though it’s listed first, I don’t think this point is given the attention it deserves. I don’t see a nod to it in the “list of solutions” at all.

Developed societies, America especially, have gotten into an unhealthy dance in which safety requirements far outstrip the reasonable. Predictable result: people not only skip unnecessary steps, but also lose respect for the entire process, which leads to skipping the really important steps. The usual response when such disrespect comes to light is to make the rules even more ridiculous, and the spiral repeats.

The solution I would stress is: pare down the list of requirements to the truly essential. This will gain respect for the process and lead to greater adherence to its rules.

Little Birdie January 11, 2016 9:11 AM

General aviation (GA) is very dangerous. Think of an Alaska bush pilot. They crash their small planes all the time. So do little planes, everywhere.

Almost always the cause is: OPERATOR ERROR. It’s an example of normalized deviance mentioned here.

Proof of acceptance is the response of the pilot group. They almost always respond: “’tain’t so.” But it is. Generally speaking, flying or being a passenger in a small plane is about as risky as, or riskier than, riding a motorcycle in the city; they always crash too. Aside from a personal warning to forgo flights in little planes, the message for the cyber pilots and passengers is still the same: it’s very risky, and much or most of the time the real cause of security failure is “operator error.” Frankly, I catch myself making “OEs” quite often. Think what happens on a worldwide scale.

How to build an operator-error-proof plane or computer?

That’s a tough one, for sure. One hint is in public air transportation, which is, in turn, very safe: redundant operators (two pilots), rigorous checklists, an agency (the FAA) with teeth to enforce standards, and as much idiot-proof hardware and software as possible.

One special problem for electronic device security: major government agencies dedicated to making communication as insecure as possible. And they have multi-billion-dollar budgets to make it happen. That’s normal, too.

AlanS January 11, 2016 9:30 AM

So what is deviant and non-deviant behavior? What is an acceptable and unacceptable practice? Is it something given or something accomplished? And if the latter, how is it accomplished?

I think Bruce is right that this is about using technology, but you have to dig deep into philosophy and sociology to understand what’s going on when we use technology and appreciate how deeply social human cognition and action are. Following a rule isn’t something that is externally given, a pre-existing fact. See Wittgenstein’s discussion of rule-following, language games and forms of life (for a short discussion and references see 3.5 and 3.6 here). In sociology this sort of analysis first surfaces in the work of Harold Garfinkel and colleagues in the 1960s and later bleeds into the sociology of scientific knowledge and science and technology studies. For discussion see Lynch (PDF). This is the field of work to which Vaughan (PDF) was also connected.

larkin3 January 11, 2016 10:32 AM

@JdL: “people…lose respect for the entire {rules} process…The usual response…is to make the rules even more ridiculous, and the spiral repeats.”

Correct. (BTW that also summarizes the entire U.S. legal system)

But your solution of ‘making rules for the making of rules’ suffers the same problem.

The real solution rests in direct, competent management of people… rather than the typical bureaucratic rule-making approach. This requires decentralization of large organizations into small units where individual managers are empowered to exercise effective span of control. Such decentralization is natural in small business and spontaneous human work groups, but is feared in large organizations — those at the top do not trust their subordinates because those mass subordinates cannot be directly observed/guided. Bureaucratic ‘rule-making’ is thought a safe, adequate substitute for genuine personal management/oversight, but it cannot possibly keep up with a dynamic organization in a constantly changing environment.

Human organizations and bureaucracies have been closely studied for a very long time — there’s nothing new to be learned overall. Unfortunately human nature remains static and dominant — same mistakes are endlessly repeated.

Ron January 11, 2016 10:46 AM

What a nice compact name for something we’ve all seen in our work, not only in security, but in any compliance focused or compliance associated endeavour.

K15 January 11, 2016 11:41 AM

And if there is nowhere to log your observations, with someone who will be motivated to look into it? (In which case, the only way to get through the day is to be as dull-witted as the next guy)

Daniel January 11, 2016 11:54 AM

So let’s look at another example of this normalization of deviance–same sex marriage. As study after study has shown, the rate of homosexuality is less than three percent. Indeed, the latest studies suggest 2% for both men and women.

http://www.medicaldaily.com/bisexuality-more-common-past-years-according-cdc-survey-368546

Yet despite the tiny numbers, a huge amount of social time, effort, and emotional angst has been spent normalizing homosexual behavior. Every one of Banja’s seven factors has at some time been used in American public discourse to normalize homosexuality under the rhetoric of same sex marriage.

What’s my point? My point is that “normalization” describes a process whose outcomes are not inherently good or bad. We only find the process of normalization problematic when it leads to outcomes we socially disfavor, such as shuttles blowing up. But when it leads to outcomes that we socially favor, hooray for normalization.

Of course, after the accident, they were shocked and horrified as they saw what they had done.

Then they were fools. Or is it the case that, in the future, generations will look back at same sex marriage and be “shocked and horrified” by what we have done?

Ray Dillinger January 11, 2016 12:41 PM

This goes all the way to the bottom level. Users and low-level admins constantly see error messages about things they can’t correct or don’t understand, in applications that mostly work. And over time they learn to ignore these messages and click ‘OK’ because that’s the only way they can continue. Which is a problem because learning to ignore these messages means they learn to ignore ALL messages.

It has reached the point where someone who lost their entire company email archive told me “I got an error message” and wanted me to fix the problem, but was then flabbergasted when I asked what the error message actually said. Nobody reads the error messages. They aren’t the people who are supposed to care what the error messages say. They’re just the people who are responsible for keeping the email server working…. Or, in this case, restoring it from backup, with some structural problem still out there, unidentified, waiting to strike again.

And if this is what I find out in the business world, just imagine what most people are letting slide on their own desktop machines.

David M January 11, 2016 1:19 PM

I think I can say that without a doubt the leading “normalization of deviance” example would be everyday automobile driving.

At least in America, nearly every driver takes horrendous risks every time they get behind the wheel (and I include myself in that statement). A significant number of us routinely speed at 5 to 10 miles over the speed limit, some even faster. Some of us never use a turn signal. Many drivers ONLY drive in the fast lane. We drive without wearing seatbelts, with loose articles in the car just waiting to become a deadly projectile, on bad tires, with bad brakes, broken side mirrors, no brake lights, lights off during bad weather. Tailgating at highway speeds on slippery roads is common, as is taking curves at dangerous speeds. Running red lights? Pretty much at EVERY light change. Stop signs? A waste of money for the most part. Damn, I wish the car wouldn’t bounce so much, it makes typing hard. Ultra-bright headlights? Cool is better than safe (amirite?). Gotta be at the front of the line to save that extra half second. Let someone else in line in front of ME? forgetaboutitnotgonnahappen. Man, I’m really buzzed tonight, I’d better be “extra careful”…

I’m amazed any of us survive it…

T. G. Reaper January 11, 2016 1:31 PM

@David M:

“I’m amazed any of us survive it…”

Well, technically, you don’t. See you soon.

AlanS January 11, 2016 2:40 PM

On reading Rapp’s discussion I don’t find that his use of Vaughan’s notion of normalized deviance offers much insight into how the accident he describes happened. What she describes and what he describes are completely different. There’s a lot of interesting and subtle social science going on in Vaughan’s writing that just seems to go over his head, because he doesn’t appear to be familiar with the literature she references. He’s a pilot, not a sociologist.

These statements are at odds with Vaughan’s analysis:

  • Rapp: the pilots were engaged in a “litany of errors and malfeasance”
  • Schneier: “flagrant violations of procedure become normal”
  • Banja: “The rules are stupid and inefficient!”

And you won’t stop the process Vaughan describes by:

  • Rapp: “a willingness to confront such deviance when it is seen, lest it metastasize”, because it is only ‘deviance’ in hindsight.

The whole point of Vaughan’s analysis of the Challenger disaster is that it is a critique of the official reports of supposed behavioral malfeasance and the assignment of blame. It is easy to tell a story of malfeasance when you know the outcome. If you go back and immerse yourself in the sociology of the situations leading up to the event, suspending what you already know, then things might look rather more complex and difficult.

For those interested in the sociology I suggest reading the following account that covers the NASA study and two others: Diane Vaughan 2002 Signals and Interpretive Work: The Role of Culture in a Theory of Practical Action (PDF).

This is not a description of “malfeasance” or “flagrant violations of procedure”:

Analogical with the pattern in uncoupling, as incidents occurred the managers and engineers did not define the technical design as a serious problem because the social context and patterns of information affected interpretive work: As decisions were being made, signals of potential danger appeared mixed, weak, and routine. Mixed signals were information indicating something was wrong, followed by information indicating all was well. For example, the first incident of a technical anomaly on the Solid Rocket Boosters (a signal of potential danger) was examined, the cause identified, and the problem corrected. Then for five subsequent flights, there were no anomalies on the boosters (signals that all was well). A weak signal was one that at the time had no apparent clear and direct connection to risk and potential danger, or one that occurred once but the conditions that created it were viewed as rare and unlikely to occur again. Routine signals were anomalies that occurred repeatedly, but were expected and predicted as a consequence of a new safety procedure, and that engineering analysis supported as tolerable. Analogically, like the person left behind in an intimate relationship, only after the disaster were the managers and engineers able to look back and see clearly the meaning of the signals of potential danger that were there all along. (emphasis added)

Technological uncertainty created a situation where having problems and anomalies on the shuttle was itself a taken-for-granted aspect of NASA culture. The shuttle technology was of unprecedented design, so technical deviations from performance predictions were expected. Also, the forces of the environment on the vehicle in flight were unpredictable, so anomalies on returning space flights were frequent on every part on every mission and therefore routine and normal. Within this context of taken-for-granted problems, having problems on the SRBs was not a deviant event: Problems were normal and expected. Changes in the quality and quantity of the problems on the SRBs occurred over several years, introduced gradually into a cultural context where problems and change were taken-for-granted. Had all the damage to the boosters occurred on one mission, or on a series of missions in close succession, the sudden change might have been the attention-getting strong signal necessary to produce an official redefinition of the situation that the booster design was not an “Acceptable Risk.” As it was, signals of potential danger occurred in an ongoing stream of problems that tended to obscure change. History and experience mattered to the frame of reference the work group brought to the interpretation of information in a second way. The engineering rationale developed to justify the first anomaly became the precedent for accepting anomalies in the future. That first engineering risk assessment was foundational, for the first technical analysis was elaborated in greater detail with tests and analysis each time, so that the past and past decisions became the basis for subsequent analysis and launch decision making. The technical rationale for launching with anomalies was reinforced and made stronger in the process. (emphasis added)

Note the similarity to Kuhn’s analysis in The Structure of Scientific Revolutions: the interpretative work around anomalies, the elaboration of the community’s accepted problem solutions, etc. 

tyr January 11, 2016 3:26 PM

@Little Birdie

General aviation only needs one rule.

“There are old ones and bold ones, but there are no old bold ones.”

There is also a vast gulf between leadership and what is called management. You cannot become a leader by Peter Principle advancement to your incompetence level.

Driving by using the rear-view mirror is a horrible way to approach security.

David Leppik January 11, 2016 4:56 PM

Life is always a struggle between forming good habits and habituation. Humans constantly struggle to pay attention to things that are important but not immediately rewarding.

In IT, people struggle to find the right log/alert level: one where the signal-to-noise ratio is high enough that false positives (e.g. failed login attempts, 404 page-not-found errors) are infrequent enough not to get ignored.
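
For instance, here is a minimal sketch (in Python) of that kind of threshold tuning; the event name, window, and limit are purely illustrative, not a recommendation. Routine noise stays in the logs, and a human is paged only when the rate of a noisy event becomes unusual.

    # Sketch: page a human only when a noisy event exceeds `limit`
    # occurrences within `window` seconds; below that, just log it.
    from collections import deque
    import time

    class RateAlert:
        def __init__(self, limit, window):
            self.limit = limit        # occurrences tolerated per window
            self.window = window      # window length in seconds
            self.events = deque()     # timestamps of recent occurrences

        def record(self, now=None):
            now = time.time() if now is None else now
            self.events.append(now)
            # drop occurrences that have aged out of the window
            while self.events and now - self.events[0] > self.window:
                self.events.popleft()
            # True means "page someone"; False means "log and move on"
            return len(self.events) > self.limit

    # e.g. tolerate background failed logins, but page on a burst
    failed_logins = RateAlert(limit=20, window=60)
    for i in range(25):
        if failed_logins.record(now=1000.0 + i):
            print("page on-call: failed-login burst")
            break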

In personal life, people struggle between immediate rewards and long term rewards. For example, snacks vs. a desirable weight; cash now vs. retirement later. And of course paying attention to a child who is extremely dear but wants you to be interested in things that cannot hold an adult’s attention.

Clive Robinson January 11, 2016 7:04 PM

@ Bruce,

The normalization of deviance is something we have to face…

Err, it’s not exceptional; although most don’t realise it, it is actually how we all live our lives.

Normalization of deviance is also an apt description of how society evolves.

Those of a liberal nature today are seen as deviant by those of a conservative nature. In time, though, the deviancy becomes the accepted norm. With a little more time those radical deviant liberal ideas become the confining view of conservatism. That is the way society rolls.

The process is thus both fundamental to society and, importantly, agnostic to the nature of the outcome (good or bad).

In times past surgeons more or less walked into theater, hacked the body on the table, and chucked instruments etc. around. This led to procedural mistakes and the odd instrument etc. being sewn into a patient on closure. This eventually gave rise to the likes of the Charge Nurse, whose job it is to prep the instruments and other surgical items and count them out for use and count them in again as the operation proceeds, so the patient does not get any unintended extras. You now find procedure checklists for the actual surgery being used, and lives are saved because of it.

But the thing about procedure checklists is that they evolve through use and improved instruments. So they can be seen as an optimising process which also helps save lives, as the patient is open for less time and thus also receives less harmful anaesthetic, with improved recovery rates and less time in hospital at risk of serious infection from the likes of MRSA etc.

JPG suffers from regulation frustration January 11, 2016 7:19 PM

There are always tradeoffs that must be made between efficiency and effectiveness. In this case, effectiveness could be measured by ensuring the checklists are followed by pilots, using some kind of post-review and random (or targeted) sampling. Efficiency is not an issue, as the time required to perform the checklist is typically minimal. (I’m making some assumptions, as I have never flown a plane.)

Now introduce the regulatory world and apply it to an industry with IT and OT security regulations. We are so afraid of a failure that we have created a bunch of requirements to ensure certain mistakes aren’t made. 99% of the time (not 100%), an error is inconsequential. It’s regulatory, so we have to prove that we met every regulation. If you can’t prove it, you didn’t do it. No room for professionalism and trust. Yes, we were specifically told by an auditor to remove the term professional judgement from our procedures.

As a result, the rules have become so inefficient that finding a way to get it done more efficiently, while proving ‘something’, becomes the job. We have made finding deviant shortcuts on the way to producing 400 pages of evidence a necessary job skill. And once the attitude is in place, it then applies everywhere. Essentially, the workload for a limited set of resources has doubled or tripled, and the focus is on compliance. To hell with actual security.

Now add a regulatory requirement that you must document and follow a process that meets the requirement. An error in your process, not just failing to meet a requirement, results in a violation. As a result, procedures are now necessarily written not to expose yourself to a violation.

Couple regulation-induced, waste-enhanced deviancy with a process that is written so that you cannot violate it. Do you expect security out of this? Risk analysis becomes number crunching rather than a security evaluation written for the specific environment. Instead of introducing security-enhancing checklists built around a good security program and paying people to be knowledgeable, we have instead engineered into our processes the seven identified factors that normalize deviance.

AlanS January 11, 2016 8:55 PM

more from an interview with Diane Vaughan:

The Challenger case is a good example. Initially, it appeared to be a case of individuals – NASA managers – under competitive pressure who violated rules about launching the shuttle in order to meet the launch schedule. It was the violation of the rules in pursuit of organization goals that made it seem like misconduct to me. But after getting deeper into the data, it turned out the managers had not violated rules at all, but had actually conformed to all NASA requirements. After analysis I realized that people conformed to “other rules” than the regular procedures. They were conforming to the agency’s need to meet schedules, engineering rules about how to make decisions about risk. These rules were about what were acceptable risks for the technologies of space flight. We discovered that they could set up the rules that conformed to the basic engineering principles that allowed them to accept more and more risk. So they established a social normalization of the deviance, meaning once they accepted the first technical anomaly, they continued to accept more and more with each launch. It was not deviant to them. In their view, they were conforming to engineering and organizational principles. That was the big [discovery]. I concluded it was a mistake, not misconduct. (emphasis added)

But note that in any activity there are ongoing events that require judgement as to whether they can be explained and incorporated into routine knowledge/practice or whether they are real anomalies that require a major rethink and new practices. The judgements are embedded in the local culture, social relationships, practices, experiences, etc. There’s no magic, super-duper algorithm that’s going to tell you how to ‘objectively’ interpret an event as something that’s routine versus something that isn’t. There’s uncertainty. Communities are quite conservative, so they tend to stick with what has worked in the past and to minimize the adoption of dramatic changes in existing practices. Those sorts of dramatic deviations from normal practice are themselves usually risky. As Vaughan observes, the O-ring issues were spread out over years rather than bunched in a few flights one after the other, which might have heightened the perception that there was something involved that couldn’t readily be assimilated to existing knowledge/practices and that the threshold of unacceptable risk had been crossed.

Buck January 11, 2016 9:24 PM

@Clive Robinson

The process is thus both fundamental to society and, importantly, agnostic to the nature of the outcome (good or bad).

Yes, but how did we get from there to here:

This eventually gave rise to the likes of the Charge Nurse, whose job it is to prep the instruments and other surgical items and count them out for use and count them in again as the operation proceeds, so the patient does not get any unintended extras. You now find procedure checklists for the actual surgery being used, and lives are saved because of it.

What are we really optimizing for? Perhaps it was simply cheaper economically for surgeons to hire a Charge Nurse than it was to constantly be replacing accidentally embedded medical equipment (externalities be damned)…

Not that I actually believe this could have been the case, but I think there’s a big difference between people clearly dying because of ($_medical_tool left-in-the $_body_part > 50% chance of unnecessarily early death)
vs.
((A*$_credit_score + B*$_inheritance + C*$_social_reputation – D*$_accent – E*$_country_of_origin – F*$_skin_color)^$_unknowable_algorithm = $_owning_a_home*$_feeding_your_family*$_educating_future_leaders)

Not entirely tongue-in-cheek here, I think there’s a serious point to be made about force-feedback optimizations when human emotions are entirely left out of the loop.

Clive Robinson January 12, 2016 3:44 AM

@ Buck,

Yes, but how did we get from there to here:

Long answer short: “the carrot and stick of society”.

That is, “incentives and punishments” that come from various sources, but media, lawyers, politicians and the legislative process tend to be the publicly documented areas we can look at.

But as you indirectly note through your formula, there are other “carrots and sticks” which tend not to be publicly documented.

One is how an individual sees or does not see themselves within the community that is their current society.

Research is usually inherently risky; that is, new methods can have unpredictable results, but it also raises new risks with old methods. Thus there is a risk balance that moves. When dealing with sentient living beings there is a perverse incentive to take risk inappropriately, which is metarisk.

Look at it this way: in the short term, if there is a greater chance you are going to die by not doing something than by doing something inherently risky, then you will do the inherently risky thing. That is, if you are in a burning building you will be rather more likely to jump out the window than not. Because humans are generally very poor judges of risk, the riskier option may look less risky and be taken.

This has a knock-on effect: the obvious test case for a new method is one which will have the minimum of existing inherent risks, which is a healthy individual. The most likely individual to volunteer to be a test case is a person whose inherent risk is so high that the risks of a new method appear negligible; such people are actually the worst test cases, as they are the least likely to survive due to complications…

Which brings us to your point of,

I think there’s a serious point to be made about force-feedback optimizations when human emotions are entirely left out of the loop.

You can see the play-off of this in medicine all the time, and it does have a limiting effect on existing methods, let alone new methods. An example of this is eating-disorder surgery such as gastric bands.

Your risk of dying from anaesthesia during surgery goes up with both your actual and relative quantities of fat. This is because the anaesthetic is readily taken up by fat, and thus larger quantities of anaesthetic have to be used to be effective, which adversely affects the respiratory system.

Thus the physically larger an individual is, the more at risk they are at a lower percentage of body fat [1], simply because the actual quantity of fat is higher at any given percentage.

Which means that a requirement to be over a given percentage of body fat for such surgery makes the surgery inherently a lot riskier for physically large people than for small. The result is that if you are above a given physical size there is a cut-off point where the risk is too high to perform the surgery, even though there is a significant and real need for it.

You thus get a tension in the system, where a doctor is not allowed to pass their patient to the consultant surgeon until the patient is beyond a certain percentage of body fat. And even if the surgeon wants to perform the operation “for the patient”, they cannot, because they know the anaesthetist will not accept the risk “for their and the hospital’s reputation”.

[1] Note I say percentage body fat, not BMI. The medical profession are well behind the curve on this. The BMI came about from a need for a simple method to “curve fit” data in actuarial tables for life insurance. The simple formula they came up with is “weight / (height x height)”, which after a moment’s thought is obviously an iffy approximation, as it requires you to change density the taller you get: mass is proportional to volume at any given density, and volume would be proportional to height cubed, not height squared. The life insurance industry came up with the formula over a century ago. Since then average height has gone up by a foot in men and life expectancy for the “working” class has increased by up to 40%.
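
To put that scaling objection as a rough formula (a back-of-the-envelope sketch; k is an assumed constant build/shape factor and ρ an assumed constant body density):

    \mathrm{BMI} = \frac{m}{h^{2}}, \qquad m \approx k\,\rho\,h^{3} \;\Rightarrow\; \mathrm{BMI} \approx k\,\rho\,h

So two people of identical build and composition but different heights end up with different BMI values, which is the “change density the taller you get” objection above.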

moo January 12, 2016 9:12 PM

@Andy:

MIT prof Nancy Leveson wrote a pretty good book called “Engineering a Safer World”, which is available for free in PDF form from MIT Press. Chapter 5 of the book is entirely devoted to a detailed examination of that friendly-fire incident. It is very much worth reading.

AlanS January 13, 2016 3:01 PM

This paper is well worth reading for an insight into the limits of risk assessment and the prevention of system failure:
John Downer 2010 Anatomy of a Disaster: Why Some Accidents Are Unavoidable.

The paper looks at two critiques, Normal Accident Theory and Epistemic Accident Theory, of the ‘foresight-failure’ of Disaster Theory (the theory espoused by Bruce and others above),

…which implicitly assumes that accidents, whatever their cause, are preceded by warning signals, and then looks further downstream to the social question of why those warning signs go unheeded.

Yes, Disaster Theory is applicable in many cases, including, based on Rapp’s comments, the 2014 Gulfstream crash. But Disaster Theory assumes:

If a bridge collapses or an airplane fuselage breaks apart because it experiences forces beyond those its designers anticipated, then it is easy to interpret the accident as an engineering error. Such accidents invariably reveal flaws in engineering assumptions, models, or data, but engineers work in an empirical realm of measurable facts. Facts are knowable. Facts are binary. True or false. Ontologically distinct. So when the facts are wrong, this wrongness, and any disaster it contributes to, can be viewed as a methodological, organizational, or even moral failing: one that proper engineering discipline should have avoided and one that social theorists might one day prevent.

But

Porter (1995) calls this ‘the ideal of mechanical objectivity’. It is intuitive, convincing, and entirely rejected by most epistemologists….Indeed, beginning with Wittgenstein’s later work, logical philosophers started rejecting the entire enterprise. ‘Finitist’ philosophers, such as David Bloor (1976), and historians, such as Thomas Kuhn (1962), began to argue that no facts — even scientific or technological — are, or could be, completely and unambiguously determined by logic or experiment.

The upshot of this is that:

Disaster Studies, as Pinch (1991: 155) puts it, needs to ‘bite the bullet of technical uncertainty’. Turner recognized that engineering knowledge is based in simplifications and interpretations, (Turner 1976: 379; Weick 1998: 73), but assumed that these simplifications ‘masked’ warning signals. The truth is more subtle, however. It is not that simplifications ‘mask’ warning signals, as Turner suggests, but that — on a deep epistemological level — there need be nothing that makes ‘warning signals’ distinguishable from the messy reality of normal technological practice. In other words: that there might be nothing to mask. If it is impossible to completely and objectively ‘know’ complex machines (or their behaviors), then the idea of ‘failing’ technologies as ontologically deviant or distinct from ‘functioning’ technologies is necessarily an illusion of hindsight. There is no inherent pathology to failure, and no perfect method of separating ‘flawed’ from ‘functional’ technologies.

Also see Duhem-Quine Underdetermination Thesis.
