Schneier on Security
A blog covering security and security technology.
April 5, 2012
JetBlue Captain Clayton Osbon and Resilient Security
This is the most intelligent thing I've read about the JetBlue incident where a pilot had a mental breakdown in the cockpit:
For decades, public safety officials and those who fund them have focused on training and equipment that has a dual-use function for any hazard that may come our way. The post-9/11 focus on terrorism, with all the gizmos that were bought in its name, was a moment of frenzy, and sometimes inconsistent with sound public policy. Over time, there was a return to security measures that were adaptable (dual or multiple use) to any threat and more sustainable in a world that has its fair share of both predictable and utterly bizarre events.
The mental condition of airline pilots is a relevant factor in their annual or bi-annual physicals. (FAA rules differ on the number of physicals required, based on the type of plane being flown.) But believing that the system is flawed because it didn't predict the breakdown of one of 450,000 certified pilots is a myopic reaction.
In many ways, though, this kind of incident was anticipated. The system envisions pilot incapacitation -- physical, mental, or possibly, as in the campy movie ''Snakes on a Plane,'' a slithering foe.
That is, after all, why we have copilots.
The whole essay is worth reading.
Posted on April 5, 2012 at 6:19 AM
• 47 Comments
Sadly, it matters not one iota how intelligent this article is or how much sense it makes. Ms. Kayyem has predicted a foregone conclusion in the first paragraph..."There will likely be congressional hearings, lawsuits, and new administrative rules."
Few will see this incident for simply what it is: a human being who had a mental breakdown. Even fewer will see beyond the finger-pointing, political posturing, and wailing to recognize that the redundant system (the copilot) worked, and, as UA93 taught us, passengers will no longer sit idly by as in-flight incidents unfold.
But what if a case of food poisoning in the in-flight meals incapacitated both pilots? Shirley that's a risk we aren't anticipating!
"Shirley that's a risk we aren't anticipating!"
That's why they have autopilots, which I admit sometimes need to be reinflated. And quit calling me surely.
Does anyone here speak "Jive"?
@Name: That's why pilot and copilot are not allowed to have the same choice of food. Or, similar, why the engines don't get deep service at the same occasion, so that errors in handling don't affect both for the same flight.
In addition to the copilot, there was a pilot in the passenger seating who helped out.
Sad they can't see this as a win.
At the very least this is a good argument for screening pilots as much as anyone else. The "I'm flying the plane" rationale for not doing so isn't unconditional.
In this case, it ended well: the "good" pilot happened to be the one on the same side of the locked door as the controls, while the "bad" pilot was locked out. Could it not easily have happened the other way round, particularly if this pilot had figured out the good one's plan earlier?
Not hard to imagine the positions reversed, with just the other guy locked in the cockpit where the locked reinforced door protects him instead...
That always struck me as a weakness in the "secure" door: hijacker jumps up, grabs a hostage, demands access to the flight deck. Once in, door locked ... what can the passengers do, try taking hostages themselves? It might with luck come in useful on another Flight 11 ... but much, much worse than useless on another Flight 93.
No, "I'm flying the plane" is a pretty damn good reason to not bother running them through the screening passengers receive at the airport. The pilot doesn't need a weapon to incapacitate a single other person in a locked room (Especially with the element of surprise, against someone probably not trained in unarmed combat), and he doesn't need a weapon to take over the plane (he already has control).
This incident was a pilot who suffered a mental breakdown of some sort. If he had been a terrorist, this would have ended much worse.
Pilots should, at most, go through background checks, and be trained in self defense. Since it's much harder to have both people in the cockpit be terrorists than just one (schedules, etc), this alone mitigates the threat. (Because the case where both people in the cockpit are terrorists is a case where you've already lost).
With something between one in six and one in four adults in the western world suffering mental health problems requiring medical assistance, the fact that we have had so few mental breakdowns in what is considered a very stressful job tends to suggest the system is working.
However it needs to be said this is not the first time a pilot has had a mental breakdown, and I believe more than one pilot has crashed a plane in the past thirty or so years.
The simple fact is the law requires there to be a single person in charge of a commercial plane, and under various circumstances a co-pilot etc. who is subordinate to the pilot unless the pilot becomes incapacitated in some way.
Even if we had the technology for a fully automated plane, would it be allowed?
Simple answer is no and the reasons are more human than technical.
I'm not sure what can be done to stop a "rogue pilot" that would not open up other, riskier problems.
No matter what politicians may want to say, there are problems with every system, and every system will fail at some point no matter what you do.
Sometimes it's best to just acknowledge the problem and carry on, otherwise you just end up moving the deck chairs on the Titanic.
If we're going down, then we're going down with deck chairs arranged PERFECTLY!
The solution is to get rid of the human pilot entirely. A computer does 90% of the work already and every single shred of evidence indicates the software does the other 10% better, too.
"Even if we had the technology for a fully automated plane, would it be allowed?"
Yes. I believe that the major issue is not the public but the pilot unions and the political parties that rely on their donations for support. It really has nothing to do with safety and everything to do with jobs. Aviation is not so fundamentally different an industry as to be immune from the pattern of automation that has gone on in other industries. It may take a generation or two, but I believe yes is the inevitable answer.
Name: When was the last time you saw an in flight meal? That disaster plot is soooo 1970s ...
Daniel: Better watch out for the . that should have been a , because the plane is going down because of it. In other words, everything that is a product of human kind is imperfect, including computers or, in the future, computers created by computers. At least with human beings, they can (somewhat) be negotiated with.
"... annual or bi-annual physicals": Are the physicals really only every two years? If so, I'd say that was too far apart. (If the article meant "twice per year", then the correct term is "semi-annual" - an increasingly common error.)
"In other words, everything that is a product of human kind is imperfect"
At some remove, some human flies the plane. The issue is pure math: on average, do human pilots make fewer or more mistakes than software designers? The answer over the last 50-plus years has been firm and unequivocal: software designers make fewer mistakes. That doesn't mean they make zero mistakes, only that they make relatively fewer. Moreover, software designers are also significantly less costly in terms of training and education than a pilot.
A fully automated system is both cheaper and safer. The tragedy of a human being in the cockpit is that while it makes us feel safer, it is objectively less safe. All too human, indeed.
I trust most pilots before I trust most software developers, and I say that as a software developer. (And, worse, software user ...) I think the odds of a pilot doing an Osbon are considerably less than those of a computer system throwing a blue screen of death mid-flight. Especially since most software development is considerably less rigorous than the white-short-sleeve-with-tie engineer ideal NASA used to epitomize.
@Daniel: Unlike computers, human pilots have the ability to adapt to an unexpected situation.
For example: What do you do when you discover that fuel is leaking, and you don't have enough endurance left to divert to a safe landing site?
What do you do when an engine failure takes out both primary and backup hydraulic systems, disabling your control surfaces?
What do you do when both engines flame out simultaneously, knocking out power to the cockpit?
What do you do when the landing gear indicator says it deployed, and yet it didn't? Can your program even detect this scenario?
People have survived all of the above incidents, in no small part because of live human pilots who were able to improvise.
@James Sutherland : That always struck me as a weakness in the "secure" door: hijacker jumps up, grabs a hostage, demands access to the flight deck. Once in, door locked
Whoever is in the cockpit has to be willing to lock the door with the hijackers on the outside, knowing that at least some of the people outside will likely be killed.
But consider that if they don't lock the cockpit, they likely will *all* be killed.
I sincerely hope that current pilot training includes recognizing these facts and being prepared to do what it takes to protect the plane.
The passengers' reaction shows that "crowdsourced security" is well suited to dealing with this (hopefully) very rare condition. Far better and more adaptable than any of the TSA's shoe removals, porno scanners, and no-fly lists.
Unlike computers, human pilots have the ability to adapt to an unexpected situation.
This, Daniel, is precisely why airplanes will never dispense with human pilots. A computer program is limited in its choice of actions to those its designers foresaw. Human beings are imperfect, and thus can never anticipate the broad range of failure possibilities that can occur in order to program such a computer.
To make a comparison to airplane security (theatre), a computer operates on the TSA model, while a human operates on the intelligence and observation model.
Just be glad the pilot wasn't carrying his Congressionally-permited gun into the passenger cabin.
Redundancy is not inefficiency when the stakes are high.
This is a story of security practices and techniques working as intended. Be glad that no one was hurt and that Clayton Osbon is getting the medical help he needs.
@Lindley The Federal Flight Deck Officer program (and other armed flyer programs) considerably improve airline safety by introducing a variable a terrorist cannot control or predict. FFDO pilots receive substantial additional screening and very intense training. This pilot was not an FFDO. Even if an armed flyer (such as a diplomatic courier, air marshal, trained local law enforcement officer, FBI or USSS) were to go rogue in the passenger cabin, the worst that could happen is that they could kill some people. No chance of taking over the aircraft in this day and age. You may also want to learn to spell "permit."
@Name (food poisoning -> both pilots)
That's why the pilots don't eat the same meal. Also why certain early airborne command post flights ("Looking Glass") required pilots to wear a patch over one eye, so that they could fly with the spare after a nuclear airburst blinded them.
@LinkTheValiant and Aaron.
Sloppy thinking. Of course a human being can react to unexpected situations. That cuts both ways. For every Sullenberger who saves the day I can find two examples where the human pilot crashed the plane due to his reactions. There is no disputing that in a fully automated plane some accidents will happen that wouldn't have happened if a human pilot had been in the cockpit. But there is also no disputing that accidents today happen (AF447) that wouldn't have happened if they had been fully automated.
Again, on average fully automated cockpits are the safest. It is only the pilot as hero mentality that attracts us. But it has no scientific foundation. It is death in protection of myths.
I think Daniel is so wrong here that it hurts... software is nowhere *close* to being able to replace human pilots.
At best, current software can handle the standard day-to-day situations properly, and maybe with more reliability than most human pilots. But any one of a thousand problems can occur that the software will not be able to deal with, and that's when you need a human in the pilot seat to assess the situation and react. Even situations as common and mild-sounding as "bad weather" currently require the human pilots to take over. Even during software-assisted takeoff and landing, the human pilot and co-pilot still perform dozens of small tasks themselves. Even if most of these tasks are providing inputs to other electronic systems, and could in principle be fully automated, we are not yet at that point. It will be a while (10 more years? 20?) before your commercial airliner can perform a "fully automated" takeoff, flight and landing without human intervention.
Even if we had software expert systems which were provably better at this "handle unexpected situations" stuff than human pilots, you still have to convince the passengers to trust their lives to it. Passengers trust human airline pilots, and we have decades of experience and millions of successful flights demonstrating that the humans are up to the job. It will be a while before we have enough evidence of automated flying successes to convince passengers to trust their lives to it. We'll probably need a transition period of at least a decade, where we still have human pilots aboard even if they don't do anything *even in an emergency* but twiddle their thumbs and watch the computer handle the situation. And every time there's an incident where a human pilot jumps in and overrides the AI's emergency decision-making, I predict that it will set this clock back by at least a few years.
Also, I should mention that I write software for a living, and that average software is estimated to contain about 1 bug per 100 lines of code. The software required to fly a modern commercial airliner would add up to hundreds of thousands, perhaps millions, of lines of code (seriously). I work on multi-million-line codebases that have 2- to 4-year product lifecycles, and we fix literally tens of thousands of bugs in them before shipping. And that's just for video games. Even in safety-critical software, there will always be bugs. It's too expensive and difficult to completely prevent them (unless you are NASA and can afford to spend 1000x the commercial average on each line of your code).
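The arithmetic behind that estimate is easy to sketch (purely illustrative: the defect rate and line counts below are the rough figures from this comment, not measured industry data):

```python
# Back-of-the-envelope residual-defect estimate, using the rough
# figures quoted above (~1 bug per 100 lines of code, i.e. 10 per
# KLOC, and an avionics codebase of about a million lines).
# These numbers are the commenter's estimates, not measured data.

def expected_bugs(lines_of_code, bugs_per_kloc):
    """Expected residual defects, assuming a uniform defect density."""
    return lines_of_code * bugs_per_kloc // 1000

avionics_loc = 1_000_000   # "perhaps millions of lines of code"

print(expected_bugs(avionics_loc, 10))    # 10000 -- commercial-average density
print(expected_bugs(avionics_loc, 0.01))  # 10.0 -- hypothetical 1000x-better density
```

Even granting a process three orders of magnitude better than the commercial average, the expected residual bug count for a million-line codebase is still nonzero, which is the point being made.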
We will eventually reach the point where fully-automated systems can fly planes routinely and can recover from most emergencies well enough that the cost/benefit of having human pilots aboard will not pay off anymore, but I think we're still decades away from that point.
Luckily the co-pilot was able to get the psychotic captain out of the cockpit and lock him out. I wonder how things would have turned out if the deranged pilot had been one of those permitted by the FAA to carry a handgun on board. All in favor of armed pilots, please step forward to the check-in counter.
I never said that I thought software was capable *today* of replacing human pilots. You made that up as a straw man. I simply asserted that the long-term solution to the problem outlined by Bruce in his OP was a human free flight-deck. In fact, my exact words were "it might take a generation or two".
there is always Otto, the blow-up pilot ...
> The Federal Flight Deck Officer program (and other armed flyer programs) considerably improve airline safety ...
There just isn't enough evidence to make an absolute statement like that; given that there don't seem to be many terrorists attacking US planes, it's unlikely that there will ever be much evidence.
You are correct that it adds an additional variable; James R. Lindley is also correct, that additional variable can move in an undesired direction.
So much wailing and gnashing of teeth over a system that has statistically proven itself 'acceptably safe' and resilient for decades. While it is a good thing that the aviation industry has a culture of analysing and incrementally learning from every incident, the level of angst and recrimination around this event seems disproportionate next to the everyday carnage on the roads, especially in countries like India. While the endless pursuit of perfection is to be admired and encouraged, the angst over the perceived risk of current human-piloted aviation needs to be brought down to a level matching the real risk.
Constant, iterative, improvement is good, throwing the baby (or pilot) out with the bath water isn't. And isn't going to happen in our life-times.
@Daniel: Are you sure about your claim that AF447 wouldn't have failed under automated control? According to wikipedia, "While the incorrect airspeed data was the apparent cause of the disengagement of the autopilot, the reason the pilots lost control of the aircraft remains something of a mystery, ..."; i.e. the automated system had already given up and handed control over to the humans, they just failed to guess correctly as to what to do.
I think a lot of the "pilots are better than software" crowd forget that the latest planes are already flying on just software, in the sense that a system-level failure cannot be saved by the pilots. It's already the case that you are trusting the software.
There are several other important points to make.
First is that life critical software is done by software engineers, *not* software developers. The whole process has zero resemblance to commercial off the shelf development cycles.
Second if there is a bug in the software, it can be fixed in all aircraft almost immediately. Compare to yet one more page in an already unreadable large flight manual.
Despite everything there are still a lot of older planes. Even if we wanted to go all auto pilot, most aircraft flying today can't. Only the very latest are close.
Like driverless cars, I think in the end it will be the insurance companies that really start pushing this.
>"... annual or bi-annual physicals": Are the physicals really only every two years? If so, I'd say that was too far apart. (If the article meant "twice per year", then the correct term is "semi-annual" - an increasingly common error.)
He probably meant semi-annual, but some aviation physicals are required bi-annually, specifically a 3rd class medical (private pilot) if over the age of 40.
>I never said that I thought software was capable *today* of replacing human pilots. You made that up as a straw man. I simply asserted that the long-term solution to the problem outlined by Bruce in his OP was a human free flight-deck. In fact, my exact words were "it might take a generation or two".
I didn't see those "exact words" anywhere in your posts. What I did see, and what I was reacting to, were the following statements:
"A computer does 90% of the work already and every single shred of evidence indicates the software does the other 10% better, too."
"The answer over the last 50-plus years has been firm and unequivocal: software designers make fewer mistakes."
"There is no disputing that in a fully automated plane some accidents will happen that wouldn't have happened if a human pilot had been in the cockpit. But there is also no disputing that accidents today happen (AF447) that wouldn't have happened if they had been fully automated."
None of those implied that "in a generation or two" fully-automated flight by software solutions would be good enough; I interpreted your posts as arguing that software was (almost) ready for that now, which I strongly disagree with. We probably will see it within a generation or two though, assuming we can get a handle on the complexity of the task, and sort out the safety/liability issues well enough. You have to admit, flying a modern commercial airliner successfully and safely is a pretty challenging task, even for famously-adaptable humans. We may eventually be able to build computer systems that can do this task even better than humans, but it's going to take an extraordinary amount of engineering effort. Hell, we haven't even got self-driving cars completely figured out yet, even with big companies like Google working on it. In some ways that's a much simpler problem than safely landing a commercial airliner in dangerous weather.
You're not entirely correct about the planes flying on "just software". They have a complicated set of fallbacks which yield progressively more and more control to the pilot.
See e.g. http://en.wikipedia.org/wiki/...
The current designs assume that in an emergency situation, human pilots will make better decisions than software. The software removes itself from the loop as much as possible, hopefully giving the pilot the best chance to save the plane.
Daniel's exact words were "It may take a generation or two". Is misquoting himself with "might" in place of "may" such a big deal?
For what it's worth, almost exactly 30 years ago a Boeing guy (neither an expert nor authorized to speak publicly on behalf of the company) told me that the technical capacity of airliner automation already had reached the level where they could (with some modification) fly from runway to runway without anyone aboard, though of course noone was proposing this as an operational capability.
Military UAVs, which of course are advancing quite rapidly, seem in some instances to have quite a lot of autonomous flight capability.
I don't expect people to be removed from the pointy end of passenger planes -- but the day might not be far off when cockpits can have a Big Red Button that initiates an autonomous emergency landing. For this purpose, the automatic flight system doesn't have to exhibit perfect safety*: only to significantly improve safety compared to the handling of emergency situations without the button.
*Of course, no aviation technology or system exhibits perfect safety, and no certifying authority requires such perfection. That automation systems will inevitably have a non-zero failure rate will not prevent their future certification as airworthy. Already, movement is underway to certify some UAVs for certain (peacetime) flights over populated areas. It is reasonable to expect a gradual trend in this direction.
It is my understanding that in fly-by-wire, no power is the same as no hydraulics: you can't move the control surfaces. And in fly-by-wire, a big software fault could do the same thing.
@ Mark H, moo, Daniel,
For what it's worth, almost exactly 30 years ago a Boeing guy... ...told me that the technical capacity of airliner automation already had reached the level where they could (with some modification) fly from runway to runway without anyone aboard...
It was somewhat earlier than that: various British companies had perfected the various parts of fly-by-wire (TSR2), gyro navigation (Blue Streak), and instrument-only blind take-off and landing by the early 1960s.
Arguably it could all have been done with WWII technology, as all the pieces were in place across several weapons and defensive devices.
That being said getting from runway to runway is just a very small part of the problem, because of exceptions...
The first problem is getting the aircraft to taxi from the stand onto the right part of the runway without bumping into things, and likewise being able to actually take off and not hit other aircraft etc. that intrude into its airspace. Easy enough if everything else does as it's supposed to, but not otherwise.
Computers are exceptionally good at doing things that humans are not, and humans are exceptionally good at doing things that computers are not.
A simple example of this is to put a weight on the top end of a stick and then balance it upright with the bottom end of the stick in the palm of an open hand. Most adults and children can master this skill in a couple of hours; likewise hitting a ball with a bat, in both cases effectively using just vision as the feedback mechanism. We have yet to make a robot do either with any skill, even in a visually "quiet" environment.
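For what it's worth, the stick-balancing example neatly separates the two halves of the problem: once a computer has direct state measurements, the control law itself is trivial; what it lacks is the vision-based perception humans use as feedback. A minimal sketch of a PD controller balancing a linearized inverted pendulum (all physical constants and gains below are made up for illustration):

```python
# PD controller balancing a linearized inverted pendulum.
# With direct measurements of tilt and angular velocity, the control
# side is easy; the hard part for a robot is extracting that state
# from vision. Constants and gains here are made up for illustration.

G = 9.81     # gravity, m/s^2
L = 1.0      # stick length, m
DT = 0.001   # simulation timestep, s
KP, KD = 40.0, 10.0   # hand-tuned PD gains

theta, omega = 0.2, 0.0   # initial tilt (rad) and angular velocity (rad/s)

for _ in range(5000):     # simulate 5 seconds with forward Euler
    u = -KP * theta - KD * omega    # corrective angular acceleration
    alpha = (G / L) * theta + u     # small-angle (linearized) dynamics
    omega += alpha * DT
    theta += omega * DT

print(abs(theta) < 1e-3)  # True: the controller has driven the stick upright
```

The closed-loop dynamics here are a damped oscillator (roots of s² + 10s + 30.19 have negative real part), so the tilt decays to zero in well under the simulated five seconds; the whole loop is a few arithmetic operations per millisecond, far below what any flight computer can do.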
Computers are quite bad at visual recognition of anything that they do not explicitly "know" and thus uniquely recognise, therefore they currently do not have the ability to perform "hazard perception" in anything like the way insects let alone humans do.
And without hazard perception at least equal to humans', no human is going to trust them with their lives on an ordinary civilian flight.
So yes, computers can already fly an aircraft from departure stand to arrivals stand, because we have the technology to do the mechanics of it ultra-reliably, way better than humans can. However a computer cannot recognise a bunch of geese grazing on the grass beside the runway and react appropriately. And as was put on a poster for RAF pilots flying in and around the Falklands 30 years ago: "Watch Out: A bird strike can ruin your day".
It has got to the point that it is now possible to make your own "home made" drone using a standard large RC model aircraft kit and a well-known brand of cell phone to control it (a uni student I know recently had the first test flight of their system, and it flew a test-pattern flight successfully using the gyros and GPS).
But to give you an idea of just how fast and well a computer can guide an airborne object: we now have battlefield artillery shells that can use GPS/inertial navigation to drop a standard howitzer-type shell into an area on the ground of less than a six-foot radius from at least 40 km away.
We have also had, for a while, cruise missiles that can fly in excess of 1,000 km and likewise hit an area of a similar size. And back in May 2009, Raytheon, the manufacturer of the Tomahawk cruise missile, proposed an upgrade to the existing (Block IV land-attack Tomahawk) missile that would allow it to destroy hardened ships at up to 1,700 km, which at just over 1,000 miles is considered "medium to long haul" for passenger aircraft.
Thus the real question is safety due to lack of "hazard recognition/resolution". Humans don't do this on a case-by-case basis; we actually respond in similar ways to classes of hazards, and sometimes this means we get it wrong (i.e. running up a tree when the attacker is lighter and can climb better than we can).
And this is the rub: I don't think we will be able to do it much better with a computer. That is, it will, just like a human, have to recognise a hazard within a class of problem and respond in a general, not specific, way. Thus the computer will likewise get it wrong sometimes, and while we as humans accept that "humans have accidents", we don't currently give machines the same latitude.
Historically Britain was usually well in advance of the US in these sorts of innovation and development. However successive British Governments (usually Labour) cancelled the projects and invariably handed the details across to the US under the "special relationship" (ICBM silo and nuke-proof bunker design, for instance).
Britain is, however, unique in that having developed its own fully independent cruise missile and satellite delivery systems, it gave them up... And it shows how desperate the US was to stop Britain that NASA offered "free satellite launches" to Britain until Britain announced the cancellation of its rocket programme...
Britain still has its very own satellite, "Prospero", launched on its very own rocket, up in low earth orbit. It's been there for over 40 years; I used to track it regularly years ago, and the last time I checked, around five years ago, it could still be heard on 137.56 MHz.
The reason why Britain kept cancelling successful innovation projects just at the point they had proved successful/usable can be found in various political biographies and historical research. Generally they refer to "economic pressure" from the US over "war debt" etc., which might account for why in the 1980s Maggie Thatcher (UK Prime Minister) made getting rid of the "war debt" a high priority.
However it was too little too late: the "brain drain" that had arisen in the late 1960s and become epidemic under 90% taxation in the 1970s grew ever more rampant, because whilst the US might not be as good at innovation per head of population, it has always been far better at speculative investment. Mainly because in Britain you are looked on as equivalent to a criminal for having "tried but failed", whereas in the US a succession of startups that did not make the grade of IPO tends to be regarded as part of the "game", or a learning curve.
We are told that airplanes are basically capable of flying themselves. How true is this, and is the concept of pilotless planes really viable?
I do a fair bit of mythbusting. It comes with the territory, I suppose. Air travel has always been rich with conspiracy theories, urban legends, wives’ tales and other ridiculous notions. I’ve heard it all.
Nothing, however, gets me sputtering more than the myths and exaggerations about cockpit automation — this pervasive idea that modern aircraft are flown by computer, with pilots on hand merely as a backup in case of trouble. The press and pundits repeat this garbage constantly, and millions of people actually believe it. In some not-too-distant future, we’re told, pilots will be engineered out of the picture altogether.
This is so laughably far from reality that it’s hard to get my arms around it and begin to explain how, yet it amazes me how often this contention turns up — in magazines, on television, in the science section of the papers.
One thing you’ll notice is how these experts tend to be academics — professors, researchers, etc. — rather than pilots. Many of these people, however intelligent and however valuable their work might be, are highly unfamiliar with the day-to-day operational aspects of flying planes. Though pilots too are part of the problem. “Aw, shucks, this plane practically lands itself,” one of us might say. We’re often our own worst enemies, enamored of gadgetry and, in our attempts to explain complicated procedures to the layperson, given to dumbing down. We wind up painting a caricature of what flying is really like, in the process undercutting the value of our profession.
Essentially, high-tech cockpit equipment assists pilots in the way that high-tech medical equipment assists physicians and surgeons. It has vastly improved their capabilities, but it by no means diminishes the experience and skill required to perform at that level, and has not come remotely close to rendering them redundant. A plane can fly itself about as much as the modern operating room can perform a surgical procedure by itself.
“Talk about medical progress, and people think about technology,” wrote the surgeon and author Atul Gawande in a 2011 issue of The New Yorker. “But the capabilities of doctors matter every bit as much as the technology. This is true of all professions. What ultimately makes the difference is how well people use technology.”
And what do terms like "automatic" and "autopilot" mean anyway? The autopilot is a tool, along with many other tools available to the crew. You still need to tell it what to do, how to do it, and when to do it. I prefer the term autoflight system. It's a collection of several different functions controlling speed, thrust, and both horizontal and vertical navigation -- together or separately, and all of it requiring regular crew inputs in order to work properly. On the jet I fly, I can set up an "automatic" climb or descent any of about six different ways, depending what's needed in a given situation.
A flight is a very organic thing — complex, fluid, always changing — in which decision-making is constant and critical. And I’m not talking about emergencies, which are another thing entirely; I’m talking about the run-of-the-mill situations that arise every single day, on every single flight, often to the point of task-saturation. You’d be surprised how busy a cockpit can become — with the autopilot on. For all of its scripted protocols, checklists, and SOP, hundreds if not thousands of subjective inputs are made by the crew, from deviating around a cumulus buildup (how far, how high, how long), to troubleshooting a mechanical issue, to performing the takeoff and landing.
One evening I was sitting in economy class when our jet came in for an unusually smooth landing. “Nice job, autopilot!” yelled some knucklehead behind me. Amusing, maybe, but wrong. It was a fully manual touchdown, as the vast majority of touchdowns are. Yes, it’s true that most jetliners are certified for automatic landings. Called “autolands” in pilot-speak, they are intended for extreme low-visibility conditions, but in practice they are very rare. Fewer than one percent of landings are performed automatically, and the fine print of setting up and managing one of these landings is something I could talk about all day. If it were as easy as pressing a button, I wouldn’t need to practice them twice a year in the simulator or need to review those tabbed, highlighted pages in my manuals.
Another thing we hear again and again is how the sophisticated, automated Boeing or Airbus has made flying “easier” than it was in years past. On the contrary, it’s probably more demanding than it’s ever been. Once you account for all of the operational aspects of modern flying — not merely the hands-on aspects of driving the plane, but familiarity with everything else the job entails, from flight-planning to navigating to communicating — the volume of requisite knowledge is far greater than it used to be. The emphasis is on a somewhat different skill set, but it’s wrong to suggest that one skill set is necessarily more important than the other.
But, you’re bound to point out, what about the proliferation of remotely piloted military drones and unmanned aerial vehicles (UAVs)? Are they not a harbinger of things to come? It’s tempting to see it that way. These machines are very sophisticated and have proven themselves reliable — to a point. But a drone is not a commercial jet carrying hundreds of people. It has an entirely different mission, and operates in a wholly different environment — with far less at stake should something go wrong. You don’t simply take the drone concept, scale it up, build in a few redundancies, and off you go.
I would like to see a drone perform a high-speed takeoff abort after a tire explosion, followed by the evacuation of 250 passengers. I would like to see one troubleshoot a pneumatic problem requiring an emergency diversion over mountainous terrain. I’d like to see it thread through a storm front over the middle of the ocean. Hell, even the simplest things. On any given flight there are innumerable contingencies, large and small, requiring the attention and subjective appraisal of the crew.
And adapting the UAV model to the commercial realm would require, in addition to gigantic technological challenges, a restructuring of the entire commercial aviation infrastructure, from airports to ATC. We’re talking hundreds of billions of dollars, from the planes themselves to the facilities they’d rely on. We still haven’t perfected the idea of remote-control cars, trains, or ships; the leap to commercial aircraft would be harder and more expensive by orders of magnitude.
And for what? You’d still need human beings to operate these planes remotely. Thus I’m not sure what the benefit of this would be in terms of cost.
It amuses me that as aviation technology progresses and evolves, so many people see elimination of the pilot as the logical, inevitable endpoint. I’ve never understood this. Are modern medical advances intended to eliminate doctors? Of course not. What exists in the cockpit today is already a fine example of how progress and technology have improved flying — making it faster, far safer, and more reliable than it once was. But it has not made it easy, and it is a long, long way from engineering the pilot out of the picture — something we needn’t be looking for in the first place.
I know how this sounds to some of you. It comes across as jealousy, or I sound like a Luddite pilot trying to defend his profession against the encroachment of technology and an inevitable obsolescence.
You can think that all you want.
I am not against the advance of technology. I am against foolish extrapolations of it.
From Ask the Pilot -- a real pilot, not a Boeing worker.
The automation question is not whether the computer can do better in the unusual circumstances than the human does, but whether the computer can do better in enough of the *usual* circumstances that overall safety is better.
Making up numbers (that are much too large for reality, to keep the numbers simple):
If 1% of flights involve pilot errors that lead to crashes, and 1% of flights involve non-pilot issues, and a human pilot is able to handle half of those issues and the other half lead to crashes, the net is that 1.5% of flights lead to crashes. (We'll ignore those flights that have both.)
If computers never make pilot errors, and are incapable of handling *any* of the non-pilot issues, the net will be that 1% of the flights crash.
Certainly there are cases where human pilots save the airplane where computers couldn't have.
Certainly there are cases where human pilots crash the plane where a computer wouldn't have.
The question (which I don't propose an answer to) is which of those numbers is higher.
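The comparison above can be checked with a few lines of arithmetic. This sketch just encodes the commenter's made-up numbers (which, as noted, are far larger than real accident rates) to confirm the 1.5% vs. 1% result:

```python
# Crash-rate comparison using the commenter's deliberately exaggerated numbers.
# These are illustrative rates only, not real-world accident statistics.

pilot_error_rate = 0.01      # fraction of flights with a crash-causing pilot error
non_pilot_issue_rate = 0.01  # fraction of flights with a non-pilot issue

# Human pilot: commits the pilot errors, but recovers half of the
# non-pilot issues; the other half lead to crashes.
human_crash_rate = pilot_error_rate + non_pilot_issue_rate * 0.5

# Computer pilot: never makes pilot errors, but (in this hypothetical)
# recovers none of the non-pilot issues.
computer_crash_rate = 0.0 + non_pilot_issue_rate * 1.0

print(f"human pilot:    {human_crash_rate:.1%} of flights crash")
print(f"computer pilot: {computer_crash_rate:.1%} of flights crash")
```

Under these particular assumptions the computer wins; shift either rate or the 50% recovery fraction and the ordering can flip, which is exactly the open question.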
I do think the big red "autoland" button is a good idea. Once you punch the autoland button, the airplane should pick a nearby airport, hold for a random amount of time, and attempt to land, then shut down on the runway, all the while screaming for help. It should not be possible to abort this sequence; it should only be possible to clear this mode using an access hatch on the *outside* of the airplane. Such a mechanism would carry its own risks, of course, but would address hijack cases and at least some rogue-crewmember cases.
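The proposed sequence amounts to a one-way state machine: once engaged, it only moves forward, and clearing it requires physical access from outside the aircraft. Here is a toy sketch of that logic; every name and state in it is hypothetical, and real avionics modes are nothing like this simplified model:

```python
import random

class AutolandMode:
    """Toy model of the proposed irreversible 'big red button' autoland mode.

    Once engaged, the sequence advances in one direction only and cannot be
    cleared from inside the aircraft -- only via an external access hatch.
    """

    SEQUENCE = ["pick_airport", "hold", "land", "shutdown"]

    def __init__(self):
        self.engaged = False
        self.step = None

    def engage(self):
        self.engaged = True
        self.step = 0
        # Hold for a random amount of time, as the comment proposes,
        # so the timing can't be predicted by a hijacker.
        self.hold_minutes = random.randint(5, 30)
        self.broadcast_distress()

    def broadcast_distress(self):
        # Stand-in for "screaming for help" on the emergency frequency.
        print("MAYDAY: autoland engaged")

    def advance(self):
        # The sequence only ever moves forward.
        if self.engaged and self.step < len(self.SEQUENCE) - 1:
            self.step += 1

    def clear(self, via_external_hatch=False):
        # Clearing the mode is impossible from the cockpit by design.
        if not via_external_hatch:
            raise PermissionError("autoland cannot be aborted from inside")
        self.engaged = False
        self.step = None
```

The design choice worth noticing is that `clear()` refuses any in-cabin caller; the irreversibility that makes the mode useful against hijackers is the same property that makes inadvertent activation dangerous, which is what motivates the suspend idea discussed next.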
If there were any serious contemplation of a "cockpit emergency landing" mode, I think it might be both prudent and sufficiently secure to have a properly encrypted ground-based suspend.
Radio communication protocols could be established to give some level of confidence that the cockpit is secure. The landing mode would not be switched off, but instead suspended, and would resume either by ground control (if the plane went where it shouldn't) or autonomously (in case of extreme maneuvering).
An emergency landing mode can perhaps already be made sufficiently robust to provide a high probability of safe landing, but the suspend capability would improve the margin of safety for cases of inadvertent or uncommanded activation.
Could this be a new vector of attack, or one being tested for its potential effects? Could this be a new Philip K. Dick-style attack vector, whereby human frailty is exploited: a little psychotropic substance here, a little there, maybe on a car door handle, maybe in a burger...