Of Airline Pilots and Other Risks

William Leiss


The deliberate acts of the co-pilot in the Germanwings airplane crash in the Alps, as well as the possibility of accidental pilot error in the Halifax airport crash a short time later, raise the question:  Can we fly commercial freight and airline passengers without pilots on board?  We know that today most of the flying is already done on autopilot, including takeoff, cruising, and landing.  With a few more innovations, and with pilots manning installations at various points on the ground, placed next to the flight controllers who now monitor all flights in transit, we will no longer really need them to be in the cockpit.

Such a development does seem to be an inevitable consequence of the increasing capabilities of automated industrial control systems in general.  Few large production processes today lack some kind of computerized (i.e., digital) regulation, whether in electricity grids, drinking water disinfection facilities, product assembly lines, chemical plants, transportation scheduling, or countless other applications.  All of them use similar operating protocols, based on specific algorithms.  All of them, taken together, may be regarded as successful implementations of a single magnificent idea, namely, Alan Turing’s concept of a “universal machine,” dating from the mid-1930s, the product of a tragic life celebrated recently in a Hollywood film.

The economic and social benefits we have derived to date from this great idea are incalculably large and continue to grow exponentially.  The ubiquitous digital devices we carry around on our person everywhere are the daily reminder of our utter dependence on it, and these benefits will soon be joined by others:  driverless cars, instantaneous medical checkups on the go, timely hazard warnings, remote control of myriad domestic functions, and so on.  (There will be others, too, more problematic in nature, such as enhanced surveillance and access to personal information.)  And as the benefits multiply, so do the corresponding risks.

There is something both ominous and revealing about the fact that the first specific application of this “magnificent idea” came in response to a threat to the very foundations of the society out of which it had emerged – the liberal democracy which had fostered freedom of scientific inquiry.  So “Colossus” defeated Enigma and helped to vanquish the Nazi regime.  The later machine defeated the earlier one, which had first been offered as a commercial product and had then become an instrument of malevolent and murderous intent.

Novel risks are inherent in novel technologies.  So the image of a future pilotless cockpit in, say, an Airbus A380 carrying 800 passengers on a long-distance flight, is matched by the prospect of a terrorist organization remotely hacking into the flight control software and holding that large airborne cargo for ransom, either monetary or political.  The sorrows of the families of the victims of the Germanwings crash, caused, it seems likely, by a psychologically disturbed human pilot, would be amplified, in the hypothetical case of the hijacking of a pilotless aircraft, by rage against the machines.

Once computer-controlled machinery became widely interconnected and remotely attended, which of course greatly enhanced its usefulness, its inherent vulnerabilities started to become obvious.  These vulnerabilities in general include mistakes or omissions in the original program, corruption through users’ unwitting introduction of malicious software, theft by private parties for purely financial gain, and cyberwarfare (either covertly state-sponsored or waged by non-state actors with political and military objectives).

Some of the attendant risks are personal, such as individual cases of identity theft and financial fraud; some are organizational, such as the theft or disruption of massive electronic databases held by corporations and government agencies; and some (involving cyberwarfare) are potentially “black-hole risks,” where the ultimate collective consequences for nations could be literally incalculable.  (One can think of the remote-control systems used for the possible launching of intercontinental ballistic missiles having multiple and independently-targeted hydrogen-bomb warheads.)  As a general rule, one could say that the magnitude of the risks rises in lock-step with the expanding scope of computer-controlled processes and the degree of interconnectedness among all of the individual and organizational users.

These risks must be managed effectively.  Like most other risks they cannot be eliminated entirely:  The objective of good risk management is to limit the potential damages caused by sources of harm by anticipating them – through assessing the magnitude of the risk – and by taking appropriate precautionary measures in advance of significant threats.  We have a lot of experience in using systematic risk assessment and management in the cases of environmental and health risks, as well as in other areas, although we still do get it spectacularly wrong (as the large banks did in the run-up to the 2008 financial crisis).

Typically, novel technologies with large incremental benefits are introduced and distributed widely well before the attendant risks have been carefully estimated and evaluated.  The scope of the risks associated with integrated computer-controlled technologies means that this practice will have to change.  I expect that sometime in the future a credible case could be made for the proposition that pilotless aircraft are safer than piloted ones.  But first some responsible agency will have to tell us that the new risks have been well-characterized, and that the chance of inadvertent failure or malevolent interference is so low (but not zero) that a reasonable person does not have to worry about such a thing coming to pass.
