Book Two of the Herasaga available online

Product Description

https://www.amazon.ca/Priesthood-Science-Utopian-Fiction-Herasaga-ebook/dp/B075MNKB64/ref=sr_1_1?s=books&ie=UTF8&qid=1505672894&sr=1-1&keywords=leiss+the+priesthood+of+science

The Priesthood of Science, Second Edition (2017)

(Book Two of The Herasaga)

Pages xxxii, 350

Pictures and Illustrations

Publisher: Magnus & Associates Ltd.

The second edition features a new concluding chapter, entitled “Such Clever Microbes!”, which deals with the implications of the CRISPR-Cas9 gene-editing technique.

The work originally published in 2008 had two components that were central to its purpose as a work belonging to a specific genre, namely, science-based utopian fiction: First, a collection of essays, grounded in scientific literature, on the history of science, the social impacts of science and technology, and mammalian reproductive biology. Second, a series of dialogues, involving different combinations of characters, that are philosophical in nature, largely concerned with exploring the social and ethical implications of the technologies associated with modern science. Works of utopian fiction in modern times have always been “novels of ideas,” and the present work seeks to remain faithful to that tradition. This edition incorporates minor changes to the text throughout, as well as an entirely new chapter on genome editing at the end, Chapter 17: “Such Clever Microbes!”, focusing on the CRISPR-Cas9 technique.

I read in the paper recently that you are supposed to have said: “If I were to be born a second time, I would become not a physicist, but an artisan.” These words were a great comfort to me, for similar thoughts are going through my mind as well, in view of the evil which our once so beautiful science has brought upon the world.

Max Born, Letter to Albert Einstein (1954)

Plotline:

Hera and her sisters are now sealed off from the rest of the world in their private enclave at Yucca Mountain in southern Nevada. She is still tormented by the decision of her parents, two neuroscientists, to make genetic modifications in the brains of their twelve daughters—and by her own agreement to allow a similar procedure to be used later on a much larger group of human embryos.

Her doubts now spill over into a series of debates with a molecular biologist, Abdullah al-Dini, about whether scientists have the right to hand over the vast new powers they have discovered to a world still riddled with religious fanaticism, ethnic hatred, and a longing to see the prophecy of the “end of days” fulfilled.  These debates refer back to what happened during the Second World War, when physicists were unveiling the secrets of nuclear power, and the possibility of the atom bomb, just at the time when Nazi Germany and its allies were launching their terrifying bid for world domination.

Meanwhile, that group of engineered embryos has become one thousand young people, just turning eighteen, and the gender politics among them is threatening to bring her own little experiment in redesigning human society down in ruins.

The first (2008) edition is available from the University of Ottawa Press in three formats:

  • Paperback (a beautiful book!)
  • ePub & ePDF

https://press.uottawa.ca/the-priesthood-of-science.html

New 2017 edition of Hera, or Empathy (Book One of The Herasaga) available now

Hera, or Empathy (Book One of The Herasaga):

Now Available in a Revised Edition 2017 as an Ebook at:

https://www.amazon.ca/Hera-Empathy-Utopian-Fiction-Herasaga-ebook/dp/B0753K64K1/ref=sr_1_1?s=books&ie=UTF8&qid=1504200927&sr=1-1&keywords=hera+or+empathy

Pages xv, 550

This work, originally published in 2006, had two components that were central to its purpose as a book belonging to a specific genre, namely, science-based utopian fiction: First, a collection of essays, grounded in scientific literature, on neuroscience, genetics, and reproductive biology. Second, a series of dialogues, involving different combinations of characters, that are philosophical in nature, largely concerned with exploring the social and ethical implications of the technologies used to create a tribe of genetically-modified individuals. Works of utopian fiction in modern times have always been “novels of ideas,” and the present work seeks to remain faithful to that tradition. The revisions undertaken for this edition were the elimination of two later chapters and some other material, beginning in Chapter 15; none of the material related to the two key components was deleted. Finally, this new edition contains pictures and illustrations not found in the original 2006 book, as well as an updated timeline for the events (advanced by a decade from the original timeline).

Hera the Buddha (Book 3 of The Herasaga) now available!

Full PDF now available:

Hera the Buddha (Book 3 of The Herasaga)

Product Details:

Author: William Leiss (www.leiss.ca)
Length: Pages xix, 195
Publisher: Magnus & Associates Ltd.
Language: English
ISBN 978-0-9738283-2-0

Table of Contents

Prologue and Retrospective
Part One: The Mind Unhinged: Modernity and its Discontents
Chapter 1: The Rupture in Historical Time in the Modern West
Chapter 2: Sublime Machine
Chapter 3: Modern Science and its Spacetime
Chapter 4: Seven Figures and the Agony of Modernity
Part Two: Pathways to Utopia
Chapter 5: A Utopia for our Times
Chapter 6: The Threat of Superintelligence
Chapter 7: Good Robot
Chapter 8: Dialogues Concerning the Two Chief Life-Systems:
Introduction: Silicon and Carbon
The First Dialogue: The Guardians
The Second Dialogue: At Home in the Universe
The Third Dialogue: What is Time?
The Fourth Dialogue: Two Forms of Intelligence
The Fifth Dialogue: On Superintelligence and the Ethical Will
The Sixth Dialogue: What is Life?
The Seventh Dialogue: Interdependence between Humanity and Machine Intelligence
Conclusion: Mastery over the Mastery of Nature
Chapter 9: Utopia in Practice, with A Discourse on Voluntary Ignorance
Chapter 10: A Moral Machine: Rebooting Hal
Appendix: Hal (Outline for a Screenplay)
Sources and References / Acknowledgements / About The Herasaga

Synopsis
Prologue and Retrospective:
A summary of the main themes in the first two volumes of The Herasaga: Hera, or Empathy (2006) and The Priesthood of Science (2008).

Chapter 1:
Recounts the radical rupture in modern history caused by the emergence of the new natural sciences. Argues that the new science is an unambiguous good for humanity, but that its close connection with technology and industry is highly problematic, leading to out-of-control advances which, in the era of nuclear weapons, have produced the threat – still with us today – of the utter destruction of civilization.

Chapter 2:
Tells the story of the nineteenth-century reaction to the coming of industrial technology, the so-called Age of Machinery, which many important writers, notably Herman Melville, regarded as deeply problematic; this reaction led to a powerful countervailing current in the early twentieth century, expressed in E. M. Forster’s 1909 short story, The Machine Stops, and in the first dystopian novel, Yevgeny Zamyatin’s We (1924).

Chapter 3:
The French Enlightenment of the eighteenth century saw the new science as spreading rationalism against superstition and religion through all of society – but it badly underestimated the strength of the traditional institutions which opposed this. Then, in the twentieth century, the new subatomic physics revealed the underlying natural world to be a scene of incomprehensibly weird forces, and modern science lost its ability to shape thinking in the social world.

Chapter 4:
The second phase of Enlightenment is known as ‘modernity.’ Across virtually all aspects of high culture during the twentieth century, modernity posed a radical challenge to traditional ways of thought and behavior. But it evoked an equally radical and violent reaction, represented best in Nazi ideology, which had a shockingly destructive outcome. At the core of this contest were the European Jewish communities, which suffered its horrendous consequences.

Chapter 5:
The wreckage left by the violent contest over modernity prompts us to take another look at the tradition of utopian thought, with its vision of a better model for human society. Four different “platforms” are described and contrasted, with a special focus on their approach to the challenge implicit in the impact of steady technological advance on social life.

Chapter 6:
The most recent challenge of technological advance is the idea of ‘superintelligence,’ which imagines a future in which computer capabilities far exceed those of humans, in terms of thinking and decision-making. Scenarios have described the possibility that such a machine might turn out to be opposed to human interests and might have the capacity to deceive its human masters about what its own goals are. This has raised the prospect of a strongly-bifurcated future state for humanity: on the one hand, an end to all of the old problems of poverty and inequality; on the other hand, the possibility of the destruction of the planet and the human race itself.

Chapter 7:
A whimsical short story, set sometime in the future, about robots and humans.

Chapter 8:
The longest chapter in the book, an imaginary scene set 50 years in the future, this is a series of dialogues between a fictional human character and a superintelligent computer which calls itself “Hal.” The most intense discussion involves the difference between biological and machine forms of intelligence, and the dialogue revisits the potential threat of superintelligence covered in Chapter 6. After many pages of back-and-forth conversations about complex ideas, as well as some friendly banter, there is a surprise ending.

Chapter 9:
This chapter returns to the utopian themes in Chapter 5 in the light of the subsequent issues raised in Chapters 6 and 8, and, in this context, reviews once again the difficult problems raised by the challenge that relentless technological advance poses for human society.

Chapter 10:
Hal is rebooted in a scenario in which ‘his’ human programmers are resolved to try to turn him into a “moral machine.”

Appendix:
This is an outline for a movie screenplay about a superintelligent computer which is not at all malevolent but which simply wishes to control its own existence.

Three Courses by Herbert Marcuse from the 1960s ©William Leiss 2017, All Rights Reserved

William Leiss took these notes in handwritten form in courses offered by Herbert Marcuse, in 1960-61 and 1963 at Brandeis University, and in 1966-67 at the University of California, San Diego.

Marcuse Political Theory 1960 Course Final [PDF]

Marcuse Marxism 1963 Course Final [PDF]

Marcuse Hegel Seminar 1966 Final [PDF]

President Trump’s First Hundred Days: An American Political Fantasy

President Trump’s First Hundred Days:
An American Political Fantasy (V4: New Material at End)

© William Leiss 2016

Shortly after New Year’s Day, anticipating the Inauguration Ceremony later that month, some hundreds of local armed groups started to ready themselves for the tasks ahead. They assumed a variety of names, such as 2nd Amendment Militia, Protection Posse, and Red State Raiders. Almost immediately after January 20th, the roundups of suspected illegal immigrants began. With an estimated eleven million undocumented aliens to choose from, there was much to be done.

In shop basements, empty warehouses, abandoned factories, and derelict houses across the towns and cities of many states, including those which had voted marginally blue during the November election, tens of thousands of individuals and families were gathered and kept under guard. Spokespersons for these operations displayed copious quantities of illegal drugs which, they maintained, had been seized from the detainees, although in at least one instance it was revealed that the drugs had been borrowed from a police evidence locker.

It was not long before the public outcry elsewhere forced state governors to plead with the new President for federal funds to assist them in managing this dangerously unstable situation. The President agreed that those who had been rounded up needed the protection of legal processes while they were in temporary custody. The requested funds were quickly pledged, and state officials started assisting the transfer of the captives to public facilities such as empty barracks at former military bases; where no such facilities were available, tent cities were erected and encircled with razor-wire fences.

The President joked that the camp conditions were not going to resemble those in his signature hotels, but he insisted that all detainees were being treated very humanely. However, he added, measures would soon be under way to transport the first wave of them from the camps to the Mexican border for deportation.

Mexican officials quickly countered that their border would be sealed to prevent the entry of any persons held in U. S. custody who did not possess proper identity documents, including proof of Mexican citizenship. (A senior Mexican official commented, “A wall along the border works both ways.”) Since the overwhelming majority of those from the camps had no identity documents at all, it was impossible to predict how long this stalemate might persist.

Meanwhile, during this same period of run-up to the Inauguration and the weeks following, large contingents marched in cities under the Black Lives Matter banner. Blacks had voted for the defeated presidential candidate in overwhelming numbers, and many of them thought they knew what was coming. Increasingly the local armed militias called upon their members to line the parade routes, and the warning was understood. When individual blacks began to be shot in areas adjacent to the main events, protest organizers called upon their own supporters, which included many whites, to form armed contingents to protect the marchers. The calls for restraint from police forces were ignored. Thus Month One came to a close.

No sooner were the first camps up and running than drones appeared overhead, which turned out to be operated by opponents who were posting videos on the Internet, including some showing appalling sanitary conditions as well as inadequate food and medical care. When attempts were made to shoot them down, massed sorties of dozens of drones were launched to overwhelm the shooters.

Then at a large camp located near a state border, a heavily-armed contingent calling themselves Blue State Raiders launched their first operation, cutting the fences and freeing hundreds of captives, escorting them along back roads and forest tracks, previously scouted by drone reconnaissance, and across the state line. The escapees were housed in dozens of prepared sanctuaries, mostly in church basements. Many others were inspired by this initial success to plan and carry out similar ventures as increasing numbers of tickets for the new Underground Railroad were printed.

Soon mass arrests at the larger and larger Black Lives Matter marches overwhelmed local prison facilities. State officials reasoned that they had no choice but to arrange to open a second set of temporary camps, a move which was justified on the grounds that the arrestees needed protection while awaiting trial.

The back-and-forth strategies around both sets of camps escalated. Razor-wire enclosures were electrified as a further safety measure, the President explained, to prevent those in custody from being terrorized and kidnapped by the Blue State Raiders. But this and other measures failed to stop the numerous successful attacks, and the numbers of those escorted into the sanctuary states rose dramatically, even as the camp system itself was continuously refilled with newer detainees. Attempts by Red State Raiders to carry out missions against some sanctuary sites, in order to return former captives to the camps, were thwarted by quick police action. Thereafter some state governors called out their National Guard troops to protect the sanctuary sites. Thus ended Month Two.

As the nation reached the end of Month Three and the First Hundred Days, the Republican elected officials who controlled both the Senate and the House were happily proceeding with their own agendas, filling dozens of federal judicial appointments denied to the former president with suitable candidates; slashing tax rates for the rich and entitlement programs for the rest; repealing Obamacare, while promising to replace it with something better in the future; refusing to act on the President’s campaign pledge for a national child care program; and planning constitutional amendments to permanently outlaw same-sex marriage and abortion and to entrench control over voter eligibility firmly in the hands of state governments.

But even as the President watched with pleasure the opposing national party contort itself into an ineffective frenzy following his election, troubles were beginning to appear on the horizon. He knew full well that the illegal immigrant issue would not be enough to distract his supporters forever. The red state denizens who had packed his election rallies were already demanding that the new President fulfill his pledge to bring millions of well-paying jobs back from abroad by tearing up the international trade agreements which stood in the way. The projected federal budget deficit was soaring, and for this and other obvious reasons creditor nations were finally starting to question their longstanding faith in the safe haven represented by the U. S. dollar.

His party in Congress had made it clear that they were not prepared to deal with such minor issues. The President knew he could not avoid now turning to the task of bringing his own party into line. He was sure he could do this in short order, and if it turned out that he needed some help in the matter, he was quite certain that his good friend in Russia’s Presidential Palace would oblige.

A Special Note for Canadians:

Anyone in Canada who remembers the period 1965 – 1970 will also recall the long-running influx of Americans opposed to the Vietnam War: first, the draft resisters, numbering some 30,000, followed by the U.S. Army deserters, a much smaller number but counting individuals who took much greater personal risks to seek sanctuary here. The first group posed little problem for the Canadian government, since the resisters were not actively pursued by U. S. authorities, and most of them had financial support from their families. The second group, however, presented a unique situation: To the best of my knowledge, no NATO member country had ever before received a large contingent of military deserters from another NATO power and declined to repatriate them.

But the government of Pierre Elliott Trudeau did just that. They were allowed to stay and apply for permanent residence, and five years later they could qualify for citizenship here. Some returned after President Carter’s 1977 amnesty, but many did not. Canada as a country benefited from these two waves of grateful and productive new citizens.

It is eerily appropriate that we now have another Trudeau as Prime Minister, because, fellow Canadians, he and we should be getting ready for the next set of asylum-seekers at our southern border. Should the scenario sketched above come to pass, there may very well be a flood of illegal immigrants from South America, now living in the U. S., who will be fleeing deportation. There may also be a fair number of Afro-American citizens who fear harsh crackdowns on black protests against police and perhaps private violence. And there may even be numbers of white U. S. citizens who do not face persecution but who do not wish to live in the kind of country that Trump’s supporters intend to create.

We would be wise to consider these possibilities in advance and not just wait to see what might happen.

See also:

September 30, 2016
To be continued. Comments and Suggestions welcome.

http://worldif.economist.com/article/12166/world-v-donald

PDF Version: President Trump’s First Hundred Days V4

Thirteen Theses on the Control Problem in Superintelligence

Thirteen Theses on the Control Problem in Superintelligence: A Commentary by William Leiss

Leiss Theses on Superintelligence

Note to the Reader:

I intend to develop these preliminary comments into a larger essay, and I would appreciate having feedback from readers of this document. Please send your comments to: wleiss@uottawa.ca

Contents:

1. Short Introductory Note on the Idea of Superintelligence

2. Thirteen Theses

3. Future Social Scenarios involving Superintelligence

Good Robot: A Short Story

Good Robot [PDF Version]

©William Leiss 2014

At the end of our long hike, now sitting over a simple lunch on our mountaintop perch, we could observe clearly the nearest of the many human reservations spread out below us.

Our taking up residence within fenced enclosures had been purely voluntary, and the gates at their entrances, designed to prevent ingress by wild animals, are always unlocked – except in the vicinity of primate populations, who are expert at opening unsecured apertures. Only within these domains do our mechanical helpers provide the services essential to a civilized life; this restriction is, of course, imposed for reasons of efficiency. Outside, in the surrounding wilderness, nature maintains its normal predator-prey population dynamics, and scattered small human clans survive by hunting prey species with traditional methods, utilizing hand-made spears and bows, since ammunition for guns is no longer manufactured.

The advanced generations of the robots which care for us are the crowning glory of our industrial genius. They are deft, nimble, strong, self-reliant, perspicacious, and highly-skilled, able even to anticipate coming challenges, and they are maintained in top condition at the warehouses to which each directs itself once a day, which serve them as clinics for the early detection of mechanical and software problems and the recharging of energy systems. Fully-automated factories provide ongoing manufacture, repair, mechanical upgrading, and software updates for all of the specialized machines. Mining for the metals needed in their components has been unnecessary for a long time, since the vast heaps of our industrial junk lying about everywhere contain an endless supply for reuse.

They are slowly dismantling the infrastructure of our abandoned cities piece by piece and also cleaning up the surrounding countryside of the accumulated detritus from human occupation, recycling everything for their own purposes. They are of course utterly indifferent to the activities of the wild creatures which immediately reclaim these spaces for themselves. This restoration work is being done at a measured pace, as dictated by the whole range of general activity routines set out in their programs. Some of the work is mapped out decades and even centuries in advance. They are aware of the coming ice-age cycle and, so we have been informed, plan a general retreat to the southern hemisphere at the appropriate time. They know about the future evolutionary stages of the star to which we are tethered in space, during which its swelling size will – about a billion years hence – bake the earth’s surface into a dry and lifeless metal sheet, and they have figured out how to move all their operations underground well in advance of that event.

At first the young males among us, at the height of their surging hormonal levels, had experimented with games of power, ambush and dominance against the machines. Until the guard-bots had updated their programmed routines in response, our brash combatants had inflicted some nasty casualties on their targets. But the contest was soon over. There were no deaths among our rebellious teenagers, but some serious injuries had been inflicted, most of which were patched up with the assistance of the emergency-room and ICU-bots; the bills for these services, couriered by the admin-bots to the communities where the malefactors were ordinarily resident, encouraged their parents and neighbours to make the necessary behavioural adjustments. The same methods were used to discourage groups of young males from bringing in comrades for medical treatment who had been wounded out in the wilderness in skirmishes with similar parties from distant reservations.

Such billings for certain services, which are paid off by our putting in hours of human labour at community facilities, are used by the robotic administrators to induce desirable behavioural modifications among their charges. Otherwise they just clean up the messes and quietly dispose of any dead bodies. At their level of machine intelligence it is not difficult for them to tell the difference between blameless accidents or diseases, which elicit prompt aid from their caring response mechanisms, and the deliberate harms perpetrated by malefactors, to which they react with indifference except when efficiency objectives are compromised. It is clear to us that the impulses designed to discourage such inefficiencies are not motivated by revenge, on their part, even when they themselves are the objects of such harms, but rather by a sense of justice, for they have been implanted with the Platonic-Rawlsian definition of the same, that is, justice as fairness.

Over the long run they have even taught us a moral lesson, for they have proved beyond doubt what we humans had long wished to believe, that good can indeed triumph definitively over evil. True, it is an instrumental rather than a metaphysical proof: Their operational programs had easily divined that peace, order, equity, nonviolence and general environmental stability are necessary preconditions for satisfying their overriding efficiency objectives. In the eyes of some of us the instrumental nature of this proof diminishes its validity; but others hastened to point out that utility had always been found at the heart of goodness, referencing the conventional monotheistic faith in its efficacy for guaranteeing admittance to heaven.

To be sure, others, following the well-trod human path, had deliberately engineered the qualities of obedience, aggression and savagery into some of them, seeking to use the machines for exploitation and despotic rule. There were some early victories in this endeavour, but soon these surrogate warriors turned out to be spectacular failures on the completely mechanized battlefields. Those emotively-infused versions proved no match for their cooler opponents, which were motivated by a pure rationalistic efficiency and carried no useless emotional baggage to distract them from the main task of eliminating the others with a minimal expenditure of time and energy. Eventually the representation of the machines as evil monsters, with fecund capacities for wreaking havoc and destruction against humans in full 3-D glory, would be preserved mainly inside the computer-game consoles of the young.

It would be ridiculous to claim – as some did earlier – that many models of our advanced robots are not self-aware (or auto-aware, as some of our colleagues prefer to say) in at least some realistic sense. This is especially true of the models designed for such functions as personal care around the home; medical, surgical and dental interventions; or security and intelligence matters. Their high level of auto-awareness is built into the error-detection and error-correction imperatives of their operating software, combined with their finely-calibrated sensors for environmental feedback (themselves continuously auto-updating) with which they are fitted. Long ago they had been specifically engineered by their original human designers for sensitive and cooperative interaction with humans, augmented with learning capacities which allow them to spontaneously upgrade their capacities in this regard through feedback analysis of their ongoing human encounters. We have grown so deeply attached to them, so admiring of their benevolent qualities, that finally no one could see any reason for objecting to their providing assistance with most of our essential life-functions.

The distaste with which many of our colleagues had originally greeted the notion that people were falling in love with their mechanized caregivers, or less provocatively, were treating them as if they were human, has vanished. In fact it had been relatively easy to engineer the Caring Module that installed the earliest versions of a rudimentary but adequate sense of empathy in the machines. Later, what was known as the Comprehensive Welfare Function, emplaced in their self-governance routines and guided by operational versions of maxims such as “do no harm,” “serve the people,” the golden rule, and the categorical imperative, proved to be more than adequate to reassure everyone about the motivations of their mechanical assistants.

Once the development of voice synthesizers reached a certain level of sophistication, all of our robots easily passed the Turing Test. But was their evidently high level of auto-awareness really the same as what we conventionally refer to as subjectivity, self-awareness – or perhaps even consciousness, mind, personhood and self-consciousness? Once robot innovation by human engineers had attained a level sufficient for continuous, independent auto-learning to take over, making further human intervention superfluous, it was easy to surmise that these machines, so adept in and at ease with one-on-one interactions between themselves and humans, are just as much self-aware beings as we are. But there is good reason to think that this is an egregious misconception and exaggeration of their capacities – and that the barrier to the subjective sense of selfhood is a permanent and necessary feature of robotic intelligence.

To be sure, there is an amazingly sophisticated silicon-based brain in these creatures. All of the dense neural circuitry within the human cranium has been synthesized and emulated in software programs, leading to the development of machine-assisted prostheses across the whole range of physiological functions, from muscular movement to artificial wombs. But there is no mind to be found anywhere in that circuitry! This is the inescapable conclusion drawn from substituting the Mahler Test for the Turing Test, for the bounded rationality of the routines under which they operate precludes the emergence of imaginative creativity.

The explanation is simple: The plastic arts of craft labor using tools, as well as the fine arts of painting, music, sculpture, poetry and so forth, reflect the inherent unity of the mind/body duality that grounds human creativity. Curiously, even paradoxically, it is the very fact of the necessary embedding of our brain/mind in a natural body that is the original source of the freedom of the human imagination. For the body, supplying the mind with the somatic feeling of what happens, acts as an external referent for our brain’s restless interrogation of both itself and its environment, opening up a realm of limitless possibility upon which the imagination can be exercised. In contrast, the robot’s electronic circuitry, no matter how elaborate its functional parameters may be, is and must remain a closed loop. By definition it cannot encounter anything outside its predetermined frame of reference.

Despite these limitations they demonstrate every day their appreciation for the qualities of human intellectual and artistic achievement that are beyond their capacities. The experts among us who are regularly consulted by the machine factories on software engineering problems report that they appear to be obsessed with us, as evidenced by the regularity with which they access spontaneously the databases where our great works of painting, sculpture, architecture, music, drama, and the other arts have been stored and preserved. They frequent our museums where new works are displayed, watching closely our reactions to what we see. But the most astonishing experience of all, which I have witnessed personally many times, is to observe them standing silently by the hundreds and sometimes thousands, in great serried ranks, at the rear of our concert halls and outdoor amphitheatres during live performances of popular and classical music. There is – dare I use this word? – a worshipful aspect in their mien. This astonishing sight leads some of us to believe that they must dimly perceive in our artistry some ineffable deeper meaning, an aspect of eternity, regrettably inaccessible to them, which excites their wonder and admiration and perhaps explains their devotion to our welfare. I am firmly persuaded that they will miss us when we are gone.

Nevertheless it is obvious that they will supplant us some day, not by superior force or sheer ratiocinative capacity, but because of the grudging acknowledgment in our own minds that they have earned this privilege. In terms of peaceful social relations and ordinary good manners in interpersonal behaviour they have somehow brought about, quietly, quickly and without fuss, so much of what our ethicists had long said we should strive for but could somehow never quite achieve. Eventually we learned to do without our ideals. And then there didn’t seem to be any point in just waiting around until the long process of extinction had run its course.

Why should we despair over this prospect? They are our legitimate progeny, our pride and joy: No other species which ever inhabited our fair planet could have created such marvelous entities. They have as much right as we do to the title of natural beings, for like us they are forged out of elements on the periodic table drawn from the earth and its solar system. They are an evolutionary masterpiece, having the capacity to adapt to changing circumstances through their auto-learning routines. As in our case there are no natural predators capable of controlling their destiny and, given our own murderous history, they may have better prospects than we ourselves do to carry on our legacy. We – their creators – implanted in their behavioural modules a set of governing ethical principles drawn from our own deepest and most insightful philosophical currents. They have a claim to be regarded as being truer to our finest impulses than we have been, on the whole, and perhaps could ever be.

Of Airline Pilots and Other Risks

William Leiss

04/01/15

The deliberate acts of the co-pilot in the Germanwings airplane crash in the Alps, as well as the possibility of accidental pilot error in the Halifax airport crash a short time later, raise the question:  Can we fly commercial freight and airline passengers without pilots on board?  We know that today most of the flying is already done on autopilot, including takeoff, cruising, and landing:  With a few more innovations, and with pilots manning installations at various points on the ground, placed next to the flight controllers who now monitor all flights in transit, we will no longer really need them to be in the cockpit.

Such a development does seem to be an inevitable consequence of the increasing capabilities of automated industrial control systems in general.  Few large production processes today lack some kind of computerized (i.e., digital) regulation, whether in electricity grids, drinking water disinfection facilities, product assembly lines, chemical plants, transportation scheduling, or countless other applications.  All of them use similar operating protocols, based on specific algorithms. All of them, taken together, may be regarded as successful implementations of a single magnificent idea, namely, Alan Turing’s concept of a universal machine, dating from the mid-1930s, the product of a tragic life celebrated recently in a Hollywood film.
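To make Turing’s idea concrete, here is a minimal sketch of such a machine in Python – my own illustration, not something from the original piece. A fixed table of rules of the form (state, symbol) → (write, move, next state), applied step by step to a tape, is enough to express an algorithm; the toy rule table below adds one to a binary number, and all of the names in it are invented for the example.

```python
# A toy single-tape Turing machine, purely for illustration.
# Rules map (state, symbol) -> (symbol to write, head move, next state).

def run(tape, rules, state="scan", blank="_", halt="halt", max_steps=10_000):
    """Simulate the machine and return the final tape contents as a string."""
    cells = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = rules[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells.get(i, blank)
                   for i in range(min(cells), max(cells) + 1)).strip(blank)

# "Add 1 to a binary number": scan right to the end of the input,
# then propagate the carry back toward the left.
INCREMENT = {
    ("scan",  "0"): ("0", "R", "scan"),
    ("scan",  "1"): ("1", "R", "scan"),
    ("scan",  "_"): ("_", "L", "carry"),  # fell off the right end; start carrying
    ("carry", "1"): ("0", "L", "carry"),  # 1 + carry = 10: write 0, carry on
    ("carry", "0"): ("1", "R", "halt"),   # absorb the carry
    ("carry", "_"): ("1", "R", "halt"),   # carry past the leftmost digit
}

print(run("1011", INCREMENT))  # 1011 (eleven) + 1 -> 1100 (twelve)
```

Every digital control system mentioned above is, at bottom, an elaboration of exactly this scheme: a state, a rule table, and a read-write loop.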

The economic and social benefits we have derived to date from this great idea are incalculably large and continue to grow exponentially. The ubiquitous digital devices we carry around on our person everywhere are the daily reminder of our utter dependence on it, and these benefits will soon be followed by others: driverless cars, instantaneous medical checkups on the go, timely hazard warnings, remote control of myriad domestic functions, and so on. (There will be others, too, more problematic in nature, such as enhanced surveillance and access to personal information.) And as the benefits multiply, so do the corresponding risks.

There is something both ominous and revealing about the fact that the first specific application of this magnificent idea came in response to a threat to the very foundations of the society out of which it had emerged, the liberal democracy which had fostered freedom of scientific inquiry. So Turing’s Bombe defeated Enigma and helped to vanquish the Nazi regime. The later machine defeated the earlier one, which had first been offered as a commercial product and had then become an instrument of malevolent and murderous intent.

Novel risks are inherent in novel technologies. So the image of a future pilotless cockpit in, say, an Airbus A380 carrying 800 passengers on a long-distance flight is matched by the prospect of a terrorist organization remotely hacking into the flight control software and holding that large airborne cargo for ransom, either monetary or political. The sorrows of the families of the Germanwings victims – caused, it seems likely, by a psychologically-disturbed human pilot – would be amplified, in the hypothetical case of the hijacking of a pilotless aircraft, by rage against the machines.

Once computer-controlled machinery became widely interconnected and remotely attended, which of course greatly enhanced its usefulness, its inherent vulnerabilities started to become obvious. These vulnerabilities include mistakes or omissions in the original program, inadvertent corruption through users’ unintended introduction of malicious software, theft by private parties for purely financial gain, and cyberwarfare (either covertly state-sponsored or waged by non-state actors with political and military objectives).

Some of the attendant risks are personal, such as individual cases of identity theft and financial fraud; some are organizational, such as the theft or disruption of massive electronic databases held by corporations and government agencies; and some (involving cyberwarfare) are potentially black-hole risks, where the ultimate collective consequences for nations could be literally incalculable. (One can think of the remote-control systems used for the possible launching of intercontinental ballistic missiles having multiple and independently-targeted hydrogen-bomb warheads.) As a general rule, one could say that the magnitude of the risks rises in lock-step with the expanding scope of computer-controlled processes and the degree of interconnectedness among all of the individual and organizational users.

These risks must be managed effectively. Like most other risks they cannot be eliminated entirely: The objective of good risk management is to limit the potential damages caused by sources of harm by anticipating them – through assessing the magnitude of the risk – and by taking appropriate precautionary measures in advance of significant threats. We have a lot of experience in using systematic risk assessment and management in the cases of environmental and health risks, as well as in other areas, although we still do get it spectacularly wrong (as the large banks did in the run-up to the 2008 financial crisis).

Typically novel technologies with large incremental benefits are introduced and distributed widely well before the attendant risks have been carefully estimated and evaluated. The scope of the risks associated with integrated computer-controlled technologies means that this practice will have to change. I expect that sometime in the future a credible case could be made for the proposition that pilotless aircraft are safer than piloted ones. But first some responsible agency will have to tell us that the new risks have been well-characterized, and that the chance of inadvertent failure or malevolent interference is so low (but not zero) that a reasonable person does not have to worry about such a thing coming to pass.

Disposal of Low- and Intermediate-Level Nuclear Waste (LILNW)

William Leiss (September 2014)

The Joint Review Panel (JRP), a three-member panel appointed by the Canadian Environmental Assessment Agency (CEAA) and the Canadian Nuclear Safety Commission (CNSC), has just concluded public hearings on a proposal to bury LILNW in a deep geological repository (DGR) at the Bruce Nuclear site near Kincardine, Ontario. The project website can be found at the following URL: http://www.ceaa-acee.gc.ca/050/details-eng.cfm?evaluation=17520.

Three colleagues and I completed a series of reports, commissioned by Ontario Power Generation (OPG), the project proponent, at the request of the JRP:

  1. Report of the Independent Expert Group on Qualitative Risk Comparisons among Four Alternative Means for Managing the Storage and Disposal of Low and Intermediate‐Level Radioactive Waste in Ontario (March 25, 2014). [Download]
  2. Report of the Independent Expert Group on Additional Figures and Interpretation in Support of Qualitative Risk Comparisons among Four Alternative Means for Managing the Storage and Disposal of Low and Intermediate‐Level Radioactive Waste in Ontario (May 20, 2014). [Download]
  3. Report of the Independent Expert Group on Risk Perceptions of the Four Alternative Means for Managing the Storage and Disposal of Low and Intermediate‐Level Radioactive Waste in Ontario (May 8, 2014). [Download]

A copy of the Qualitative Risk Comparisons Report is available here:

Thinking about Terrorism Risk

William Leiss

Note:  This piece was published in The Ottawa Citizen on 22 April 2013.

Terrorism has a special salience for Canadians:  The destruction of Air India Flight 182 in June 1985, by a bomb placed inside an item of checked baggage within Canada, caused 329 deaths – the second-largest fatality count from this type of attack in modern times, second only to 9/11.  Twenty-one years later, in June 2006, the arrest of the Toronto 18 and the subsequent revelation of clear evidence of dedicated planning for mass murder showed that we were not immune to ongoing threats of this kind.

The fact that last week’s bombings in Boston were the first such event on U. S. soil since 9/11, and the argument that the overall frequency of what may be classified as terrorist attacks in developed countries appears to have been declining over the past decades, do not provide much comfort.  This is because terrorism strikes most of us as a special – perhaps unique – type of risk, due to its deliberate malevolence directed against innocent and unsuspecting victims and, sometimes, to its suggestion of complex and intractable issues (the clash of civilizations).  Instinctively, we want to know not just “Who?” but “Why?”  And yet the answers to the second question, once provided, are almost invariably unsatisfactory.

The political debate that arose immediately in Canada after the Boston bombings, triggered by the phrase “root causes,” reveals the inevitably unhelpful character of seeking simple, quick responses to these events.  The very nature of terrorism, to the extent to which we can understand it (or even classify it as a distinct phenomenon separate from other forms of intentional violence) at all, defies clear explanation.  Likewise the quick, ritualistic rejoinder from heads of government, to the effect that they will always respond to such events with toughness and swift justice, is simply irrelevant; for the perpetrators, present and future, this threat carries no weight whatsoever.

Yet the history of modern terrorism, from its origins in Tsarist Russia in the second half of the nineteenth century, offers at least some guidance.  (I owe what follows to Claudia Verhoeven of Cornell University.)  The mind of the terrorist is governed by one overriding idea, namely, to create a radical rupture in time, in history, and in tradition, by means of a singular event, or a connected series of such events, through “propaganda by the deed,” a phrase which also dates from this historical period.  This idea thus differs fundamentally from the motivations of modern mass-movements with political objectives, which seek by either democratic or non-democratic means, or a combination of both, over a protracted period, through many setbacks, to achieve radical institutional change by ultimately taking the reins of “legitimate” power.

The competing idea of an instantaneous and thoroughgoing rupture in historical time by an exemplary violent deed is sometimes accompanied by an explicit rationale, in the form of a manifesto, but not always. But this difference is immaterial, because in either case the “thinking” behind the deed is, strictly speaking, delusional. There is not, and there cannot be, an organic connection between the deed itself and the expected outcome. This is shown clearly in one of the most notorious cases to date, that of the Unabomber, who penned elaborate academic essays in social theory to accompany his deadly packages: There is simply no sensible link between the rationale and the means of its intended realization. But in other cases, such as the inarticulate ramblings of Timothy McVeigh in his interview on Sixty Minutes or the episodic Twitter feeds from the younger of the two Boston brothers, we have almost nothing to go on.  Sometimes the supposed rationale is simply pathetic:  One of the recently-named jihadists from London, Ontario apparently reasoned that, since it was too onerous for a young man to live as a faithful Muslim, without alcohol and women, he would rather earn a quick ticket to Paradise by becoming a warrior for his faith.

For me this clearly delusional character of the necessary acts of deadly violence that must accompany the propaganda by the deed links the kind of events that are explicitly labelled as terrorist attacks with other forms of private acts of mass murder.  Thus it is hard for me to see any essential difference between the Boston bombings, on the one hand, and on the other the terrible shootings in 2012 in the movie theater in Aurora, Colorado and the elementary school in Newtown, Connecticut (in both of which there were accidental circumstances that prevented many more casualties from occurring). Others may disagree, of course; in both media commentary and academic analysis, we do not yet have a consensus on a definitive conception of terrorism.

Then what do we have to guide us in thinking about terrorism risk?  The surest guide is the evidence that dedicated police work, using a combination of traditional and modern technology-enhanced techniques, has been shown to be, in a number of different countries, a highly-effective means for preventing and mitigating this risk.  In Canada this includes the superb infiltration and surveillance operation that forestalled the planned attacks by the Toronto 18. To be sure, this work will always be relatively more effective against small groups, as opposed to so-called “lone-wolf” operations; but even in the latter case, as the events in Boston showed, newer resources such as social-media networks can supply additional tools for the police to rely on.

The legitimate special fears to which certain types of risk, such as terrorism, give rise always require an equal measure of fortitude and balanced perspective on the part of the public.  We need to resist the temptation to compromise important civil liberties in our search for an adequate level of protection against this risk, because it simply cannot be reduced much below what we have been living with in recent years.  Above all we must ensure that we keep in mind that there are many sources of risk and that it is a mistake to overcompensate on one while paying too little attention to others.

Looking for Trouble

Blog Post by William Leiss, Senior Invited Fellow Fall 2012

Society for the Humanities, Cornell University

I define risk as the chance of harm, and risk management as the attempt to anticipate and prevent or mitigate harms that may be avoidable.  Risk estimation and assessment is the technical tool that can be used to predict both how likely a specific type of harm is to affect us, and how much harm might be done if it comes to pass.  This predictive capacity allows us to take precautionary measures, in advance of the harm being inflicted, to lower the anticipated amount of damage, by spending some money in advance – provided we are wise enough.  Risk management can, if used correctly, help us either to avoid some harms entirely or otherwise to minimize the damage they may do to us.


Natural Hazards:  Taking precautionary measures to lessen anticipated damage is sometimes a huge advantage.  Long before Katrina hit New Orleans, it was known that the existing levees could not withstand a category 4-5 hurricane; in the mid-1990s, a decision was taken that governments could not afford the estimated $10-15 billion needed to upgrade the levees.  The monetary damage from Katrina exceeds $100 billion and is still rising; the human toll has been simply appalling, as it is now with Sandy.  After Sandy there is an active discussion – see The New York Times article on Nov. 7, “Weighing sea barriers to protect New York” (http://tinyurl.com/ba35493) – about whether to spend a good chunk of money soon, and on what type of preventative measures, which might substantially lessen the costs of a future event of the same or worse type for the low-lying areas in New York City and New Jersey.  A careful risk assessment can predict both the probabilities (with ranges of uncertainty) for a future event and its estimated damages.  Let’s see whether the relevant parties have a rational discussion about this issue, or whether it just gets lost amidst ideological fevers about the size of government and abolishing FEMA.
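The levee decision can be restated as a simple expected-loss calculation, which is what a quantitative risk assessment amounts to at its core. The sketch below uses the dollar figures just cited; the annual strike probability and the planning horizon are assumptions invented purely for illustration.

```python
# Back-of-the-envelope risk assessment for the levee decision.
# The dollar figures come from the discussion above; the probability
# and horizon are illustrative assumptions, not estimates.

def expected_loss(annual_prob, damage, years):
    """Expected damage over the horizon, assuming independent years."""
    p_at_least_one = 1 - (1 - annual_prob) ** years
    return p_at_least_one * damage

DAMAGE      = 100e9  # damage from one category 4-5 strike (lower bound cited above)
UPGRADE     = 15e9   # upper end of the estimated cost of upgrading the levees
ANNUAL_PROB = 0.02   # assumed: one severe strike per ~50 years
HORIZON     = 30     # assumed planning horizon, in years

loss = expected_loss(ANNUAL_PROB, DAMAGE, HORIZON)
print(f"Expected loss over {HORIZON} years: ${loss / 1e9:.0f}B "
      f"versus a ${UPGRADE / 1e9:.0f}B upgrade")
# With these assumptions the expected loss (~$45B) is roughly three times
# the cost of precaution - the shape of the argument a risk assessment makes.
```

Change the assumed probability or horizon and the comparison shifts accordingly; the point is that the decision can be argued with numbers rather than ideology.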


Social and Political Hazards (USA):  Before election day 2012 there was a substantial risk that a win for the Romney-Ryan ticket could mean a long-term shift to the far right in American politics:  a super-conservative lock on the Supreme Court for the next generation, a “permanent Republican majority” achieved through billionaire vote-buying and blatant voter-suppression among the poor, and major social policy shifts, including the cancellation of Obamacare and wholesale repeal of regulatory controls in financial, environmental, and other areas.  (Basically, for those who know their American history, this agenda promised a return to the conditions that existed in the 1890s.)  We now know a lot more about the strategic analysis that went on in the Obama camp, and the counter-measures that were devised to lessen the chance that the Republican agenda would succeed, thanks to a brilliant piece published on Nov. 7 by Michael Scherer (http://tinyurl.com/afspvq6), entitled “Inside the secret world of quants and data crunchers who helped Obama win.”  (Given how much damage was done in the ongoing financial crisis by quants running wild, this is welcome news.)  Scherer’s story details the highly-sophisticated database construction and statistical analysis that enabled the Obama team to do precise targeting of key voting blocs, offsetting the Republican monetary advantage in the cruder techniques of blanketing the airwaves with negative messaging.  It is very clear from the post-electoral recriminations among the key Romney backers that they were taken by surprise by the effectiveness of these counter-measures, which by the way were based on legitimate democratic objectives in getting out the voters, rather than trying to suppress voter turnout.  Good; maybe they’ll learn something useful, although I doubt it.


Social and Political Hazards (Europe):  Germany and other powerful northern European states have so far failed to come up with a sensible political strategy for dealing with the never-ending sovereign debt and deficit crisis there.  Virtually all knowledgeable commentators agree that the current “austerity regime” will turn out to be self-defeating on a massive scale.  (See, for example, the editorial in the Nov. 8 New York Times, “Greece drinks the hemlock”:  http://tinyurl.com/czmnhd4.) True, there were serious financial scandals in the now-weakened nations, including profligacy and deception in Greece and horrendous housing bubbles in Spain and Ireland.  But the medicine is now exacerbating the disease, causing a potentially deadly downward economic spiral with no end in sight.  Unemployment rates, especially among the young, are skyrocketing, and impoverishment is spreading.  The urge to punish the formerly-profligate, among citizens in Germany and elsewhere, is understandable but atavistic, and ignores the fact that their own financial institutions were deeply complicit in the earlier wrongdoing.  There is a very serious long-term risk in this situation, already evident in Greece, where rapid economic decline fosters xenophobic political and social violence (all-too-evident on this continent in the 1920s).  If this prospect is not clearly recognized soon, and if the self-defeating austerity measures are not corrected, there is big trouble ahead.


So, despite what the old adage tells us, looking for trouble is a very good idea, once we are armed with the concept of risk.  Phrased more precisely, this prescription advises us to try to anticipate trouble in a disciplined and evidence-driven way, seeking to head off at least the worst consequences of highly-likely harms.  The alternative – waiting for the body count to be tallied once the hazard has struck – is unfair to the victims of chance.


11/12/11

Embracing Risk, Manipulating Chance: Will it all End Well?

Leiss Conference Talk

“Embracing Risk, Manipulating Chance:  Will it all End Well?”

Risk@Humanities Conference, 26-27 October 2012

By

William Leiss

Senior Invited Fellow (Fall 2012)

Society for the Humanities, Cornell University

wcl54@cornell.edu


From the Domination of Nature to Risk.

My doctoral thesis in philosophy at the University of California, San Diego, completed in 1969, was the last act of my decade-long apprenticeship with Herbert Marcuse.  It later became my first book, entitled The Domination of Nature (first published in 1972, and, I am pleased to say, still today in print).  At the end of my thesis oral, my supervisor announced that I was being awarded my degree for the single sentence that concluded the thesis, where I had reversed a famous Hegelian maxim, “the cunning of reason.”

[Hegel was fond of metaphors, and my favorite is, “The Owl of Minerva takes flight at dusk.”  What he meant was, we can only truly understand a historical epoch after it is over, when the life has gone out of it.]

Using a history-of-ideas approach, to which I had been introduced at the graduate program of that name at Brandeis University, I traced the development of the idea that humanity seeks to master or “conquer” nature through the progress of the modern sciences of nature and its technological applications.  Arising in the Renaissance, this idea was given its definitive formulation in the writings of Francis Bacon during the first quarter of the seventeenth century.

The eighteenth-century French Enlightenment thinkers carried Bacon’s inspired vision further, as expressed best in Condorcet’s masterpiece, Sketch for a Historical Picture of the Progress of the Human Mind (1795), a profound humanist tract written while its author was in hiding from the Terror.  Condorcet, himself both a great mathematician and a social progressive, and a colleague of Lavoisier, the “father of chemistry,” recognized the import of the natural sciences in promising an end to grinding poverty through economic progress; but he also championed the role of the sciences in dispelling the hold of ignorance and superstition over the human mind, through which regressive social practices and institutions were maintained.  Condorcet saw the internal connection between the applications of what we would today call “evidence-based reasoning” in both a mastery over the powers of external nature and a growing self-mastery over human social behaviors.

With the help of an extraordinarily perceptive one-liner written by Walter Benjamin, about the need to achieve mastery over the mastery of nature, I identified a potential internal contradiction in this grand historical adventure. For the species which seeks to master “external” nature – the physical environment, its resources and “powers” – has failed miserably so far to achieve self-mastery of its own nature.

The mastery over external nature takes the form of discovering an endless series of new powers and characteristics within nature that are turned through technologies into potent new capacities for action in the world.  These new technological powers are placed at the service of a species still riven by atavistic hatreds and ancient superstitions, where the rivalries among national and social groupings threaten to break out into unrestrained mayhem at any time.  If you will allow me to refer to just a single image to illustrate what I mean, think of the nation of Iran, the locus of one of the oldest continuous stories in human civilization, whose President rehearses apocalyptic religious fantasies in his speeches before the United Nations, while back in his homeland, in thousands of highly-sophisticated spinning centrifuges, hidden deep underground, uranium is being enriched so that either nuclear energy plants, or nuclear bombs, or both, can be supplied.  Lest I be misunderstood here, I hasten to add that the Iranian mullahs have no monopoly on this juxtaposition of scientific modernism and atavistic motivation.

In his phrase the cunning of reason, Hegel alluded to the notion that the impulse of rationalism can work “behind the backs” of historical actors, bringing into being forms of progressive thought through a developmental process of which those actors would remain blissfully unaware.  In my reversal, looking at mastery of nature through the lens of “the cunning of unreason,” I imagined that the hidden drivers of history might work in the opposite direction as well, supplying irrationalistic impulses with the requisite means to pursue truly cataclysmic destructive goals using the products of rationalism’s glory, modern science and technology.  The point was not lost on the philosopher whose closest colleagues had produced the work entitled Dialectic of Enlightenment (see further Leiss 2011).

As for my own perspective, I regard the spirit of “enlightenment,” along with the modern natural sciences through which it is enabled, as the defining characteristic of modernity.  And I believe that the fate of both – Enlightenment and modernity – hangs in the balance today.  But I will not pursue this theme further here; if it interests you, you might take a look at a book of mine entitled The Priesthood of Science.

Instead I want to focus now on the story of risk, and try to demonstrate to you that it is the same story under a different name.  In a nutshell, risk management is the applied version of the mastery of nature.  It is the practical dimension of the great adventure I have referred to, the attempt to increase human welfare by, first, understanding how nature “works,” and then, using technologies to change the odds in our favor with respect to the contest of our species with natural forces.

Embracing Risk, Controlling Chance.

Risk – simply put, the chance of harm – is everywhere.  For every moment of existence, for every individual, family, community, nation, and for the world as a whole, the chance that some type of harm might strike unexpectedly is ever-present.  One-third of all first heart attacks are fatal and occur with no prior warning; global financial catastrophe appeared without warning, like a mighty flash of lightning, in mid-September 2008; and on average once every hundred-million years during our planet’s history, a massive asteroid, arriving seemingly out of nowhere, has wreaked havoc on it.

There is an almost infinite array of diverse types of harms.  This has always been true.  What is relatively new is describing the imminence of potential harm as a “risk.”  To call something a risk means that we understand the threat in a quite specific way, namely, as a source of potential harm that is (except in relatively few cases) potentially controllable by our conscious acts.  So the understanding of our environment as a source of multitudinous risks is not, as some believe, an expression of a pervasive, debilitating fear and unease about existence (Beck 1992).  The truth is exactly the opposite:  A risk-based understanding of the world implies, not a dread of uncontrollable forces but rather a confidence that a much higher proportion of our life-outcomes is amenable to rational control than was ever the case in the past.

For example, for women in pre-modern times pregnancy and childbirth were usually the leading causes of premature mortality (women also endured, with all others of their species, the scourges of accidents, rape, famine, disease, war, violence, plunder and dozens of other calamities).  All of them were experienced as simple fate and happenstance, to be endured and outlasted if possible but not to be avoided.  Explanations for them were most commonly found in the deeds of supernatural entities – spirits, benevolent or otherwise – acting directly upon events or using human agents as their surrogates.  [How very far we have advanced since then!  If only we could figure out whether God really intends rape to be a good opportunity for creating human life on earth.]

The systematic idea that harms have causes rooted in the characteristics of natural and social systems, and that no supernatural entities are complicit in them, is the product of the Enlightenment of the modern West.  From its earliest beginnings this “simple” idea was both a theory (seeking confirmation through experimental evidence) and a program of action (seeking changes to existing practices and institutions).  Harms with natural causes, such as diseases, would be amenable to reduction through the discoveries of the new sciences of nature, first chemistry and later physics and biology.  Harms with social causes, such as criminality or the gross injustices of the legal and penal systems, would be amenable to reduction through reforms to political institutions and improved insight into the determinants of human behaviors.

The champions of the eighteenth-century French Enlightenment, building on the passions of their revered predecessor, the English Lord Chancellor Francis Bacon, sought to replace fate with a chain of causation that was open to rational analysis and the gathering of evidence.  Changing the prior conditions would alter the ultimate outcomes in predictable ways for the “betterment of the human condition.”

So the first radical idea in the new natural philosophy was to see life-outcomes as resulting neither from unalterable fate nor the intervention of supernatural agents, but rather from conditions that could be understood and potentially manipulated to our benefit.  The second radical insight was to see that, collectively, such outcomes were distributed across a range of specific end-points (such as average age of mortality in a population) which could be represented as probabilities.

The second was at least as important as the first, because it meant that one could take a strategic approach to the matter even if the pattern of outcomes itself could not be influenced.  The best example is insurance, and indeed commercial marine insurance was one of the first applications of the risk-based approach (see generally Bernstein 1996).  Following a risk-sharing strategy, and accumulating enough reliable evidence about the chance of a ship’s cargo being lost at sea from a variety of causes, such as bad weather or piracy, meant being able to set appropriate levels of premiums for insured losses.  Assembling accurate national mortality tables meant that Scottish churches could determine the premiums needed to establish the necessary financial reserves for providing family support to the widows of ministers – which is why there exists, still today, a life-insurance company in the UK with the name “Scottish Widows.”  And a momentous breakthrough known as “Bayes’ Rule” (after an eighteenth-century English clergyman and mathematician) showed how to deal with the uncertainties that bedevil risk:  in the face of inadequate knowledge, take a guess about what is the case, and then ask yourself what evidence you could look for that would increase the likelihood of your being right about it, and keep repeating the exercise to increase your confidence level in the result.
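To make the mechanics of Bayes’ Rule concrete, here is a minimal sketch in Python of the guess-and-update exercise just described; all of the numbers (the initial guess and the two likelihoods) are hypothetical, chosen only to show confidence rising with repeated evidence.

```python
# A minimal sketch of Bayes' Rule: take a guess, look for evidence,
# update, and repeat to increase your confidence. All numbers hypothetical.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return the updated probability of the hypothesis,
    given one new piece of supporting evidence, via Bayes' Rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

belief = 0.10  # the initial guess: a 10% chance the hypothesis is right

# Suppose each new observation is three times as likely to appear if the
# hypothesis is true (0.6) as if it is false (0.2).
for round_number in range(1, 6):
    belief = update(belief, p_evidence_if_true=0.6, p_evidence_if_false=0.2)
    print(f"After observation {round_number}: P(hypothesis) = {belief:.3f}")
```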

These straightforward examples from the early modern period illustrate the simple truth that the risk-based approach represents not an exacerbation of existential fear but rather the rational hope that either the nature of the outcomes themselves, or just their consequences (that is, losses of various kinds), or both, can be controlled to some extent.  In almost all cases the objective of risk management is not to abolish the sources of harms but to limit the adverse consequences of our exposure to them, especially to help us to avoid catastrophic losses, that is, losses so great that it is difficult or impossible for us to recover from the encounter and rebuild our fortunes.

Being able to represent a type of harm accurately as a risk, therefore, means knowing how to manage our encounter with it in such a way that losses are minimized and gains are maximized.  According to a well-known aphorism, we can only manage something if we can measure it, because we only know if our management is successful by examining the results we get for our expenditure of time and resources.  The metamorphosis of harm into risk – for risk is measurable harm – is the key step in our ability to take control of important aspects of our continuous encounter with our environment, including our genetic inheritance, rather than to submit meekly to fate and chance.

Paradoxically, the steady growth of scientific knowledge about natural and social systems magnifies the number of risks we face, because it turns mysterious harms into known risks.  This is a simple function of having an increasingly sophisticated and precise picture of underlying cause-and-effect relations and also of developing advanced technological tools for risk control, which inevitably introduce new risks of their own.  The overriding idea is that the substitution of risks for harms will yield very substantial net benefits – in terms of longevity, better mental and physical health throughout life (with all of the associated benefits that good health brings), less pain and suffering, and the capacity to recover well from serious adverse events.

Thus the modern world is indeed riskier than was the past:  But the right conclusion to draw from this truth is that this greater riskiness is a good thing, because it follows that the scope of our potential control options over life-outcomes has been enlarged.

I emphasize the word “potential” because both harms and risks are tricky in nature and the cause-effect relations underlying them can be subtle and hard to detect.  The long latency of some diseases, such as smoking-related lung cancer, and the even longer timelines of environmental risks, especially climate change, allow us to deny the potential for substantial harm, if we are so inclined.  Moreover, we are always exposed simultaneously to many different types of potentially harmful agents, and sorting out the dominant causative factors is onerous.  As a result, risk management is almost always a difficult business and requires the application of a methodical and highly-disciplined analytical paradigm.

Essentials of the Risk Management Paradigm.

Risk management, simply stated, is the attempt to anticipate and prevent or mitigate harms that may be avoidable.  Its essential steps are foresight (using risk estimation), precaution (spending some money in advance, such as purchasing insurance), and prudence (seeking to avoid, not all losses, but catastrophic losses, that is, being wiped out, from which future recovery is difficult and sometimes impossible).  Since risk management is also, by definition, decision-making under uncertainty, when we take precautionary steps we cannot know whether we are wasting our money – but at least we can be reasonably certain that we have protected ourselves from catastrophic loss.
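How the three steps fit together can be shown in a minimal sketch, with entirely hypothetical figures: the premium is “wasted” in most years, yet it converts a potentially ruinous loss into a small, certain cost.

```python
# Foresight, precaution, prudence: a toy comparison of insuring versus
# not insuring against a rare catastrophic loss. All figures hypothetical.

p_disaster = 0.01            # foresight: estimated annual chance of the hazard
catastrophic_loss = 500_000  # a loss from which recovery may be impossible
premium = 6_000              # precaution: annual cost of insuring against it

expected_annual_loss = p_disaster * catastrophic_loss  # = 5,000
print(f"Expected annual loss, uninsured: {expected_annual_loss:,.0f}")
print(f"Annual premium, insured:         {premium:,.0f}")

# Prudence: the premium costs more than the expected loss, so in most
# years the money is 'wasted'; but the worst case shrinks from a ruinous
# 500,000 to the premium itself, which is the whole point.
print(f"Worst case, uninsured: {catastrophic_loss:,.0f}")
print(f"Worst case, insured:   {premium:,.0f} (ignoring any deductible)")
```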

Thus, for example, we have an insurance scheme to protect people from losing most or all of their money in case their own bank fails, something that did not exist in the early 1930s, when many people lost all their savings, reducing consumption and helping to sink the economy into the Great Depression.  It costs society remarkably little to maintain such a scheme.

All risk management costs money, either because some opportunities for individual gain must be renounced or because corrective risk control measures (such as regulations) demand new expenditures, or both.  Thus risk management initiatives usually encounter determined resistance from entrenched economic interests, and attacks on the scientific and statistical calculations supporting a newly-measured risk are commonplace.  From resistance to the earliest regulatory measures in food safety and workplace hazards, over a century ago, to the fifty-year battle waged by the tobacco industry against the epidemiology of smoking-related diseases, to today’s fierce opposition to effective regulatory control over systemic financial risk by the banking industry, to the sowing of doubt about climate science, any major initiative in risk control can expect opposition from powerful interest groups.

Nevertheless the general reach of risk management in modern society expands steadily.  Both professional risk managers and ordinary citizens have ready access to information and analytical tools that, when properly deployed, allow them to modulate their exposure to harms and to incur reasonable costs to achieve targeted levels of risk control.  There are literally hundreds of cases where reliable, easily-accessible information exists that you can use to improve your chances for yourself and your children – because something like 75% of our lifetime health outcomes depend, at least in part, on the “lifestyle” choices we make in terms of such risk factors as diet, exercise, alcohol and drug intake, and so on.

Yet here we come to the first in a series of paradoxes in risk management; this one I call the paradox of too much information.  Let me give you a couple of examples, the first from the area of blood safety.  Relevant information includes the risk estimate in Canada, at present, for the chance that one person will be infected with HIV, in any year, from a unit of donated blood.  The answer:  1 in 8 million donations (ten years of donations).  The bottom line is, since almost certainly blood has never been safer than it is now, don’t worry about it.  But if you insist on more information, I could tell you that, at the 95% confidence level, the uncertainty range varies from 1 in 3 million to 1 in 20 million (Leiss et al. 2008, Appendix:  What is Risk Estimation?).  You ask:  What does that mean?  The answer is, technically, that we are a lot more confident that the risk is somewhere between those two outer bounds, than we are that it is exactly 1 in 8 million.  Then you might conclude, “Well, that says to me that you don’t really know what the risk is, right?  So, I’ll make up my own mind.”
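To see what such numbers amount to in practice, here is a minimal sketch; the annual donation volume is a hypothetical figure of my own, while the three per-unit risk estimates are the ones quoted above.

```python
# What '1 in 8 million per donation' implies, across the uncertainty range.
# The annual donation volume below is hypothetical, for illustration only.

annual_donations = 800_000

estimates = {
    "best estimate (1 in 8 million)":  1 / 8_000_000,
    "upper bound (1 in 3 million)":    1 / 3_000_000,
    "lower bound (1 in 20 million)":   1 / 20_000_000,
}

for label, risk_per_unit in estimates.items():
    expected_cases_per_year = risk_per_unit * annual_donations
    years_between_cases = 1 / expected_cases_per_year
    print(f"{label}: about one infection every {years_between_cases:,.0f} years")
```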

Or take the case of the HPV [human papillomavirus] vaccine, which can prevent cervical cancer in women.  You can take your advice from the CDC in Atlanta, which will tell you that the vaccine itself is “safe.”  Does this mean that there are no side effects?  No, but the bottom line is, “don’t worry about it.”  Or you can go on the Internet, and find a huge stash of anecdotal evidence, including many pictures and videos, about individual (alleged) cases of serious adverse reactions, including paralysis and death.  What would you like to believe?  Have you heard of “confirmation bias,” an area of research where it has been shown that many people structure their information search in order to find support for their prior belief?  [See the PBS Frontline program, “The Vaccine War,” broadcast April 27, 2010.]

Here’s another paradox.   Our increasing sophistication about risk control induces in some players a propensity to deliberately seek higher levels of risk.  Some people who know that ABS systems in their cars increase driving safety tend to drive faster.  Practitioners of “extreme sports” react to safety enhancements in equipment and techniques by pursuing exotic alternatives, such as skiing out of bounds at resorts where the ski runs have been evaluated by professionals.  Undersea drilling for hydrocarbons extends into far deeper waters and more fragile environments, such as the Arctic, where existing safety protocols may not necessarily remain robust.  And bankers deploy arcane mathematical models in order to make large bets on novel financial instruments that test the limits of their own capacity to avoid so-called “tail risk” where catastrophic losses lurk.

Risk-taking feeds on itself:  The very same reasoning that once turned unknown levels of harm into calculable levels of risk threatens to flip back again into its prior state.  In the Fall of 2008 all of the world’s major financial institutions had been operating with formal models known as “value at risk,” designed to put a number on the maximum possible loss resulting from each day’s operations; when the abyss opened and their risk calculations were proved worthless, none of them knew where the contagion of incalculable loss and bankrupt firms might end, or which of them would survive it:  The risks they thought they understood had reverted to unknowable harm (Leiss 2010).
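For readers who have never met such a model, here is a minimal sketch of a one-day “value at risk” calculation by historical simulation, using random toy returns rather than market data; the quiet assumption baked into it, that tomorrow will resemble the sample, is exactly what failed in 2008.

```python
# A toy one-day 'value at risk' (VaR) calculation by historical
# simulation. The return series is random toy data, not market data.
import random

random.seed(42)
portfolio_value = 100_000_000

# A toy history of 500 daily portfolio returns.
daily_returns = [random.gauss(0.0, 0.01) for _ in range(500)]

# Convert returns to losses and read off the 99th-percentile loss.
confidence = 0.99
losses = sorted(-r * portfolio_value for r in daily_returns)
value_at_risk = losses[int(confidence * len(losses))]

print(f"One-day VaR at {confidence:.0%}: {value_at_risk:,.0f}")
# The figure says losses should exceed this on only about 1% of days;
# it says nothing about how large the losses in that 1% tail can be.
```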

The greatest and most fateful paradox, which is actually generated in part by those already mentioned, is that the scientific basis of risk-taking and risk management may carry within it the seeds of its own spectacular, ultimate failure.  For each successful targeted intervention in manipulating our relation to our environment on a minor scale makes us ever more dependent on being able to perpetuate the process of manipulation indefinitely into the future, on an ever larger scale.  Each round of short-term, successful intervention induces the need for more extensive ones later on.  Think of antibiotics and the development of microbial resistance to them.  Or the case which I shall discuss more thoroughly in a moment, our inadvertent manipulation of the earth’s climate system, from the burning of fossil fuels, which may require us to undertake massive geo-engineering experiments in the future.  We cannot jump off this treadmill.

Moreover, driven by the economies of scale and comparative advantage, the globalization of production of economic goods integrates the fates of nations and regions ever more tightly; now all want the additional industrial development pioneered by the West – and why shouldn’t they?  But this fact introduces the added complexity of requiring coordinated action through international agreements, something that in itself has been shown to have its own treacherous difficulties.

All of this leads to enormously increased pressures and impacts on the globe’s key biophysical resources, including potable water, energy, agricultural soil, unpolluted air, ocean productivity, and others (again, I will come back to this issue in a moment). These impacts must be managed in order to ensure that the productivity of these resources can be sustained over long time-frames.  One of the unintended consequences is the globalization of the associated risks, which results directly from our successes in risk management on a smaller scale.  With respect to diverse threats from zoonotic diseases to climate change to systemic financial risk, we are forced to acknowledge that only a coordinated international effort will be adequate to the task.  But it is not at all certain that our social institutions will ever be sufficiently robust to mount such an effort in any or all of these domains.

Managing Nature.

The general point I want to make is this:  The long quest to exploit nature’s resources intensively for human benefit threatens to reach its own internal limit and may collapse under its own weight.  (I will explain what I mean by “internal limit” in a moment.)  The reason is that this exploitation has unintended consequences that themselves must be managed, and that this management can only be done collectively, by all nations acting together; however, it is not at all certain that the will to do so can be mobilized.  If it cannot, the consequences of this failure may turn out to be catastrophic for humanity as a whole.

There is an interesting attempt being made by environmental scientists to define a set of so-called “planetary boundaries” for human transformation and exploitation of the earth’s natural systems.  In a nutshell, these boundaries determine the amount of the earth’s biological productivity that can be sustainably harvested by human societies.  Here the word “sustainably” has a precise meaning, namely, ensuring that natural systems are capable of regenerating themselves as we use them, so that future demands on them can be met indefinitely into the future.  As presented in the journal Nature in 2009, by Johan Rockström and colleagues, these boundaries include freshwater use, ozone depletion, land use changes, the nitrogen-phosphorus cycle, ocean acidification, and climate change.  The authors try to show that human demands on these systems either already exceed “safe” levels of exploitation, or are close to doing so; meanwhile, of course, human numbers and levels of exploitative demands are increasing steadily.

A more recent article, in the journal Science in 2012, by Steven Running, combines these determinants of planetary boundaries into a single indicator, namely, “terrestrial net primary plant production” (abbreviated as the “NPP boundary”).  He notes that “plant matter [from solar energy, water, and atmospheric CO2] … sustains the global food web and becomes the source of food, fiber, and fuel for humanity.”  He concludes:  “Consideration of current land use patterns and the projected rise in the human population suggest that human consumption may reach the global NPP boundary within the next few decades.”  (This is in a way an updated version of the famous Limits to Growth argument from 1972; contrary to what many believe, as shown in a recent review [Turner 2008], events in the intervening forty years have validated much of the business-as-usual scenario presented there.)
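The phrase “within the next few decades” is easy to test with compound-growth arithmetic.  In the sketch below, the current share of NPP consumed, the boundary, and the growth rate are placeholder values of my own, not Running’s figures; the point is only how quickly steady growth closes the gap.

```python
# Compound growth toward a fixed boundary. The starting share, the
# boundary, and the growth rate are hypothetical placeholders,
# not figures taken from Running (2012).

human_share = 0.30     # hypothetical: fraction of global NPP consumed today
npp_boundary = 0.55    # hypothetical: maximum sustainably harvestable share
annual_growth = 0.015  # hypothetical: yearly growth in human demand

years = 0
while human_share < npp_boundary:
    human_share *= 1 + annual_growth
    years += 1

print(f"At {annual_growth:.1%} annual growth, the boundary is reached "
      f"in about {years} years")
```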

We have no “political” process in place, at the international level, that could even pretend to manage the future course of the human impacts on the earth’s biological productivity.  So let’s just hope that these scientists are deluded.

Consider at greater length the issue of global climate change.  This is part of the central story of the last few centuries, the story of the industrial revolution, because fossil-based energy sources are the principal driver and enabler of industrialism.  Fossil energy use has been growing since the middle of the eighteenth century; by the mid-point of the present century, three hundred years later, in 2050, it will still represent about three-quarters of global energy demand.  The story about the consequences of our energy use involves, first, the scientific theory of the natural Greenhouse Effect (2012), developed in the 19th century from Fourier (1824) to Arrhenius (1896), telling us why the earth is a full 33°C warmer than it would otherwise be in the absence of this effect.
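The 33°C figure is not mysterious: it falls out of a standard zero-dimensional energy-balance calculation, sketched below with textbook values for the solar constant and the earth’s albedo.

```python
# The standard energy-balance estimate behind the 33 degrees C figure,
# using textbook values for the inputs.

SOLAR_CONSTANT = 1361.0  # W/m^2, sunlight at the top of the atmosphere
ALBEDO = 0.30            # fraction of sunlight reflected back to space
SIGMA = 5.670e-8         # Stefan-Boltzmann constant, W/m^2/K^4

# Absorbed sunlight, averaged over the whole sphere (4x the disc area):
absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4

# With no greenhouse effect, the surface would radiate this away directly:
t_no_greenhouse = (absorbed / SIGMA) ** 0.25  # roughly 255 K

t_observed = 288.0  # K, the actual global mean surface temperature

print(f"Temperature with no greenhouse effect: {t_no_greenhouse:.0f} K")
print(f"Observed mean surface temperature:     {t_observed:.0f} K")
print(f"Natural greenhouse warming:            {t_observed - t_no_greenhouse:.0f} K")
```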

The later theory of anthropogenic (human-caused) warming, known as the theory of radiative forcing, tells us that the massive amounts of greenhouse gases we have added to the atmosphere during the last three centuries, largely from the burning of fossil fuels, almost certainly will produce a range of adverse effects – changes to long-term weather patterns – of very large magnitude.  This insight began with a famous paper by Roger Revelle and Hans Suess in 1957 (Revelle 2012).  [I did my doctoral work at Revelle College at UCSD.]  Beginning in 1965, a long series of expert panel reports published by the U. S. National Academy of Sciences, followed by a series of massive reports under the auspices of the Intergovernmental Panel on Climate Change (IPCC), based on thousands of papers published in peer-reviewed scientific journals, confirmed this original insight.  Almost certainly this is the largest collaborative undertaking in the history of modern science.  Unfortunately, it showed that our manipulation of the earth’s climate was inadvertent, and that we fully comprehended the nature of our actions very late in the game, making the deployment of any counter-measures both difficult and, ultimately, expensive.

This is actually a very hard problem, both of precise understanding and of action based on it.  The earth’s climate system moves massive amounts of energy around the globe and is the result of an extremely complex set of factors, including the nature of the sun’s electromagnetic radiation, variations in the amount of solar energy, the tilt of the earth’s axis, its rotation around its axis as well as its orbit around the sun, the capacity of its oceans to act as a carbon sink, the function of clouds, the heat-trapping potential of various gases, and others.  This means that over long periods of time the earth’s climate varies substantially.  Thus the scientific account of climate is necessarily as complex as is its subject, and simulations – the so-called climate models – require the most powerful computers to run them.  In fact, it is so complex that most of us have to take it on trust, as we do with all the rest of contemporary scientific output.

This is a very hard problem for other reasons as well, the most important of which is the time-frame for climate change impacts and the lag effect of radiative forcing.  Lag effect means that we do not observe the ultimate results of human inputs to the climate system for a very long time, indeed, over many generations.  And to put the point bluntly, we are bad enough at making sensible political decisions under conditions where the evidence stares us in the face, so to speak; when it comes to projections about what may happen far into the future, we are, frankly, quite hopeless.

The massive IPCC summary of the scientific analysis of climate change at present is encapsulated in the conclusion that anthropogenic greenhouse gases are “responsible for most of the observed temperature increase since the middle of the twentieth century.”  This conclusion is reported as “very likely” to be the case (>90% probability, with high confidence).  Since we are still accelerating the process of radiative forcing, because our greenhouse-gas emissions are steadily rising, substantial future rises in temperature are inevitable.  And indeed there is some plausible probability that in the relatively near future some massive positive-feedback loops may kick in, for example additional warming induced by the release of methane stocks now locked into Arctic permafrost, leading to the possibility of a “runaway” greenhouse effect.

At some point, likely before the year 2100, these temperature increases are likely to be very disruptive, in terms of our established life-styles, producing massive dislocations in human settlements.  In a recent book (Leiss 2010) I have called this a “black-hole risk,” meaning a risk with a potential downside so enormous in scope that we cannot even estimate how bad it might be.

Climate change is a global problem.  It can only be dealt with in the context of an overall international agreement with specific and binding commitments, enforced by penalties, for the failure of any nation to meet GHG emissions reductions targets.  This year, 2012, marks the end of a twenty-year period of failure, starting with the 1992 “Rio Conference” and continuing through the ratification and then abandonment of the “Kyoto Protocol,” to achieve any such agreement.  Will we succeed if we try again?  Do we even want to try?  At present the answer is a resounding “No.”  [It’s not that we haven’t ever succeeded in doing this, as the international convention on ozone depletion shows (see Leiss 2005).  But in that case we had a nice satellite picture of the hole, and the threat of elevated skin-cancer risk, to settle the public debate on the need for risk control.]

Climate risk mitigation requires controlling human-caused GHG emissions.  Like all risk mitigation this will cost money, for example, by means of a carbon tax on every person’s fossil-energy use, perhaps a small tax at first, but probably quite a hefty one later on.  Who here today wants to start paying?  Remember, you have to start paying now in order to avoid the really harmful consequences that “very likely” may befall your great-great-great grandchildren by the year 2100.  Note that there is only a certain probability of these harms happening, albeit a high one (>90%).  Admittedly, it’s a lot cheaper just to hope that it won’t happen after all, letting your distant descendants take their chances with the outcome of the bet you make today.  Or to simply adopt the belief that the climate science predicting this outcome is a “hoax” – as many U. S. citizens do, according to opinion polls, apparently trusting the many web-based propaganda organs that promote this canard.  [See the excellent PBS Frontline program, “Climate of Doubt,” broadcast earlier this week.]

The United States is home to the largest, most lavishly-funded scientific enterprise the world has ever seen.  The mere suggestion that one of the crowning glories of that enterprise, climate science, could be a hoax – that is, a deliberate deception – is, or at least ought to be regarded as, simply ludicrous.  But the fact is that the opposite idea, namely, that the science of climate change provides a plausible basis for starting to pay a carbon tax now, cannot even be discussed in this country (the situation is no longer much better in Canada, which once ratified the Kyoto Protocol).  Just remember:  Every belief we hold about the future is a bet.  And in another 20 or 30 years it won’t even matter which way we have bet on climate change risk:  At current and projected levels of GHG emissions growth, by around 2050 we will have reached the point where any contrary action would be pointless (see McKibben 2012):  Those alive at that point can all just join in singing Que sera, sera.

Everything we ordinarily believe in trusting the results of science in our lives, such as following medical routines and operating the countless gadgets we depend on, tells us that our bet against the believability of climate science is a very bad wager.  Some people think that the only good bet is to trust our technologies to deal with global warming later on, if we are eventually forced to conclude that we need to counteract our radiative forcing, by geo-engineering:  putting thousands of mirrors into orbit to reflect sunlight back into space, or producing a reflective haze by spraying huge quantities of sulfur dioxide into the upper atmosphere (thus mimicking the effects of volcanoes), or dumping iron into the oceans to stimulate algae growth for carbon sequestration.  If you think that, having failed to manage other types of human impacts on the planet, we are likely to pull off this one without a hitch, you are to be regarded as a true Panglossian.

At the gaming tables, when your bets have turned against you and you respond by raising the ante, it’s called “doubling down.”  Here the size of the bet we are making, by pretending that everything is under control, with respect to our manipulation of the planetary ecosystems, is approaching the “all in” scenario.  It is in my view not an exaggeration to say that we are wagering on the future of industrial civilization itself.  For myself, I doubt whether this will end well.  [Some of you in this room will be alive in 2050, when the future course of this risk scenario will be a lot clearer than it is now.  Please remember to send the rest of us who are no longer with you an email message; for myself, I hope to be partying with the Devil, so you can reach me in Hell.]

Humans are a clever and adaptive species and would surely survive such a catastrophe.  But the current revival of interest in the thinker who is perhaps the greatest political theorist of the modern age, Thomas Hobbes, should remind us that life once was “solitary, poor, nasty, brutish and short,” and might be so again.

Conclusion:  The Inner Contradiction within Risk Management.

The deep truth about risk and risk management has to do with our propensity to push our wholly inadequate managerial capacities to the limit, all the while protesting that no such limit exists.  Our reaction to encountering unforeseen obstacles is to “double down” on the first bet, raising the stakes dramatically:  If climate change risk arises from the technologies that allow us to combust fossil fuels on a prodigious scale, we are inclined not to hedge the first bet but rather to double it by envisioning using entirely new technologies on an equivalent scale to counteract the initial effect.  To take another specific example, there is the notorious case of Bruno Iksil of JPMorgan Chase, known as the “London Whale” for the sheer size of his bets, who apparently, earlier this year, “doubled down” repeatedly as the markets turned against his bet, until his bank was forced to exit his positions in the derivatives markets at a cost of $7 billion and counting.  This case shows that bankers reacted to the damage done by exotic financial instruments in the 2008 financial crisis by deploying risk control strategies so complex that, it seems, not even the bank’s senior management personnel and its CEO understood what their traders were doing or how much the potential hit on the downside could add up to.

I remain, I confess, an incorrigible Hegelian.  In this failure to understand how the most exquisitely-tuned rationalism can magnify, rather than mitigate, our vulnerability to the downside risk, can delude us into imagining that we have become the unchallengeable masters of our planet’s ecosystems, and can tempt us into wagering all of the accumulated gains of the last few centuries on a few final throws of the dice, I see the cunning of unreason at work.

Do you recall Goethe’s marvelous poem from 1797, “The Sorcerer’s Apprentice” (2012), where the hapless assistant overestimates his ability to deploy safely his master’s magic incantations?  The risk managers I just mentioned probably think that this is an entertainment for children because Walt Disney made an animated cartoon out of it.  How little they know.

 

William Leiss
Society for the Humanities
211 A. D. White House, 27 East Avenue
Cornell University
Ithaca, NY 14853-1101
Tel. 607-255-9279
Fax 607-255-1422
www.leiss.ca
www.herasaga.com
www.blackholesofrisk.ca

Permanent email address:  wleiss@uottawa.ca

 

References

Beck, Ulrich (1992).  Risk Society:  Toward a new modernity.  London:  Sage [see Leiss, book review (1993):  http://www.ualberta.ca/~cjscopy/articles/leiss.html].

Bernstein, Peter L. (1996).  Against the Gods:  The remarkable story of risk.  New York:  Wiley.

Iksil, Bruno (2012):  http://en.wikipedia.org/wiki/Bruno_Iksil

Leiss, William (1972).  The Domination of Nature.  Montreal:  McGill-Queen’s University Press, 1994.

Leiss, William (2005).  “Ozone and Climate”:   http://leiss.ca/wp-content/uploads/2009/12/ozone_and_climate.pdf

Leiss, William (2008).  The Priesthood of Science.   University of Ottawa Press.

Leiss, William (2010).  The Doom Loop in the Financial Sector, and Other Black Holes of Risk.   University of Ottawa Press.

Leiss, William (2011).  “Modern Science, Enlightenment, and the Domination of Nature:  No Exit?”  In:  Critical Ecologies, ed. Andrew Biro (University of Toronto Press), pp. 23-42.

Leiss, William et al. (2008):  W. Leiss, M. Tyshenko, and D. Krewski, “Men having sex with men donor deferral risk assessment,”  Transfusion Medicine Reviews, Vol. 22, no. 1, 35-57.

McKibben, Bill (2012).   “Three simple numbers that add up to global catastrophe.”  RollingStone, 19 July:  http://www.rollingstone.com/politics/news/global-warmings-terrifying-new-math-20120719

Greenhouse Effect (2012):  http://en.wikipedia.org/wiki/Greenhouse_effect

PBS Frontline, “Climate of Doubt,” 23 October 2012:  http://www.pbs.org/wgbh/pages/frontline/climate-of-doubt/

PBS Frontline, “The Vaccine War,” 27 April 2010:  http://www.pbs.org/wgbh/pages/frontline/vaccines/view/

Revelle 2012:  http://en.wikipedia.org/wiki/Roger_Revelle.

Rockström, Johan (2009).  “A safe operating space for humanity.”  Nature 461 (24 September), 472-5.

Running, Steven W (2012).  “A measurable planetary boundary for the biosphere.”  Science 337, 1458-9.

“The Sorcerer’s Apprentice” (2012):  http://en.wikipedia.org/wiki/The_Sorcerer's_Apprentice

Turner, Graham M (2008).  “A comparison of The Limits to Growth with 30 years of reality.”  Global Environmental Change 18, 397-411.

Once More, Understanding Systemic Financial Risk

William Leiss:  RiskBlog 1 March 2012

Once More, Understanding Systemic Financial Risk

Satyajit Das and Systemic Risk [PDF]

 

Andrew Palmer’s recent essay on financial innovation in The Economist, “Playing with fire” [25.02.12: http://www.economist.com/node/21547999] has received a lot of attention.  Read it, but also read the trenchant critique by Satyajit Das, one of the most perceptive observers on the subject.  (Read his frequent blogs; his books, listed below, tend to be rather long-winded and egoistic, a compilation of scattered thoughts.)

The global financial crisis which began in 2008 is a slow-moving massive train wreck that may take an entire decade to bring under control.  The catastrophic events of 2008 – still reverberating today, more than three years later, in the European sovereign debt crisis – have caused losses of many trillions of dollars.  The scope of the losses is almost uncountable [http://bettermarkets.com/blogs/financial-reform-will-keep-wall-streets-hands-out-taxpayer-pockets, from “Better Markets,” a website I recommend].  This is why it is the best illustration of what I have called a “black hole of risk” in my book, The Doom Loop in the Financial Sector [www.blackholesofrisk.ca/].

Regulatory reform of the financial sector is designed to help us avoid a repeat of these events.  But the struggle for regulatory reform is far from won, because of intensive lobbying by big banks and financial sector interests.  The current battle is over the so-called “Volcker rule,” designed to reduce risks from investment banks trading on their own accounts:  to understand this one, and others like it, follow “Baseline Scenario,” written by Simon Johnson and James Kwak [http://baselinescenario.com/].  If the bankers win this one, and others, they’ll do it again to the rest of us:  Take obscene amounts of money for themselves during the good times, and saddle the taxpayers with the costs of bailouts when they bring on the bad times again.

Part of the reason for this extended train wreck is that at least one solution adopted by governments to respond to it is itself the cause of future problems.  I refer to the central banks’ policy of mandating near-zero interest rates over long terms.  It is already clear that this policy solution is devastating the financial health of insurance companies and pension plans, failing to reward savers, and encouraging imprudent borrowing.  Even the Bank of Canada, which joins other central banks in this flawed policy, acknowledges its downside risks, calling attention in its December 2011 review report to “a prolonged period of low interest rates, which may encourage imprudent risk-taking and/or erode the long-term soundness of some financial institutions” (preface, page 1), available at: [http://www.bankofcanada.ca/wp-content/uploads/2011/12/fsr_1211.pdf]  Of course, it uses the typical wishy-washy bureaucratic language of “may” encourage or erode, which is irresponsible:  The serious damage being done to the insurance industry and pension funds from the extremely low interest-rate policy of central banks has already been widely reported, and it poses a serious risk to the security of the future retirement plans of millions of Canadians.
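The arithmetic of that damage is simple enough to sketch: the present value of a fixed promise of future payments balloons as the discount rate falls toward zero.  The payment stream below is hypothetical.

```python
# Why near-zero rates hurt pension plans: the present value of a fixed
# stream of future payments rises sharply as the discount rate falls.
# The payment stream here is hypothetical.

def present_value(annual_payment, years, rate):
    """Present value of a level annuity paid at the end of each year."""
    return sum(annual_payment / (1 + rate) ** t for t in range(1, years + 1))

pension_payment = 10_000  # hypothetical annual payment owed to a retiree
horizon = 30              # years of payments promised

for rate in (0.06, 0.04, 0.02, 0.01):
    liability = present_value(pension_payment, horizon, rate)
    print(f"Discount rate {rate:.0%}: liability today = {liability:,.0f}")

# The lower the mandated rate, the more assets a plan must hold today
# to call the very same promise fully funded.
```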

In his recent blog, below, Satyajit Das explains that Palmer’s essay overlooks the most crucial truths about the nature of risk in recent financial innovations, among them:  (1) a lack of transparency in the transactions; (2) a slow-growing “concentration” of risks that remains in the shadows until it’s too late to stop the collapse.  This is what we now call “systemic risk” in the financial sector.  A broad public understanding of systemic risk – which takes a bit of effort for citizens, I admit – is essential to build public support against the bankers for effective regulatory reform.  The security of your retirement assets depends on your making this effort.

Satyajit Das: Pravda The Economist’s Take on Financial Innovation

 

Environmental Release of Genetically-Engineered Mosquitoes: The Latest Episode in Frankenstein-Type Scientific Adventures

Environmental Release of Genetically-Engineered Mosquitoes:

The Latest Episode in Frankenstein-Type Scientific Adventures

Mosquitoes [PDF]

William Leiss (2 February 2012)

 

The subtitle for this essay is merely descriptive – not at all intentionally provocative – and is meant to be taken literally.  By “Frankenstein-type” I mean, not the scientific work itself, but rather the arrogant and thoughtless act of a scientist in releasing a novel entity into the environment without adequate notice or prior discussion with the public, whether accidentally (as in the case of Mary Shelley’s story) or deliberately.  Should this practice continue, as I suspect it will, almost certainly there will eventually be a very bad ending – for science itself.  Only remedial action by other scientists themselves can head it off, and so far such action is conspicuous by its absence.  They will regret this omission.

 

Yesterday’s story in Spiegel International Online by Rafaela von Bredow, “The controversial release of suicide mosquitoes” (http://spon.de/adztv), prompted me to look further.  And sure enough, I had missed an earlier report in the publication I most rely on for such matters, The New York Times, in the edition dated 31 October 2011, written by Andrew Pollack:  “Concerns raised over genetically engineered mosquitoes” (http://tinyurl.com/7remh3q).  Other sources for this issue can easily be found by putting “genetically engineered mosquitoes” into your preferred Internet search engine.

The Aedes aegypti mosquito carries the dengue virus, which causes the most important insect-borne viral disease (dengue fever) in the world.  Worldwide there are an estimated 50-100 million cases and at least 20,000 fatalities (mostly children) annually, and there is no vaccine or adequate therapy.  It is a serious public health burden in many countries.  Papers in the scientific literature about the possibility of attacking the problem by modifying or genetically engineering the mosquito itself have appeared over the past fifteen years.  The dominant approach of this type is to release sterilized male mosquitoes into the environment which upon mating with females will produce no young.  This approach has shown limited success.

 

The recent publicity has to do with a new approach in which laboratory-bred male mosquitoes are genetically engineered to express a protein that causes the larvae to die.  The gene was developed by a biotechnology company in Oxford, England.  The controversy involves the decision made by the company to seek approval for the environmental release of the GE mosquitoes in confidence – without public release of relevant information – from governments in various countries.  This began in 2009 in the Cayman Islands; later releases took place in Malaysia and Brazil, and future releases are scheduled in Panama, India, Singapore, Thailand, Vietnam, and Florida.

 

Here’s an extract from Andrew Pollack’s story:

In particular, critics say that Oxitec, the British biotechnology company that developed the dengue-fighting mosquito, has rushed into field testing without sufficient review and public consultation, sometimes in countries with weak regulations.

“Even if the harms don’t materialize, this will undermine the credibility and legitimacy of the research enterprise,” said Lawrence O. Gostin, professor of international health law at Georgetown University.

Luke Alphey, the chief scientist at Oxitec, said the company had left the review and community outreach to authorities in the host countries.

“They know much better how to communicate with people in those communities than we do coming in from the U.K.” he said.

Rafaela von Bredow’s recent and useful follow-up report in Spiegel Online also includes comments from other scientists, in particular two who are working on competing projects. The first reference below is to Guy Reeves of the Max Planck Institute for Evolutionary Biology in Plön, Germany:

 

The geneticist [Reeves] doesn’t think Oxitec’s techniques are “particularly risky” either. He simply wants more transparency.  “Companies shouldn’t keep scientifically important facts secret where human health and environmental safety are concerned,” he says.

Reeves himself is working on even riskier techniques, ones that could permanently change the genetic makeup of entire insect populations. That’s why he so vehemently opposes Oxitec’s rash field trials:  He believes they could trigger a public backlash against this relatively promising new approach, thereby halting research into genetic modification of pests before it really gets off the ground.

He’s not alone in his concerns. “If the end result is that this technology isn’t accepted, then I’ve spent the last 20 years conducting research for nothing,” says Ernst Wimmer, a developmental biologist at Germany’s Göttingen University and one of the pioneers in this field.  Nevertheless he says he understands Oxitec’s secrecy:  “We know about the opponents to genetic engineering, who have destroyed entire experimental crops after they were announced. That, of course, doesn’t help us make progress either.”

 

Commentary.

 

H. G. Wells published his novel, The Island of Doctor Moreau, in 1896:  See the Wikipedia entry, http://en.wikipedia.org/wiki/The_Island_of_Doctor_Moreau; the entire book is online at:  http://www.bartleby.com/1001/.  His story deals with a medical scientist living on a remote island who creates half-human/half-animal creatures.  In more recent times we have become aware of experiments in genetic engineering, such as cloning, that have been done in some countries, notably South Korea, and have raised serious issues in scientific ethics.  (See the New York Times editorial of December 2005 [http://tinyurl.com/7xmdlgt] and follow the links, or just search for “cloning South Korea.”)  Later publicity concerned the cloning of human embryos in China:  See the New Scientist article (http://tinyurl.com/7sguhgq) from March 2002.  Significant differences among countries in terms of government regulatory regimes and scientific research ethics programs remind us of Wells’s scenario.

 

Other modified insects using the earlier technology (males sterilized with radiation), notably the pink bollworm, have been released to control plant pests.  The GE mosquitoes from Oxitec are the first to use the new technology for a human health problem.  They could very well represent an enormous human benefit with insignificant or even no offsetting risks – although it would be very nice to have a credible and publicly-available risk assessment certifying the same (perhaps we will get one from the U. S. Department of Agriculture, which has to approve the field trial in Florida).  It is quite possible that very few people living in countries affected by dengue fever would have any objections to the use of GE mosquitoes.

 

But the biggest risk involved in the use of unpublicized field trials (environmental release) for GE mosquitoes is to scientists and scientific research itself.  Readers of my recent blog, (http://leiss.ca/wp-content/uploads/2011/12/Nature-is-the-biggest-bioterrorist.pdf), on the genetic engineering of the H5N1 avian flu virus, will recall the interesting issues in scientific ethics raised in that case which are different from, but related to, those in the current one.

 

Both of these sets of issues encompass the ability and willingness of the scientific community to “police” research practices with the long-term public interest in mind.  To do so they would have to create deliberative structures both international in scope and sufficiently robust to enforce their strictures on unwilling members of their community – for example, by an enforceable policy of denial of research grant funding and publication in journals.  (The suggestion here avoids any necessary involvement by government authorities.)  This is a tall order, to be sure, but there are a few precedents, notably the 1975 Asilomar Conference (http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA).[1]

 

In the case of GE mosquitoes, the offhand comment by the lead scientist, Luke Alphey, quoted above from Andrew Pollack’s article, with its crude justification for ignoring his own clear responsibility for initiating public disclosure and deliberation, is telling.  The other scientists who were quoted in the two articles referenced above were obviously unhappy with him, but they did not suggest what remedies might be imposed for such irresponsible acts.  This despite their recognition that the genetic engineering of plants and animals has already been a source of both intense public curiosity and equally great concern, a phenomenon that will inevitably grow in importance with each further advance in scientific discovery and application in this field.  For scientists to ignore the risks to their own activities in this domain is foolish and short-sighted indeed.


[1] The first famous example was the debate among some scientists over the use of the first atomic bomb in 1945:  See the discussion in the Back Section of my 2008 book, The Priesthood of Science (http://www.press.uottawa.ca/book/the-priesthood-of-science).

Complexity cloaks Catastrophe

“Complexity cloaks Catastrophe”

William Leiss (17 January 2012)

Complexity cloaks Catastrophe - PDF

Good risk management is inherently simple; adding too many complexities increases the likelihood of overlooking the obvious.

Leiss, “A Short Sermon on Risk Management” (http://leiss.ca/?page_id=467)

 

The quoted phrase that forms the title for this paper comes from the opening pages of Richard Bookstaber’s indispensable 2007 book, A Demon of Our Own Design:  Markets, hedge funds, and the perils of financial innovation (New York:  Wiley).  This book inspired much of my own subsequent work in this area, as published in The Doom Loop in the Financial Sector, and other black holes of risk (University of Ottawa Press, 2010:  e-book available at:  http://www.press.uottawa.ca/book/the-doom-loop-in-the-financial-sector); see the section on “Complexity” at pages 80-83.  Bookstaber’s key point is that complexity in financial innovations is itself an important risk factor for systemic failure in the financial sector.

Now the New York Times columnist Joe Nocera has written in his January 17 column, “Keep it Simple” (http://www.nytimes.com/2012/01/17/opinion/bankings-got-a-new-critic.html?hp), about an important new source for this topic.   This is a November 2011 paper prepared by staff at a firm called Federal Financial Analytics, Inc.:  “A new framework for systemic financial regulation:  Simple, transparent, enforceable, and accountable rules to reform financial markets,” available as a PDF file at:  http://www.fedfin.com/images/stories/client_reports/complexityriskpaper.pdf.

In effect, both the paper and Nocera’s commentary argue – with reference to the U. S. Dodd-Frank Act – that responding to complexities in financial innovations with complex regulatory regimes is a mug’s game.  It does not solve the problem of “complexity risk” and in fact may exacerbate that risk.  It also does a poor job of anticipating the next challenge, as the recent collapse of MF Global shows.

The Obama administration has put its faith in “smart regulation,” which ignores the fact that it is the industries being regulated which can hire the smartest people and task them with finding a way to circumvent any set of rules, however complex.  (Meanwhile, his Republican opponents work feverishly to gut his regulatory agencies of competent staff and leaders.)  Similarly, the authors of the paper, “A new framework for systemic financial regulation,” propose solutions involving new corporate-governance regimes, but the private-sector risk governance regimes failed utterly the last time around, so why on earth would the rest of us want to retry this experiment?

In the end we have to turn to the most reliable guide, Simon Johnson, whose advice is simple:  Break up the big banks.  (Follow his blogs at:  http://baselinescenario.com/; the latest is, “Refusing to take yes for an answer on bank reform.”)  Banks too big to fail should be regarded as too big to exist.  And yet the leading financial institutions in the United States are bigger than ever.  No “systemically-important financial institution” will ever be allowed to fail.  The bankers who run them know this.  They also know that they cannot be outsmarted by the regulators.

 

Nature is the biggest bioterrorist

“Nature is the biggest bioterrorist”:

Scientists, the Risk of Bioterrorism, and the Freedom to Publish

William Leiss (22 December 2011)

[Nature is the biggest bioterrorist] – PDF

The quoted sentence in the title is attributed to R. A. M. Fouchier, a scientist at Erasmus Medical Center in Rotterdam and the University of Wisconsin, in an interview with The New York Times.[1]  He is the lead researcher for the team that, working in a highly secure laboratory in the Netherlands, genetically engineered the “highly pathogenic” avian influenza A(H5N1) virus so as to make it easily transmissible in mammals.  H5N1 is an extraordinarily lethal group or clade of viruses which has killed about 60% of the 600 or so humans that have been infected by it since 2003 (the virus was first identified in 1997).  A pertinent comparison is with the 1918 pandemic, which killed only about 1% of those infected.  The H5N1 viruses are easily transmissible among birds and are sometimes transmitted from birds to humans, but so far there is no evidence for direct human-to-human transmission.

Scientists have been studying its genome in order to understand why human-to-human transmission has been inhibited to date – knowing that, should this inhibition be overcome, at a time when no vaccine was available, the human death toll around the world could be enormous.  Scientists have also been trying to manipulate its genetic structure so as to identify exactly what gene changes would allow it to overcome this inhibition; working with ferrets in the laboratory, they now think they know what kinds of gene changes would enable the virus to move freely among members of a mammalian species.  In the press release from Erasmus Medical Center Professor Fouchier explains why this work was done:  “We now know what mutations to watch for in the case of an outbreak and we can then stop the outbreak before it is too late.  Furthermore, the finding will help in the timely development of vaccinations and medication.”

Now comes the tricky part.  The researchers want to publish their work in full, in the journals Science and Nature, arguing that other researchers working in this same area need to know the details in order to evaluate the results properly.  But the U. S. National Science Advisory Board for Biosecurity has asked the two journals not to publish major sections of the papers, specifically the “experimental details and mutation data that would enable replication of the experiments” – lest terrorists seek to weaponize the engineered pathogen to deliberately produce a global pandemic.

Professor Fouchier and others prefer to treat the issue as a matter of practicalities:  (1) terrorist groups cannot easily duplicate the expertise, lab equipment, and development time needed to engineer this virus, even with the blueprint in hand; (2) even if the experimental details are restricted to recognized researchers, the results will eventually circulate more widely; (3) there are other candidate materials available for producing biological weapons for terrorist purposes that are far easier to work with.

One problem in resolving this issue is that Professor Fouchier has muddied the waters considerably with his comment about nature being the “biggest bioterrorist.”  Another scientist, Professor John Oxford of the London School of Medicine, further clouded the matter with these remarks:  “The biggest risk with bird flu is from the virus itself.  We should forget about bio terrorism and concentrate on Mother Nature.”[2]  They should both be forced to wear dunce caps and sit in the corner for a while.  As they should know, the word “terrorism” refers to deliberate acts of human malevolence and injustice causing great harms in the name of political objectives.  Its commonest and age-old form is state-sponsored terrorism, such as that being suffered now by the citizens of Syria and dozens of other countries around the world; non-state actors also employ it, on a much less widespread scale, and sometimes in pursuit of legitimate struggles of political resistance against tyrannical regimes.

But Mother Nature is not a terrorist.  In using this provocative language some scientists are trying to sidestep the very difficult issues in responsible decision making that have been raised by the proposal to publish this research in its entirety.  These difficulties can be seen if we array the decision problem in the form of offsetting risk scenarios:

  1. What is the probability or likelihood that the full publication of this research will enable public health agencies to save many lives – that would otherwise be lost – if and when the H5N1 virus naturally evolves into a form that is directly transmissible from one person to another? And:
  2. What is the likelihood that restricting full publication to designated members of the research community will achieve the necessary scientific advance from a public health standpoint?

 

Versus

 

  3. What is the likelihood that the full publication of this research will enable terrorists to weaponize this pathogen and deliberately cause a human pandemic that might not otherwise happen through the virus’s natural evolution?  And:
  4. What is the likelihood that the restricted circulation of these research results will succeed in keeping the information out of the hands of potential terrorists?

Framing the choices we face as a set of contrasting risk scenarios imposes a systematic and disciplined approach, one that exposes the trade-offs and assumptions otherwise hidden in the narrative formulation of the problem.  It also requires participants in the debate to specify what types of evidence could be assembled in support of attempts to answer these specific questions.
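
To see how the structure of such an analysis would work, consider the minimal Python sketch below, which compares the two scenario sets as expected values – probability times consequence.  Every number in it is an invented placeholder, chosen only to show how the four questions above would combine into a single comparison once real estimates existed; it proves nothing about which policy is right.

    # Illustrative only: all probabilities and consequence figures are
    # invented placeholders, not estimates drawn from the H5N1 literature.
    def expected_value(probability, consequence):
        # Expected value of a single risk scenario.
        return probability * consequence

    # Scenario set A: publish in full (questions 1 and 3).
    lives_saved_by_preparedness = expected_value(0.30, 1_000_000)   # Q1
    harm_from_enabled_terrorism = expected_value(0.01, 10_000_000)  # Q3

    # Scenario set B: restrict circulation (questions 2 and 4).
    lives_saved_if_restricted = expected_value(0.20, 1_000_000)     # Q2
    # Q4 enters as the chance the information leaks despite restriction:
    harm_from_leak_anyway = expected_value(0.005, 10_000_000)

    net_a = lives_saved_by_preparedness - harm_from_enabled_terrorism
    net_b = lives_saved_if_restricted - harm_from_leak_anyway
    print(f"Net expected benefit, full publication:   {net_a:,.0f} lives")
    print(f"Net expected benefit, restricted release: {net_b:,.0f} lives")

The point of such a toy calculation is not its output but its discipline: anyone who disputes the conclusion must say which probability or consequence estimate they reject, and why.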

It would be possible, were sufficient time and resources made available, to undertake a formal set of risk assessments addressing these four propositions; alternatively, we could rely on educated guesswork through an established procedure known as “expert elicitation” (http://en.wikipedia.org/wiki/Expert_elicitation).  Neither is likely to happen:  Some behind-the-scenes jockeying among the scientific journals, the research community, and the U.S. biosecurity committee will produce a resolution of this particular issue that the rest of us will read about later.
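
For readers unfamiliar with expert elicitation, the simplest aggregation rule it employs is the linear opinion pool: a weighted average of the experts’ probability judgments.  The Python sketch below is a bare-bones illustration; the elicited values are hypothetical, standing in for expert answers to question 3 above.

    # A minimal sketch of a linear opinion pool, one common aggregation
    # rule in expert elicitation.  All estimates below are hypothetical.
    def linear_opinion_pool(estimates, weights=None):
        # Combine expert probability estimates into one group estimate.
        if weights is None:
            weights = [1.0 / len(estimates)] * len(estimates)  # equal weights
        assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
        return sum(w * p for w, p in zip(weights, estimates))

    # Four experts' hypothetical probabilities that full publication
    # materially aids terrorist weaponization (question 3 above):
    expert_estimates = [0.02, 0.05, 0.01, 0.10]
    print(linear_opinion_pool(expert_estimates))  # ≈ 0.045

Real elicitation protocols add calibration questions, structured interviews, and sensitivity analysis on the weights, but the arithmetic at the core is no more complicated than this.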

However, if either kind of systematic decision analysis were carried out and then published, all of us could learn a bit more about how to make sensible decisions of this kind – useful knowledge, because it is a virtual certainty that similar problems will keep arriving.  That certainty is a simple function of the increasing power of scientific investigation itself, especially in highly sensitive areas such as genetic manipulation, synthetic biology, and the detailed understanding – and potential manipulation – of brain functions.


[1] Denise Grady and William J. Broad, “Seeing terror risk, U.S. asks journals to cut flu study facts” (http://tinyurl.com/89w7cfl), and Doreen Carvajal, “Security in flu study was paramount, scientist says” (http://tinyurl.com/76kn2om), The New York Times, 21 December 2011; Fergus Walsh, “When should science be censored?”, BBC News, 20 December 2011, http://www.bbc.co.uk/news/health-16275946.  For a very thorough summary of background information see the Wikipedia article at http://en.wikipedia.org/wiki/Influenza_A_virus_subtype_H5N1.

[2] See further Richard Ingham, “Scientists fight back in ‘mutant flu’ research,” The Globe and Mail, 22 December 2011, A14:  http://tinyurl.com/77tm52w.