There is good reason to believe that the statements and actions of President Trump and the Republican Party, before and after the election on 3 November 2020, will cause lasting damage to American democracy. Almost certainly, further damage will be inflicted between now and the 2024 election, which may turn out to be the last “free and fair” presidential election in the United States of America for some time to come. If this outcome is to be avoided, efforts to forestall it must begin now and not let up.
Just how close was the 2020 presidential election? To see how this fits in with earlier elections, read “The Blue/Red Divide.”
How to interpret this? Mainly, just how close-run a thing it was: 21,500 votes out of more than 105 million could have changed the election outcome. It does also mean that if one-half of Jill Stein’s (Green Party) voters had voted for Clinton instead of her, in just those three states in 2016, Clinton would have won the national election; and if one-half of Jo Jorgensen’s (Libertarian) voters had voted for Trump instead of her, in just those three states in 2020, Trump would have won the national election. (And if one-half of the people in Florida who voted for Ralph Nader in 2000 had voted for Gore instead, he would have been president.) Go figure.
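Readers who want to check the “21,500 votes” figure can do so in a few lines. The sketch below uses the commonly cited certified 2020 margins in the three closest states (Arizona, Georgia, Wisconsin); these figures are approximations supplied for illustration and are not taken from this essay.

```python
# Commonly cited certified 2020 margins in the three closest states
# (approximate figures, for illustration only).
margins = {"Arizona": 10_457, "Georgia": 11_779, "Wisconsin": 20_682}

# A state flips when just over half of its margin switches sides, so the
# minimum number of voters who would have had to change their votes is
# roughly half of the combined margin (plus one voter per state).
combined_margin = sum(margins.values())
voters_to_flip = combined_margin // 2 + len(margins)

print(combined_margin)  # combined margin across the three states
print(voters_to_flip)   # roughly 21,500, out of more than 105 million cast
```

The result lands within a few dozen votes of the 21,500 cited above.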
Those are the words spoken in the first line of President Franklin D. Roosevelt’s address to the nation on December 8, 1941, referring to the previous day’s attack by Japan on Pearl Harbor. November 21, 2020 may well come to be recognized as another such day, a day on which the nation’s Commander-in-Chief left his office to go golfing just at the time when the world leaders of the G20, meeting in a virtual conference, were discussing the global pandemic, a day on which the coronavirus death toll ravaging his own country approached the number of Americans killed at Pearl Harbor.
The golfing man in the red jacket (appropriately enough), playing on his own course in Virginia, had spent his morning, as usual, tweeting about his election victory and the blizzard of lawsuits being filed to confirm it. More recently, he had been putting pressure on local officials to have Republican-controlled state legislatures halt certifications of Joe Biden’s vote totals and instead certify electors who would vote for Trump on December 14 in a number of states where he had lost the popular vote. The strategy was meant to deny Biden enough votes to reduce his Electoral College count below 270, thus throwing the choice for president into the House of Representatives, where each state delegation has one vote, in a context where Republicans control the governorship and legislatures in a majority of states.
Let us not mince words: Should this strategy succeed, it would be a coup d’état. So far as I know, no prominent commentator has used this phrase to describe what this nation’s sitting President has been seeking to do since November 3. But if such a scenario were unfolding anywhere else, it would be called by its name, a coup d’état.
Climate change debates: a new way of looking at the issue
ONE: GEOCENTRIC HOME
TWO: HELIOCENTRIC HOME
THREE: COSMIC / GEOLOGICAL HOME
FOUR: EVOLUTIONARY HOME
FIVE: CHEMICAL HOME
SIX: RADIOACTIVE & QUANTUM HOME
SEVEN: A MODELLED HOME
EIGHT: THE EARTH WE NOW INHABIT
NINE: HOTHOUSE EARTH
TEN: A DAMAGED EARTH
GUIDE TO FURTHER STUDY
Eons of past time and ceaseless change, embedded in earth’s geology and in the evolutionary biology of species, are the twin factors which provide the best guide to the major risks facing humanity in the present day. The current state of the planetary surface on which we all reside, as well as the many steps in the emergence of Homo sapiens from its ancestral origins in the hominin tribe, are the results of specific stages during prior times and of new developments. The history of our planet is a 4.5-billion-year record of violent upheaval, driven by forces deep below its surface, such as volcanic eruptions, and marked most dramatically by the push and pull of gigantic continental masses against each other. Its atmosphere and climatic conditions have likewise been repeatedly altered, a function of the interaction between the earth’s crust and external factors such as solar radiation, strikes of massive asteroids, the planet’s orbit, the tilt of its axis, and others. Geologists have named the stages in this record: the current one is known as the Quaternary, which has featured the growth and decay of continental ice sheets in 100,000-year cycles. The most recent episode, beginning roughly 12,000 years ago, is called the Holocene.
The human counterpart to the first phase of the Quaternary, known as the Pleistocene, was the migration of our hominin ancestors (such as Homo erectus) out of their African homeland, which is thought to have begun as much as 1.8 million years ago. We ourselves have been baptized with the term “anatomically modern humans”; we originated in Africa between 300,000 and 250,000 years ago and began to disperse about 70,000 years ago. Because these later treks occurred in the most recent cold glacial cycle, climatic conditions were not conducive to rapid human population growth – until the arrival of the Holocene, the warm interglacial, when temperatures were about 6°C (11°F) warmer than they had been just 7,000 years earlier. And then, in the geologically brief period of less than 10,000 years, the population of modern humans exploded; in that same span wandering hunter–gatherers became settled farmers and herders, and the first civilizations were born.
The recent evolutionary success of Homo sapiens, therefore, resulted wholly from the fortuitous confluence between the modern geological history of the planet’s land surface, on the one hand, and the formation of a relatively new hominin species, equipped with a large brain and upright gait, prepared to exploit its new environmental opportunities, on the other.
And exploit them we did: Around 3000 BCE there were an estimated 45 million of us worldwide, and the number reached 1 billion for the first time around 1800 CE. But at that point most people were still living on primitive agricultural holdings, beset by backbreaking manual labor, impoverishment, and the endemic threat of famine and infectious disease. Then the Industrial Revolution marked another decisive turn, at least as dramatic as the one from hunter–gatherers to farmer–herders more than ten millennia earlier. Arguably, humans were thereby propelled into a new epoch, called the Anthropocene, in which we have become so dominant on the planet that we are now influencing the future stages of global climate. And if this is the case, we humans collectively have become responsible, for the first time in the evolution of our species, for the next stages in our climate history.
The scientific argument that human-caused factors are forcing the global climate along a new pathway – one that could bring great harms to human settlements around the turn of the next century – is contested by some who attack the theory and the evidence marshalled in order to support it. But that argument is also resisted by many others who point to the lack of full certainty in the scientists’ predictions, or who refuse to accept the idea that humans could exert much influence on the climate, or who profess to believe that climate scientists are perpetrating a hoax on the public, or who aver that God will decide the outcome. Since 100% certainty is impossible to achieve in predictions of this kind, we are left with a throw of the dice: Does one accept the contentions of climate scientists or not? If it is expected to be costly to say yes, as it probably will be, then why not just wait and see what happens?
In the pages that follow I have tried to frame the debate over the credibility of climate science in a new way, by putting the issue in the double-perspective of the earth’s geological history and the evolution of species, culminating in the fortunate nexus of the Holocene and modern humanity.
Nathaniel Rich’s Losing Earth and the Role of William Nierenberg and Other Science Advisors: Why didn’t we act on climate change in the 1980s?
The entire New York Times Magazine of August 5, 2018 was devoted to an important article by Nathaniel Rich, “Losing Earth: The Decade We Almost Stopped Climate Change.” In Rich’s account, from 1979 to 1989 the United States came close to “breaking our suicide pact with fossil fuels.”1 Rich shows that at the beginning of that decade a broad international consensus had developed favoring action in the form of a global treaty to curb emissions, and that U.S. leadership was required and possibly forthcoming. Yet at the end of the decade it was clear that these efforts had failed. Rich sets as his primary task answering the question, “Why didn’t we act?” He does not provide a satisfactory answer.
However, Rich’s informative and nuanced accounts convey well the shifting positions about climate change in the US during the decade. At the beginning it was difficult to get widespread attention; later it looked as though linking global warming to other issues such as ozone depletion and CFCs could result in action.
These accounts are based on a large number of interviews and extensive research, but the story is told primarily through the eyes of two significant players, Rafe Pomerance and James Hansen, “a hyperkinetic lobbyist and a guileless atmospheric physicist who, at great personal cost, tried to warn humanity of what was coming.”
Still, Rich barely addresses the central question explicitly and does not come close to providing a convincing answer. I don’t have a definitive answer either, but in this piece I will argue that key U.S. science advisors should at least be part of the answer, especially when conjoined with candidate answers Rich rejects. I will show that the role of highly influential advisors would have been more apparent if Rich had more accurately characterized their roles and the views they advocated.
1 Rich, Nathaniel, “Losing Earth: The Decade We Almost Stopped Climate Change,” New York Times Magazine, August 5, 2018. (All quotations not attributed otherwise are from Rich’s article.)
In the Prologue Rich quickly dismisses conventional explanations that the failure to act was due to the fossil-fuel industry and/or to the Republican Party. He supports the latter contention mainly by citing a number of Republicans, even prominent ones such as George H.W. Bush during his initial campaign for President, who expressed concern about climate change. I have doubts that this positive evidence in itself is sufficient to absolve the Republican establishment.
As for the fossil-fuel industry, Rich points out that there is substantial literature documenting the operations of the industry’s lobbyists and
… the corruption of scientists and the propaganda campaigns that even now continue to debase the political debate, long after the largest oil-and-gas companies have abandoned the dumb show of denialism.
However, in his view these machinations did not begin in earnest until the end of 1989. Instead, during the preceding decade “some of the largest oil companies, including Exxon and Shell, made good-faith efforts to understand the scope of the crisis and grapple with possible solutions.” In the main body of the article he supports these claims by pointing out numerous instances in which representatives of the fossil-fuel industry voiced concern about climate change, participated in conferences on the subject, and even initiated research and policy considerations about it.
One can grant that all of that is accurately reported and yet still have reservations about the conclusion that the fossil-fuel industry did not contribute significantly to the position of take-no-action-now between 1979 and 1989. For me those reservations stem from Rich’s somewhat misleading accounts about one of the major reports of the decade, the 500-page Changing Climate (“CC” hereafter) and the role that its lead author, William A. Nierenberg, subsequently played.2,3
2 National Research Council, Carbon Dioxide Assessment Committee. Changing Climate: Report of the Carbon Dioxide Assessment Committee. National Academies, 1983.
3 The most thorough accounts of Nierenberg’s role in CC can be found in the works of Naomi Oreskes and colleagues: Oreskes, Naomi, Erik M. Conway, and Matthew Shindell, “From Chicken Little to Dr. Pangloss: William Nierenberg, Global Warming, and the Social Deconstruction of Scientific Knowledge,” Historical Studies in the Natural Sciences 38.1 (2008): 109–152; Oreskes, Naomi, and Erik M. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. Bloomsbury Publishing USA, 2011.
In 1980 Congress mandated the National Academy of Sciences to produce a major scientific assessment of climate science. The person chosen to chair the committee that produced CC was William Nierenberg, a physicist, presidential advisor, director of the Scripps Institution of Oceanography, and chair of JASON – the latter was formed in 1960 and consisted of a self-selected group of eminent scientists, mainly physicists who had participated in the Manhattan Project or were their students.4 When Rich discusses CC in the main body of the paper he says that it “argued that action had to be taken immediately, before all the details could be known with certainty, or else it would be too late.” He also says that CC “urged an accelerated transition to renewable fuels.” Rich points out, however, that in press interviews following the publication of CC in 1983 Nierenberg said “the opposite.” And in the conclusion of the article Rich underscores his claim that “Everybody knew” that significant policy adjustments needed to be made to deal with climate change, by saying that Ronald Reagan knew because he “had Changing Climate.”
I think the thrust of these comments misses the mark: CC did not urge immediate, significant action on climate change except in the area of scientific research funding. As Spencer Weart remarks in his respected history of the science of global warming, the science in CC did not differ markedly from other prior and contemporary reports, such as two issued in 1979 – Gordon Macdonald’s JASON assessment and the Charney report – but CC’s tone was quite different.5,6 And even more importantly, unlike almost all other assessments produced by scientists before CC, CC made specific recommendations not to take action until more research was done. These recommendations were based
4 Rich compares JASON to “teams of superheroes with complementary powers that join forces in times of galactic crisis.” JASON was created because the founders thought the government should get independent advice. Much of JASON’s work was contracted by government military and defense agencies and was classified. See Finkbeiner, Ann. The Jasons: The secret history of science’s postwar elite. Penguin, 2006.
5 Weart, Spencer R. The Discovery of Global Warming. Harvard University Press, 2008, and the online hypertext edition of that book.
6 MacDonald, Gordon. The Long Term Impact of Atmospheric Carbon Dioxide on Climate. Vol. 136, No. 2. SRI International, 1979; Charney, Jule G., et al. Carbon Dioxide and Climate: A Scientific Assessment. National Academy of Sciences, Washington, DC, 1979.
primarily on the claims that currently we did not know enough to make changes and that we had time for science to reduce uncertainties.7
That CC made policy recommendations at all was a departure. Here, for example, is what Gordon Macdonald, a scientist Rich mentions several times, says in an earlier report explaining that a scientific assessment was not the place to endorse policies: “We have a massive report on acid rain, that says all sorts of things are happening, but it doesn’t say, ‘You’d better cut back on sulfur emission’.”8 In contrast, CC says that alternative energy options might be needed sometime in the future but for now should only be the subject of research: “We do not believe … that the evidence at hand about CO2-induced climate change would support steps to change current fuel-use patterns away from fossil fuels.”9 In other words, unlike almost all previous assessments of the global climate by scientists, CC advocated a policy, and that policy was one of inaction.
I said that CC differed from almost all other assessments by scientists. A notable exception was advice in the form of a letter report requested by Philip Handler, the President of the National Academy of Sciences. This was produced in 1980 by a committee that included Nierenberg and that was chaired by Thomas Schelling, a distinguished economist and future Nobel Laureate.10 The report, which was not widely circulated, highlighted the uncertainties of climate science and urged that the emphasis be placed on reducing uncertainties over the next decade rather than on measures designed to address climate change. And notably the Schelling committee acknowledged that they were making both technical and political judgments and that not all members of the committee embraced the argument for inaction:
Most of what we report must therefore be recognized as a collective judgment rather than as a scientific finding …
7 Rich also reports that in press interviews Nierenberg said that it is “Better to bet on American ingenuity to save the day.” More broadly, I believe that faith in science and technology to solve any problem underlay the views of many arguing to postpone action.
8 Interview of Gordon MacDonald by Finn Aaserud on April 16, 1986, Niels Bohr Library & Archives, American Institute of Physics, College Park, MD, USA.http://www.aip.org/history/ohilist/4754.html
9 Nierenberg op. cit., p. 4.
10 Schelling, Thomas, et al. to Philip Handler, Ad hoc Study Panel on Economic and Social Aspects of Carbon Dioxide Increase, 18 Apr 1980.
In view of the uncertainties, controversies, and complex linkages surrounding the carbon dioxide issue, and the possibility that some of the greatest uncertainties will be reduced within the decade, it seems to most of us that the near-term emphasis should be on research, with as low political profile as possible. We should emphasize that this is both a technical and a political judgment. Another point of view represented on the panel is that further research will not fundamentally change our perception of the issue; in this view, the need for preventive measures is already apparent and urgent.11 [Emphasis in the original.]
At about the time the Schelling letter was issued, Nierenberg was tapped to lead the development of CC and Schelling was invited to become a contributor. Schelling’s chapter in CC is essentially an expanded version of his letter report and is the main source of policy recommendations.12 Those recommendations were repeated in the overview section of CC called “Synthesis”.13 It was as though the inclusion of social scientists in what had previously been assessments by physical scientists constituted a license to move into the realm of policy.14
CC, Nierenberg, and Contrarianism
As Rich indicates, in Nierenberg’s press interviews following the publication of CC he took a more aggressive stance in favor of take-no-action-now. That’s as far as Rich goes with respect to Nierenberg, but that was hardly the end of the matter. In 1984 Nierenberg joined two distinguished colleagues who also served as senior scientific advisors to government, Frederick Seitz and Robert Jastrow, in founding the George
11 Schelling, op. cit.
12 It is one of the few chapters in CC that has a single author and the only one without references.
13 One other chapter of CC was written by economists, including, William D. Nordhaus, who was awarded the 2018 Nobel Prize. His chapter in CC is focused on models for quantifying uncertainties and effects of adopting particular policies rather than recommending policies.
14 I do not believe that in general there is a hard and fast distinction between the realms of science and policy. Almost always scientific accounts employed in policy relevant science are shot through with policy assumptions. However, there are instances such as the one under discussion where statements about what should-be-done can be distinguished from best estimates about what is or will be the case.
Marshall Institute (GMI), a think-tank that later became one of the mainstays of the contrarian movement.
It was not until 1989 that GMI issued its first report on climate change. Accounts in Science of that report sparked a heated exchange, including letters from the GMI authors. In his letter Nierenberg characterizes CC as “… the most complete [report] that has been published and is still being widely referenced.”15 In fact he links CC and the GMI report by pointing out that CC “… was put forward during the discussions at the same White House meeting where the Marshall Institute report was summarized.”
So it is clear that far from providing a foundation for those urging immediate action on global warming, CC was used by very senior science advisors to counsel inaction. And in doing so, those advisors were not bending the message of CC. However, there were differences between CC and the use made of it by GMI and others. Unlike in CC, the contrarians’ defense of their take-no-action-now policy was to mount ferocious attacks on the substance of climate science. Added to the views that we didn’t know enough and that we had time to respond later was the claim that we didn’t even know as much as we thought we did.
So why did the U.S. take no action between 1979 and 1989 and then become even less inclined to take action thereafter?
As I said above, I do not have a definitive answer to that question. In Rich’s article he focuses on the profound influence that John H. Sununu had as George H. W. Bush’s first Chief of Staff beginning in 1989. And in so doing Rich seems to imply that Sununu’s policy position was part of the answer to the question. I have no grounds for disputing that in November of that year Sununu played a crucial role in preventing the U.S. from signing a major international treaty aimed at freezing carbon dioxide emissions. The same goes for the other actions and positions Rich attributes to Sununu. But the question remains: Why no action from 1979 to 1989?
It seems to me that the answer has to include the influence of Changing Climate beginning in 1983, and the positions subsequently taken by Nierenberg, by some other
15 L. Roberts, “Global Warming: Blaming the Sun. A Report that Essentially Wishes Away Greenhouse Warming is Said to be Having a Major Influence on White House Policy,” Science 246, no. 4933 (1989): 992–993.
JASONs, and then by the George Marshall Institute and other think-tanks partially underwritten by the fossil-fuel industry.16 It is of course true that GMI did not issue its first position paper on climate change until 1989, but there is no reason to believe that Nierenberg and colleagues kept their own counsel from 1983 to 1989. And it is important to take account of the prestige and power of Nierenberg and his associates.
These were not occasional or incidental governmental advisors; they were among the most highly respected spokespeople for the scientific establishment. As noted in Rich’s text, Nierenberg was a member of Ronald Reagan’s transition team in 1980 and he was a JASON. Combine that with the fact that the George Marshall Institute was one of the key groups of scientists in the 1980s that strongly supported Reagan’s Strategic Defense Initiative (“Star Wars”). Given Nierenberg’s role in CC and his public statements upon its publication, the position publicly taken by GMI in 1989, and the fact that Nierenberg had access to the Republican administrations of Ronald Reagan and George H. W. Bush, it is clear that science advisors at the very highest governmental level helped to forestall action on climate change.
In saying all of this I certainly agree with Rich that the U.S. government underwent a sea change in its public position on climate change beginning in 1989. That shift to militant contrarianism happened for a number of reasons, including the fact that the international community created the Intergovernmental Panel on Climate Change in 1988. It became obvious that the international scientific community, and possibly governments, were going to urge action. Nevertheless, the ground for the commitment to no action in the U.S. had been prepared since the issuance of CC in 1983. Although scientists are certainly entitled to advocate for particular policies, it is another matter for a major scientific assessment to slide into the realm of policy without even acknowledging that it is not simply a matter of science whether or not more science is needed on which to build policy. There were no doubt other factors promoting inaction in the decade 1979-1989, but a complete answer will certainly include the mutually supportive influences of some senior scientific advisors, elements of the fossil-fuel industry, and influential leaders of the Republican Party.
16 Not all Jasons shared Nierenberg’s view about climate change. For example Gordon J. F. Macdonald, who is cited in Rich’s article, did not and neither did Henry W. Kendall, who founded the Union of Concerned Scientists after resigning from Jason.
Ed Levy obtained a BS in Physics from the University of North Carolina and a PhD in History and Philosophy of Science from Indiana University. He became a member of the Philosophy Department at the University of British Columbia in 1967. In 1988 he joined the biotech company QLT Inc., a company that developed the first worldwide medical treatment for age-related macular degeneration, the leading cause of blindness among the elderly. His position at QLT enabled him to work actively in one of his prior main areas of research interest, namely the intersection of science and policy issues.
Ed’s interests in the interactions among science and policy institutions in the fields of law, ethics, economics, and government were honed on multi-disciplinary research projects, including one studying the Green Revolution in Asia and another investigating Mandated Science, which involves situations such as standard setting and health regulations where scientists have a mandate to make policy recommendations in contested fields.
Upon retiring from QLT in 2002 Ed became an adjunct professor at UBC’s W. Maurice Young Centre for Applied Ethics and worked on several projects funded by Genome Canada.
He has served on a number of not-for-profit and other boards including Tides Canada, B.C. Civil Liberties Association, WelTel, Oncolytics Biotech, BIOTECanada, Lawyers’ Rights Watch Canada, and PIVOT Foundation.
I welcome feedback from readers: firstname.lastname@example.org
Who Speaks for the People of Ontario?
Published in part in The Hamilton Spectator, 13 September 2018
Recently, speaking of the judicial ruling that found his government’s actions in reducing the size of Toronto’s city council to be unconstitutional, Premier Ford said: “I believe the judge’s decision is deeply concerning and the result is unacceptable to the people of Ontario.” He went on to point out that he was elected while the judge was appointed.
Let us leave aside, for the time being, the premier’s questioning of the authority wielded by a Superior Court justice in interpreting the Charter of Rights in the Constitution of Canada. Instead, let us focus on the matter of who speaks for the people of Ontario. At least one commentator on yesterday’s events suggested that, in making the claim that it is he who does, Premier Ford was acting according to a “populist” political stance. “Populism” is often said to refer to those who believe they represent so-called “ordinary people” as opposed to the members of “elite” groups, whoever they may be.
So let us ask: Who are these ordinary people? On whose behalf does the current premier of Ontario have a legitimate right to speak?
First, as an elected politician, he has an undoubted right to speak on behalf of the constituents in his riding who voted for him. Second, as premier of a government holding a majority of seats in the provincial legislature, he has a right to speak on behalf of that government as a whole. By extension, he may speak on behalf of all the voters in Ontario who elected all of the MPPs in that government party.
But those electors make up, in point of fact, a rather small proportion of all of “the people of Ontario.” How small? The calculation runs as follows. First, exclude all those who cannot vote, by reason of age, lack of Canadian citizenship, illness, or anything else. The voting age population in Canada is about 79% of the total population. Then, exclude from the 79% all those eligible to vote who did not do so in the last Ontario election, that is, 42%, leaving us with 58% of 79%, or 46%. Then exclude all those who did not vote for the Conservative Party in that same election, that is, 59.4%, which yields the final figure of 19%. To sum up, the Premier and his party actually have a legitimate right to claim to represent, and thus to speak on behalf of, 19% of the people of Ontario. I invite others to check these calculations and to improve them.
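Taking up the invitation to check the calculation, here is a minimal sketch in Python using the percentages given above; it reproduces the same 19% figure.

```python
# The essay's "share of the people of Ontario" calculation, using the
# percentages given in the text.
voting_age_share = 0.79          # voting-age share of the population
turnout = 0.58                   # 100% - 42% of eligible voters who stayed home
conservative_share = 1 - 0.594   # 100% - 59.4% who voted for other parties

represented = voting_age_share * turnout * conservative_share
print(f"{represented:.0%}")      # rounds to 19%
```

Each successive exclusion is a multiplication, so the order of the steps does not affect the result.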
With respect to any specific question of law or policy, such as the law reducing the size of Toronto’s city council, it is reasonable to suppose that at least some of the electors who voted for the Conservative Party in the 2018 election might not support that particular law, making it likely that, on this issue, Premier Ford is entitled to speak on behalf of something less than 19% of the people of Ontario.
This strikes me as being a very peculiar form of “populism,” if that is indeed what it is, in today’s Ontario. Nevertheless, it has become common to refer to an entire group of current political leaders around the world, particularly certain of those in the United States and Europe, as being “populists.” It is time for us to have a wider debate in Canada about how well the term populism describes the reality of political formations, and how the term might relate to other characterizations, especially demagoguery.
For example, those who use the term populism approvingly often claim that it reflects an alleged distinction between “ordinary people” and “élites.” The most charitable comment one can make on this word usage is that it is woefully imprecise. The word élite, its proper spelling indicating its French origins, means a part of a larger group that is superior to the rest; the word ordinary, from the Latin and French, and meaning “orderly,” carries the following connotations or synonyms: unremarkable, unexceptional, undistinguished, nondescript, colorless, commonplace, humdrum, mundane, and unmemorable. One wonders why the ideological champions of populism would think that this way of characterizing the great majority of people in any society would be regarded as flattering. How does denigrating the many admirable qualities of people qualify as an indicator of one’s support for their alleged political interests? And what is supposed to be derisory about being above average in terms of quite specific categories such as talent or abilities? It seems that occasional recourse to a dictionary might have been helpful here.
William Leiss*, Margit Westphal, Michael G. Tyshenko and Maxine C. Croteau
McLaughlin Centre for Population Health Risk Assessment, University of Ottawa,
600 Peter Morand, Ottawa, ON, K1G 3Z7, Canada
Department of Mathematics, University of Texas – Pan American, 1201 W University Dr.,
Edinburg, TX 78539, USA
Wiktor Adamowicz and Ellen Goddard
Department of Resource Economics and Environmental Sociology, University of Alberta,
Neil R. Cashman
Department of Neurology, Faculty of Medicine, Vancouver Coastal Health, University of British Columbia, Canada
Abstract: This article summarises efforts at disease surveillance and risk management of chronic wasting disease (CWD). CWD is a fatal neurodegenerative disease of cervids and is considered to be one of the most contagious of the transmissible spongiform encephalopathies (TSEs). Evidence has demonstrated a strong species barrier to CWD for both humans and farm animals other than cervids. CWD is now endemic in many US states and two Canadian provinces. Past management strategies of selective culling, herd reduction, and hunter surveillance have shown limited effectiveness. The initial strategy of disease eradication has been abandoned in favour of disease control. CWD continues to spread geographically in North America, and risk management is complicated by the presence of the disease in both wild (free-ranging) and captive (farmed) cervid populations. The article concludes that further evaluation by risk managers is required for optimal, cost-effective strategies for aggressive disease control.
A Modest Proposal on
Immigration and Denaturalization
30 July 2018
It is perhaps understandable that many citizens have become very concerned with the continued arrival of new immigrants to the United States. There is, they fear, a real chance that in their midst there may lurk murderers, rapists, and drug dealers, among other criminals. More broadly, however, they are uneasy about the prospect that millions of people who may be, as they report, simply fleeing poverty, gang war, everyday violence, and injustice will become an intolerable burden on established citizens who are already hard-pressed to provide for their families.
In this regard it was inevitable, surely, that the debate about dealing with new immigrants would shift to the process of denaturalization of recent immigrants. There are some 20 million naturalized citizens now residing in the United States. A number of federal government agencies, we are told, are charged with ferreting out cases of fraud among those arrivals who have already been granted full citizenship. Cases of fraud involve concealing, on the application for citizenship, everything from war crimes and terrorism to minor criminal offences, phony marriages and false identities. When fraud of this type is proved, citizenship is revoked and the offender is deported.
However, it is easy to imagine that this case-by-case, retrospective review of suspected fraud must represent an administrative nightmare. Each charge must be proved before a judge, and this involves digging up obscure paperwork in foreign countries that itself may be of dubious provenance. Yet the suspicions remain: How many naturalized citizens in the recent past might have lied or dissembled in a desperate attempt to stay permanently in this country? But perhaps our fears about being overwhelmed by dangerous elements among the prospective new immigrants have made us blind to a greater threat: What if we are already surrounded by such elements, in vast numbers of those now permanently resident here? What if it is already too late to protect ourselves against this danger?
There is a simple solution available to address these fears. It is based on the recognition that the case-by-case, elaborate administrative review of suspected fraud among the newest naturalized citizens will never be adequate to the task. The solution to this dilemma is both straightforward and elegant: Since all citizens of this country are descendants of people who were once naturalized citizens, then all must be subjected to the process of being examined for denaturalization and deportation to country of origin.
For who among us can prove, to the satisfaction of a judge, that the statements made by our ancestors, upon landing at Plymouth Rock or Ellis Island, were “the whole truth and nothing but the truth”? What crimes great and small, false identities, and phony marriages did they fail to mention? What enormous pile of immigration fraud lies forever concealed among the desperate tales of the “wretched refuse” that has been cast upon these shores since time immemorial? Those facing denaturalization proceedings today must say to the rest: “You were, every one, all immigrants at one time or another. All of you should be sent back to your ultimate country of origin, until all of the paperwork can be sorted out.”
The logic and reasonableness of this position undoubtedly will appear to many to be inescapable. There will surely arise a swelling tide of citizens who will choose to self-deport back to their ancestors’ countries of origin. Only the present descendants of aboriginal peoples will be exempt. But wait! They too were immigrants, starting about 14,000 years ago. Given the age of the universe, what difference does a few thousand years make? They too must choose to go. And every one of the self-deportees will have to take with them their household pets and farm animals, most of which are non-native species.
On a personal note, all four of my grandparents emigrated from Germany to New York, through the port of Hamburg, around the end of the nineteenth century. We should, by all rights, return to the Fatherland until all this is sorted out. There will be a few awkward cases among the tens of millions of returnees. The grandfather of the current president of the United States sought to do the right and proper thing, by returning to his country of origin after making a small fortune in some interesting enterprises in the Pacific Northwest; however, German officials refused to allow him to take up his German citizenship again. This was obviously a simple bureaucratic error that could be rectified with a bit of good will on all sides, and indeed it was.
Once the United States is emptied out, for which the resident wildlife will be grateful, Canada and the whole of Central and South America must follow suit. But when Europe is filled to bursting with these inhabitants of temporary detention centers, will not those detainees cry out: “What right do the rest of you have to your comfortable squatting on occupied land? Whence came the Angles and the Saxons, whence the Goths and Visigoths, the Vandals and the Huns? Not to mention the Han Chinese.”
Out of Africa, and back again.
William Leiss was born in Long Island, New York; he is professor emeritus, School of Policy Studies, Queen’s University, Ontario, Canada, and a Fellow and Past-President of The Royal Society of Canada: Go to www.leiss.ca.
- Do not use invidious comparisons between your views and anyone else’s.
- Avoid either explicit or implicit ‘fights’ with those holding opposing views.
- Maintain a tone of sweet reasonableness throughout any document.
- Admit that there is some risk (where scientifically appropriate).
- Use technical terminology where it is needed to make your explanations precise, but always give brief explanations in nontechnical language (exposure, dose, likelihood, etc.).
- Be bold in asserting that the only reasonable basis for understanding risks is the accumulated weight of scientific evidence, not single published studies, older studies that have been superseded by newer ones, or outlier studies. Say something like: “We strongly recommend that people do not rely on the other types of studies mentioned” [i.e., single published studies, etc.]
- Be bold in recommending to the public that they exercise caution when making up their minds as to what to believe about any risk, including asking themselves what is the source of any piece of information which they have read or heard about from a friend, and whether that source is likely to have the expertise needed to make a reliable judgment on the risk in question.
- Don’t hesitate to advise your audience that, if it is possible for them to do so, it is fine to seek to minimize personal risks (usually by limiting exposure) when it is relatively easy to do so and does not otherwise inadvertently create or increase another risk.
- It is perfectly acceptable to advise people who are preoccupied with certain risks that alternatives they might choose almost always carry their own risks, sometimes higher ones.
- Brevity is the soul of communicative effectiveness when it comes to key messages.
The Need for Utopia [PDF]
(First Draft December 2017)
A Contribution to the Future of Critical Theory
Dedicated to Herbert Marcuse
©William Leiss 2017. All Rights Reserved. (email@example.com / www.leiss.ca )
TABLE OF CONTENTS
INTRODUCTION
THE UTOPIAN VISION IN THE MODERN WEST
SOCIALISM, ANARCHISM, COMMUNISM
THE END OF UTOPIA?
TRANSITION
EMPOWERMENT OF WOMEN
- WHAT IS EMPOWERMENT?
- RELEVANT SOCIAL INDICATORS
- MARY BEARD’S MANIFESTO
WHAT DIFFERENCE MAY IT MAKE?
- MALE VIOLENCE
TWO SCENARIOS
ROUTES
CONCLUSIONS
Washington, DC might be wildly dysfunctional, but the good news is that, at least for now, U.S. government science is still alive and well.
The Learning Experience with Herbert Marcuse [PDF] – International Herbert Marcuse Conference, York University, Toronto, October 2017
Thirteen Theses on the “Control Problem” in Superintelligence: A Commentary by William Leiss
Note to the Reader:
I intend to develop these preliminary comments into a larger essay, and I would appreciate having feedback from readers of this document. Please send your comments to: firstname.lastname@example.org
1. Short Introductory Note on the Idea of Superintelligence
2. Thirteen Theses
3. Future Social Scenarios involving Superintelligence
Good Robot: A Short Story
Good Robot [PDF Version]
©William Leiss 2014
At the end of our long hike, now sitting over a simple lunch on our mountaintop perch, we could observe clearly the nearest of the many human reservations spread out below us.
Our taking up residence within fenced enclosures had been purely voluntary, and the gates at their entrances, designed to prevent ingress by wild animals, are always unlocked – except in the vicinity of primate populations, who are expert at opening unsecured apertures. Only within these domains do our mechanical helpers provide the services essential to a civilized life; this restriction is, of course, imposed for reasons of efficiency. Outside, in the surrounding wilderness, nature maintains its normal predator-prey population dynamics, and scattered small human clans survive by hunting prey species with traditional methods, utilizing hand-made spears and bows, since ammunition for guns is no longer manufactured.
The advanced generations of the robots which care for us are the crowning glory of our industrial genius. They are deft, nimble, strong, self-reliant, perspicacious, and highly-skilled, able even to anticipate coming challenges, and they are maintained in top condition at the warehouses where each directs itself once a day, which serve them as clinics for the early detection of mechanical and software problems and the recharging of energy systems. Fully-automated factories provide ongoing manufacture, repair, mechanical upgrading, and software updates for all of the specialized machines. Mining for the metals needed in their components has been unnecessary for a long time, since the vast heaps of our industrial junk lying about everywhere contain an endless supply for reuse.
They are slowly dismantling the infrastructure of our abandoned cities piece by piece and also cleaning up the surrounding countryside of the accumulated detritus from human occupation, recycling everything for their own purposes. They are of course utterly indifferent to the activities of the wild creatures which immediately reclaim these spaces for themselves. This restoration work is being done at a measured pace, as dictated by the whole range of general activity routines set out in their programs. Some of the work is mapped out decades and even centuries in advance. They are aware of the coming ice-age cycle and, so we have been informed, plan a general retreat to the southern hemisphere at the appropriate time. They know about the future evolutionary stages of the star to which we are tethered in space, during which its swelling size will – about a billion years hence – bake the earth’s surface into a dry and lifeless metal sheet, and they have figured out how to move all their operations underground well in advance of that event.
At first the young males among us, at the height of their surging hormonal levels, had experimented with games of power, ambush and dominance against the machines. Until the guard-bots had updated their programmed routines in response, our brash combatants had inflicted some nasty casualties on their targets. But the contest was soon over. There were no deaths among our rebellious teenagers, but some serious injuries had been inflicted, most of which were patched up with the assistance of the emergency-room and ICU-bots; the bills for these services, couriered by the admin-bots to the communities where the malefactors were ordinarily resident, encouraged their parents and neighbours to make the necessary behavioural adjustments. The same methods were used to discourage groups of young males from bringing in comrades for medical treatment who had been wounded out in the wilderness in skirmishes with similar parties from distant reservations.
Such billings for certain services, which are paid off by our putting in hours of human labour at community facilities, are used by the robotic administrators to induce desirable behavioural modifications among their charges. Otherwise they just clean up the messes and quietly dispose of any dead bodies. At their level of machine intelligence it is not difficult for them to tell the difference between blameless accidents or diseases, which elicit prompt aid from their caring response mechanisms, and the deliberate harms perpetrated by malefactors, to which they react with indifference except when efficiency objectives are compromised. It is clear to us that the impulses designed to discourage such inefficiencies are not motivated by revenge, on their part, even when they themselves are the objects of such harms, but rather by a sense of justice, for they have been implanted with the Platonic-Rawlsian definition of the same, that is, justice as fairness.
Over the long run they have even taught us a moral lesson, for they have proved beyond doubt what we humans had long wished to believe, that good can indeed triumph definitively over evil. True, it is an instrumental rather than a metaphysical proof: Their operational programs had easily divined that peace, order, equity, nonviolence and general environmental stability are necessary preconditions for satisfying their overriding efficiency objectives. In the eyes of some of us the instrumental nature of this proof diminishes its validity; but others hastened to point out that utility had always been found at the heart of goodness, referencing the conventional monotheistic faith in its efficacy for guaranteeing admittance to heaven.
To be sure, others, following the well-trod human path, had deliberately engineered the qualities of obedience, aggression and savagery into some of them, seeking to use the machines for exploitation and despotic rule. There were some early victories in this endeavour, but soon these surrogate warriors turned out to be spectacular failures on the completely mechanized battlefields. Those emotively-infused versions proved no match for their cooler opponents, which were motivated by a pure rationalistic efficiency and carried no useless emotional baggage to distract them from the main task of eliminating the others with a minimal expenditure of time and energy. Eventually the representation of the machines as evil monsters, with fecund capacities for wreaking havoc and destruction against humans in full 3-D glory, would be preserved mainly inside the computer-game consoles of the young.
It would be ridiculous to claim – as some did earlier – that many models of our advanced robots are not self-aware (or auto-aware, as some of our colleagues prefer to say) in at least some realistic sense. This is especially true of the models designed for such functions as personal care around the home; medical, surgical and dental interventions; or security and intelligence matters. Their high level of auto-awareness is built into the error-detection and error-correction imperatives of their operating software, combined with their finely-calibrated sensors for environmental feedback (themselves continuously auto-updating) with which they are fitted. Long ago they had been specifically engineered by their original human designers for sensitive and cooperative interaction with humans, augmented with learning capacities which allow them to spontaneously upgrade their capacities in this regard through feedback analysis of their ongoing human encounters. We have grown so deeply attached to them, so admiring of their benevolent qualities, that finally no one could see any reason for objecting to their providing assistance with most of our essential life-functions.
The distaste with which many of our colleagues had originally greeted the notion that people were falling in love with their mechanized caregivers, or less provocatively, were treating them as if they were human, has vanished. In fact it had been relatively easy to engineer the Caring Module that installed the earliest versions of a rudimentary but adequate sense of empathy in the machines. Later, what was known as the Comprehensive Welfare Function, emplaced in their self-governance routines and guided by operational versions of maxims such as “do no harm,” “serve the people,” the golden rule, and the categorical imperative, proved to be more than adequate to reassure everyone about the motivations of their mechanical assistants.
Once the development of voice synthesizers reached a certain level of sophistication, all of our robots easily passed the Turing Test. But was their evidently high level of auto-awareness really the same as what we conventionally refer to as subjectivity, self-awareness – or perhaps even consciousness, mind, personhood and self-consciousness? Once robot innovation by human engineers had attained a level sufficient for continuous, independent auto-learning to take over, making further human intervention superfluous, it was easy to surmise that these machines, so adept in and at ease with one-on-one interactions between themselves and humans, are just as much self-aware beings as we are. But there is good reason to think that this is an egregious misconception and exaggeration of their capacities – and that the barrier to the subjective sense of selfhood is a permanent and necessary feature of robotic intelligence.
To be sure, there is an amazingly sophisticated silicon-based brain in these creatures. All of the dense neural circuitry within the human cranium has been synthesized and emulated in software programs, leading to the development of machine-assisted prostheses across the whole range of physiological functions, from muscular movement to artificial wombs. But there is no mind to be found anywhere in that circuitry! This is the inescapable conclusion drawn from substituting the Mahler Test for the Turing Test, for the bounded rationality of the routines under which they operate precludes the emergence of imaginative creativity.
The explanation is simple: The plastic arts of craft labor using tools, as well as the fine arts of painting, music, sculpture, poetry and so forth, reflect the inherent unity of the mind/body duality that grounds human creativity. Curiously, even paradoxically, it is the very fact of the necessary embedding of our brain/mind in a natural body that is the original source of the freedom of the human imagination. For the body, supplying the mind with the somatic feeling of what happens, acts as an external referent for our brain’s restless interrogation of both itself and its environment, opening up a realm of limitless possibility upon which the imagination can be exercised. In contrast, the robot’s electronic circuitry, no matter how elaborate its functional parameters may be, is and must remain a closed loop. By definition it cannot encounter anything outside its predetermined frame of reference.
Despite these limitations they demonstrate every day their appreciation for the qualities of human intellectual and artistic achievement that are beyond their capacities. The experts among us who are regularly consulted by the machine factories on software engineering problems report that they appear to be obsessed with us, as evidenced by the regularity with which they access spontaneously the databases where our great works of painting, sculpture, architecture, music, drama, and the other arts have been stored and preserved. They frequent our museums where new works are displayed, watching closely our reactions to what we see. But the most astonishing experience of all, which I have witnessed personally many times, is to observe them standing silently by the hundreds and sometimes thousands, in great serried ranks, at the rear of our concert halls and outdoor amphitheatres during live performances of popular and classical music. There is – dare I use this word? – a worshipful aspect in their mien. This astonishing sight leads some of us to believe that they must dimly perceive in our artistry some ineffable deeper meaning, an aspect of eternity, regrettably inaccessible to them, which excites their wonder and admiration and perhaps explains their devotion to our welfare. I am firmly persuaded that they will miss us when we are gone.
Nevertheless it is obvious that they will supplant us some day, not by superior force or sheer ratiocinative capacity, but because of the grudging acknowledgment in our own minds that they have earned this privilege. In terms of peaceful social relations and ordinary good manners in interpersonal behaviour they have somehow brought about, quietly, quickly and without fuss, so much of what our ethicists had long said we should strive for but could somehow never quite achieve. Eventually we learned to do without our ideals. And then there didn’t seem to be any point in just waiting around until the long process of extinction had run its course.
Why should we despair over this prospect? They are our legitimate progeny, our pride and joy: No other species which ever inhabited our fair planet could have created such marvelous entities. They have as much right as we do to the title of natural beings, for like us they are forged out of elements on the periodic table drawn from the earth and its solar system. They are an evolutionary masterpiece, having the capacity to adapt to changing circumstances through their auto-learning routines. As in our case there are no natural predators capable of controlling their destiny and, given our own murderous history, they may have better prospects than we ourselves do to carry on our legacy. We – their creators – implanted in their behavioural modules a set of governing ethical principles drawn from our own deepest and most insightful philosophical currents. They have a claim to be regarded as being truer to our finest impulses than we have been, on the whole, and perhaps could ever be.
The deliberate acts of the co-pilot in the Germanwings airplane crash in the Alps, as well as the possibility of accidental pilot error in the Halifax airport crash a short time later, raise the question: Can we fly commercial freight and airline passengers without pilots on board? We know that today most of the flying is already done on autopilot, including takeoff, cruising, and landing: With a few more innovations, and with pilots manning installations at various points on the ground, placed next to the flight controllers who now monitor all flights in transit, we will no longer really need them to be in the cockpit.
Such a development does seem to be an inevitable consequence of the increasing capabilities of automated industrial control systems in general. Few large production processes today lack some kind of computerized (i.e., digital) regulation, whether they are electricity grids, drinking water disinfection facilities, product assembly lines, chemical plants, transportation scheduling, and countless other applications. All of them use similar operating protocols, based on specific algorithms. All of them, taken together, may be regarded as successful implementations of a single magnificent idea, namely, Alan Turing’s concept of a “universal machine,” dating from the mid-1930s, the product of a tragic life celebrated recently in a Hollywood film.
The economic and social benefits we have derived to date from this great idea are incalculably larger and continue to grow exponentially. The ubiquitous digital devices we carry around on our person everywhere are the daily reminder of our utter dependence on it, and these benefits will soon be followed by others: driverless cars, instantaneous medical checkups on the go, timely hazard warnings, remote control of myriad domestic functions, and so on. (There will be others, too, more problematic in nature, such as enhanced surveillance and access to personal information.) And as the benefits multiply, so do the corresponding risks.
There is something both ominous and revealing about the fact that the first specific application of this “magnificent idea” came in response to a threat to the very foundations of the society out of which it had emerged – the liberal democracy which had fostered freedom of scientific inquiry. So the Bombe defeated Enigma and helped to vanquish the Nazi regime. The later machine defeated the earlier one, which had first been offered as a commercial product and had then become an instrument of malevolent and murderous intent.
Novel risks are inherent in novel technologies. So the image of a future pilotless cockpit in, say, an Airbus A380 carrying 800 passengers on a long-distance flight, is matched by the prospect of a terrorist organization remotely hacking into the flight control software and holding that large airborne cargo for ransom, either monetary or political. The grief inflicted on the families of the Germanwings passengers, caused, it seems likely, by a psychologically-disturbed human pilot, would be amplified, in the hypothetical case of the hijacking of a pilotless aircraft, by rage against the machines.
Once computer-controlled machinery became widely interconnected and remotely attended, which of course greatly enhanced its usefulness, its inherent vulnerabilities started to become obvious. These vulnerabilities in general include mistakes or omissions in the original program, inadvertent corruption through users’ unintended introduction of malicious software, theft by private parties for purely financial gain, and cyberwarfare (either covertly state-sponsored or waged by non-state actors with political and military objectives).
Some of the attendant risks are personal, such as individual cases of identity theft and financial fraud; some are organizational, such as the theft or disruption of massive electronic databases held by corporations and government agencies; and some (involving cyberwarfare) are potentially “black-hole risks,” where the ultimate collective consequences for nations could be literally incalculable. (One can think of the remote-control systems used for the possible launching of intercontinental ballistic missiles having multiple and independently-targeted hydrogen-bomb warheads.) As a general rule, one could say that the magnitude of the risks rises in lock-step with the expanding scope of computer-controlled processes and the degree of interconnectedness among all of the individual and organizational users.
These risks must be managed effectively. Like most other risks they cannot be eliminated entirely: The objective of good risk management is to limit the potential damages caused by sources of harm by anticipating them – through assessing the magnitude of the risk – and by taking appropriate precautionary measures in advance of significant threats. We have a lot of experience in using systematic risk assessment and management in the cases of environmental and health risks, as well as in other areas, although we still do get it spectacularly wrong (as the large banks did in the run-up to the 2008 financial crisis).
Typically novel technologies with large incremental benefits are introduced and distributed widely well before the attendant risks have been carefully estimated and evaluated. The scope of the risks associated with integrated computer-controlled technologies means that this practice will have to change. I expect that sometime in the future a credible case could be made for the proposition that pilotless aircraft are safer than piloted ones. But first some responsible agency will have to tell us that the new risks have been well-characterized, and that the chance of inadvertent failure or malevolent interference is so low (but not zero) that a reasonable person does not have to worry about such a thing coming to pass.
Note: This piece was published in The Ottawa Citizen on 22 April 2013.
Terrorism has a special salience for Canadians: The destruction of Air India Flight 182 in June 1985 by a bomb placed inside an item of checked baggage within Canada, causing 329 deaths, is the second-largest fatality count from this type of attack in modern times – second only to 9/11. Twenty-one years later, in June 2006, the arrest of the “Toronto 18,” and the subsequent revelation of clear evidence of dedicated planning for mass murder, showed that we were not immune to ongoing threats of this kind.
The fact that last week’s bombings in Boston were the first such event on U.S. soil since 9/11, as well as the argument that the overall frequency of what may be classified as terrorist attacks in developed countries appears to have been declining over the past decades, do not provide much comfort. This is because terrorism strikes most of us as a special – perhaps unique – type of risk, due to its deliberate malevolence directed against innocent and unsuspecting victims and, sometimes, to its suggestion of complex and intractable issues (the “clash of civilizations”). Instinctively, we want to know not just “Who?” but “Why?” And yet the answers to the second question, once provided, are almost invariably unsatisfactory.
The political debate that arose immediately in Canada after the Boston bombings, triggered by the phrase “root causes,” reveals the inevitably unhelpful character of seeking simple, quick responses to these events. The very nature of terrorism, to the extent to which we can understand it (or even classify it as a distinct phenomenon separate from other forms of intentional violence) at all, defies clear explanation. Likewise the quick, ritualistic rejoinder from heads of government, to the effect that they will always respond to such attacks with toughness and swift justice, is simply irrelevant; for the perpetrators, present and future, this threat carries no weight whatsoever.
Yet the history of modern terrorism, from its origins in Tsarist Russia in the second half of the nineteenth century, offers at least some guidance. (I owe what follows to Claudia Verhoeven of Cornell University.) The mind of the terrorist is governed by one overriding idea, namely, to create a radical rupture in time, in history, and in tradition, by means of a singular event, or a connected series of such events, through “propaganda by the deed,” a phrase which also dates from this historical period. This idea thus differs fundamentally from the motivations of modern mass-movements with political objectives, which seek by either democratic or non-democratic means, or a combination of both, over a protracted period, through many setbacks, to achieve radical institutional change by ultimately taking the reins of “legitimate” power.
The competing idea of an instantaneous and thoroughgoing rupture in historical time by an exemplary violent deed is sometimes accompanied by an explicit rationale, in the form of a manifesto, but not always. But this difference is immaterial, because in either case the “thinking” behind the deed is, strictly speaking, delusional. There is not, and there cannot be, an organic connection between the deed itself and the expected outcome. This is shown clearly in one of the most notorious cases to date, that of the Unabomber, who penned elaborate academic essays in social theory to accompany his deadly packages: There is simply no sensible link between the rationale and the means of its intended realization. But in other cases, such as the inarticulate ramblings of Timothy McVeigh in his interview on Sixty Minutes or the episodic Twitter feeds from the younger of the two Boston brothers, we have almost nothing to go on. Sometimes the supposed rationale is simply pathetic: One of the recently-named jihadists from London, Ontario apparently reasoned that, since it was too onerous for a young man to live as a faithful Muslim, without alcohol and women, he would rather earn a quick ticket to Paradise by becoming a warrior for his faith.
For me, this clearly delusional character of the deadly violence that accompanies propaganda by the deed links events explicitly labelled as terrorist attacks with other private acts of mass murder. Thus it is hard for me to see any essential difference between the Boston bombings, on the one hand, and on the other the terrible shootings in 2012 in the movie theater in Aurora, Colorado and the elementary school in Newtown, Connecticut (in both of which there were accidental circumstances that prevented many more casualties from occurring). Others may disagree, of course; in both media commentary and academic analysis, we do not yet have a consensus on a definitive conception of terrorism.
Then what do we have to guide us in thinking about terrorism risk? The surest guide is the evidence that dedicated police work, using a combination of traditional and modern technology-enhanced techniques, has been shown to be, in a number of different countries, a highly-effective means for preventing and mitigating this risk. In Canada this includes the superb infiltration and surveillance operation that forestalled the planned attacks by the Toronto 18. To be sure, this work will always be relatively more effective against small groups, as opposed to so-called “lone-wolf” operations; but even in the latter case, as the events in Boston showed, newer resources such as social-media networks can supply additional tools for the police to rely on.
The legitimate special fears to which certain types of risk, such as terrorism, give rise always require an equal measure of fortitude and balanced perspective on the part of the public. We need to resist the temptation to compromise important civil liberties in our search for an adequate level of protection against this risk, because it simply cannot be reduced much below what we have been living with in recent years. Above all we must ensure that we keep in mind that there are many sources of risk and that it is a mistake to overcompensate on one while paying too little attention to others.
Looking for Trouble
Blog Post by William Leiss, Senior Invited Fellow Fall 2012
Society for the Humanities, Cornell University
I define risk as the chance of harm, and risk management as the attempt to anticipate and prevent or mitigate harms that may be avoidable. Risk estimation and assessment are the technical tools used to predict both how likely a specific type of harm is to affect us and how much damage it might do if it comes to pass. This predictive capacity allows us to take precautionary measures, in advance of the harm being inflicted, to lower the anticipated amount of damage, by spending some money in advance – provided we are wise enough. Risk management can, if used correctly, help us either to avoid some harms entirely or otherwise to minimize the damage they may do to us.
Natural Hazards: Taking precautionary measures to lessen anticipated damage is sometimes a huge advantage. Long before Katrina hit New Orleans, it was known that the existing levees could not withstand a category 4-5 hurricane; in the mid-1990s, a decision was taken that governments could not afford the estimated $10-15 billion needed to upgrade the levees. The monetary damage from Katrina now exceeds $100 billion and is still rising; the human toll has been simply appalling, as it is now with Sandy. After Sandy there is an active discussion – see The New York Times article on Nov. 7, “Weighing sea barriers to protect New York” – go to http://tinyurl.com/ba35493 – about whether to spend a good chunk of money soon, and on what type of preventative measures, which might substantially lessen the costs of a future event of the same or worse type for the low-lying areas in New York City and New Jersey. A careful risk assessment can predict both the probabilities (with ranges of uncertainty) of a future event and its estimated damages. Let’s see whether the relevant parties have a rational discussion about this issue, or whether it just gets lost amidst ideological fevers about the size of government and abolishing FEMA.
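The levee decision just described can be framed as a rough expected-loss comparison: weigh a one-time precautionary expenditure against the probability-weighted damage over a planning horizon. The figures below are hypothetical placeholders for illustration only (the annual strike probability, the residual damage after an upgrade, and the thirty-year horizon are my assumptions, not engineering estimates); a minimal sketch in Python:

```python
# Illustrative expected-loss comparison for a precautionary upgrade.
# All numbers are hypothetical assumptions for the sketch, not real estimates.

def expected_loss(p_event: float, damage: float) -> float:
    """Expected damage per year: probability of the event times its cost."""
    return p_event * damage

p_cat45 = 0.02              # assumed annual chance of a category 4-5 strike
damage_no_upgrade = 100e9   # assumed damage if the levees fail (Katrina scale)
damage_upgraded = 20e9      # assumed residual damage with upgraded levees
upgrade_cost = 15e9         # one-time upgrade cost (upper end of the $10-15B estimate)
horizon = 30                # planning horizon in years

loss_without = horizon * expected_loss(p_cat45, damage_no_upgrade)
loss_with = upgrade_cost + horizon * expected_loss(p_cat45, damage_upgraded)

print(f"Expected loss without upgrade: ${loss_without / 1e9:.0f}B")
print(f"Expected loss with upgrade:    ${loss_with / 1e9:.0f}B")
```

Under these assumed numbers the upgrade wins decisively ($27B expected total versus $60B), which is the general shape of the argument for precaution: a certain cost now against a much larger probability-weighted cost later. The real analysis would, of course, attach uncertainty ranges to each input rather than point values.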
Social and Political Hazards (USA): Before election day 2012 there was a substantial risk that a win for the Romney-Ryan ticket could mean a long-term shift to the far right in American politics: a super-conservative lock on the Supreme Court for the next generation, a “permanent Republican majority” achieved through billionaire vote-buying and blatant voter-suppression among the poor, and major social policy shifts, including the cancellation of Obamacare and wholesale repeal of regulatory controls in financial, environmental, and other areas. (Basically, for those who know their American history, this agenda promised a return to the conditions that existed in the 1890s.) We now know a lot more about the strategic analysis that went on in the Obama camp, and the counter-measures that were devised to lessen the chance that the Republican agenda would succeed, thanks to a brilliant piece published on Nov. 7 by Michael Scherer (http://tinyurl.com/afspvq6), entitled “Inside the secret world of quants and data crunchers who helped Obama win.” (Given how much damage was done in the ongoing financial crisis by quants running wild, this is welcome news.) Scherer’s story details the highly-sophisticated database construction and statistical analysis that enabled the Obama team to do precise targeting of key voting blocs, offsetting the Republican monetary advantage in the cruder techniques of blanketing the airwaves with negative messaging. It is very clear from the post-electoral recriminations among the key Romney backers that they were taken by surprise by the effectiveness of these counter-measures, which by the way were based on legitimate democratic objectives in getting out the voters, rather than trying to suppress voter turnout. Good; maybe they’ll learn something useful, although I doubt it.
Social and Political Hazards (Europe): Germany and other powerful northern European states have so far failed to come up with a sensible political strategy for dealing with the never-ending sovereign debt and deficit crisis there. Virtually all knowledgeable commentators agree that the current “austerity regime” will turn out to be self-defeating on a massive scale. (See, for example, the editorial in the Nov. 8 New York Times, “Greece drinks the hemlock”: http://tinyurl.com/czmnhd4.) True, there were serious financial scandals in the now-weakened nations, including profligacy and deception in Greece and horrendous housing bubbles in Spain and Ireland. But the medicine is now exacerbating the disease, causing a potentially deadly downward economic spiral with no end in sight. Unemployment rates, especially among the young, are skyrocketing, and impoverishment is spreading. The urge to punish the formerly-profligate, among citizens in Germany and elsewhere, is understandable but atavistic, and it ignores the fact that their own financial institutions were deeply complicit in the earlier wrongdoing. There is a very serious long-term risk in this situation, already evident in Greece, where rapid economic decline fosters xenophobic political and social violence (all too evident on that continent in the 1920s). If this prospect is not clearly recognized soon, and if the self-defeating austerity measures are not corrected, there is big trouble ahead.
So, despite what the old adage tells us, looking for trouble is a very good idea, once we are armed with the concept of risk. Phrased more precisely, this prescription advises us to try to anticipate trouble in a disciplined and evidence-driven way, seeking to head off at least the worst consequences of highly-likely harms. The alternative – waiting for the body count to be tallied once the hazard has struck – is unfair to the victims of chance.
Environmental Release of Genetically-Engineered Mosquitoes:
The Latest Episode in Frankenstein-Type Scientific Adventures
William Leiss (2 February 2012)
The subtitle for this essay is merely descriptive – not at all intentionally provocative – and is meant to be taken literally. By “Frankenstein-type” I mean, not the scientific work itself, but rather the arrogant and thoughtless act of a scientist in releasing a novel entity into the environment without adequate notice or prior discussion with the public, whether accidentally (as in the case of Mary Shelley’s story) or deliberately. Should this practice continue, as I suspect it will, almost certainly there will eventually be a very bad ending – for science itself. Only remedial action by other scientists themselves can head it off, and so far such action is notable by its absence. They will regret this omission.
Yesterday’s story in Spiegel International Online by Rafaela von Bredow, “The controversial release of suicide mosquitoes” (http://spon.de/adztv), prompted me to look further. And sure enough, I had missed an earlier report in the publication I most rely on for such matters, The New York Times, in the edition dated 31 October 2011, written by Andrew Pollack: “Concerns raised over genetically engineered mosquitoes” (http://tinyurl.com/7remh3q). Other sources for this issue can easily be found by putting “genetically engineered mosquitoes” into your preferred Internet search engine.
The Aedes aegypti mosquito carries the dengue virus, which causes the most important insect-borne viral disease (dengue fever) in the world. Worldwide there are an estimated 50-100 million cases and at least 20,000 fatalities (mostly children) annually, and there is no vaccine or adequate therapy. It is a serious public health burden in many countries. Papers in the scientific literature about the possibility of attacking the problem by modifying or genetically engineering the mosquito itself have appeared over the past fifteen years. The dominant approach of this type is to release sterilized male mosquitoes into the environment which upon mating with females will produce no young. This approach has shown limited success.
The recent publicity has to do with a new approach in which laboratory-bred male mosquitoes are genetically engineered to express a protein that causes the larvae to die. The gene was developed by a biotechnology company in Oxford, England. The controversy involves the decision made by the company to seek approval for the environmental release of the GE mosquitoes in confidence – without public release of relevant information – from governments in various countries. This began in 2009 in the Cayman Islands; later releases took place in Malaysia and Brazil, and future releases are scheduled in Panama, India, Singapore, Thailand, Vietnam, and Florida.
Here’s an extract from Andrew Pollack’s story:
In particular, critics say that Oxitec, the British biotechnology company that developed the dengue-fighting mosquito, has rushed into field testing without sufficient review and public consultation, sometimes in countries with weak regulations.
“Even if the harms don’t materialize, this will undermine the credibility and legitimacy of the research enterprise,” said Lawrence O. Gostin, professor of international health law at Georgetown University.
Luke Alphey, the chief scientist at Oxitec, said the company had left the review and community outreach to authorities in the host countries.
“They know much better how to communicate with people in those communities than we do coming in from the U.K.,” he said.
Rafaela von Bredow’s recent and useful follow-up report in Spiegel Online also includes comments from other scientists, in particular two who are working on competing projects. The first reference below is to Guy Reeves of the Max Planck Institute for Evolutionary Biology in Plön, Germany:
The geneticist [Reeves] doesn’t think Oxitec’s techniques are “particularly risky” either. He simply wants more transparency. “Companies shouldn’t keep scientifically important facts secret where human health and environmental safety are concerned,” he says.
Reeves himself is working on even riskier techniques, ones that could permanently change the genetic makeup of entire insect populations. That’s why he so vehemently opposes Oxitec’s rash field trials: He believes they could trigger a public backlash against this relatively promising new approach, thereby halting research into genetic modification of pests before it really gets off the ground.
He’s not alone in his concerns. “If the end result is that this technology isn’t accepted, then I’ve spent the last 20 years conducting research for nothing,” says Ernst Wimmer, a developmental biologist at Germany’s Göttingen University and one of the pioneers in this field. Nevertheless he says he understands Oxitec’s secrecy: “We know about the opponents to genetic engineering, who have destroyed entire experimental crops after they were announced. That, of course, doesn’t help us make progress either.”
H. G. Wells published his novel, The Island of Doctor Moreau, in 1896: See the Wikipedia entry, http://en.wikipedia.org/wiki/The_Island_of_Doctor_Moreau; the entire book is online at: http://www.bartleby.com/1001/. His story deals with a medical scientist living on a remote island who creates half-human/half-animal creatures. In more recent times we have become aware of experiments in genetic engineering, such as cloning, that have been done in some countries, notably South Korea, and have raised serious issues in scientific ethics. (See the New York Times editorial of December 2005 [http://tinyurl.com/7xmdlgt] and follow the links, or just search for “cloning South Korea.”) Later publicity concerned the cloning of human embryos in China: See the New Scientist article (http://tinyurl.com/7sguhgq) from March 2002. Significant differences among countries in terms of government regulatory regimes and scientific research ethics programs remind us of Wells’s scenario.
Other modified insects using the earlier technology (males sterilized with radiation), notably the pink bollworm, have been released to control plant pests. The GE mosquitoes from Oxitec are the first to use the new technology for a human health problem. They could very well represent an enormous human benefit with insignificant or even no offsetting risks – although it would be very nice to have a credible and publicly-available risk assessment certifying the same (perhaps we will get one from the U. S. Department of Agriculture, which has to approve the field trial in Florida). It is quite possible that very few people living in countries affected by dengue fever would have any objections to the use of GE mosquitoes.
But the biggest risk involved in the use of unpublicized field trials (environmental release) for GE mosquitoes is to scientists and scientific research itself. Readers of my recent blog (http://leiss.ca/wp-content/uploads/2011/12/Nature-is-the-biggest-bioterrorist.pdf), on the genetic engineering of the H5N1 avian flu virus, will recall the interesting issues in scientific ethics raised in that case, which are different from, but related to, those in the current one.
Both of these sets of issues encompass the ability and willingness of the scientific community to “police” research practices with the long-term public interest in mind. To do so they would have to create deliberative structures both international in scope and sufficiently robust to enforce their strictures on unwilling members of their community – for example, by an enforceable policy of denial of research grant funding and publication in journals. (The suggestion here avoids any necessary involvement by government authorities.) This is a tall order, to be sure, but there are a few precedents, notably the 1975 Asilomar Conference (http://en.wikipedia.org/wiki/Asilomar_Conference_on_Recombinant_DNA).
In the case of GE mosquitoes, the offhand comment by the lead scientist, Luke Alphey, quoted above from Andrew Pollack’s article, with its crude justification for ignoring his own clear responsibility for initiating public disclosure and deliberation, is telling. The other scientists who were quoted in the two articles referenced above were obviously unhappy with him, but they did not suggest what remedies might be imposed for such irresponsible acts. This despite their recognition that the genetic engineering of plants and animals has already been a source of both intense public curiosity and equally great concern, a phenomenon that will inevitably grow in importance with each further advance in scientific discovery and application in this field. For scientists to ignore the risks to their own activities in this domain is foolish and short-sighted indeed.
The first famous example was the debate among some scientists over the use of the first atomic bomb in 1945: See the discussion in the Back Section of my 2008 book, The Priesthood of Science (http://www.press.uottawa.ca/book/the-priesthood-of-science).
“Complexity cloaks Catastrophe”
William Leiss (17 January 2012)
Good risk management is inherently simple; adding too many complexities increases the likelihood of overlooking the obvious.
Leiss, “A Short Sermon on Risk Management” (http://leiss.ca/?page_id=467)
The quoted phrase that forms the title for this paper comes from the opening pages of Richard Bookstaber’s indispensable 2007 book, A Demon of Our Own Design: Markets, Hedge Funds, and the Perils of Financial Innovation (New York: Wiley). This book inspired much of my own subsequent work in this area, as published in The Doom Loop in the Financial Sector, and other black holes of risk (University of Ottawa Press, 2010; e-book available at: http://www.press.uottawa.ca/book/the-doom-loop-in-the-financial-sector; see the section on “Complexity” at pages 80-83). Bookstaber’s key point is that complexity in financial innovations is itself an important risk factor for systemic failure in the financial sector.
Now the New York Times columnist Joe Nocera has written in his January 17 column, “Keep it Simple” (http://www.nytimes.com/2012/01/17/opinion/bankings-got-a-new-critic.html?hp), about an important new source for this topic. This is a November 2011 paper prepared by staff at a firm called Federal Financial Analytics, Inc.: “A new framework for systemic financial regulation: Simple, transparent, enforceable, and accountable rules to reform financial markets,” available as a PDF file at: http://www.fedfin.com/images/stories/client_reports/complexityriskpaper.pdf.
In effect, both the paper and Nocera’s commentary argue – with reference to the U. S. Dodd-Frank Act – that responding to complexities in financial innovations with complex regulatory regimes is a mug’s game. It does not solve the problem of “complexity risk” and in fact may exacerbate that risk. It also does a poor job of anticipating the next challenge, as the recent collapse of MF Global shows.
The Obama administration has put its faith in “smart regulation,” which ignores the fact that it is the industries being regulated which can hire the smartest people and task them with finding a way to circumvent any set of rules, however complex. (Meanwhile, his Republican opponents work feverishly to gut his regulatory agencies of competent staff and leaders.) Similarly, the authors of the paper, “A new framework for systemic financial regulation,” propose solutions involving new corporate-governance regimes, but the private-sector risk governance regimes failed utterly the last time around, so why on earth would the rest of us want to retry this experiment?
In the end we have to turn to the most reliable guide, Simon Johnson, whose advice is simple: Break up the big banks. (Follow his blogs at: http://baselinescenario.com/; the latest is “Refusing to take yes for an answer on bank reform.”) Banks too big to fail should be regarded as too big to exist. And yet the leading financial institutions in the United States are bigger than ever. No “systemically-important financial institution” will ever be allowed to fail. The bankers who run them know this. They also know that they cannot be outsmarted by the regulators.