September 11, 2013 / compassioninpolitics

Worst Case Scenario Is Bad on Utilitarian Grounds

WORST CASE POLICYMAKING DOES MORE DAMAGE THAN TAKING RISKS – LOOK TO HISTORY
Evans 12
– Dylan, PhD in Philosophy, London School of Economics; lecturer in behavioral science, University College Cork School of Medicine in Ireland; founder of Projection Point, a leading provider of risk intelligence solutions, (“Nightmare Scenario: The Fallacy of Worst-Case Thinking,” Risk Management Magazine, Risk and Insurance Management Society, April 2, http://www.rmmagazine.com/2012/04/02/nightmare-scenario-the-fallacy-of-worst-case-thinking/)

There’s something mesmerizing about apocalyptic scenarios. Like an alluring femme fatale, they exert an uncanny pull on the imagination. That is why what security expert Bruce Schneier calls “worst-case thinking” is so dangerous. It substitutes imagination for thinking, speculation for risk analysis and fear for reason. One of the clearest examples of worst-case thinking was the so-called “1% doctrine,” which Dick Cheney is said to have advocated while he was vice president in the George W. Bush administration. According to journalist Ron Suskind, Cheney first proposed the doctrine at a meeting with CIA Director George Tenet and National Security Advisor Condoleezza Rice in November 2001. Responding to the thought that Al Qaeda might want to acquire a nuclear weapon, Cheney apparently remarked: “If there’s a 1% chance that Pakistani scientists are helping Al Qaeda build or develop a nuclear weapon, we have to treat it as a certainty in terms of our response. It’s not about our analysis…It’s about our response.” By transforming low-probability events into complete certainties whenever the events are particularly scary, worst-case thinking leads to terrible decision making. For one thing, it’s only half of the cost/benefit equation. “Every decision has costs and benefits, risks and rewards,” Schneier points out. “By speculating about what can possibly go wrong, and then acting as if that is likely to happen, worst-case thinking focuses only on the extreme but improbable risks and does a poor job at assessing outcomes.” An epidemic of worst-case thinking broke out in the United States in the aftermath of the Three Mile Island accident in 1979. A core meltdown in the nuclear power station there led to the release of radioactive gases. The Kemeny Commission Report, created by presidential order, concluded that “there will either be no case of cancer or the number of cases will be so small that it will never be possible to detect them,” but the public was not convinced. As a result of the furor, no new nuclear power plants were built in the United States for 30 years. The coal- and oil-fueled plants that were built instead, however, surely caused far more harm than the meltdown at Three Mile Island, both directly via air pollution and indirectly by contributing to global warming.
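
To make Schneier’s “half the cost/benefit equation” point concrete, here is a minimal sketch in Python (all figures are invented for illustration and do not come from the article): expected-value analysis weighs a loss by its probability, while the 1% doctrine acts on the full loss as if it were certain.

```python
# Hypothetical numbers, invented for illustration only.
p_threat = 0.01            # assumed probability of the feared event
cost_if_event = 1_000_000  # assumed loss if the event actually occurs
cost_of_response = 50_000  # assumed fixed cost of the preemptive response

expected_loss = p_threat * cost_if_event  # what risk analysis weighs: 10,000
worst_case_loss = cost_if_event           # what the 1% doctrine weighs: 1,000,000

print(f"Expected loss without responding: {expected_loss:,.0f}")
print(f"Loss the 1% doctrine acts on:     {worst_case_loss:,.0f}")
# Against the expected loss, the 50,000 response fails cost/benefit;
# against the imagined certainty, it looks cheap. Same facts, opposite verdicts.
```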

Worst-case scenarios commit the conjunction fallacy – adding vivid detail makes a catastrophe story seem more probable while making it objectively less probable
Yudkowsky ’06 – Eliezer, “Cognitive biases potentially affecting judgment of global risks,” forthcoming in Global Catastrophic Risks, eds. Nick Bostrom and Milan Cirkovic, draft of August 31, 2006

The conjunction fallacy Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations. Rank the following statements from most probable to least probable: 1. Linda is a teacher in an elementary school. 2. Linda works in a bookstore and takes Yoga classes. 3. Linda is active in the feminist movement. 4. Linda is a psychiatric social worker. 5. Linda is a member of the League of Women Voters. 6. Linda is a bank teller. 7. Linda is an insurance salesperson. 8. Linda is a bank teller and is active in the feminist movement. 89% of 88 undergraduate subjects ranked (8) as more probable than (6). (Tversky and Kahneman 1982.) Since the given description of Linda was chosen to be similar to a feminist and dissimilar to a bank teller, (8) is more representative of Linda’s description. However, ranking (8) as more probable than (6) violates the conjunction rule of probability theory which states that p(A & B) ≤ p(A). Imagine a sample of 1,000 women; surely more women in this sample are bank tellers than are feminist bank tellers. Could the conjunction fallacy rest on subjects interpreting the experimental instructions in an unanticipated way? Perhaps subjects think that by “probable” is meant the probability of Linda’s description given statements (6) and (8), rather than the probability of (6) and (8) given Linda’s description. Or perhaps subjects interpret (6) to mean “Linda is a bank teller and is not active in the feminist movement.” Although many creative alternative hypotheses have been invented to explain away the conjunction fallacy, the conjunction fallacy has survived all experimental tests meant to disprove it; see e.g. Sides et al. (2002) for a summary. For example, the following experiment excludes both of the alternative hypotheses proposed above: Consider a regular six-sided die with four green faces and two red faces. The die will be rolled 20 times and the sequence of greens (G) and reds (R) will be recorded. You are asked to select one sequence, from a set of three, and you will win $25 if the sequence you chose appears on successive rolls of the die. Please check the sequence of greens and reds on which you prefer to bet. 1. RGRRR 2. GRGRRR 3. GRRRRR 125 undergraduates at UBC and Stanford University played this gamble with real payoffs. 65% of subjects chose sequence (2). (Tversky and Kahneman 1983.) Sequence (2) is most representative of the die, since the die is mostly green and sequence (2) contains the greatest proportion of green faces. However, sequence (1) dominates sequence (2) because (1) is strictly included in (2) – to get (2) you must roll (1) preceded by a green face. In the above task, the exact probabilities for each event could in principle have been calculated by the students. However, rather than go to the effort of a numerical calculation, it would seem that (at least 65% of) the students made an intuitive guess, based on which sequence seemed most “representative” of the die. Calling this “the representativeness heuristic” does not imply that students deliberately decided that they would estimate probability by estimating similarity. Rather, the representativeness heuristic is what produces the intuitive sense that sequence 2 “seems more likely” than sequence 1.
In other words, the “representativeness heuristic” is a built-in feature of the brain for producing rapid probability judgments, rather than a consciously adopted procedure. We are not aware of substituting judgment of representativeness for judgment of probability. The conjunction fallacy similarly applies to futurological forecasts. Two independent sets of professional analysts at the Second International Congress on Forecasting were asked to rate, respectively, the probability of “A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983” or “A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983”. The second set of analysts responded with significantly higher probabilities. (Tversky and Kahneman 1983.) In Johnson et al. (1993), MBA students at Wharton were scheduled to travel to Bangkok as part of their degree program. Several groups of students were asked how much they were willing to pay for terrorism insurance. One group of subjects was asked how much they were willing to pay for terrorism insurance covering the flight from Thailand to the US. A second group of subjects was asked how much they were willing to pay for terrorism insurance covering the round-trip flight. A third group was asked how much they were willing to pay for terrorism insurance that covered the complete trip to Thailand. These three groups responded with average willingness to pay of $17.19, $13.90, and $7.44 respectively. According to probability theory, adding additional detail onto a story must render the story less probable. It is less probable that Linda is a feminist bank teller than that she is a bank teller, since all feminist bank tellers are necessarily bank tellers. Yet human psychology seems to follow the rule that adding an additional detail can make the story more plausible. People might pay more for international diplomacy intended to prevent nanotechnological warfare by China, than for an engineering project to defend against nanotechnological attack from any source. The second threat scenario is less vivid and alarming, but the defense is more useful because it is more vague. More valuable still would be strategies which make humanity harder to extinguish without being specific to nanotechnologic threats – such as colonizing space, or see Yudkowsky (this volume) on AI. Security expert Bruce Schneier observed (both before and after the 2005 hurricane in New Orleans) that the U.S. government was guarding specific domestic targets against “movie-plot scenarios” of terrorism, at the cost of taking away resources from emergency-response capabilities that could respond to any disaster.
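
The dominance claim in the dice gamble above is easy to check numerically. Here is a minimal Monte Carlo sketch (my own addition, not part of the Yudkowsky paper), where a “win” means the chosen string appears somewhere in 20 successive rolls:

```python
import random

# Die with four green faces and two red faces: P(G) = 2/3, P(R) = 1/3.
def win_prob(sequence: str, trials: int = 200_000) -> float:
    """Estimate the chance the sequence appears within 20 successive rolls."""
    wins = 0
    for _ in range(trials):
        rolls = "".join(random.choice("GGGGRR") for _ in range(20))
        if sequence in rolls:
            wins += 1
    return wins / trials

for seq in ("RGRRR", "GRGRRR", "GRRRRR"):
    print(seq, round(win_prob(seq), 4))
# Whatever the exact estimates, RGRRR is contained in GRGRRR, so every win
# for sequence (2) is also a win for sequence (1): the shorter bet dominates.
```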

Focusing on existential risk means humanity turns a collective blind eye to actually existing worldly problems—resolving these “ordinary” harms is a prerequisite to resolving existential ones. Williams in 2010
Lynda Williams, Faculty in Engineering/Physics at Santa Rosa Junior College, 2010 (“Irrational Dreams of Space Colonization,” Peace Review: A Journal of Social Justice, Volume 22, Issue 1, Available Online to Subscribing Institutions via Taylor & Francis Online, p. 7-8)

We have much to determine on planet Earth before we launch willy-nilly into another space race that would inevitably result in environmental disaster and include a new arms race in the heavens. If we direct our intellectual and technological resources toward space [end page 7] exploration without consideration of the environmental and political consequences, what is left behind in the wake? The hype surrounding space exploration leaves a dangerous vacuum in the collective consciousness of solving the problems on Earth. If we accept the inevitability of the destruction of Earth and its biosphere, then it is perhaps not too surprising that many people grasp at the last straw and look toward the heavens for solutions and a possible resolution. Many young scientists are perhaps fueling the prophesy of our planetary destruction by dreaming of lunar and/or Martian bases to save humanity, rather than working on the serious environmental challenges that we face on Earth. Every space-faring entity, be they governmental or corporate, faces the same challenges. Star Trek emboldened us all to dream of space as the final frontier. The reality is that our planet Earth is a perfect spaceship and may be our final front-line. We travel around our star, the sun, once every year, and the sun pulls us around the galaxy once every 250,000,000 years through star systems, star clusters, and gas clouds that may contain exosolar planets that host life or that may be habitable for us to colonize. The sun will be around for billions of years and we have ample time to explore the stars. It would be wise and prudent for us as a species to focus our intellectual and technological knowledge into preserving our spaceship for the long voyage ahead so that, once we have figured out how to make life on Earth work in an environmentally and politically sustainable way, we can then venture off the planet into the new frontier of our dreams.

Just because you have an extinction impact does not mean you don’t have to answer internal link takeouts; their impact framing diminishes the capacity to find meaning in patterns of events and should be rejected.
Furedi 9
(Frank Furedi is a professor of Sociology, School of Social Policy, Sociology and Social Research, The University of Kent, “PRECAUTIONARY CULTURE AND THE RISE OF POSSIBILISTIC RISK ASSESSMENT” http://www.frankfure…7_augustus1.pdf //Donnie)
The culture that has been described as the culture of fear or as precautionary culture encourages society to approach human experience as a potential risk to our safety.6 Consequently every conceivable experience has been transformed into a risk to be managed. One leading criminologist, David Garland, writes of the ‘Rise of Risk’ – the explosion in the growth of risk discourse and risk literature. He notes that little connects this literature other than the use of the word risk.7 However, the very fact that risk is used to frame a variety of otherwise unconnected experiences reflects a taken-for-granted mood of uncertainty towards human experience. In contemporary society, little can be taken for granted other than an apprehensive response towards uncertainty. Arguably, like risk, fear has become a taken-for-granted idiom, even a cultural affectation for expressing confusion and uncertainty. The French social theorist Francois Ewald believes that the ascendancy of this precautionary sensibility is underwritten by a cultural mood that assumes the uncertainty of causality between action and effect. This sensibility endows fear with a privileged status. Ewald suggests that the institutionalisation of precaution ‘invites one to consider the worst hypothesis (defined as the “serious and irreversible” consequence) in any business decision’.8 The tendency to engage with uncertainty through the prism of fear and therefore anticipate the worst possible outcome can be understood as a crisis of causality. Riezler in his early attempt to develop a psychology of fear draws attention to the significant influence of the prevailing system of causality on people’s response to threats. ‘They have been taken for granted – and now they are threatened’ is how he describes a situation where ‘“causes” are hopelessly entangled’.9 As noted previously, the devaluation of people’s capacity to know has significant influence on the way that communities interpret the world around them. Once the authority of knowledge is undermined, people become hesitant about interpreting new events. Without the guidance of knowledge, world events can appear as random and arbitrary acts that are beyond comprehension. This crisis of causality does not simply deprive society from grasping the chain of events that has led to a particular outcome; it also diminishes the capacity to find meaning in what sometimes appears as a series of patternless events.

The precautionary principle is bad – it encourages policymakers to downplay risks associated with nonintervention
Powell 10
(Russell, Research Associate for both the Oxford Centre for Neuroethics and the James Martin 21st Century School Program on Ethics, “What’s the Harm? An Evolutionary Theoretical Critique of the Precautionary Principle,” accessed via Project Muse, Donnie)
First, it might increase the salience of factors that decision makers and their constituencies tend to overlook or underestimate given the foibles of human psychology, such as the tendency to ignore low-risk outcomes in the face of probable short-term gains. But if so, there is a danger that the Principle will encourage a comparably dangerous disposition—namely, the tendency to downplay the risks associated with nonintervention. For example, in some clinical and research settings, there is a tendency for caregivers to allocate disproportionate attention and decisional weight to the risks of medical intervention, and hence to neglect the potentially serious consequences of not intervening (Lyerly et al. 2007). Likewise, patients often focus primarily on the risks of taking a medication, rather than on the risks associated with not taking it. At best, then, the Principle may be said to trade one irrational human penchant for another, without demonstrating that one is more pervasive or pernicious than the other. A second noteworthy feature of the Rio Declaration is its explicit requirement that scientific uncertainty about causal relations not undermine cost-effective preventive action. If by “scientific uncertainty” the Declaration means “subject to or characterized by intermediate probabilities,” then it is chasing a red herring. Commercial regulation has never required incontrovertible proof of causality (Soule 2004), as this would be an impossible standard to meet in epidemiology, ecology, and other fields that wrestle with multi-factorial, geometrically nonlinear, and irreducibly statistical causal relations. Even apparently “straightforward” causal relationships, such as that between smoking and lung cancer, involve imperfect regularities that are approached via probabilistic theories of causation. Surely, private and governmental actors should not be allowed to hide behind a veil of epidemiological—or more broadly scientific—complexity in order to derogate their moral responsibilities. It is not clear, though, how scientific uncertainty per se has been used to undermine sensible regulation, or how the Rio Declaration would ameliorate this problem if it can be shown to exist.

Emergency high-magnitude framing focuses on top-heavy solutions to existential problems, which leads to serial policy failure and collapses long-term change
Hodder and Martin 9
(Patrick, Bachelor of Arts Honours, and Brian, professor of Social Sciences at the University of Wollongong, “Climate crisis? The politics of emergency framing,” Economic and Political Weekly, Vol. 44, No. 36, 5 September 2009, pp. 53-60, http://www.bmartin.c…09epw.html) MFR
Should climate change be considered an emergency? Our aim here is to present some cautionary comments. Most discussion has approached the issue in terms of whether climate change really is an emergency. For example, does the evidence show that warming is proceeding faster than previously thought? Is there a tipping point beyond which climate change is irreversible? How soon and how drastically must carbon emissions be reduced? This way of thinking seems to be concerned with scientific matters, but actually it builds in social assumptions. Many of those who talk of a climate crisis or emergency assume that evidence about climate processes means that addressing climate change is the most urgent social issue, that the solution is policy change at the top, and that thinking of the issue as an emergency is an effective way of bringing about change. It is not the use of the word “emergency” that is necessarily significant here but rather the assumptions that so commonly go along with the word. We think these assumptions need to be brought out into the open and discussed. Let us be clear. We believe climate change is a vitally important issue. We believe action should be taken, the sooner and the more effective the better, to prevent the adverse consequences of global warming. Calling climate change an emergency might be a good approach – but on the other hand it might not be, indeed it might be counterproductive. We think both the advantages and disadvantages of emergency framing should be discussed. The emergency frame implicitly prioritises climate change above other issues. On the other hand, some critics, like Lomborg (2006), argue that other issues should have higher priority. We think it can be a mistake to prioritise one issue over others, because this may encourage competition between activists rather than cooperation. There are plenty of issues of vital importance in which millions of lives are at stake, among them nuclear war, global poverty, HIV, inequality – and smoking, which could kill one billion people this century (Proctor 2001). It is natural to expect campaigners on other vitally important issues – such as torture, sexual slavery and genocide – to remain committed to their concerns. Rather than prioritise climate change as more urgent, it may be more effective for climate change activists to work with other social justice campaigners to find ways to help each other – indeed, some are doing this already. Emergency framing can be used to sideline dissent within the climate change movement itself. For example, those who advocate highly ambitious targets for CO2 reduction may seek the high ground, presenting their position as the only option for humanity and stigmatising others as selling out. Internal democracy, divergent approaches and openness to new viewpoints can be dismissed as unaffordable luxuries when the future is at stake. Our view, instead, is that because climate change is such an important issue, maintaining democracy, diversity and dialogue within the movement is even more vital. One of the consequences of framing climate change as an emergency is an orientation to solutions implemented at the top, usually by government. The assumption is that only governments have the capacity to create change quickly enough. The subtext is that change must be imposed on a reluctant population. In the longer term, this is not good politics, because the way to lasting change is through popular mobilisation, with as many people as possible supporting the change and getting behind it.
Imposing policies from the top runs the risk of provoking a backlash, with gains in the short-term reversed later on. With climate change, the additional shortcoming of focusing on governments – as opposed to building a mass movement that governments feel obliged to follow – is that governments are the least reliable sources of support. Some are captives of fossil fuel lobbies; some operate massive fossil fuel industries themselves. More deeply, governments depend on economic growth to maintain tax revenues used to maintain functions that perpetuate government itself – various bureaucracies, including the military, police and prisons – and to pacify constituencies and lobbies through expenditure, for the rich as much as the poor. Few governments are keen to promote a steady-state economy, a necessity for long-term ecological sustainability. A third major shortcoming of emergency framing is that it is not effective. Psychologically, calling something a crisis may lead to disbelief – if immediate evidence of dramatic effects is not apparent – or disempowerment and withdrawal because there seems to be little an individual can do to address an overwhelming problem. Large numbers of people already think climate change is important, so to get them active the key is to provide practical ways of engaging. Saying that the problem is even bigger and more urgent than before is not likely to make people do more if they cannot already see practical ways to act. Emergency framing is risky. It is, ironically enough, not a good way to create a sustainable movement – a movement that continues to be strong a decade or more down the track after the media have moved on to other issues. The movements against nuclear war fell into this trap: most activists concentrated on protesting in the here and now, demanding short-term change. But the problem of nuclear weapons, part of the wider problem of the mobilisation of science and technology for warfare, was never going to go away in a few years. The movement rose and fell, leaving only a few persistent campaigners attempting to keep the issue alive in the intervening years. The same applies to the climate change movements. They are active now in many countries, but will they be just as active in five or ten years? The challenge is to build a long-term movement, cooperating with other movements, that will persist after media attention declines should climate change not occur as rapidly as scientists anticipate, and will also persist should some of the more calamitous scenarios eventuate. The world needs a sustainable climate change movement built not on fear but on widespread commitment.

You can also find the following evidence here.

Catastrophe scenarios make policy fail – reduce their probability to zero.
Rescher, ’83
(Nicholas, 1983, Professor of Philosophy, University of Pittsburgh, Risk: A Philosophical Introduction to the Theory of Risk Evaluation and Management, University Press of America, p. 39-40)
But in decision theory there are two different, more pressing reasons for dismissing sufficiently improbable possibilities. One is that there are just too many of them. To be asked to reckon with such remote possibilities is to baffle our thought by sending it on a chase after endless alternatives. Another reason lies in our need and desire to avoid stultifying action. It’s simply “human nature” to dismiss sufficiently remote eventualities in one’s personal calculations. The “Vacationer’s Dilemma” of Figure 1 illustrates this. Only by dismissing certain sufficiently remote catastrophic possibilities as outside the range of real possibilities – can we avoid the stultification of action on anything like the standard decision-making approach represented by expected-value calculations. The vacationer takes the plausible line of viewing the chance of disaster as effectively zero, thereby eliminating that unacceptable possible outcome from playing a role by way of intimidation. People generally (and justifiedly) proceed on the assumption that the probability of sufficiently unlikely disasters can be set at zero; that unpleasant eventuations of “substantial improbability” can be dismissed and taken to lie outside the realm of “real” possibilities.
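
Rescher’s Figure 1 is not reproduced in the card, so here is a sketch of the structure of the “Vacationer’s Dilemma” with invented payoffs: a sufficiently catastrophic imagined outcome swamps any expected-value calculation unless its probability is set to zero.

```python
# All payoffs below are hypothetical stand-ins; the card gives no numbers.
p_disaster = 1e-9       # a "substantially improbable" catastrophe
enjoyment = 100.0       # value of taking the vacation
disaster_cost = -1e12   # imagined catastrophic loss

ev_full = (1 - p_disaster) * enjoyment + p_disaster * disaster_cost
ev_truncated = enjoyment  # remote possibility dismissed: probability set to zero

print(f"EV with the remote catastrophe included: {ev_full:,.2f}")   # about -900
print(f"EV with the catastrophe zeroed out:      {ev_truncated:,.2f}")
# However small p_disaster is, a big enough imagined loss drives the expected
# value negative; zeroing it out is what lets the vacationer act at all.
```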

Linear predictions are impossible. Prefer solving existing problems. Sa in 2004
Deug Whan Sa, ’04, Dong-U College, South Korea (“CHAOS, UNCERTAINTY, AND POLICY CHOICE: UTILIZING THE ADAPTIVE MODEL,” International Review of Public Administration, vol. 8, no. 2, 2004, scholar) RK

Third, the characteristic is the existing Newtonian determinism theory which presumes linear relations where things proceed from the starting point toward the future on the thread of a single orbit. Thus, it also assumes that predictions of the future are on the extended line of present knowledge and future knowledge is not as unclear as the present one (Saperstein 1997: 103-107), and that as similar inputs generate similar outcomes, there will be no big differences despite small changes in initial conditions. However, chaos theory assumes that the outcome is larger than the input and that prediction of the future is fundamentally impossible.3 Hence, due to extreme sensitivity to initial fluctuations and non-linear feedback loops, small differences in initial conditions are subject to amplifications and eventual different outcomes, known as ‘chaos.’4 Chaos is sometimes divided into strong chaos and weak chaos (Eve, Horsfall and Lee 1997: 106), and goes through a series of orbit processes of close intersections and divisions. In particular, weak chaos is found in the limits that account for the small proportion inside a system, while strong chaos features divisions at some points inside a system, which lead to occupation of the entire system in little time. CHAOS, UNCERTAINTY AND POLICY CHOICE 1. Review of Existing Policy Models Social scientists have tried to explain and predict policy matters, but never have generated satisfactory outcomes in terms of accuracy of predictions. There could be a variety of reasons for this inaccuracy in prediction, but one certain reason is that policies themselves are intrinsically governed by uncertainty, complexity and chaos in policies that produce many different outcomes though they are faced with the same initial internal states, the same environments, and governed by the same causal relationships.
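
The card names no specific model, so as a standard illustration of “extreme sensitivity to initial fluctuations” here is the logistic map in its chaotic regime: two trajectories that start one part in a billion apart become completely unrelated within about forty steps.

```python
# Logistic map at r = 4, a textbook example of deterministic chaos.
def logistic(x: float, r: float = 4.0) -> float:
    return r * x * (1 - x)

x1, x2 = 0.400000000, 0.400000001  # initial conditions differing by 1e-9
for step in range(1, 41):
    x1, x2 = logistic(x1), logistic(x2)
    if step % 10 == 0:
        print(f"step {step:2d}: x1={x1:.6f}  x2={x2:.6f}  gap={abs(x1 - x2):.6f}")
# The gap roughly doubles each step, so by step 40 it is of order one:
# "similar inputs" no longer generate similar outcomes, and linear
# extrapolation from the initial state tells you nothing.
```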

As a policy maker, you should evaluate low probability impacts at zero
Herbeck & Katsulas 92
– Dale A., Professor of Communication and Director of the Fulton Debating Society at Boston College & John P., Debate Coach at Boston College
[October 29-November 1, “The Use and Abuse of Risk Analysis in Policy Debate”, Paper presented at 78th Annual Meeting of the Speech Communication Association, Chicago, IL, Available Online via ERIC Number ED354559, p. 10-12]
First, and foremost, we need to realize that some risks are so trivial that they are simply not meaningful. This is not to argue that all low probability/high impact arguments should be ignored, but rather to suggest that there is a point beneath which probabilities are meaningless. The problem with low probability arguments in debate is that they have taken on a life of their own. Debate judges routinely accept minimal risks which would be summarily dismissed by business and political leaders. While it has been argued that our leaders should take these risks more seriously, we believe that many judges err in assessing any weight to such speculative arguments. The solution, of course, is to recognize that there is a line beyond which probability is not meaningfully evaluated. We do not believe it is possible to conclude, given current evidence and formats of debate, that a plan might cause a 1 in 10,000 increase in the risk of nuclear conflagration.17 Further, even if it were possible, we need to recognize that at some point a risk becomes so small that it should be ignored. As the Chicago Tribune aptly noted, we routinely dismiss the probability of grave impacts because they are not meaningful: It begins as soon as we awake. Turn on the light, and we risk electrocution; several hundred people are killed each year in accidents involving home wiring or appliances. Start downstairs to put on the coffee, and you’re really asking for it; about 7,000 Americans die in home falls each year. Brush your teeth, and you may get cancer from the tap water. And so it goes throughout the day — commuting to work, breathing the air, working, having lunch, coming home, indulging in leisure time, going back to bed.18 Just as we ignore these risks in our own lives, we should be willing to ignore minimal risks in debates. Second, we must consider the increment of the risk. All too often, disadvantages claim that the plan will dramatically increase the risk of nuclear war. This might be true, and still not be compelling, if the original risk was itself insignificant. For example, it means little to double the probability of nuclear war if the original probability was only 1 in one million. To avoid this temptation, advocates should focus on the initial probability, and not on the marginal doubling of the risk claimed by the negative.
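
The increment point is simple arithmetic, and worth making explicit (the numbers below are the ones the card itself uses as a hypothetical):

```python
baseline = 1 / 1_000_000   # original probability of the catastrophe
doubled = 2 * baseline     # "the plan doubles the risk"

print(f"baseline risk:     {baseline:.7f}")
print(f"doubled risk:      {doubled:.7f}")
print(f"absolute increase: {doubled - baseline:.7f}")  # still one in a million
# The relative change (a 100% increase) sounds alarming; the absolute change
# is a single chance in a million, which is the figure a decision should weigh.
```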

As a judge, you must not base decisions on worst-case scenarios but rather on balanced risk assessment
Rescher 83
– Nicholas, Professor of Philosophy at University of Pittsburgh, Chairman of the Philosophy Department, Director of the Center for Philosophy of Science, Honorary degrees from 8 universities on 3 continents, Doctorate in Philosophy from Princeton
[Risk: A Philosophical Introduction to the Theory of Risk Evaluation and Management, p. 50]
The “worst possible case fixation” is one of the most damaging modes of unrealism in deliberations about risk in real-life situations. Preoccupation about what might happen “if worst comes to worst” is counterproductive whenever we proceed without recognizing that, often as not, these worst possible outcomes are wildly improbable (and sometimes do not deserve to be viewed as real possibilities at all). The crux in risk deliberations is not the issue of loss “if worst comes to worst” but the potential acceptability of this prospect within the wider framework of the risk situation, where we may well be prepared “to take our chances,” considering the possible advantages that beckon along this route. The worst threat is certainly something to be borne in mind and taken into account, but it is emphatically not a satisfactory index of the overall seriousness or gravity of a situation of hazard.

I believe there are other posts on Learn Policy Debate which speak to the issue of underviews and worst-case scenarios. For instance, the Kessler and Berube evidence here (actually the cite). Here are a couple cards for underviews. Finally, here is another thread which has cites and evidence on this question.
