# Is Motivated Reasoning Bad Reasoning? II

## Alternatives to Antagonism: Ambiguity and Uncertainty

This is part II of a three-part series. This series will be posted simultaneously on Je Fais, Donc Je Suis, my personal blog, as well as the Rotman Institute Blog.

In the first part of this post, I discussed the work of social psychologist Dan Kahan on motivated reasoning. As he defines it, motivated reasoning is “the unconscious tendency of individuals to process information in a manner that suits some end or goal extrinsic to the formation of accurate beliefs.” According to what I called the antagonistic picture, motivated reasoning is bad reasoning; it leads us to have false or unjustified beliefs. And Kahan’s work shows that motivated reasoning is pervasive; specifically, I discussed some work that shows that high science literacy and numeracy seem to exacerbate, not remove, motivated reasoning.

Altogether, this leads us to a gloomy conclusion. But, in this post, I’ll argue that things aren’t necessarily so gloomy. Specifically, I’ll argue that motivated reasoning isn’t necessarily bad reasoning. I’ll do this by first thinking a bit more about why we expected high science literacy and numeracy to lead to agreement, then introducing two models of motivated reasoning, one from STS scholar Daniel Sarewitz and one from philosopher of science Heather Douglas.1

In the first part of the post, we saw that science literacy and numeracy seem to increase disagreement, at least about climate change. This was exactly the opposite of what we had predicted, namely, that science literacy and numeracy would decrease disagreement, and it led to our gloomy conclusion that we are doomed to bad, motivated reasoning. But why did we expect science literacy and numeracy to have this effect? In other words, why did we expect highly science literate and numerate people to agree on what the evidence says about climate change?

Part of the answer, I think, is that we assumed that the reasoning involved — going from some evidence to accepting or rejecting a hypothesis — is unambiguous and certain. In other words, given the available evidence, it is clear whether the hypothesis should be accepted or rejected; and there is no reason to think that we could be wrong to accept or reject the hypothesis.

If the reasoning involved in, say, assessing the risks of climate change really is unambiguous and beyond reasonable doubt, then we would expect good reasoners to agree. But if one or the other of these assumptions is false, then the door is open for good reasoners to disagree.

Sarewitz and Douglas, respectively, start their analyses by rejecting these assumptions. Sarewitz points out that scientific evidence is often quite ambiguous, and Douglas starts by recognizing that inductive inferences can never be certain. In different ways, each goes on to argue that values have a role to play in recognizing these ambiguities and uncertainties.

But that means that motivated reasoning can be good reasoning. If motivated reasoning leads us to recognize when our best science is ambiguous and uncertain, and we respond to this ambiguity and uncertainty properly, then our reasoning can be good. Indeed, in this kind of case, if non-motivated reasoning would have led us to assume (incorrectly) that our findings are unambiguous and certain, then motivated reasoning would be better than non-motivated reasoning. (We’ll take a closer look at this possibility in part III.)

Let’s turn now to Sarewitz and Douglas for a little more detail. I’m going to stick with the example of climate change to illustrate things.

The computer simulations we use to study the global climate are enormously complex; arguably, some of them are the most complex things that human beings have ever created. But even these extraordinarily complex systems involve significant simplifications and approximations in the ways they represent the global climate. Choices have to be made about which parts of the system will be modeled in which ways, and which parts will be left out entirely. When we move from modeling the climate itself to modeling the social and economic effects of climate change, the choices ramify.

Consequently, Sarewitz argues,

nature itself — the reality out there — is sufficiently rich and complex to support a science enterprise of enormous methodological, disciplinary, and institutional diversity. I will argue that science, in doing its job well, presents this richness, through a proliferation of facts assembled via a variety of disciplinary lenses, in ways that can legitimately support, and are causally indistinguishable from, a range of competing, value-based political positions.

In other words, choices are unavoidable; “when cause-and-effect relations are not simple or well-established, all uses of facts are selective.” Then, once we see where a certain set of choices is taking us, we seem to be free to endorse those choices — if they agree with our values — or call them into question — if they don’t.2 Specifically, once we see the implications of the choices made by climate scientists, liberals are free to endorse those choices and conservatives are free to call them into question.

This doesn’t mean that all sets of choices are equally good. Rather, Sarewitz’ starting point is that no one set of choices is unambiguously the best. At this point, motivated reasoning can lead us to go with one set rather than another, without our reasoning being flawed in any way whatsoever. Indeed, motivated reasoning can help us recognize that someone else’s findings depend on choices that they have made unconsciously.

Douglas’ model is built on the idea of inductive risk. When we accept or reject a general hypothesis or a prediction about the future based on limited evidence, there’s always a possibility that we’ve gotten things wrong — that our sample wasn’t representative of the whole population, that some unanticipated factor changed the way things turned out. Douglas points out that getting things wrong in this way can have negative downstream consequences. For example, if we accept the hypothesis that climate change will cause massive population displacements (due to sea level rise and desertification), make serious economic sacrifices to try to forestall these displacements, and then it turns out that the hypothesis was wrong, then our serious economic sacrifices were unnecessary. Similarly, if we reject this hypothesis, do nothing to forestall the displacements, and it turns out that we’re wrong, then we’ll have massive population displacements on our hands.

The values that we attach to the downstream consequences of a hypothesis can and should play a role in determining how much evidence we need to accept or reject the hypothesis. If the consequences of incorrectly accepting the hypothesis are relatively minor, then we can be satisfied with relatively sparse and weak evidence. But if the consequences are relatively major, then we should demand more, and more stringent, evidence.

Because of this, when everyone can agree on the values of the various consequences, we can expect agreement on how much evidence is required to accept or reject the hypothesis, and so we can expect everyone to act the same way (that is, everyone accepts it or everyone rejects it). On the other hand, when people don’t agree on the values at stake, we expect disagreement about whether we have enough evidence.

This could help explain why climate change is politically polarized. Liberals generally think the economic consequences of doing something about climate change will be minor, while the social and ecological consequences of not doing something will be major. Conservatives (at least pro-capitalist conservatives) generally think exactly the opposite: the economic consequences are major and the social and ecological consequences are minor. So liberals are satisfied with the available evidence concerning climate change and conservatives want more and better evidence.
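The reasoning above can be made vivid with a toy decision-theoretic sketch. This is my own illustration, not Douglas’ formalism, and the cost numbers are invented: accepting a hypothesis is rational when the expected cost of wrongly rejecting it outweighs the expected cost of wrongly accepting it, which yields a value-dependent evidence threshold.

```python
def evidence_threshold(cost_false_accept, cost_false_reject):
    # Accept hypothesis H when our credence p in H satisfies
    #   p * cost_false_reject > (1 - p) * cost_false_accept,
    # i.e. when p exceeds this threshold.
    return cost_false_accept / (cost_false_accept + cost_false_reject)

# Invented weightings: the "liberal" rates a false rejection (unchecked
# displacement) as far worse than a false acceptance (needless economic
# sacrifice); the "conservative" rates them the other way around.
liberal_threshold = evidence_threshold(cost_false_accept=1, cost_false_reject=9)
conservative_threshold = evidence_threshold(cost_false_accept=9, cost_false_reject=1)
# liberal_threshold = 0.1; conservative_threshold = 0.9
# Any credence between 0.1 and 0.9 counts as "enough evidence" for one
# side and "not enough" for the other -- with neither side reasoning badly.
```

The point of the sketch is just that the same body of evidence can rationally clear one threshold and fall short of the other, which is exactly the disagreement described above.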

In this explanation of the controversy, both sides are using motivated reasoning. Indeed, on Douglas’ model, motivated reasoning is absolutely necessary. Without motivated reasoning — without taking into account the significance of the consequences — we have no way to make a non-arbitrary decision about whether we have enough evidence to accept the hypothesis. Good reasoning requires emotions and values.3

If this explanation is right, then the controversy over whether “the science is settled” (about climate change) is disingenuous, in two ways. First, we can never be certain about climate change, and in this sense the science can never be “settled.” It’s disingenuous for conservatives to demand this, and likewise disingenuous for liberals to claim that it has been achieved. The controversy is really over whether the evidence is sufficient to accept the key claims about climate change (humans are responsible, it will have specific bad consequences, and so on). But even this is disingenuous, because liberals and conservatives are working with different standards of sufficient evidence. Due to motivated reasoning, the evidence can be both sufficient for liberals and at the same time insufficient for conservatives.

So motivated reasoning is not necessarily bad reasoning. Because of ambiguity and uncertainty, emotions and values have a role to play in our reasoning. But does this mean that “reasoning” degenerates into an anything-goes free-for-all? No. That will be the topic of part III.

1. To be precise, Douglas and Sarewitz write more about “value-freedom” and “objectivity” than “motivated reasoning.” But they’re closely connected. In my research, I define value-freedom as the normative ideal or principle that ethical and political values should not play a role in accepting (or rejecting) a claim. Value-freedom is one way (but only one way) of understanding objectivity. Motivated reasoning — and cultural cognition specifically — often seems to violate value-freedom. Now, Douglas and Sarewitz are both arguing that non-value-free science can still be good science. If they’re right, then in the same way motivated reasoning can still be good reasoning.

2. “We are free to do X” is shorthand here for something like “good reasoning does not require that we do not-X.”

3. We might worry that, say, conservatives are putting too much weight on the economic consequences, and not enough on the social and ecological consequences. That is, we might worry that conservatives are working with wrong or unreasonable values. I think a weakness of both of these models is that they treat values as exogenous — values just sort of come in from beyond the scope of rational debate and disagreement.

# Is Motivated Reasoning Bad Reasoning? I

## The Pervasiveness of Motivated Reasoning

This is part I of a three-part series. This series will be posted simultaneously on Je Fais, Donc Je Suis, my personal blog, as well as the Rotman Institute Blog.

Social and political values predict your views on climate change: if you’re an egalitarian-communitarian (think: liberal, on the political left), chances are you think humans are responsible for climate change; if you’re a hierarchical-individualist (think: conservative, on the political right), chances are you think climate change is a natural phenomenon, or isn’t happening at all.

Social psychologist Dan Kahan argues that this is due to motivated reasoning, “the unconscious tendency of individuals to process information in a manner that suits some end or goal extrinsic to the formation of accurate beliefs.” Specifically, in the case of climate change (though not in the case of vaccines or genetically modified foods), Kahan argues that cultural cognition is at work: you accept or reject the belief that humans are responsible for climate change because you identify yourself as a member of a group (“liberals,” “conservatives”) that is committed to accepting or rejecting this belief. In other words, you believe humans are responsible for climate change because you’re a liberal and liberals believe humans are responsible for climate change.

Values and good reasoning are often assumed to be antagonistic. There’s a metaphor that goes back to Plato: we’re in a chariot, being pulled by two horses, reason and emotion. Reason tries to pull us towards truth; but emotion pulls us away from truth. If emotion isn’t restrained, it will ride roughshod over reason and truth. (That last bit muddles the metaphor, but you get the idea.) Kahan’s definition of motivated reasoning seems to suggest this antagonism. The end or goal is, as he puts it, “extrinsic to accurate belief”; it’s external to, irrelevant to, perhaps even opposed to the truth.

On this antagonistic picture, motivated reasoning seems to be bad reasoning. Consider two cases: motivated reasoning leads us to accept a false claim; or it leads us to accept a true claim. In the first case, things have clearly gone wrong: we believe something that’s false. In the second case, we’ve gotten to the right conclusion (we accept a true claim), but in the wrong way (following emotion and values rather than evidence and logic). In philosophy-ese, motivated reasoning seems to lead to beliefs that are false, unjustified, or both.

Working within this antagonistic picture, you might think that we can avoid motivated reasoning by improving science literacy — how much people know about science — and numeracy — “not just mathematical ability but also [the] disposition to engage quantitative information in a reflective and systematic way and use it to support valid inferences” (6). To go with Plato’s metaphor: by making the reason horse strong and powerful, we will move towards truth, whichever direction the emotion horse happens to want to go. We will tend to get justified true beliefs by overwhelming the influence of emotion or values.

Kahan’s work suggests that this isn’t the case. In one line of research, he divides people into two groups: high science literacy/numeracy [high SLN] and low science literacy/numeracy [low SLN]. The antagonistic picture suggests that (a) people in the high SLN group will tend to agree with each other — they’re all being moved towards truth by a relatively strong reason horse — while (b) people in the low SLN group will tend to disagree with each other — they’re being moved in all different directions by a relatively strong emotion horse.

This prediction gets things exactly backwards: polarization increases with science comprehension. Consider this image, from Kahan’s blog:

On the left is the prediction: as SLN increases (as we move from left to right in the graph), the two groups converge. On the right is actual survey data: as SLN increases, egalitarian-communitarians (liberals) become more worried about climate change while hierarchical-individualists (conservatives) become less worried. The two groups move further apart, not together!

These results suggest that motivated reasoning is pervasive. High science literacy and numeracy don’t help; indeed, they just seem to make things worse. In terms of Plato’s metaphor, it seems that we don’t have two horses, reason and emotion. It’s more like reasoning is the horse pulling the chariot, but emotion is the charioteer, the one who ultimately decides which direction reason is going to go. Kahan puts it less metaphorically:

When the data, properly construed, supported an ideologically noncongenial result, highly numerate subjects latched onto the incorrect but ideologically satisfying heuristic alternative to the logical analysis required to solve the problem correctly.

So it seems that we’re doomed to bad reasoning. Motivated reasoning leads us to false or unjustified beliefs, and motivated reasoning is pervasive.

I don’t think this is necessarily the case. Specifically, I don’t think that motivated reasoning necessarily leads us to false or unjustified beliefs. Certainly it does sometimes. But not in all cases. In other words, the antagonistic picture is wrong. And that’s what I’m going to argue in part II of this post.

# A Regex for Switching between Chicago Citation Styles

Suppose you’re a philosopher (or other humanist) who writes in LaTeX and uses the excellent biblatex-chicago package to handle citations. Suppose, further, that you like to write with note citations (since it’s easy to confirm that you’ve cited the correct thing) but need to switch to author-date citations when you submit your article. Perhaps you’re about to submit something to Philosophy of Science, for example.

One strength of biblatex-chicago is that you can change between note and author-date citations simply by changing a flag when you load the package and using \autocite for your citations. Note citations use the flag notes and author-date citations use authordate.

However, one thing biblatex-chicago can’t really do is switch your punctuation placement around. Footnote markers for notes citations come after major punctuation, which itself can sit inside quotation marks; author-date citations come before major punctuation, but outside quotation marks. In addition, with \autocites, you can have indefinitely many source-page citation pairs, each with its own series of brackets and braces. For example, in a paper I’m working on right now, one of my more complex citations is

\autocites[745]{Finger2011a}[21; 454 of the 721 studies deal with cotton]{Finger2011b}

To switch punctuation around properly, try a regular expressions search-and-replace with this search string:

([.,;?!])([']{0,2})(\\autocite[s]?([,; [:alpha:]\-0-9]*\{[\{\}, [:alpha:]\-0-9]*\})+)

and this replace string:

\2 \3\1

This ignores \autocites citations with parentheses, e.g., \autocites(See, for example)(){cite1}[15]{cite3}. But you probably don’t have many of those, and will want to adjust the citation by hand anyways. It also ignores the variation \autocite*.

To go the other direction, try the search string:

([']{0,2}) (\\autocite[s]?([,; [:alpha:]\-0-9]*\{[\{\}, [:alpha:]\-0-9]*\})+)([.,;?!])

and the replace string

\4\1\2
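If your editor’s regex flavor doesn’t support POSIX character classes like [:alpha:] (Python’s re module, for one, doesn’t), the same transformations can be sketched in Python. This is a simplified equivalent of the patterns above, not a literal translation: it treats each [...] or {...} argument of \autocite(s) as an opaque group, so the capture-group numbers differ from the replace strings given above. Like the originals, it ignores the parenthesis variants and \autocite*.

```python
import re

# Simplified stand-in for the POSIX patterns above: each [...] or {...}
# argument of \autocite or \autocites is matched as an opaque group.
CITE = r"(\\autocites?(?:\[[^\]]*\]|\{[^{}]*\})+)"

# notes -> author-date: trailing punctuation moves to after the citation.
notes_to_ad = re.compile(r"([.,;?!])([']{0,2})" + CITE)

# author-date -> notes: punctuation moves back in front of the citation,
# inside any closing quotation marks.
ad_to_notes = re.compile(r"([']{0,2}) " + CITE + r"([.,;?!])")

def to_authordate(text):
    # "claim.''\autocite[15]{Smith}" -> "claim'' \autocite[15]{Smith}."
    return notes_to_ad.sub(r"\2 \3\1", text)

def to_notes(text):
    # "claim'' \autocite[15]{Smith}." -> "claim.''\autocite[15]{Smith}"
    return ad_to_notes.sub(r"\3\1\2", text)
```

Running your .tex source through to_authordate (or back through to_notes) should handle the common cases, including multi-pair \autocites calls; anything exotic still wants a by-hand pass.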

Filed under latex

# Aristotelean Anti-Paternalism

Last year, I wrote a post sketching an account of global justice and beneficence in terms of friendship. Sometime in the last couple of weeks, as I was prepping for my introduction to ethics course, it occurred to me that taking friendship as a paradigm might avoid some of the problems of paternalism faced by other accounts of global justice and beneficence.

Let’s take Peter Singer’s argument for the obligation to assist, since it’s quite familiar to philosophers and should be pretty easy to understand even if you’re not a philosopher. The argument goes like this:

1. If we can prevent something very bad from happening, without sacrificing anything of comparable moral significance, then we ought to do so.
2. Absolute poverty is very bad.
3. We can prevent some absolute poverty without sacrificing anything of comparable moral significance.
4. Hence, we ought to prevent some absolute poverty.

There are two kinds of paternalist worries here — worries that the attitudes and actions Singer is advocating are presumptuous and patronizing. First, it’s presumptuous and patronizing for us to go around declaring that the life situations of other people are very bad and ought to be changed. Second, given that absolute poverty is indeed very bad, it’s presumptuous and patronizing for us to think that we know how to solve the problem — that giving people money, or free education, or building infrastructure, or whatever, will successfully prevent absolute poverty and won’t have significantly bad unforeseen consequences.

Both of those worries are raised by photographer Chris Arnade, in a Comment is Free on the Guardian this morning. Arnade is talking about drug addicts, rather than people living in absolute poverty per se. But it’s easy to imagine someone thinking that the life of a heroin addict is very bad and that it can be prevented without sacrificing etc, and so concluding that we ought to help heroin addicts turn their lives around. Arnade describes what sounds like an early attempt to do just that, and his subsequent realization:

That weekend I locked the doors of an apartment to keep her from ending the pains of her withdrawal with a needle. We both eventually caved. By Sunday evening, I was scouting for methadone from the streets to salve her pain, but she ran back to Hunts Point for cheaper illegal drugs. I returned to an empty apartment realizing I had locked that door to provide me with a positive story as much as to help Shelly.

I saw “saving someone” for what it is: an arrogant presumption that you know what is best for others. The only person one can save is oneself.

Eventually, I realized the best you can do is stand by and listen in a non-judgmental fashion, making yourself available should people decide they want help.

The first paragraph illustrates the second kind of paternalist worry: Arnade locks Shelly up, thinking that he knows best about how to help her get clean. It turns out he’s wrong. Indeed, since heroin withdrawal can be fatal, he could have accidentally killed Shelly.

The last two paragraphs raise the first kind of worry, and Arnade’s response sounds like individualist relativism: what’s right or wrong, good or bad for someone is up to that individual person, and so we’re not in a position to judge that the life of a drug addict is very bad.

Even if they feel the force of the paternalist worries, many philosophers strongly reject individualist relativism. Fortunately, there’s another way to understand Arnade’s response. Towards the end of the piece, he writes:

A former heroin addict who saw my work once commented to me:

When I was a dope addict, plenty of people offered to buy me lunch, nobody bothered to talk to me, to give a shit about me.

You can’t save somebody, but you can give a shit about them.

In this way, the addict’s friend avoids the two kinds of paternalistic worries. The friend might judge that the addict’s situation is very bad; but her basis for this judgment comes in large part from the addict himself. Likewise, anything that she does to help improve the addict’s situation is informed by a sophisticated understanding of the details of his particular life, and again this understanding comes in large part from him.

Sometimes the friend may have to take action over or against the addict’s own wishes. But a friend would do this only in an unusual situation and when less paternalistic options have failed or are unavailable.

Aristoteleanism has a reputation for being hierarchical and condescending, dismissing “natural slaves” who are incapable of exercising virtue. And probably Aristotle himself wouldn’t think much of heroin addicts. But, following MacIntyre, we today can recognize that Aristotle was mistaken here. And Aristotle’s views on friendship provide a framework for thinking about relations of dependence and care that avoids some of the paternalistic aspects of modern, bourgeois approaches to ethics.

Filed under ethics paternalism aristotle friendship

# Diversity and “the” Philosophical Tradition

A typical problem for introductory philosophy courses is that the list of readings is dominated by privileged authors — white men, members or adjuncts of the ruling classes of their respective societies, many of them able-bodied, often lifelong bachelors who have almost no experience interacting with children or the infirm, and so on. A typical proposed response — which fields like Literature adopted around 30 years ago — is to make the list of authors more diverse. But sometimes philosophers are hesitant to make this move:

Filed under metaphilosophy teaching

# Sexism, Philosophy, and the Reciprocity of Virtue

Sexism in philosophy has been on my mind lately, between my colleague Kerry McKenzie’s review of a disastrous attempt at philosophy of physics by notorious sexist philosopher Colin McGinn and a visit to our department last week by Jenny Saul. I’ve also been thinking a lot about virtue ethics, in part because I get to teach it for the first time next term. It seems like virtue ethics has some valuable insights for the problem of sexism in philosophy. In this post, I want to develop one small insight, starting with something that seems to be a challenge to a virtue ethical discussion of sexism in philosophy.

Filed under metaphilosophy higher ed

# The Climate Debate: Ignorance, and/or Complexity?

Why is the climate change debate so interminable? From the perspective of many scientists, we’ve had compelling data since the 1970s and more than enough reason to reduce greenhouse gas emissions since the 1980s. Today the IPCC will release the first part of their fifth Assessment Report. But no one really expects this document to settle the debate.

One very common explanation for the endlessness of the debate is that the public are ignorant. This might be because they haven’t learned much of anything at all about climate change; the historian of science Robert Proctor calls this “native state” ignorance. Or the public might be ignorant because a more-or-less organized group of people, the “climate skeptics” or “climate denialists,” are deliberately feeding them misinformation to protect fossil fuel interests; Proctor calls this “ignorance as a strategic ploy.”1

I agree that both kinds of ignorance play a role in drawing out the climate debate. But I don’t think it’s the complete explanation. So let me sketch a complementary one.

To begin, we need to understand how climate science works. There’s something that I call the “popular image” of climate science, according to which it’s supposed to work something like this:

1. Scientists measure the temperature and notice that things have been getting warmer since the industrial revolution.
2. Scientists infer that human greenhouse gas emissions are causing the warming trend.
3. Scientists make predictions about the future, concluding that temperature will increase by a certain amount by the year 2100 if humans continue to emit greenhouse gases.

The popular image portrays climate science as nice, neat, and fairly easy-to-understand. The problem is that it’s radically false. A more accurate image is below. If you get a bit cross-eyed trying to understand how it works, don’t worry, that’s kind of my point. You can skip to the next paragraph.

1. Scientists use statistical techniques to combine thermometer measurements (which only go back to about the mid-19th century) with physical measurements that indicate but don’t directly measure temperature (like the thickness of tree rings and pockets of air trapped in glaciers thousands of years ago).
2. Scientists build thousands of computer simulations to model the interactions of various factors, from human greenhouse gas emissions to increased solar activity to the chemical composition of the oceans. All of these simulations involve assumptions and simplifications that make it possible for computers to actually produce results in a reasonable amount of time.
Some of these simulations compare combinations of major influences to the temperature data produced in step 1. It’s relatively easy to get the simulations that include human greenhouse gas emissions to statistically match the general trend of the temperature data. It’s much harder to get the simulations that don’t include human greenhouse gas emissions to match this general trend. So scientists infer that human greenhouse gas emissions are one of the major causes of the warming trend. And scientists don’t even try to get the simulations to match the temperature data exactly.
3. Once scientists have simulations that do a reasonably good job of matching the general trend in the past, they run the simulations forward to about the year 2100. Since the simulations don’t usually agree on these future projections, they’re aggregated using more statistical techniques.

In short, climate science relies on simplifying assumptions and complex statistical techniques. The conclusion that humans are responsible for climate change is based on evidence, but it’s not the easy inference that the popular image suggests. Indeed, that last point is more general: climate science is not nice, neat, and easy-to-understand, as the popular image presents it.

This mismatch between the popular image and the complex reality gives sophisticated climate skeptics two kinds of crucial openings. First, skeptics can point to particular complicated and messy elements — weird assumptions in the statistics or computer simulations, or the complicated relationship between temperature data and the design of computer simulations. Second, skeptics can mimic parts of the climate science process in ways that will seem, to many non-scientists, to be about the same as what climate scientists are doing, while getting radically different conclusions.2

These skeptical arguments work on two levels. On the technical level, they assert that there are problems deep within the complexities of climate science. On the popular level, by pointing out that climate science isn’t living up to what it’s supposed to be — according to the popular image — they assert that the whole enterprise of climate science is a sham. It’s supposed to be nice and simple, but — skeptics suggest — that’s all just smoke and mirrors.

Climate scientists and activists give good responses to these arguments on the technical level. But they don’t deal well with the popular level — they don’t take on the popular image of climate science as nice, neat, and easy-to-understand. Indeed, their simplified explanations for non-scientific audiences often reinforce this image.3 And this, I think, is one major reason why skeptical arguments are so durable, and so why the climate debate continues with no end in sight.

Cross-posted at the Rotman Institute of Philosophy blog.

1. Proctor explains this distinction in the opening essay of the collection Agnotology, edited with Londa Schiebinger. That book also includes an essay on the use of ignorance as a strategic ploy in the climate debate by historians Naomi Oreskes and Eric Conway.

2. For an example of the first, see here. For an example of the second, see here.

3. For example, compare the “Basic” and “Intermediate” responses to climate skeptics here.

Filed under (philosophy of) science climate science

# Objectivity is a Unicorn

Following up on my last post and a semi-related conversation with a new officemate, it seems to me that a lot of people might react something like this:

Well, no surprise that we can’t trust these scientists: They have ties to Monsanto. What we need are objective, disinterested scientists who aren’t dependent on industry sponsors.

In this post I’m going to criticize this response. As the title puts it, objectivity is a unicorn. It doesn’t exist, it’s a myth, and so it’s not going to help us solve the problem.

Filed under (philosophy of) science food

# How to Use Citations to Create Ignorance

I spent most of Monday working on some “deep literature analysis” for my research on genetically modified organisms for food, or GMOf. This meant, in practice, that I spent about two hours looking up the citations for a single article. It was quite dull work, but the results were very interesting from the perspective of agnotology, an emerging area of science studies that deals with the production of ignorance. In this post, I’m going to give you some background on the “feed the world argument” and agnotology, then present my findings.