This post is a précis and review of:

Mercier, Hugo & Sperber, Dan. (2011). 'Why Do Humans Reason? Arguments for an Argumentative Theory.' Behavioral and Brain Sciences 34: 57-111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.

I'm making these notes and observations in order to better understand a new idea that I find intriguing. I have recently argued that the legacy of philosophical thought may be obscuring what is actually going on in our minds by imposing pre-scientific or non-scientific conceptual frameworks on the subject. I see this rethinking of reason as a case in point.
In this long article, Mercier & Sperber's own contribution runs from pp. 57-74. It is followed by comments from other scholars under the heading "Open Peer Commentary" (pp. 74-101) and ten pages of bibliography. The addition of published commentary by peers (as opposed to silent peer review before publication) is an interesting approach: non-specialists are given an opportunity to see how specialists view the thesis of the article.
The article begins with a few general remarks about reasoning. Since at least the Enlightenment it has been assumed that reasoning is a way to discover truth through applying logic. As another reviewer puts it:
Almost all classical philosophy—and nowadays, the “critical thinking” we in higher education tout so automatically—rests on the unexamined idea that reasoning is about individuals examining their unexamined beliefs in order to discover the truth. (The Chronicle of Higher Education, 2011)
Over a considerable period now, tests of reasoning capability have documented the simple but troubling fact that people are not very good at discovering the truth through reasoning. We fail at simple logical tasks, commit "egregious mistakes in probabilistic reasoning", and are subject to "sundry irrational biases in decision making". Our generally poor reasoning is so well established that it hardly needs insisting on. Wikipedia has a wonderfully long list of logical fallacies and an equally long list of cognitive biases, though of course Mercier & Sperber cite the academic literature which has formally documented these errors. The faculty ostensibly designed to help us discover the truth more often than not leads us to falsehood.
One thing to draw attention to, almost buried on p. 71, is that "demonstrations that reasoning leads to errors are much more publishable than reports of its success." Thus all the results cited in the article may be accurate and yet still reflect a bias in the literature. The authors attempt to address this in their conclusions, but if you're reading the article (or this summary) it is something to keep in mind.
However, given that there is plenty of evidence that reason leads us to false conclusions, what is the point of reason? Why did we evolve reason if it's mostly worse than useless? The problem may well be in our assumptions about what reason is and does. The radical thesis of this article is that we do not reason in order to find or establish truth, but that we reason in order to argue. The claim is that viewed in its proper context--social argument--reason works very well.
It has long been known that there appear to be two mental processes for reaching conclusions: system 1 (intuition), in which we are not aware of the process; and system 2 (reasoning), in which we are aware of the process. Mercier & Sperber outline a variation on this. Inference is a process in which representational output follows representational input; a process which augments and corrects information. Evolutionary approaches point to multiple inferential processes which work unconsciously in different domains of knowledge. Intuitive beliefs arise from 'sub-personal' intuitive inferential processes. Reflective beliefs arise from conscious inference, i.e. reasoning proper:
"What characterises reasoning proper is indeed the awareness not just of a conclusion but of an argument that justifies accepting that conclusion." (58).
That is to say, we accept conclusions on the basis of arguments. "All arguments must ultimately be grounded in intuitive judgements that given conclusions follow from given premises." (59) The arguments which provide the justification are themselves the product of a system 1 sub-personal inferential system. Thus even though we may reach a conclusion using reasoning proper, our arguments for accepting the conclusion are selected by intuition.
What this suggests to the authors is that reasoning is best adapted, not for truth seeking, but for winning arguments! They argue that this is its "main function" (60), which is to say the reason we evolved the faculty. Furthermore, reasoning helps to make communication more reliable, because arguments put forward for a proposition may be weak or strong, and counter-arguments expose this. Reasoning used in dialogue helps to ensure communication is honest (hence, I suppose, we intuit that it leads towards truth - though truthfulness and Truth are different).

Of course this is a counterintuitive claim, and thus strong arguments must be adduced in its favour. Working with this idea is itself a test of the idea. Anticipating this, the authors propose several features which reasoning ought to have if it evolved for the purpose of argumentation:
- It ought to help produce and evaluate arguments.
- It ought to exhibit strong confirmation bias.
- It ought to aim at convincing others rather than arriving at the best decision.
The authors set out to show that these qualities are indeed prevalent in reasoning by citing a huge amount of evidence from the literature on the study of reasoning. This is where the peer evaluation provides an important perspective. If we are not familiar with the literature being cited, with its methods and conclusions, it is difficult to judge the competence of the authors and the soundness of their conclusions. Even so, most of us have to take quite a lot of what is said on trust. That it intuitively seems right is no reason to believe it.
1. Producing and Evaluating Arguments
On the first point, we have already mentioned that reasoning is poor. However, because we see reasoning as an abstract faculty, testing reasoning is often done out of context. In studies of reasoning in pursuit of an argument, or of trying to persuade someone, our reasoning powers improve dramatically. We are much more sensitive to logical fallacies when evaluating a proposition than when doing an abstract task. When we hear a weak argument we are much less likely to be convinced by it. In addition, people will often settle for providing weak arguments if they are not challenged to come up with something better. If an experimenter testing someone's ability to construct an argument offers no challenge, there is no motivation to pursue a stronger line of reasoning. This changes when challenges are offered. Reasoning only seems to really kick in when there is disagreement. The effect is even clearer in group settings. For a group to accept an argument requires that everyone be convinced, or at least convinced that disagreeing is not in their interest. Our ability to reason well is strongly enhanced in these settings, where the group can outperform even its best individual member - known as the assembly bonus effect.
"To sum up, people can be skilled arguers, producing and evaluating arguments felicitously. This good performance stands in sharp contrast with the abysmal results found in other, nonargumentative settings, a contrast made clear by the comparison between individual and group performance." (62)
On the first point, then, the literature of reasoning appears to confirm the idea that reason helps to produce and evaluate arguments. This does not prove that reasoning evolved for this purpose, or that arguing is the "main function" of reasoning, but it does show that reasoning works a great deal better in this setting than in the abstract.
2. Confirmation Bias
Confirmation bias is the most widely studied of all the cognitive biases and "it seems that everybody is affected to some degree, irrespective of factors like general intelligence or open-mindedness." (63). The authors say that in their model of reasoning confirmation bias is a feature.
The term "confirmation bias" has been used in two different ways:
- Where we only seek arguments that support our own conclusion and ignore counter-arguments because we are trying to persuade others of our view.
- Where we test our own existing belief by looking only at positive cases. For example, if I think I left my keys in my jacket pocket, it makes more sense to look in my jacket pocket than in my trouser pockets. "This is just trusting use of our beliefs, not a confirmation bias." (64) Later they call this "a sound heuristic" rather than a bias.
Thus the authors focus on the first situation, since they don't see the second as a genuine case of confirmation bias. The theory being proposed makes three broad predictions about confirmation bias:
- It should only occur in argumentative situations
- It should only occur in the production of arguments
- It should be a bias only in favour of confirming one's own claims, with a complementary bias against opposing claims or counter-arguments.
I confess that what follows seems to be a bit disconnected from these predictions. The evidence cited seems to support the predictions, but they are not explicitly discussed. This seems to be a structural fault in the article that an editor should have picked up on: having proposed three predictions, the authors ought to have dealt with them more specifically.
In the Wason rule discovery task, participants are presented with three numbers. They are told that the experimenter has used a rule to generate them and are asked to guess that rule. They are able to test their hypothesis by offering another triplet of numbers, and the experimenter will say whether or not it conforms to the rule. The overwhelming majority look for confirmation rather than trying to falsify their hypothesis, though the authors take this to be a sound heuristic rather than confirmation bias. The approach remains the same even when the participants are instructed to attempt to falsify their hypothesis. However, if the hypothesis comes from another person, or from a weaker member of a group, then participants are much more likely to attempt to falsify it and more ready to abandon it in favour of another. "Thus falsification is accessible provided that the situation encourages participants to argue against a hypothesis that is not their own." (64)
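To make the positive-testing pattern concrete, here is a minimal sketch in Python of the classic version of the task (the triple 2-4-6, with the hidden rule "any ascending sequence"); the particular hypothesis and test triples are my own illustration, not from the article:

```python
# A minimal sketch of the Wason 2-4-6 rule discovery task.
# The hidden rule and the participant's hypothesis are illustrative.

def hidden_rule(triple):
    """Experimenter's rule: any strictly ascending sequence."""
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    """Participant's (too narrow) guess: each number increases by 2."""
    a, b, c = triple
    return b == a + 2 and c == b + 2

# Positive testing: only offer triples the hypothesis already predicts.
positive_tests = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
print([hidden_rule(t) for t in positive_tests])  # [True, True, True]

# A potentially falsifying test: a triple the hypothesis rejects.
print(hypothesis((1, 2, 3)), hidden_rule((1, 2, 3)))  # False True
```

Every positive test elicits a "yes", so the over-narrow hypothesis survives indefinitely; only a triple the hypothesis rejects, like (1, 2, 3), can reveal that it is wrong. That second kind of test is precisely the one most participants never offer.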
A similar effect is noted in the Wason selection task (the link enables you to participate in a version of this task). The participant is given cards marked with numbers and letters, which are paired up on opposite sides of the card according to rules. The participant is given a rule and asked which cards to turn over in order to test the rule. If the rule is phrased positively participants seek to confirm it, and if negatively to falsify it. Again, this is an example of a "sound heuristic" rather than confirmation bias. However, "Once the participant's attention has been drawn to some of the cards, and they have arrived at an intuitive answer to the question, reasoning is used not to evaluate and correct their initial intuition but to find justifications for it. This is genuine confirmation bias." (64)
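The underlying logic is easy to state in code. Here is a minimal sketch using the classic vowel/even-number version of the task; the rule and the visible faces are the standard textbook example rather than anything specific to the article. The rule "if a card has a vowel on one side, it has an even number on the other" can only be broken by a vowel paired with an odd number, so only two of the four cards are worth turning:

```python
# Which cards can falsify "if vowel on one side, then even number on the other"?
# Visible faces: E, K, 4, 7 (each card has a letter on one side, a number on the other).

def is_vowel(face):
    return isinstance(face, str) and face.upper() in "AEIOU"

def is_odd_number(face):
    return isinstance(face, int) and face % 2 == 1

def worth_turning(face):
    # A card can break the rule only if its hidden face might complete
    # a vowel + odd-number pair: so turn over vowels and odd numbers.
    return is_vowel(face) or is_odd_number(face)

print([face for face in ["E", "K", 4, 7] if worth_turning(face)])  # ['E', 7]
```

Participants typically choose E and 4, but turning the 4 can only confirm the rule, never falsify it.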
One of the key observations the authors make is that participants in studies must be motivated to falsify. They draw out this conclusion by looking at syllogisms, e.g. "No C are B; All B are A; therefore some A are not C." Apparently the success rate in dealing with such syllogisms is about 10%. What seems to happen is that people go with their initial intuitive conclusion and do not take the time to test it by looking for counter-examples. Mercier & Sperber argue that this is simply because they are not motivated to do so. On the other hand, if people are trying to prove something wrong--if for example we ask them to consider a statement like "all fish are trout"--they readily find ways to disprove it. Notably, participants spend roughly the same amount of time on the different tasks; what differs is the goal.
"If they have arrived at the conclusion themselves, or if they agree with it they try to confirm it. If they disagree with it then they try to prove it wrong." (65)
But doesn't confirmation bias lead to poor conclusions? Isn't this why we criticise it as faulty reasoning? It leads to conservatism in science, for example, and to the dreaded groupthink. Mercier & Sperber argue that confirmation bias in these cases is problematic because it is being used outside its "normal context: that is, the resolution of a disagreement through discussion." (65) When used in this context, confirmation bias works to produce the strongest, most persuasive arguments. Scholarship at its best ought to be like this.
The relationship of the most persuasive argument to truth is debatable, but the authors suppose that the truth will emerge if that is the subject of disagreement. If each person presents their best arguments and the group evaluates them, then this would seem to be an advantageous way of arriving at the best solution the group is capable of. Challenging conclusions leads people to improve their arguments, so a small group may produce a better conclusion than the best individual in the group operating alone. Thus:
confirmation bias is a feature, not a bug.
This is the result that seems to have most captured the imagination of the reading public. However, the feature only works well in the context of a small group of mildly dissenting (not polarised) members. The individual, the group with no dissent, and the polarised group with implacable dissent are all at a distinct disadvantage in reasoning! Confirmation bias works well for the production of arguments, but not so well for their evaluation, though the latter seems less of a problem.
Does this fulfil the three broad predictions made about confirmation bias? We have seen that confirmation bias is not triggered unless there is a need to defend a claim (1). Confirmation bias does appear to be more prevalent when producing arguments than in evaluating them, and we do tend to argue for our own claims and against the claims of others (2 & 3). However, the predictions included the word only, and I'm not sure that they have, or could have, demonstrated the exclusiveness of their claims. More evidence emerges in the next section, which deals (rather more obliquely) with convincing others.
3. Convincing Others

Proactive Reasoning in Belief Formation
The authors' thesis is that reasoning ought to aim at convincing others rather than arriving at the best decision. This section discusses the possibility that, while we do tend to favour our own argument, we may also anticipate objections. The latter is said to be the mark of a good scholar, though the article is looking at reasoning more generally. There is an interesting distinction here between beliefs we expect to be challenged and those which are not:
"While we think most of our beliefs--to the extent that we think about them at all--not as beliefs but just as pieces of knowledge, we are also aware that some of them are unlikely to be universally shared, or to be accepted on trust just because we express them. When we pay attention to the contentious nature of these beliefs we typically think of them as opinions." (66)
And knowing that our opinions might be challenged, we may be motivated to think about counter-arguments and be ready for them with our own arguments. This is known as motivated reasoning. Interestingly, from my point of view (because I think I have experienced this), one of the examples they give is: "Reviewers fall prey to motivated reasoning and look for flaws in a paper in order to justify its rejection when they don't agree with its conclusions." (66)
The point being that from the authors' perspective it seems that what people are doing in this situation is not seeking truth, but only seeking to justify an opinion.
"All these experiments demonstrate that people sometimes look for reasons to justify From an argumentative perspective, they do this not to convince themselves of the truth of their opinion but to be ready to meet the challenges of others." (66)
If we approach a discussion or a decision with an opinion, then our goal in evaluating another's argument is often not to find the truth but to show that the argument is wrong. The goal is argumentative rather than epistemic (seeking knowledge). We will comb through an argument looking for flaws, for example finding fault with the study design, the use of statistics, or the logic. Thus although there are benefits to confirmation bias in the production of arguments, confirmation bias in the evaluation of arguments can be a serious problem: it may lead to nitpicking, to polarisation, or to the strengthening of existing polarisation.
Two more effects of motivated reasoning are particularly relevant to my interests: belief perseverance and the violation of moral norms. The phenomenon of belief perseverance (holding onto a belief despite evidence that it is ill-founded) is extremely common in religious settings. The argumentative theory sees belief perseverance as a form of motivated reasoning: when presented with counter-arguments, the believer focuses on finding fault and actively disregards information which runs counter to the belief. If the counter-argument is particularly unconvincing--"not credible"--it can lead to further polarisation. And in the moral sphere, reasoning is often used to come up with justifications for breaking moral precepts. Here reasoning can clearly be seen to be in the service of argument rather than of knowledge or truth.
Thus in many cases reasoning is used precisely to convince others rather than to arrive at the best decision, even when this results in poor decisions or immoral behaviour. We use reason to find justifications for our intuitive beliefs or opinions.
Proactive Reasoning in Decision Making

The previous section was mainly concerned with defending opinions, while this final section looks at how reason relates to decisions and actions more broadly. On the classical view we expect reasoning to help us make better decisions. But this turns out not to be the case. Indeed, in experiments, people who spend time reasoning about their decisions make choices that are less consistent with their own previously stated attitudes. They also get worse at predicting the results of basketball games. "People who think too much are also less likely to understand other people's behavior." (69) A warning note is sounded here: some of the studies which showed that intuitive decisions were always better than thought-out decisions have not been replicated. So Malcolm Gladwell's popularisation of this idea in his book Blink may have over-stated the case. Still, the evidence suggests that reasoning does not necessarily confer an advantage, which to my mind is in line with what I would expect.
The argumentative theory suggests that reasoning should have most influence where our intuitions are weak - where we are not trying to justify a pre-formed opinion. One can then at least defend a choice if it proves to be unsatisfactory later. In line with research dating back to the 1980s, this is called reason-based choice. Reason-based choice is able to explain a number of unsound uses of reasoning noted by social psychologists: the disjunction effect, the sunk-cost fallacy, framing effects, and preference inversion.

The connecting factor is the desire to justify a choice or decision. In the sunk-cost fallacy, for instance, we persist with a failing course of action because what we have already invested makes abandoning it hard to justify. We can see this in action in many countries today with the insistence on fiscal austerity as a response to economic crisis. Evidence is mounting that cutting government spending only causes further harm, but many governments remain committed to it. As long as they can produce arguments for the idea, they refuse to consider arguments against.
Conclusions
Some important contextualising remarks are made in the concluding section, many of which are very optimistic about reasoning. Reasoning as understood here makes human communication more reliable and more potent.
"Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or actions.... Human reasoning is not a profoundly flawed general mechanism: it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels" (72)
The authors stress the social nature of reasoning. Generally speaking, it is groups of people that use reason to make progress, not individuals, though a small number of individuals are capable of being their own critics. Indeed the skill can be learned, though only with difficulty, and one only ever mitigates, never eliminates, the tendency towards justification. Thus, though confirmation bias seems inevitable in producing arguments, it is balanced out in evaluation by other people.
"To conclude, we note that the argumentative theory of reasoning should be congenial to those of us who enjoy spending endless hours debating ideas - but this, of course, is not an argument for (or against) the theory. (73)
~o~
Comments
It ought to come as no surprise that a faculty of a social ape evolved to function best in small groups. The puzzle is why we ever thought of the individual as capable of standing alone and apart from their peers. It's a conceit of Western thinking that is going to come under increasing attack, I think.
I think it's unlikely that we'll ever be able to say that we evolved x for the purpose of y except in a very general sense. Certainly eyes enable us to see, but it is simplistic to say that eyes evolved in order for us to see. We assume that evolution has endowed us with traits for a purpose, even when the purpose is unclear. And we observe that we have certain traits which serve to make us evolutionarily fit in some capacity. In this case the trait--reason--does not perform the function we have traditionally assigned to it. We are poor at discovering the truth through reasoning alone, and much of the time we are not even looking for it. Therefore we must look again at what reason does. This is what Mercier & Sperber have done. Whether their idea will stand the test of time remains to be seen. My intuitive response is that they have noticed something very important in this paper.
My own interest in decision making stems from the work of Antonio Damasio, particularly in Descartes' Error. My argument has been that decision making is unconscious and emotional, and that reasons come afterwards. Mercier & Sperber are pursuing a similar idea at a different level. Damasio suggests that we make decisions using unconscious emotional responses to information and then justify our decision by finding arguments. And we can see the different parts of the process disrupted by brain injuries or abnormalities in specific locations. Thus neuroscience appears to corroborate Mercier & Sperber's theory, correlating the behavioural observations with brain function. Neither cites the work of the other.
I presaged this review and my reading of this article in my essay The Myth of Subjectivity, in which I claimed that objectivity emerges from scientists working together. Mercier & Sperber confirm my intuition about how science works, including my note that scientists love to prove each other wrong. However, they take it further and argue that this is the natural way that humans operate, and they emphasise the social, interactional nature of progress in any field. And after all, even Einstein went in search of support for his intuitions about the speed of light; he did not set out to disprove them. Thus we must reassess the role of falsification in science. It may be asking too much for any individual to seek to falsify their own work; but we can rely on the scientific community to provide evaluation, and especially disagreement!
Those wishing to comment on this review should read Mercier & Sperber first. There's not much point in simply arguing with me. I've done my best to represent the ideas in the article, but I may have missed nuances or got things wrong - I'm new to this subject. By all means let us discuss the article, or correct errors I have made, but let's do it on the basis of having read the article in question. OK?
~~oOo~~
Other reading.
My attention was drawn to this article by an economist! Edward Harrison pointed to The Reason We Reason by Jonah Lehrer in Wired Magazine. (Be sure to read the comment from Hugo Mercier which follows the article.) Amongst Lehrer's useful links was one to the original article.
Hugo Mercier's website, particularly his account of the Argumentative Theory of Reasoning; and on Academia.edu.
Dan Sperber's website, and on Academia.edu.
18 Aug 2016
Hugo Mercier has uploaded a new paper to academia.edu:

Mercier, Hugo. (2016). 'The Argumentative Theory: Predictions and Empirical Evidence.' Trends in Cognitive Sciences 20(9): 689-700. http://dx.doi.org/10.1016/j.tics.2016.07.001
Abstract
The argumentative theory of reasoning suggests that the main function of reasoning is to exchange arguments with others. This theory explains key properties of reasoning. When reasoners produce arguments, they are biased and lazy, as can be expected if reasoning is a mechanism that aims at convincing others in interactive contexts. By contrast, reasoners are more objective and demanding when they evaluate arguments provided by others. This fundamental asymmetry between production and evaluation explains the effects of reasoning in different contexts: the more debate and conflict between opinions there is, the more argument evaluation prevails over argument production, resulting in better outcomes. Here I review how the argumentative theory of reasoning helps integrate a wide range of empirical findings in reasoning research.