There's been quite a lot of talk of "meta-rationality" lately amongst the blogs I read. It is ironic that this emerging trend comes at a time when the very idea of rationality is being challenged from beneath. Mercier and Sperber, for example, tell us that empirical evidence suggests that reasoning is "a form of intuitive [i.e., unconscious] inference" (2017: 90); and that reasoning about reasoning (meta-rationality) is mainly about rationalising such inferences and our actions based on them. If this is true, and traditional ways of thinking about reasoning are inaccurate, then we all have a period of readjustment ahead.
It seems that we don't understand rationality or reasoning. My own head is shaking as I write this. Can it be accurate? It is profoundly counter-intuitive. Sure, we all know that some people are less than fully rational. Just look at how nation states are run. Nevertheless, it comes as a shock to realise that I don't understand reasoning. After all, I write non-fiction. All of my hundreds of essays are the product of reasoning. Aren't they? Well, maybe. In this essay, I'm going to continue my desultory discussion of reason by outlining a result from experimental psychology from the year I was born, 1966. In their recent book, The Enigma of Reason, Mercier & Sperber (2017) describe this experiment and some of the refinements proposed since.
But first a quick lesson in Aristotelian inferential logic. I know, right? You're turned off and about to click on something else. But please do bear with me. I'm introducing this because, unless you understand the logic involved in the problem, you won't get the full blast of the 50-year-old insight that follows. Please persevere and I think you'll agree at the end that it's worth it.
~Logic~
For our purposes, we need to consider a conditional syllogism. Schematically it takes the form:
If P, then Q.
Say we posit: if a town has a police station (P), then it also has a courthouse (Q). There are two possible states for each proposition: a town has a police station (P) or it does not (not P, written ¬P); it has a courthouse (Q) or it does not (¬Q). What we are concerned with here is what we can infer from each of these four possibilities, given the rule: If P, then Q.
The syllogism, If P then Q, in this case tells us that it is always the case that if a town has a police station, then it also has a courthouse. If I now tell you that the town of Wallop, in Hampshire, has a police station, you can infer from the rule that Wallop must also have a courthouse. This is a valid inference of the type traditionally called modus ponens. Schematically:
If P, then Q.
P, therefore Q. ✓
What if I tell you that Wallop does not have a police station? What can you infer from ¬P? You might be tempted to say that Wallop has no courthouse, but this would be a fallacy (called denying the antecedent). It does not follow from the rule that a town without a police station also lacks a courthouse. It is entirely possible, under the given rule, for a town to have a courthouse but no police station.
If P, then Q.
¬P, therefore ¬Q. ✕
What if we have information about the courthouse and want to infer something about the police station? What can we infer if Wallop has a courthouse (Q)? Nothing, as it turns out. We have just seen that, under the rule, a town may have a courthouse without a police station, so trying to infer P from the presence of Q leads to false conclusions (the fallacy of affirming the consequent).
If P, then Q.
Q, therefore P. ✕
But we can make a valid inference if we know that Wallop has no courthouse (¬Q). If there is no courthouse and our rule always holds, then we can infer that there is no police station in Wallop. This valid inference is of the type traditionally called modus tollens.
If P, then Q.
¬Q, therefore ¬P. ✓
So, given the rule and information about one of the two propositions P and Q, we can make inferences about the other. But only in two cases, P and ¬Q, are the inferences valid.
rule          | given | inference | validity
If P, then Q. | P     | Q         | ✓
If P, then Q. | ¬P    | ¬Q        | ✕
If P, then Q. | Q     | P         | ✕
If P, then Q. | ¬Q    | ¬P        | ✓
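For readers who like to see such things checked mechanically, here is a minimal sketch in Python (my own illustration, not anything from Mercier and Sperber) that verifies the table by brute force: it enumerates every combination of truth values consistent with the rule and tests whether each inference pattern holds in all of them.

```python
from itertools import product

def always_holds(inference):
    """Check an inference in every world consistent with 'If P, then Q'."""
    return all(inference(p, q)
               for p, q in product([True, False], repeat=2)
               if (not p) or q)  # "If P, then Q" fails only when P and not Q

# Modus ponens: given P, infer Q. Valid.
print(always_holds(lambda p, q: q if p else True))            # True

# Denying the antecedent: given ¬P, infer ¬Q. Invalid.
print(always_holds(lambda p, q: (not q) if not p else True))  # False

# Affirming the consequent: given Q, infer P. Invalid.
print(always_holds(lambda p, q: p if q else True))            # False

# Modus tollens: given ¬Q, infer ¬P. Valid.
print(always_holds(lambda p, q: (not p) if not q else True))  # True
```

The two invalid patterns fail for the same reason: a world with a courthouse but no police station (¬P, Q) is perfectly consistent with the rule.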
Of course, there are other, even less logical, inferences one could make, but these four are the patterns that Aristotle deemed sensible enough to include in his work on logic. This is the logic we need to understand. And the experimental task, proposed by Peter Wason in 1966, tested people's ability to use this kind of reasoning.
~Wason Selection Task~
You are presented with four cards, each with a letter printed on one side and a number on the other. The visible faces show E, K, 2, and 7.
The rule is: If a card has E on one side, it has 2 on the other.
The question is: which cards must be turned over to test the rule, i.e., to determine whether the cards follow it? You have as much time as you wish.
~o~
Wason and his collaborators got a shock in 1966. Having prided ourselves on our rationality for millennia (in Europe, anyway), we might expect most people to find this exercise in reasoning relatively simple. In fact, only 1 in 10 participants chose the right answer. This startling result led Wason and subsequent investigators to pose many variations on the test, almost always with similar results.
Intrigued, they began to ask people how confident they were in their method before giving them the solution. Despite the fact that 90% would choose the wrong answer, 80% of participants were 100% sure they had the right one! So it was not that the participants were hesitant or tentative. On the contrary, they were extremely confident in their method, whatever it was.
The people taking part were not stupid or uneducated; most of them were psychology undergraduates. And the result is slightly worse than one would expect from random guessing, which suggests that something was going systematically wrong.
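To put a rough number on "worse than random" (my own back-of-envelope calculation, not a figure from the book): if a participant simply picked two of the four cards at random, there are six possible pairs (4 choose 2), so blind guessing would hit the right pair about 1 time in 6, roughly 17%, comfortably above the 10% observed.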
The breakthrough came more than a decade later when, in 1979, Jonathan Evans came up with a variation in which the rule was: if a card has E on one side, it does not have 2 on the other. In this case, the proportions of right and wrong answers dramatically switched around, with 90% getting it right. Does this mean that we reason better negatively?
What Evans found was that people simply turn over the cards named in the rule. This is not reasoning; but since it is predicated on an unconscious evaluation of the information, it is not quite guessing either. It also explains why the success rate on the original task is worse than random: the named cards are systematically the wrong pair. "This shows, Evans argued, that people's answers to the Wason task are based not on logical reasoning but on intuitions of relevance" (Mercier & Sperber 2017: 43; emphasis added).
Which cards did you turn over? As with the conditional syllogism, there are only two valid inferences to be made here. Turn over the E card: if it has a 2 on the other side, the rule holds for this card (but may not hold for others); if it does not have a 2, the rule is falsified. The other card to turn over is the one with a 7 on it: if it has an E on the other side, the rule is falsified; if it does not, the rule may still be true.
Turning over the K tells us nothing relevant to the rule. Turning over the 2 is a little more subtle, but ultimately futile. If we find an E on the other side of the 2, we may think it validates the rule; however, the rule does not forbid a card with a 2 on one side from having any letter on the other, E or otherwise. So turning over the 2 gives us no valid inference either.
Therefore, it is only by turning over the E and 7 cards that we can make valid inferences about the rule. And, short of gaining access to all possible cards, the best we can do is falsify the rule. Note that the cards are presented in the same order as I used in explaining the logic. E = P, K = ¬P, 2 = Q, and 7 = ¬Q.
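If it helps to see the solution worked through mechanically, here is a small Python sketch (my own illustration; neither Wason's materials nor anything from the book). It treats a card as worth turning over only if some possible hidden face could falsify the rule, and it also covers Evans's negated rule, showing why merely picking the named cards succeeds there:

```python
# Each card has a letter on one side and a number on the other. We use the
# visible faces E, K, 2, 7; for hidden faces it is enough to consider these
# same values, since K stands in for "not E" and 7 for "not 2".
LETTERS, NUMBERS = ["E", "K"], ["2", "7"]

def cards_to_turn(violates):
    """Return the cards worth turning: those a hidden face could falsify."""
    def worth_turning(visible):
        if visible in LETTERS:  # hidden side is a number
            return any(violates(visible, n) for n in NUMBERS)
        return any(violates(l, visible) for l in LETTERS)  # hidden is a letter
    return [face for face in ["E", "K", "2", "7"] if worth_turning(face)]

# Original rule: if E on one side, then 2 on the other.
print(cards_to_turn(lambda l, n: l == "E" and n != "2"))  # ['E', '7']

# Evans's negated rule: if E on one side, then NOT 2 on the other.
# The logically correct cards are now exactly the ones named in the rule,
# so picking the named cards happens to give the right answer.
print(cards_to_turn(lambda l, n: l == "E" and n == "2"))  # ['E', '2']
```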
Did you get the right answer? Did you consciously work through the logic or respond to an intuition? Did you make the connection with the explanation of the conditional syllogism that preceded it?
I confess that I did not get the right answer, even though I had read a more elaborate explanation of the conditional logic involved. I did not work through the logic, but chose the cards named in the rule.
The result has been tested in many different circumstances and variations and seems to be general: humans don't use reasoning to solve logic problems unless they have specific training, and even with training, people often get it wrong. Indeed, even though I explained the formal logic of the puzzle immediately beforehand, most readers will have ignored it and chosen to turn over the E and 2 cards, relying on intuition rather than logic to infer the answer.
~Reasons~
In a recent post (Reasoning, Reasons, and Culpability, 20 Jul 2017) I explored some of the consequences of this result. Mercier and Sperber go from Wason into a consideration of unconscious processing of information. They discuss, and ultimately reject, Kahneman's so-called dual-process model of thinking (with two systems, one fast and one slow). There is only one process, Mercier and Sperber argue, and it is unconscious. All of our decisions are made this way. When required, they argue, we produce conscious reasons after the fact (post hoc). The reason we are slow at producing reasons is that they don't exist before we are asked for them (or before we ask ourselves, which is something Mercier and Sperber don't talk about much). It takes time to make up plausible-sounding reasons; we have to go through the process of asking, given what we know about ourselves, what a plausible reason might be. And because of cognitive bias, we settle for the first plausible explanation we come up with. Then, as far as we are concerned, that is the reason.
It's no wonder there was scope for Dr Freud to come along and point out that people's stated motives were very often not the motives that one could deduce from detailed observation of the person (particularly paying attention to moments when the unconscious mind seemed to reveal itself).
This does not discount the fact that we have two brain regions that process incoming information. This is most apparent in situations that scare us. For example, an unidentified sound will trigger the amygdala to set off a cascade of activation across the sympathetic nervous system. Within moments our heart rate is elevated, our breathing is shallow and rapid, and blood is flooding into our muscles. We are ready for action. The same signal reaches the prefrontal cortex more slowly: the sound is identified in the auditory processing areas, then fed to the prefrontal cortex, which is able to override the excitation of the amygdala.
A classic example is walking beside a road with traffic speeding past. Large, rapidly moving objects ought to frighten us because we evolved to escape from marauding beasts. Not just predators either, since animals like elephants or rhinos can be extremely dangerous. But our prefrontal cortex has established that cars almost always stay on the road and follow predictable trajectories. Much more alertness is required when crossing the road. I suspect that the failure to switch on that alertness after suppressing it might be responsible for many pedestrian accidents. Certainly, where I live, pedestrians commonly step out into the road without looking.
It is not that the amygdala is "emotional" and the prefrontal cortex is "rational". Both parts of the brain are processing sense data, but one is getting it raw and setting off reactions that involve alertness and readiness, while the other is getting it with an overlay of identification and recognition and either signalling to turn up the alertness or to turn it down. And this does not happen in isolation, but is part of a complex system by which we respond to the world. The internal physical sensations associated with these systems, combined with our thoughts, both conscious and unconscious, about the situation are our emotions. We've made thought and emotion into two separate categories and divided up our responses to the world into one or the other, but in fact, the two are always co-existent.
Just because we have these categories does not mean they are natural or that they reflect reality. For example, I have written about the fact that ancient Buddhist texts did not have a category like "emotion". They had loads of words for emotions, but lumped all this together with mental activity (Emotions in Buddhism, 4 November 2011). Similarly, ancient Buddhist texts did not see the mind as a theatre of experience or have any analogue of the MIND IS A CONTAINER metaphor (27 July 2012). The ways we think about the mind are not categories imposed on us by nature but, on the contrary, categories that we have imposed on experience.
Emotion is almost entirely missing from Mercier and Sperber's book. While I can follow their argument, and find it compelling in many ways, I think their thesis is flawed for leaving emotion out of the account of reason. In what I consider to be one of my key essays, Facts and Feelings, composed in 2012, I drew on work by Antonio Damasio to make a case for how emotions are involved in decision making. Specifically, emotions encode the value of information over and above how accurate we consider it.
We know this because when the connection between the prefrontal cortex and the amygdala is disrupted, by brain damage for example, it can impair the ability to make decisions. In the famous case of Phineas Gage, his brain was damaged when an iron tamping rod was driven through his cheek and out the top of his head. He survived and recovered, but he began to make poor decisions in social situations. In other cases, recounted by Damasio (and others), people with damage to the ventromedial prefrontal cortex lose the ability to assess alternatives, such as where to go for dinner or which day they would like a doctor's appointment. The specifics of this disruption suggest that we weigh up information and make decisions based on how we feel about the information.
Take also the case of Capgras syndrome, in which the patient recognises a loved one but does not feel the emotional response that normally goes with such recognition. To account for this discrepancy, they confabulate accounts in which the loved one has been replaced by a replica, often involving some sort of conspiracy (a theme which has become all too common in speculative fiction). Emotions are what tell us how important things are to us and, indeed, in what way they are important: we can feel attracted to or repelled by a stimulus; the warm feeling when we see a loved one, the cold one when we see an enemy. We also have expectations and anticipations based on previous experience (fear, anxiety, excitement, and so on).
Mercier and Sperber acknowledge that there is an unconscious inferential process, but never delve into how it might work. We know from Damasio and others that it involves emotions. It seems that this process is entirely, or mostly, unconscious and that when reasons are required, we construct them as explanations, to ourselves and others, for something that has already occurred.
Sometimes we talk about making unemotional decisions, or associate rationality with the absence of emotion. But we need to be clear on this: without emotions, we cannot make decisions. Rationality is not possible without emotions to tell us how important things are, where "things" are people, objects, places, etc.
In their earlier work of 2011 (see An Argumentative Theory of Reason), Mercier and Sperber argued that we use reasoning to win arguments. They noted the poor performance on tests of reasoning like the Wason task, and added to it the prevalence of confirmation bias. They argued that this is best understood in terms of decision-making in small groups (which is, after all, the natural context for a human being). As an issue comes up, each contributor makes the best case they can, citing all the supporting evidence and arguments; here, confirmation bias is a feature, not a bug. Those listening to the proposals, however, are much better at evaluating arguments and do not fall into confirmation bias. Thus, Mercier and Sperber concluded, humans employ reasoning to decide issues only when there is an argument.
The new book expands on this idea, but takes a much broader view. However, I want to come back and emphasise this point about groups. All too often, philosophers are trapped in solipsism. They try to account for the world as though individuals cannot compare notes, as though everything can and should be understood from the point of view of an isolated individual. So, existing theories of rationality all assume that a person reasons in isolation. But I'm going to put my foot down here and insist that humans never do anything in isolation. Even hermits have a notional relation to their community - they are defined by their refusal of society. We are social primates. Under natural conditions, we do everything together. Of course, for 12,000 years or so, an increasing number of us have been living in unnatural conditions that have warped our sensibilities, but even so, we need to acknowledge the social nature of humanity. All individual psychology is bunk. There is only social psychology. All solipsistic philosophy is bunk. People only reason in groups. The Wason task shows that on our own we don't reason at all, but rely on unconscious inferences. But these unconscious (dare I say instinctual) processes did not evolve for city slickers. They evolved for hunter-gatherers.
It feels to me like we are in a transitional period, in which old paradigms of thinking about ourselves and our minds are falling away, to be replaced by emerging, empirically based paradigms that are still taking shape. What words like "thought", "emotion", "consciousness", and "reasoning" mean is in flux. Which means that we live in interesting times. It's possible that a generation from now, our view of mind, at least amongst intellectuals, will be very different.
~~oOo~~
Bibliography
Mercier, Hugo & Sperber, Dan. (2011) 'Why Do Humans Reason? Arguments for an Argumentative Theory.' Behavioral and Brain Sciences 34: 57–111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.
Mercier, Hugo & Sperber, Dan. (2017) The Enigma of Reason: A New Theory of Human Understanding. Allen Lane.
See also my essay: Reasoning and Beliefs, 10 January 2014.