
10 January 2014

Reasoning and Beliefs

In a desultory way I have been articulating a theory about religious belief over the last few years. As someone interested in factual accounts; as someone whose worldview has been changed by new facts on several occasions; and as someone who regularly spends a fair amount of time amongst credulous religious believers, I've been fascinated by the relationship between reason, factual information, and beliefs.

So for example, following Mercier and Sperber, I understand reasoning to be a function of groups seeking optimal solutions to problems. M & S argued that reasoning has confirmation bias as a feature when putting forward solutions to problems and that critical thinking, generally speaking, only works well when criticising someone else's proposed solution. The individual working in isolation to try to find the truth using reason is at a considerable disadvantage. Similarly, others have found that reason does not operate as predicted by mainstream accounts of it: "We’re assuming that people accept something or don’t accept it on a completely rational basis. Or, they’re part of a belief community that as a group accept or don’t accept. But the findings just made those simple answers untenable." (When it Comes to Accepting Evolution, Gut Feelings Trump Facts)

Furthermore, I apply results published by Antonio and Hanna Damasio which suggest that emotions play a key role in decision making. (I outlined this idea in a blog post called Facts and Feelings). Most real-world problems are complex, and making decisions about them, including deciding what we think is true, requires us to sift and weigh a broad range of information. Before we can make a decision we have to assess the relevance of the information, or category of information, to the decision at hand, i.e. what is salient. Most of this process is unconscious and is based on emotional responses. In other words, emotions function to help us decide what is important in any decision-making situation. Decisions are then made by comparatively weighing our emotional responses to the solutions we are aware of and have judged to be salient. Once a decision is made it is rationalised to fit an existing personal narrative. This insight was also outlined at a library marketing seminar I attended almost 20 years ago.

The ability to unconsciously determine salience is what we often call our "gut feeling" or "intuition". This type of unconscious information processing seems to rely on pattern recognition and on considering many options at once (parallel processing). The end result is a decision made with no conscious awareness of the process of thinking it through. Indeed, the result often comes to us in a flash or after a period of sleep. The speed of this type of processing is quite unlike the usual experience of conscious problem solving, so the answers that come via this route may not be fully integrated into the sense of self: spatially, the answer comes from nowhere, or from outside us. Thus this kind of information processing can be coupled with views about a metaphysical self not tied to the body, and become "divine inspiration". (See Origin of the Idea of the Soul, which relies on work by Thomas Metzinger).

The cognitive gap that opens up when we set aside information as non-salient is often filled by what neuroscientists call confabulation. Oliver Sacks poignantly described a man with no ability to make or retrieve memories (see Oliver Sacks' Confabulating Butcher). Asked why he is present in the hospital or engaged in an activity, he cannot say, but instead confabulates: he produces a plausible story and presents it as truth. There is no conscious lie, and the patient is not trying to deceive his interlocutors. He is presenting the most plausible account of himself that he has, despite being aware of inconsistencies, because he has no other account, and the state of not being able to account for himself seems to be unacceptable at an unconscious level. Something similar happens whenever we have a flash of insight or intuition. The thought pops fully formed into our heads and then we confabulate a story about how it got there, a story which generally speaking has nothing to do with how the mind or the brain works. Thus conscious thought is not a good paradigm for how the mind works. It is just the tip of the iceberg.

Now this theory is still rather nascent and a bit vague. I'm still getting up to speed with the literature of evolutionary approaches to religion, though my views seem to have much in common with scholars like Ara Norenzayan. The theory does make an interesting prediction. It predicts that where people have strong existing views they will treat new contradictory information in a limited number of ways depending on how they feel about it. Where a view entails a major investment of identity and social status (e.g. a religious view) a person will tend to judge contradictory information as not salient and reason in such a way as to set aside the new information without having to consider the real implications of it. My idea was that this could be tested at some point on people with religious beliefs. On paper it does seem to account for some behaviours of religious people with respect to new information, for example the Christian fundamentalist confronting the facts of evolution.

With all this in mind I was fascinated to read an article by Steve Keen describing something very similar in the field of economics. Keen highlights a paper by Dan M. Kahan et al. "Motivated Numeracy and Enlightened Self-Government" Yale Law School, Public Law Working Paper No. 307. Keen, formerly Professor of Economics & Finance at the University of Western Sydney, is best known for his vehement polemics against the Neoclassical consensus in economics, epitomised in his book Debunking Economics. Neoclassical economics is what is taught to virtually all economics students at all levels across the world and has a monopoly over economics discourse that is disproportionate to its success as a body of theory.

Keen is one of a small number of economists who predicted the economic crisis that began in late 2007, and probably the only one who did so on the basis of mathematical modelling. One of his main criticisms of Neoclassical economists is that they ignore debt in their macro-economic models because aggregate debt cancels out: if I borrow £1 and a bank lends me £1 then the balance is zero. On the face of it this seems reasonable, because our view of banks is that they lend out deposits. But in fact banks lend orders of magnitude more money than their actual deposits. When they lend they, in effect, create money at the same time. Problems occur when too much debt builds up and the repayments become a burden. For example, private debt in the UK soared to 500% of GDP, or five times the annual economic output of the whole country. Conditions may change and render debtors incapable of repaying the debt, which is what happened on a huge scale in 2007 with the sub-prime mortgage scandal. The levels of debt at that point meant that banks started to become insolvent as their income from interest payments plummeted and their own ability to service debts was compromised. From there the crisis spread like toppling dominoes.

Thus banks and debt are far from neutral in the economy. Perhaps in a post-crash world in which the role of banks in creating the crisis through a massive over-expansion of the money supply is public knowledge, the theory might be expected to change? But it has not. A global economic crisis has not caused any great soul searching amongst macro-economists who did not see it coming. Tweaking is the main result. 


Keen predicted the crash on the basis of the rate of change of debt. As we take on debt (private rather than public debt), growth ensues and, for example, employment levels grow. As shown in this graph there is a tight correlation (0.96) between changes in debt and the employment rate.
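To make the correlation concrete, here is a minimal sketch of the kind of calculation involved, using invented figures rather than Keen's actual data: the quantity correlated with employment is the annual change in private debt, scaled by GDP.

```python
import numpy as np

# Invented illustrative figures, not Keen's data: private debt (£bn),
# GDP (£bn), and employment rate (%) over a run of years.
debt       = np.array([1000, 1150, 1320, 1510, 1700, 1850, 1920, 1940])
gdp        = np.array([1200, 1250, 1300, 1350, 1400, 1450, 1480, 1500])
employment = np.array([93.0, 94.0, 94.8, 95.2, 95.0, 93.5, 91.5, 90.2])

# Rate of change of debt, expressed as a share of GDP.
debt_change = np.diff(debt) / gdp[1:]

# Pearson correlation between debt change and the employment rate.
r = np.corrcoef(debt_change, employment[1:])[0, 1]
print(f"correlation: {r:.2f}")
```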

Thus when the rate of change of debt began to fall sharply in 2006 it was a harbinger of collapse in the economy. In the USA employment levels fell from 95% to 90% and have yet to fully recover. 

This mathematical analysis ought to have been of interest to people trying to predict the behaviour of economies. Especially in post-crash hindsight it ought to be interesting to those whose job was to predict how the economy would perform and who utterly failed to see the worst economic disaster in a century coming. As late as mid-2007, just months before sub-prime began to kick off, the OECD were still predicting strong economic performance in their member countries for the foreseeable future. However, Keen's work, and the work of other economists who were successful in predicting a major recession/depression, has been roundly ignored. Keen also argues that mainstream economic models are incapable of predicting a recession because it is not a possible state in those models, whereas his models do allow for recession.

Not a man to mince words, Keen has been highly critical of the mainstream of economics. But now he puts that failure to react in the context of a theory of belief and decision making similar to the one outlined above. In the paper by Kahan et al., participants were given tasks to assess their "ability to draw valid causal inferences from empirical data." The results were counter-intuitive and surprising. Numeracy – skill in understanding numbers – was a negative predictor of performance on these tasks when the data conflicted with existing beliefs.
"It seems that when an issue is politically neutral, a higher level of numeracy does correlate with a higher capacity to interpret numerical data correctly. But when an issue is politically charged – or the numerical data challenges a numerate person’s sense of self – numeracy actually works against understanding the issue. The reason appears to be that numerate people employed their numeracy skills to evade the evidence, rather than to consider it." Steve Keen (emphasis in the original)
This is consistent with Mercier & Sperber's account of confirmation bias as a feature of reasoning. And it is consistent with my Damasio-derived theory about the role of emotion in decision-making, if we read "politically charged" as signifying strongly held political beliefs, and associate that in turn with strong emotional responses to the issue. The authors tied this in with personal and social psychology:
Individuals, on this account, have a large stake – psychically as well as materially – in maintaining the status of, and their personal standing in, affinity groups whose members are bound by their commitment to shared moral understandings. If opposing positions on a policy-relevant fact – e.g., whether human activity is generating dangerous global warming – came to be seen as symbols of membership in and loyalty to competing groups of this kind, individuals can be expected to display a strong tendency to conform their understanding of whatever evidence they encounter to the position that prevails in theirs. Steve Keen (emphasis added)
Kahan et al. extend the problem of salience of information to the social setting. Professed beliefs are often explicit markers of group membership, and being well versed in group jargon and able to articulate group beliefs is part of what determines one's status in the group. In an economic setting, mainstream economists are able to ignore facts (such as a very high correlation of the rate of change of debt to the employment rate) that might change their worldview (particularly the way they view the role of debt in economics) because their status as members of a group requires them to conform to norms which are in part defined by holding a particular worldview. They are blind to facts which challenge their views. Keen points out that this is not a new observation, and that the great physicist Max Planck, who had struggled to have his work accepted by his peers, quipped that knowledge progresses "one funeral at a time".

This result reinforces the limitations of thinking of human beings in terms of individual psychology. It's a hard habit to break in the West. We are influenced by Freud, the Romantics, and the various revolutionary thinkers who championed the rights of individuals. Of course to some extent we are individuals, but not as much as we make out. Much of the inner life that appears to make up our individuality is in fact determined by conditioning in various groups (family, peers, nation, religion, education) and by our place within these groups. We simply do not exist in isolation.

Any philosophy of consciousness, mind, or morality which sees individuals as the main subject for study is of limited value. And the practice of trying to make valid inferences from the individual to the group is less likely to be accurate than the other way around. At the very least individuals exist in a series of overlapping gestalts with various groups. 

This view of people will most likely conflict with what we think we know. Most of us are convinced that we are individuals who make our own decisions and think our own thoughts. We live in a society which highly values a narrative about "reason" and about what a reasoning individual is capable of. However, very few people are convinced by facts, because that's not how reasoning works. That view of reason is isolated from other aspects of humanity, such as emotions and our behaviour as social primates.

Those people who bombard us with facts fail to convince. By contrast, the advertising profession has long understood that in order to change minds and behaviour one must change how people feel about the facts. Half the ads I see nowadays have almost no intellectual or factual content. It's all about brand recognition (familiarity) and a positive emotional response. One might say that for advertising the facts are now irrelevant. And so liberals campaigning for protection of the environment, say, often fail to convince a sizeable proportion of the population, or indeed anyone who disagrees with them to start with. Meanwhile advertising is a multi-billion pound industry, consumerism is rampant, and the environment is degraded daily, moving towards being unable to sustain human life.

Most people cannot be reasoned with, because neither people nor reason work the way they are popularly conceived to work. Those of us who want to make the world a better place must pay close attention to these issues of how people's minds actually work. If we want to convince people that we have a better solution it cannot be through facts alone. They must feel that what we say is salient in the context of their existing values. And even then, if what we say conflicts with strongly held beliefs then we can expect to be ignored. We tend to get so carried away in our enthusiasm for our own values that we fail to empathise with those whose minds we really need to change in order to change the world: i.e. political, military and business leaders. 

~~oOo~~

10 May 2013

An Argumentative Theory of Reason

This post is a précis and review of:
Mercier, Hugo & Sperber, Dan. 'Why Do Humans Reason? Arguments for an Argumentative Theory.' Behavioral and Brain Sciences (2011) 34: 57-111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.
I'm making these notes and observations in order to better understand a new idea that I find intriguing. I have recently argued that the legacy of philosophical thought may be obscuring what is actually going on in our minds by imposing pre-scientific or non-scientific conceptual frameworks over the subject. I see this rethinking of reason as a case in point. 

In this long article, Mercier & Sperber's contribution runs from pp. 57-74. What follows are comments from other scholars titled "Open Peer Commentary" (pp. 74-101) and 10 pages of bibliography. The addition of commentary by peers (as opposed to silent peer review pre-publication) is an interesting approach. Non-specialists are given an opportunity to see how specialists view the thesis of the article.

The article begins with a few general remarks about reasoning. Since at least the Enlightenment it has been assumed that reasoning is a way to discover truth through applying logic. As another reviewer puts it:
Almost all classical philosophy—and nowadays, the “critical thinking” we in higher education tout so automatically—rests on the unexamined idea that reasoning is about individuals examining their unexamined beliefs in order to discover the truth. (The Chronicle of Higher Education. 2011)
Over a considerable period now, tests of reasoning capability have documented the simple but troubling fact that people are not very good at discovering the truth through reasoning. We fail at simple logical tasks, commit "egregious mistakes in probabilistic reasoning", and we are subject to "sundry irrational biases in decision making". Our generally poor reasoning is so well established that it hardly needs insisting on. Wikipedia has a wonderfully long list of logical fallacies and an equally long list of cognitive biases, though of course Mercier & Sperber cite the academic literature which has formally documented these errors. The faculty ostensibly designed to help us discover the truth more often than not leads us to falsehood.

One thing to draw attention to, which is almost buried on p. 71, is that "demonstrations that reasoning leads to errors are much more publishable than reports of its success." Thus all the results cited in the article may be accurate and yet still reflect a bias in the literature. The authors attempt to ameliorate this in their conclusions, but if you're reading the article (or this summary) this is something to keep in mind.

However, given that there is plenty of evidence that reason leads us to false conclusions, what is the point of reason? Why did we evolve reason if it's mostly worse than useless? The problem may well be in our assumptions about what reason is and does. The radical thesis of this article is that we do not reason in order to find or establish truth, but that we reason in order to argue. The claim is that viewed in its proper context--social arguments--reason works very well.

It has long been known that there appear to be two mental processes for reaching conclusions: system 1 (intuition), in which we are not aware of the process; and system 2 (reasoning), in which we are aware of the process. Mercier & Sperber outline a variation on this. Inference is a process where representational output follows representational input; a process which augments and corrects information. Evolutionary approaches point to multiple inferential processes which work unconsciously in different domains of knowledge. Intuitive beliefs arise from 'sub-personal' intuitive inferential processes. Reflective beliefs arise from conscious inference, i.e. reasoning proper:
"What characterises reasoning proper is indeed the awareness not just of a conclusion but of an argument that justifies accepting that conclusion." (58).
That is to say, we accept conclusions on the basis of arguments. "All arguments must ultimately be grounded in intuitive judgements that given conclusions follow from given premises." (59) The arguments which provide the justification are themselves the product of a system 1 sub-personal inferential system. Thus even though we may reach a conclusion using reason proper, our arguments for accepting the conclusion are selected by intuition.

What this suggests to the authors is that reasoning is best adapted, not for truth seeking, but for winning arguments! They argue that this is its "main function" (60), which is to say the reason we evolved the faculty. Furthermore, reasoning helps to make communication more reliable because arguments put forward for a proposition may be weak or strong, and counter-arguments expose this. Reasoning used in dialogue helps to ensure communication is honest (hence, I suppose, we intuit that it leads towards truth - though truthfulness and Truth are different).

Of course this is a counter-intuitive claim, and thus strong arguments must be evinced in its favour. Working with this idea is itself a test of the idea. Anticipating this, the authors propose several features which reasoning ought to have if it evolved for the purpose of argumentation.
  1. It ought to help produce and evaluate arguments.
  2. It ought to exhibit strong confirmation bias.
  3. It ought to aim at convincing others rather than arriving at the best decision.
The authors set out to show that these qualities are indeed prevalent in reasoning by citing a huge amount of evidence from the literature on the study of reasoning. This is where the peer evaluation provides an important perspective. If we are not familiar with the literature being cited, with its methods and conclusions, it is difficult to judge the competence of the authors and the soundness of their conclusions. Even so, most of us have to take quite a lot of what is said on trust. That it intuitively seems right is no reason to believe it.


1. Producing and Evaluating Arguments

On the first point, we have already mentioned that reasoning is poor. However, because we see reasoning as an abstract faculty, testing reasoning is often done out of context. In studies of reasoning in pursuit of an argument, or trying to persuade someone, our reasoning powers improve dramatically. We are much more sensitive to logical fallacies when evaluating a proposition than when doing an abstract task. When we hear a weak argument we are much less likely to be convinced by it. In addition, people will often settle for providing weak arguments if they are not challenged to come up with something better. If an experimenter testing someone's ability to construct an argument offers no challenge, there is no motivation to pursue a stronger line of reasoning. This changes when challenges are offered. Reasoning only seems to really kick in when there is disagreement. The effect is even clearer in group settings. For a group to accept an argument requires that everyone be convinced, or at least convinced that disagreeing is not in their interest. Our ability to reason well is strongly enhanced in these settings - known as the assembly bonus effect.
"To sum up, people can be skilled arguers, producing and evaluating arguments felicitously. This good performance stands in sharp contrast with the abysmal results found in other, nonargumentative settings, a contrast made clear by the comparison between individual and group performance." (62) 
On the first point the literature of reasoning appears to confirm the idea that reason helps to produce and evaluate arguments. This does not prove that reasoning evolved for this reason or that arguing is the "main function" of reasoning, but it does show that reasoning works a great deal better in this setting than in the abstract.


2. Confirmation Bias

Confirmation bias is the most widely studied of all the cognitive biases and "it seems that everybody is affected to some degree, irrespective of factors like general intelligence or open-mindedness." (63). The authors say that in their model of reasoning confirmation bias is a feature.

Confirmation bias has been used in two different ways:
  • Where we only seek arguments that support our own conclusion and ignore counter-arguments because we are trying to persuade others of our view. 
  • Where we test our own existing belief by only looking at positive inference. For example, if I think I left my keys in my jacket pocket it makes more sense to look in my jacket pocket than in my trouser pockets. "This is just trusting use of our beliefs, not a confirmation bias." (64) Later they call this "a sound heuristic" rather than a bias.
Thus the authors focus on the first situation, since they don't see the second as a genuine case of confirmation bias. The theory being proposed makes three broad predictions about confirmation bias:
  1. It should only occur in argumentative situations
  2. It should only occur in the production of arguments
  3. It is a bias only in favour of confirming one's own claims with a complementary bias against opposing claims or counter-arguments. 
I confess that what follows seems to be a bit disconnected from these predictions. The evidence cited seems to support the predictions, but they are not explicitly discussed. This seems to be a structural fault in the article that an editor should have picked up on. Having proposed three predictions, they ought to have dealt with them more specifically.


In the Wason rule discovery task, participants are presented with 3 numbers. They are told that the experimenter has used a rule to generate them and are asked to guess that rule. They are able to test their hypothesis by offering another triplet of numbers. The experimenter will say whether or not it conforms to the rule. The overwhelming majority look for confirmation rather than trying to falsify their hypothesis. However, the authors take this to be a sound heuristic rather than confirmation bias. The approach remains the same even when participants are instructed to attempt to falsify their hypothesis. However, if the hypothesis comes from another person, or from a weaker member of a group, then participants are much more likely to attempt to falsify it and more ready to abandon it in favour of another. "Thus falsification is accessible provided that the situation encourages participants to argue against a hypothesis that is not their own." (64)
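A toy simulation (my construction, not from the paper) makes the pattern vivid: in the classic version of the task the hidden rule is simply "any ascending triple", and triples that confirm a narrower hypothesis such as "increasing by 2" also fit the hidden rule, so only probes the participant expects to fail can tell the two rules apart.

```python
# Hidden rule in the classic 2-4-6 task: any ascending triple.
hidden_rule = lambda t: t[0] < t[1] < t[2]

# A typical participant's hypothesis: numbers increasing by 2.
hypothesis = lambda t: t[1] - t[0] == 2 and t[2] - t[1] == 2

confirming_probes = [(2, 4, 6), (10, 12, 14), (20, 22, 24)]
other_probes      = [(1, 2, 3), (6, 4, 2), (2, 4, 6)]

# Confirming probes satisfy both rules, so the experimenter's "yes,
# that fits" teaches the participant nothing about their error.
assert all(hidden_rule(t) and hypothesis(t) for t in confirming_probes)

# Only a probe on which the two rules disagree can expose the error.
for t in other_probes:
    if hidden_rule(t) != hypothesis(t):
        print(t, "fits the hidden rule but not the hypothesis")
```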


A similar effect is noted in the Wason selection task (the link enables you to participate in a version of this task). The participant is given cards marked with numbers and letters which are paired up on opposite sides of the card according to rules. The participant is given a rule and asked which cards to turn over in order to test the rule. If the rule is phrased positively participants seek to confirm it, and if negatively, to falsify it. Again this is an example of a "sound heuristic" rather than confirmation bias. However, "Once the participant's attention has been drawn to some of the cards, and they have arrived at an intuitive answer to the question, reasoning is used not to evaluate and correct their initial intuition but to find justifications for it. This is genuine confirmation bias." (64)
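For the classic vowel/even-number version of this task (a standard example, not necessarily the exact version the authors discuss), the correct strategy is easy to state in code: turn over only the cards that could conceal a counter-example to "if vowel, then even".

```python
VOWELS = set("AEIOU")

def must_turn(face: str) -> bool:
    # A card can falsify "if vowel then even" only by pairing a vowel
    # with an odd number: so turn a visible vowel (it might hide an odd
    # number) or a visible odd number (it might hide a vowel).
    if face.isalpha():
        return face.upper() in VOWELS
    return int(face) % 2 == 1

cards = ["A", "K", "4", "7"]
print([c for c in cards if must_turn(c)])  # -> ['A', '7']
# Most participants instead pick A and 4: but 4 can only confirm, never falsify.
```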

One of the key observations the authors make is that participants in studies must be motivated to falsify. They draw out this conclusion by looking at syllogisms, e.g. No C are B; All B are A; therefore some A are not C. Apparently the success rate in dealing with such syllogisms is about 10%. What seems to happen is that people go with their initial intuitive conclusion and do not take the time to test it by looking for counter-examples. Mercier & Sperber argue that this is simply because they are not motivated to do so. On the other hand, if people are trying to prove something wrong--if for example we ask them to consider a statement like "all fish are trout"--they readily find ways to disprove it. Participants will spend an equal amount of time on the different tasks.
"If they have arrived at the conclusion themselves, or if they agree with it they try to confirm it. If they disagree with it then they try to prove it wrong." (65) 
But doesn't confirmation bias lead to poor conclusions? Isn't this why we criticise it as faulty reasoning? It leads to conservatism in science, for example, and to the dreaded groupthink. Mercier & Sperber argue that confirmation bias in these cases is problematic because it is being used outside its "normal context: that is, the resolution of a disagreement through discussion." (65) When used in this context confirmation bias works to produce the strongest, most persuasive arguments. Scholarship at its best ought to be like this.

The relationship of the most persuasive argument to truth is debatable, but the authors suppose that the truth will emerge if that is the subject of disagreement. If each person presents their best arguments, and the group evaluate them then this would seem to be an advantageous way of arriving at the best solution the group is capable of. Challenging conclusions leads people to improve their arguments, thus the small group may produce a better conclusion than the best individual in the group operating alone. Thus:
confirmation bias is a feature not a bug
This is the result that seems to have most captured the imaginations of the reading public. However, the feature only works well in the context of a small group of mildly dissenting (not polarised) members. The individual, the group with no dissent, and the polarised group with implacable dissent are all at a distinct disadvantage in reasoning! Confirmation bias works well for the production of arguments, but not so well for evaluation, though the latter seemed less of a problem.

Does this fulfil the three broad predictions made about confirmation bias? We have seen that confirmation bias is not triggered unless there is a need to defend a claim (1). Confirmation bias does appear to be more prevalent when producing arguments than in evaluating them, and we do tend to argue for our own claims and against the claims of others (2 & 3). However, the predictions included the word only, and I'm not sure that they have, or could have, demonstrated the exclusiveness of their claims. More evidence emerges in the next section, which deals (rather more obliquely) with convincing others.


3. Convincing others.


Proactive Reasoning in Belief Formation


The authors' thesis is that reasoning ought to aim at convincing others rather than arriving at the best decision. This section discusses the possibility that, while we do tend to favour our own argument, we may also anticipate objections. The latter is said to be the mark of a good scholar, though the article is looking at reasoning more generally. There is an interesting distinction here between beliefs we expect to be challenged and those which are not:
"While we think most of our beliefs--to the extent that we think about them at all--not as beliefs but just as pieces of knowledge, we are also aware that some of them are unlikely to be universally shared, or to be accepted on trust just because we express them. When we pay attention to the contentious nature of these beliefs we typically think of them as opinions." (66) 
And knowing that our opinions might be challenged we may be motivated to think about counter-arguments and be ready for them with our own arguments. This is known as motivated reasoning. Interestingly from my point of view, because I think I have experienced this, one of the examples they give is: "Reviewers fall prey to motivated reasoning and look for flaws in a paper in order to justify its rejection when they don't agree with its conclusions." (66).

The point being that from the authors' perspective it seems that what people are doing in this situation is not seeking truth, but only seeking to justify an opinion.
"All these experiments demonstrate that people sometimes look for reasons to justify  From an argumentative perspective, they do this not to convince themselves of the truth of their opinion but to be ready to meet the challenges of others." (66)
If we approach a discussion or a decision with an opinion, then our goal in evaluating another's argument is often not to find the truth, but to show that the argument is wrong. The goal is argumentative rather than epistemic (seeking knowledge). We will comb through an argument looking for flaws, for example, finding fault with study design or the use of statistics, or spotting logical fallacies. Thus although there are benefits to confirmation bias in the production of arguments, confirmation bias in the evaluation of arguments can be a serious problem: it may lead to nitpicking, polarisation, or the strengthening of existing polarisation.

Two more effects of motivated reasoning are particularly relevant to my interests: belief perseverance and violation of moral norms. The phenomenon of belief perseverance (holding onto a belief despite evidence that the view is ill-founded) is extremely common in religious settings. The argumentative theory sees belief perseverance as a form of motivated reasoning: when presented with counter-arguments the believer focuses on finding fault, and actively disregards information which runs counter to the belief. If the counter-argument is particularly unconvincing--"not credible"--it can lead to further polarisation. And in the moral sphere, reasoning is often used to come up with justifications for breaking moral precepts. Here reasoning can clearly be seen to be in service of argument rather than knowledge or truth.

Thus in many cases reasoning is used precisely to convince others rather than to arrive at the best decision, even when this results in poor decisions or immoral behaviour. We use reason to find justifications for our intuitive beliefs or opinions.


Proactive Reasoning in Decision Making

The previous section was mainly concerned with defending opinions, while this final section looks at how reason relates to decisions and actions more broadly. On the classical account we expect reasoning to help us make better decisions. But this turns out not to be the case. Indeed, in experiments, people who spend time reasoning about their decisions reliably make choices that are less consistent with their own previously stated attitudes. They also get worse at predicting the results of basketball games. "People who think too much are also less likely to understand other people's behavior." (69). A warning note is sounded here: some of the studies which showed that intuitive decisions were always better than thought-out decisions have not been replicated. So Malcolm Gladwell's popularisation of this idea in his book Blink may have over-stated the case. However, the evidence suggests that reasoning does not necessarily confer advantage, which to my mind is in line with what I would expect.

The argumentative theory suggests that reasoning should have most influence where our intuitions are weak - where we are not trying to justify a pre-formed opinion. One can then at least defend a choice if it proves to be unsatisfactory later. In line with research dating back to the 1980s this is called reason-based choice. Reason-based choice is able to explain a number of unsound uses of reasoning noted by social psychologists: the disjunction effect, the sunk-cost fallacy, framing effects, and preference inversion.

The connecting factor is the desire to justify a choice or decision. We can see this in action in many countries today with the insistence on fiscal austerity as a response to economic crisis. Evidence is mounting that cutting government spending only causes further harm, but many governments remain committed to it. As long as they can produce arguments for the idea, they refuse to consider arguments against.


Conclusions

Some important contextualising remarks are made in the concluding section, many of which are very optimistic about reasoning. Reasoning as understood here makes human communication more reliable and more potent.
"Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or actions.... Human reasoning is not a profoundly flawed general mechanism: it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels" (72) 
The authors stress the social nature of reasoning. Generally speaking it is groups of people that use reason to make progress, not individuals, though a small number of individuals are capable of being their own critics. Indeed, the skill can be learned, though only with difficulty, and one only ever mitigates, never eliminates, the tendency towards justification. Thus though confirmation bias seems inevitable in producing arguments, it is balanced out in evaluation by other people.
"To conclude, we note that the argumentative theory of reasoning should be congenial to those of us who enjoy spending endless hours debating ideas - but this, of course, is not an argument for (or against) the theory. (73)
~o~ 

Comments

It ought to come as no surprise that a faculty of a social ape evolved to function best in small groups. The puzzle is why we ever thought of the individual as capable of standing alone and apart from their peers. It's a conceit of Western thinking that is going to come under increasing attack, I think.

This review is also a sort of follow-up to an earlier blog post, Thinking it Through, sparked by a conversation with Elisa Freschi in comments on her blog post: Hindu-Christian and interreligious dialogue: has it any religious value? I think Mercier & Sperber raise some serious questions about this issue. Reasoning does not work well in polarised environments. And religious views tend to be mutually exclusive.

I think it's unlikely that we'll ever be able to say that we evolved x for the purposes of y except in a very general sense. Certainly eyes enable us to see, but it is simplistic to say that eyes evolved in order for us to see. We assume that evolution has endowed us with traits for a purpose, even when the purpose is unclear. And we observe that we have certain traits which serve to make us evolutionarily fit in some capacity. In this case the trait--reason--does not perform the function we have traditionally assigned to it. We are poor at discovering the truth through reasoning alone, and much of the time we are not even looking for it. Therefore we must look again at what reason does. This is what Mercier and Sperber have done. Whether their idea will stand the test of time remains to be seen. My intuitive response is that they have noticed something very important in this paper.

My own interest in decision making stems from the work of Antonio Damasio, particularly in Descartes' Error. My argument has been that decision making is unconscious and emotional, and that reasons come afterwards. Mercier & Sperber are pursuing a similar idea at a different level. Damasio suggests that we make decisions using unconscious emotional responses to information and then justify our decision by finding arguments. And we can see the different parts of the process disrupted by brain injuries or abnormalities in specific locations. Thus neuroscience provides a confirmation of Mercier & Sperber's theory and correlates the behavioural observation with brain function. Neither cites the work of the other.

I presaged this review and my reading of this article in my essay The Myth of Subjectivity, when I claimed that objectivity emerges from scientists working together. Mercier & Sperber confirm my intuition about how science works, including my note that scientists love to prove each other wrong. However, they take it further and argue that this is the natural way that humans operate, and emphasise the social, interactional nature of progress in any field. And after all, even Einstein went in search of support for his intuitions about the speed of light; he did not set out to disprove them. Thus we must reassess the role of falsification in science. It may be asking too much for any individual to seek to falsify their own work; but we can rely on the scientific community to provide evaluation and especially disagreement!

Those wishing to comment on this review should read Mercier & Sperber first. There's not much point in simply arguing with me. I've done my best to represent the ideas in the article, but I may have missed nuances or got things wrong - I'm new to this subject. By all means let us discuss the article, or correct errors I have made, but let's do it on the basis of having read the article in question. OK?

~~oOo~~


Other reading. 
My attention was drawn to this article by an economist! Edward Harrison pointed to The Reason We Reason by Jonah Lehrer in Wired Magazine. (Be sure to read the comment from Hugo Mercier which follows the article). Amongst Lehrer's useful links was one to the original article.
Hugo Mercier's website, particularly his account of the Argumentative Theory of Reasoning; and on Academia.edu.
Dan Sperber's website, and on  Academia.edu.

25 May 2012

Facts and Feelings

WESTERN PHILOSOPHERS have pondered the questions of 'what is knowledge?', and 'what is truth?' for centuries. Without, it must be said, coming to any kind of consensus. And without, it seems to me, acknowledging that the inability to come to a consensus after many centuries says there is something terribly wrong with the whole enterprise of philosophy! The question of why two philosophers can never agree is part of a larger question that interests me. On the surface there seem to be very different modes of knowing and processing knowledge. We distinguish intellect from feelings for instance, and reasoning from intuition. We have always insisted that the differences are important and have often valued one over the others. The classic contest is between reason and emotion. But some research (now decades old in fact) raises the question of whether these are even valid categories when it comes to knowledge.

I've already mentioned, several times, a case study cited by Antonio Damasio in which he meets a patient with damage to his ventro-medial pre-frontal cortex. This part of the brain is involved in the regulation of emotions. Emotional impulses typically come from deeper brain structures in the so-called Limbic System, a series of related structures in the lower and mid-brain. However, emotions are also processed and regulated by our neo-cortex. In the patient Damasio describes, awareness of emotions is extremely attenuated. Asked to describe his journey to the appointment, he speaks in a flat emotional tone, even when describing a traffic accident he witnessed along the way. The emotions do not register. But his narrative shows that his powers of observation and understanding are not impeded; for example, his recall of the trip is detailed and the facts are accurately related. He understands cause and effect. What is missing is the emotional response. And this shows when the patient is asked whether an appointment on Tuesday or Thursday next week would suit him better. He has a complete grasp of the facts relating to the choice - his and others' schedules, traffic conditions at different times, etc. - and he understands the task: but after 30 minutes of reviewing the facts he cannot come to a decision. The facts appear to be evenly weighted in his mind. Each fact is as important as every other fact, so he has no basis on which to make a decision. (Descartes' Error, p. 192ff; see also Grabenhorst & Rolls, van den Bos & Güroğlu).

This points to a very important conclusion: that facts alone are not the basis of how we make decisions. We need to know the relative value of each fact, and this information comes from the emotional response we have in relation to the fact. When we consider the facts we don't just decide what we believe to be true. In any given situation there are likely to be hundreds of true facts. We need to decide, given the context, which facts are relevant and important, i.e. salient. Facts make for sense, and emotions make for salience. We simply cannot make decisions without both.
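A toy model (my own construction, not Damasio's) captures the logic: when every fact carries the same weight the options tie and no choice emerges, just as with Damasio's patient; emotional weighting breaks the tie.

```python
# Facts bearing on a choice of appointment day (invented example).
facts = {
    "quieter traffic": {"Tuesday": 1, "Thursday": 0},
    "colleague free":  {"Tuesday": 0, "Thursday": 1},
}

def choose(weights):
    days = ("Tuesday", "Thursday")
    scores = {d: sum(w * facts[f][d] for f, w in weights.items()) for d in days}
    best = max(scores.values())
    winners = [d for d, s in scores.items() if s == best]
    return winners[0] if len(winners) == 1 else "no decision"

# Flat weights: every fact equally important, as for the patient.
print(choose({"quieter traffic": 1.0, "colleague free": 1.0}))  # -> no decision

# Emotional weighting makes one fact more salient and breaks the tie.
print(choose({"quieter traffic": 1.0, "colleague free": 2.5}))  # -> Thursday
```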

Salience is terribly important. I recently learned, for instance, that there is a proposal to rename schizophrenia 'salience syndrome' in the DSM-V (the DSM, Diagnostic and Statistical Manual of Mental Disorders, 5th ed., due out next year). The current thinking is that a person with the disorder assigns the wrong level of salience to their experience, which leads to delusions. Cause and effect can become confused or disconnected, and coincidences start to take on far too much salience. Inner experience can seem as though it is connected to external events in ways that only the sufferer can detect. An urge to act may not be felt as coming from 'me', so must be coming from outside. And so on. Schizophrenia means 'split mind' (though the etymology of Greek phren 'mind' is unknown), which is not at all descriptive of the disorder and has often led to confusion amongst lay people. Correctly assessing the salience of our experience is virtually a definition of sanity, though the definition has a broad and ill-defined boundary.

So part of the reason that philosophers (or people generally) cannot agree on things is that we have different notions of value and salience, and since these are primarily emotional they are difficult to articulate. In fact we tend to unconsciously absorb the values and notions of salience from the people around us. Values are strongly conditioned by relatives, friends, race, region, and religion (i.e. by all the various groups we are members of). Attempts to articulate universal values have so far failed to convince everyone. The problem of inarticulate values is exacerbated by those aiming at what they call 'rationality' or 'objectivity' since this usually involves consciously suppressing emotional responses.

Incidentally, this shows that unless the Vulcans of Star Trek were wired very differently from humans, Mr Spock & co. would have been unable to make decisions. Without a way to assign value to facts they would all be just like Damasio's brain-damaged patient.

Intuition and Reason


People I know have been using the term 'intuition' a lot lately and have consistently failed to respond to my request to know what they mean by it. I think I'm in a position to offer a definition which demystifies the word. Let's start with reasoning. In reasoning, as I have indicated, we don't just manipulate facts to make sense. In reasoning we tap into emotions to give value to facts, and then compare the relative values to decide which is salient, or which is most salient. Salience is a much fuzzier concept than truth. We know that two intelligent people can reasonably come to opposing conclusions given a set of facts. This is the basis of arguments in politics as well as philosophy, for example. It's so much a part of our daily lives that it hardly needs an example, but the classic illustration is the contest between conservative and liberal politics. Given identical facts, right- and left-leaning people will come to completely different conclusions about appropriate courses of action, because each assesses the salience of competing facts differently. (The different values of left and right are summarised very well in a diagram produced by David McCandless and Stefanie Posavec for the Information is Beautiful website. See also McCandless on TED.) The irreconcilability of left and right rests not on the facts per se, but on what each side considers to be the most salient facts. This is true of the irreconcilability of philosophies, ideologies, and religions also.

The champions of reason initially saw it as a way of freeing us from superstition. The great discovery of the Enlightenment was facts that were apparently independent of belief systems (though geometry was known to have this property since antiquity). Gravity affects the Atheist and the Christian in precisely the same way. If we measure the acceleration due to gravity anywhere on the earth it is about 10 m s⁻², varying inversely with the square of our distance from the centre of the earth and with the density of the material directly under us, within a margin of error. In a world where most conflicts are based on mutually antagonistic belief systems, this revelation from science seemed to be incredibly valuable. The hope was that we had discovered a reliable way to make decisions, and there were things we could all agree on! Some people still see science in this light, but most of us now acknowledge that values play a role in science as well. Though of course some religieux still fail to acknowledge facts that conflict with their (highly valued) belief systems.
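For the record, the variation follows Newton's inverse-square law; with standard textbook values (not figures from the post):

$$ g(r) = \frac{GM_\oplus}{r^2} \approx \frac{(6.674\times10^{-11})\,(5.972\times10^{24})}{(6.371\times10^{6})^2} \approx 9.8\ \mathrm{m\,s^{-2}} $$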

Reason came to be associated with the conscious manipulation of these facts divorced from emotional involvement. And the Romantics (over) reacted to this by revalorising emotions at the expense of reason (leading Romantics tended to break with the values of the society around them). Unfortunately there is a great deal of difference between a value independent fact (like gravity), and value independent thinking (which amounts to suppressing one's awareness of emotions and therefore empathy). We still have to decide what facts are relevant to any situation, and all too often empathy is left out of rational equations. Cold reason has caused atrocities every bit as wicked as unchecked emotions have.

So reason, it seemed, could free us from superstition. Obviously it has failed to do so. Why? It could only have succeeded if the supernatural had low salience for us. In fact supernatural thinking tends to have a high value, and therefore high salience, for many people I know (cf. On Credulity), and though they are a bit credulous they are by no means cretinous. The survival of the supernatural is partly due to the pernicious influence of the Romantics who celebrated the irrational, but of course they only tapped into something that already existed in the hearts and minds of people. There is a very great reluctance to abandon the supernatural; many of us value it, and continue to find it salient in understanding our experience. People who rail against religion (often to a highly irrational extreme, marked by very strong emotions) on the whole seem to be ignorant of this dynamic, making their criticisms unhelpful (and I'm specifically thinking of Richard Dawkins here).

Which brings us to intuition. Unlike reasoning, where we try to consciously compare the values we have assigned to facts, intuition is the same process undertaken unconsciously. Experientially it seems as if we leap to a conclusion, or the answer to a problem appears as if from nowhere. We tend to be quite naive about this, and since we don't see a process, we assume that one doesn't exist or that it is a bit magical. Intuition then becomes mystified. All that is happening is that we are weighing the value of facts subconsciously and coming to an unconscious decision. It may also be that our phenomenal ability to detect patterns operates better at an unconscious level, since it is something we developed early in evolutionary terms (other animals also use pattern recognition to help them survive).

It may even feel as though trying to think consciously about a problem is counter-productive. Perhaps this is because we cast the net too widely and overload our judgement of salience with too many facts; or perhaps our intellectual (or ideological) values actually conflict with our unconscious values; or perhaps we are just alienated from our body and emotions, which makes our values difficult to access. In any case, we often solve a problem by allowing ourselves to work on it unconsciously. Many of the great advances in science have come through allowing the problem to mull over unconsciously. Breakthroughs often come after a night's sleep and have even come in dreams (like the structure of the benzene molecule). There's nothing very mysterious about this process, and in many ways it is simply the same as "reasoning" - connecting facts and/or experience to emotions and values, to decide what makes the most sense of the given facts under the circumstances.

It seems to me that a number of fallacies about how we think persist in spite of new evidence which is constantly emerging. Folk ideas about the mind are still in the process of assimilating the ideas of the 19th century psycho-analytic movement and its more popular spawn, let alone the insights of neuroscience. As I understand it there is no fundamental difference between reason and intuition; they are the same process operating at different levels of awareness. There is nothing magical about intuition (I frequently rely on it), though the unconscious nature of it does lend itself to magical explanations.

In a sense the magical explanations of intuition are rather egocentric: 'I' am the owner of all that I'm aware of in my mind and body, and since intuition is unconscious it must be 'not I'. And being both 'I', in that the inputs and outputs happen in my mind, and not 'I', in that I am unaware of the process of producing the output from the input: then something super-natural must be happening.

Embodied Cognition

Another fallacy about reasoning is that it is wholly abstract and divorced from experience. George Lakoff and Mark Johnson have shown that this is not so. They have shown, for instance, that when we think abstractly we employ metaphors which draw on our physical experience of being embodied. We employ metaphors like UP IS GOOD/DOWN IS BAD. So if the stock market 'rises', then that is a good thing. But the stock market is only a notional entity and is not able to move about in space. Our reasoning here depends on the experience we have of moving about in space with our bodies. UP IS GOOD is likely related to an upright position being consistent with life, and lying down with death. And note that UP is not always GOOD. When we have a "high" fever this is a negative. Temperature being "high" or "low" is also a spatial metaphor (perhaps related to the position of the sun in the sky).

If I say "a thought just came into my head" I am performing quite a complex metaphorical translation. I am employing a range of metaphors: thoughts are objects, thoughts are agents, my head is my awareness, my head is a container--therefore awareness is a container). I'm relying on my experience of placing objects into containers, without which the sentence would not make sense. I'm also placing my first-person perspective inside that same container. The thought has to enter the same container, because containers can also hide objects. The unconscious is a container I cannot see into for instance. And note that the thought is an autonomous agent - it comes into my head, without me willing it (c.f. my previous statements on intuition). Although we all have an experience like this, the expression is metaphorical. Even apparently simple statements of fact are often couched in terms which rely on a complex interlocking system of metaphors that ultimately depend on how we physically interact with the world.

This argument from linguistics is confirmed from a neuroscience angle by the existence of mirror and canonical neurons, which form part of the motor cortex. When we do an action, say clenching a fist, parts of the motor cortex are active. Mirror neurons are active when we see someone else perform an action. Canonical neurons are active when we are presented with an object, or an image of an object, and we imagine how we might manipulate it. It is an unsurprising conclusion that we relate to the world in terms of how we might interact with it or manipulate it. However these same interactions form the basis of the metaphors that we use in abstract thought, which is not generally recognised.

Reason, then, is very much embodied, and abstract thought depends on metaphors arising from our physical interactions with the world. Reason relies on assessing the salience of a fact by connecting it to our emotions, which we experience as bodily sensations. Reason also relies on metaphors and abstractions which are based in how we physically interact with the world. When we consider the nature of belief we need to keep all this in mind. A belief is a proposition that we have decided is not only true, but which has great salience. To shift a belief by offering alternative truths is ineffective. One can only shift a belief by changing the relative importance of the facts - that is, by addressing salience. Indeed, if we hold something to be highly salient, then the "fact" that it is untrue might not be salient - and we can comfortably and tenaciously believe untrue propositions. I would say this is frequently the case with fundamentalist religious beliefs. In a future essay I want to look at how scientists have failed to communicate the salience of evolution, and allowed some religious people to continue to deny it despite the "facts". This is paralleled by Buddhist responses to the argument that rebirth is factually implausible.

~~oOo~~

Bibliography

  • Damasio, Antonio. Descartes' Error. London: Vintage Books, 2006.
  • Grabenhorst, Fabian & Rolls, Edmund T. 'Value, pleasure and choice in the ventral prefrontal cortex.' Trends in Cognitive Sciences. 1 February 2011 (Vol. 15, Issue 2, pp. 56-67) doi:10.1016/j.tics.2010.12.004
  • Lakoff, George & Johnson, Mark. Metaphors We Live By. University of Chicago Press, 1980.
  • van den Bos, Wouter & Güroğlu, Berna. 'The Role of the Ventral Medial Prefrontal Cortex in Social Decision Making.' The Journal of Neuroscience, June 17, 2009 • 29(24):7631–7632. DOI:10.1523/JNEUROSCI.1821-09.2009
  • van Os J. 'Salience syndrome' replaces 'schizophrenia' in DSM-V and ICD-11: psychiatry's evidence-based entry into the 21st century? Acta Psychiatr Scand. 2009 Nov;120(5):363-72.

Comments are open again, let's see how I go.