
10 May 2019

We Need to Talk About Reason

About a year ago, the Politico website noted a new phenomenon. Young male American conservatives have begun referring to themselves as "classical liberals". Many were aping a notorious academic turned lifestyle guru but, given how obviously illiberal their agenda seems to be, I wondered how they could identify with the term "liberal". It seemed doubly weird, given that conservative Americans are so openly hostile towards "liberals", and use the word as invective. My few interactions with people who claim to be classical liberals suggest that they don't know much, if anything, about classical liberalism. Most are just naively repeating slogans. 

Clearly, liberalism has delivered us many freedoms for which we may be grateful. It is also true that, had classical liberalism prevailed, these freedoms would have remained the preserve of the elite. While classical liberals wrested power from kings for the elite, it was the new liberals, the "bleeding heart liberals", who wrested power away from the classical liberal elite (the bourgeoisie) for the people, if only briefly. It was the new liberals who ended the slave trade, and slavery as an institution, for example. They were the first to see that if liberty were to have any meaning, then it had to apply to all.

In the previous essay I covered the background to liberalism and the confusion between the different applications of the term. In this essay and several to follow I will pick apart some of the fundamental beliefs of liberalism and show that they are anachronistic, at best. I begin with the classical view of reason; thence to a discussion of the ideology of utilitarianism; through the negative impacts of neoclassical liberalism on democracy; and I will finish up with the most egregious products of liberalism, runaway global warming and mass extinction.

The ideas of reasoning and rational thought are central to the liberal conception of human beings. Arguably, then, to understand the liberal ideology we need to understand how liberals conceived of rationality. The problem is that we've known since the 1960s that the ideas of rationality they relied on were wrong. And I mean obviously, comically wrong, like someone's idea of how we ought to be, without reference to any actual human beings. And if liberalism is based on a delusion, then what would it look like with an accurate theory of reasoning?


The Classical Account of Reason

The sapiens in our Latin binomial classification, coined in 1758 by the Swedish taxonomist Linnaeus, means "wise". It comes from the Latin sapientia, "good taste, good sense, discernment; intelligence, wisdom". It refers to the Enlightenment belief that men were uniquely capable of reasoning. Again, "men" here accurately reflects the classical view that women were not capable of reasoning. This is not my view, but the fact that it was the classical liberal view is very important to keep in mind.

Classically, reasoning is a specific conscious mental process by which we apply logic to problems and arrive at knowledge of the truth, which then guides our decisions. In this view, actions guided by truth are good, while actions guided by falsehood are evil. This view of reasoning is thus linked to concerns of metaphysics (truth), epistemology (how we know things), and morality (good and evil).

For much of history, reason coexisted with faith, which supposedly revealed truths that were inaccessible to reason. Until the Enlightenment, philosophers employed deductive logic to explain the existence of God, the problem of evil, and other religious ideas. However, deductive logic has a flaw: it tends to reproduce one's starting axioms, or the propositions that are held a priori to be unquestionably true. All of the unspoken beliefs of the thinker influence the selection of valid deductions. So, if a logician believes in God, then at some point they will unconsciously accept a deduction as valid based on this belief. This leads them to the "logical" conclusion that God exists. And they assert that their belief in God is based on reason.

The initial contrast and demarcation between reason and faith became more of a conflict and contest until, during the Enlightenment, reason combined with empiricism became the weapon of choice for intellectuals to undermine and destroy faith. This was done in the name of liberating people from superstition and the oppressive rule of the Church. And of course liberty is the central theme in liberalism. In the Enlightenment, reason was virtually deified. Natural philosophers, soon to be re-christened as "scientists", were the priests of this new cult. This coincided with the peak of materialism: a reaction against the superstitions of religion, which brought everything down to earth. The contrast and conflict between faith and reason is still one of the defining issues of modernity.

Reason was what separated man from the beasts. For classical liberals it also separated the elite from the common man, and men from women. The elite reasoned that only they were truly rational, and as they defined rationality as good, it made sense to them that they, as the only people capable of goodness, should be in charge of everything and everyone. Indeed, had they not ruled, the irrational masses might have fallen back into superstition and religion. Liberals knew that they had to rule in such a way that those capable of reason obtained the maximum liberty, while those incapable were at least not able to harm the capable. It was a difficult job, but someone had to take it on and the classical liberal elite stepped up. Of course, it was only fair that they be well compensated for their efforts on our behalf. And of course it was tiresome having to deal with the lower classes, so the best of them were put in charge of the day-to-day business of telling the peasants what to do and reporting profits back to their masters. These middlemen were imaginatively called the middle classes. Thus began the era of what David Graeber has called "the bullshitization of work".

We can already begin to see how the classical understanding of reasoning was flawed.


Free Will

This ability to reason, free from any non-conscious irrationality, is linked to free will and, in particular, what we call contra-causal free will; i.e., free will in which only reason is exercised and there is no influence from emotion, intuition, any unconscious process, or external influence such as peer pressure. Anyone with a modern view of the mind has to realise that contra-causal free will could simply never exist, because all of our thought processes are influenced by all of these other factors all of the time. Reason as classically defined never happens and we actually have proof of this, but let me continue for now on the theme of free will. 

Free will is, of course, closely tied to issues of morality. The Christian answer to the problem of evil is that God gave Adam and Eve the choice to obey, they disobeyed, evil got a foothold, and they were thus cast out of Eden to lead lives of suffering. Even though, as an omnipotent, omnipresent, and omniscient supreme being, God created it all and could foresee all outcomes, Christians insist that it is not God's fault that we suffer. It is our fault. Buddhists also highlight the wilfulness (cetanā) of humanity as the cause of evil. 

As we have seen, most liberals also blamed humanity for the problem of evil and linked this to inherent flaws in the human character, or psyche. According to the classical liberals, humans are by nature variously bellicose, aggressive, competitive, acquisitive, and/or just plain selfish, although we are also supposed to be rational. The inherent antinomy between selfishness and rationality seems to go unnoticed; i.e., it is not rational for a social species to be selfish, because selfishness causes a breakdown in reciprocity, and then the species dies out.

In this view, therefore, morality is linked to reasoning. Only those who use reason to guide their actions can be moral. This is to say that, for classical liberals, morality is solely linked to reasoning and thus it becomes the province of rich European men. The bourgeoisie push out the Church as arbiters of morality, and temporal courts eventually gain jurisdiction even over the Church (thank God).

In reality, no one reasons in this way. Almost everything about this liberal discourse is wrong. The understanding of reasoning, of humanity as a social animal, of women, and of morality are all wrong. And these false ideas continue to dominate the thinking of the bourgeois elite. Before reviewing how we do reason, I want to sketch out some related ideas. 


Madness

Losing one's reason is seen with increasing alarm as the modern world emerges. Whereas the mad were largely harmless and left to themselves up to late medieval times, especially in Europe, madness gradually becomes a moral issue, which at that time falls under the purview of the Church. Christians begin to see madness as a sign of sinfulness; the mad must be morally compromised or they would not be mad (deductive logic again). No distinctions are made in terms of the organic causes or etiology of madness until much later.

Michel Foucault notes that leprosy was not treated the same as madness. Of course, people were afraid of contagion (though they had no idea how leprosy spread). But they did not see lepers as morally compromised. Indeed, apart from fear of contagion, lepers were seen relatively positively: their suffering now would free them to go directly to heaven at death. Churches would have places where lepers could observe services through a window, for example.

According to Foucault, the confinement and punishment of the mad begins just as leprosy was disappearing from Europe, leaving the sanatoriums empty. The lazar houses where lepers had been quarantined soon became lunatic asylums. Since physicians ran the lazar houses, they also inherited the care of lunatics.

Thereafter the loss of reason followed the trends of the medical profession. At first they treated madness as an imbalance of the humours. Melancholia, for example, is an excess of black bile, whereas mania is an excess of blood. When doctors began to be interested in "psychology", treatment of madness moved from physical medicine to psychological medicine. The loss of reason was ascribed to repressed sexual urges or other psychological complexes. Then, as antipsychotic drugs emerged, it was ascribed to chemical imbalance. And so on.

Throughout this period of change, from, say, 1500 to 2000, the definitions of reasoning and rationality hardly changed. Reasoning was an abstract ability possessed only by humans. It had to be exercised consciously. It was completely separate from, and superior to, other types of mental activity, emotions in particular. It was almost synonymous with the use of logic. The rational human being was typified by the objective, emotionless man of science, contrasted with the hedonistic, irrational, emotional peasant.

Friedrich Nietzsche describes two opposing ideals in society: the Apollonian, associated with logic, order, rules, and rule-following; and the Dionysian, associated with emotion, chaos, spontaneity, and creativity. Freud thought he saw similar tendencies fighting for dominance in the psyche of every man. This trope lives on in the pseudo-scientific description of the left-brain and right-brain in what are effectively Apollonian and Dionysian terms.

But we may say that the classical liberals saw themselves as rational. Despite the fact that they wrested power from traditional sources against the tide of conservatism, they invested it in certain rational individuals. And they were terrified of the great unwashed masses who might (and sort of did) do the same to them. Thus we see the double standards of the class system: freedom to the point of hedonism for the elite, combined with strict authoritarian rule and puritanism for the workers.

Thomas Jefferson rails against the institution of slavery throughout his political career, but continues to own hundreds of slaves the whole time because he feels he must take responsibility for them and that they cannot do so for themselves. The liberal elite decide what freedom is and who gets to enjoy it. Liberty for the few and slavery for the rest.

 
Romanticism

There was a significant rebellion against the materialist, rationalist, Apollonian view of humanity that emerged from the Enlightenment and dominated European and colonial circles for a time. It gave rise to the Dionysian movement we call Romanticism. The Romantics turned materialism on its head: they valued emotion over reason, subjectivity over objectivity. And so on. However, materialists and romantics agreed on one thing: the primacy of the individual.

In England, romanticism resulted in an outpouring of emotional poetry from upper-class layabouts high on opium, but it also left a lasting sentimental imprint in attitudes to "nature". In Germany, things took a more philosophical turn, towards forms of Idealism that denied the very reality of the material world and posited that everything was simply one's own subjectivity.

Emerging from this German-speaking milieu was a new theory about madness, in both its florid aspect of what we now call psychopathy (a disease of the psyche) and the more everyday irrationalities we call neurosis (an abnormal condition, -osis, of the nerves, neuro-). The new idea was that our conscious mind was only the tip of the iceberg and that lurking below the surface were many mental processes and "complexes" which could, and did, hijack our will. By far the most influential of these new doctors of the mind was Sigmund Freud.

Freud's theory was that sexual urges were so strong that they governed every aspect of our lives, from birth to death. He was able to reinterpret everything in terms of sexual urges acted on or repressed. In this view, repressed sexual urges simply become acted on unconsciously, causing aberrant behaviour. Freud shared the generally dim view that classical liberals have of humanity:
"Man is revealed as 'a savage beast to whom consideration towards his own kind is something alien.'" — cited in Rifkin, J. (2009) The Empathic Civilisation. Polity Press.
Freud's views on women were even more aggressively regressive than those of his English contemporaries. All these guys were certainly the products of their times, but there's only so much apologising one can do for the stupidity of people who are hailed as the leading intellectuals of their day. Freud was a fucking idiot whose puerile theories should have rung alarm bells for anyone paying the least bit of attention to humanity. But he lived in a time when abstract theories about people thrived, in contradiction to the practice of empiricists observing nature.

Despite the obvious lunacy of his "theories", Freud and his followers became incredibly influential on modern society. The language of psychoanalysis and psychology was co-opted by popular culture so that we now glibly speak of ego, the subconscious, neurosis, Oedipus complexes, and so on. We have no problem imagining emotions having an agency all of their own, so that when repressed they behave like wayward pixies and make us do and say naughty things. 

The focus on subjectivity found a happy home in post-war France, where philosophers also asserted the primacy of subjectivity and began an assault on all expressions of objectivity. This was not in the spirit of a scientific revolution, but more of a tearing down of the idols of the bourgeoisie and a destruction of their authority. French philosophers attacked all forms of authority and all attempts to legitimate it. In some ways we can see this as a libertarian project with echoes of the French Revolution, which saw the aristocracy guillotined in their hundreds. To the extent that it was a reaction to early 20th Century modernism, the new French movement could accurately be called "post-modern", though in my view this is something of a red herring.

Summing up the ever more complex history of ideas across the European and colonial world over a few centuries in such a short essay is quixotic at best. I'm highlighting just a few of the major features on the map and suggesting connections that might not be entirely obvious to all. The result is a sketch of a terrain from which the reader, drawing on their own detailed knowledge of history and philosophy, can imagine the background against which I will now paint a contrasting figure. 


Modern Views on Reason

It has been clear for at least fifty years that this is not how humans make decisions, not how we think, and not how reason works. I've written at length on this subject, drawing on work by Hugo Mercier and Dan Sperber in particular, so I don't want to go over it all again in detail. However, having listened to Antonio Damasio's podcast discussion with Sean Carroll, I may need to modify my presentation of this material; but I want to get this out, so I'll have to review it in the future.

Suffice it to say that all liberals, espousing all forms of liberalism, have been completely wrong about the role that reason plays in our lives. Despite the classical view of reason being untenable, and widely known to be untenable, it is still the dominant view outside certain branches of academia. Economists, journalists, and activists all presume in their theories that humans are rational (and most add that we are self-interested, a stupid claim that I will deal with separately).

What we now know, and seems obvious in retrospect, is that humans are capable of using reason in narrowly defined situations that don't typically include making economic and moral choices. We do not use reasoning to make choices at all; rather, we use reasoning to justify choices in retrospect; i.e., to produce post hoc reasons. We make choices using unconscious processes of inference that in all cases involve felt responses to knowledge that we possess. Emotions play a pivotal role in how we assess the salience of any given fact. So, presented with the same facts, and both agreeing that they are true, two people may come to entirely different decisions based on what they perceive (through felt sensations) as most salient amongst the facts.

The other time we use reasoning is in social situations when we are assessing the ideas of others. When making decisions and presenting options to others in this situation we do not use reason, we use other inferential processes. In this social setting it pays for each proponent of an idea to present the best case possible, meaning that confirmation bias (which is virtually universal in such situations) is a feature, not a bug.  

Michael Taft has quipped that "beliefs are emotions about ideas". And as Cordelia Fine puts it, emotions are physiological arousal combined with emotional thoughts. In other words, what we believe (and most of us believe we are a little more rational than the people around us) is emotional. Not in the Romantic sense, not elevating emotions to revealing the truth better than reason, but simply stating a fact. Emotions colour how we assess the salience of information, which we know from studying people with damage to their ventromedial prefrontal cortex. When we are unable to link information with the feelings that tell us how salient the information is, then we lose the ability to make decisions.


Free Will

In this view, we can see that an individual's decisions are still important. However, it is also clear that contra-causal free will is irrelevant. An individual human cannot be considered apart from their social context, because we are social primates who live a social lifestyle. In fact, isolation can make us mentally and physically ill. 

The question becomes, "Under what conditions do we have conscious choices and what is the extent of our ability to choose?" A social mammal cannot just decide, "Fuck it, I'm not sharing my food with anyone," because that isolates them and they die during the first general food shortage because no one will share with them. In a social group, refusal to share most likely brings immediate repercussions in the form of active punishment from the group. In chimps, for example, groups round on and beat up any member that displays overt selfishness. The selfish individual weakens the group.

As urbanised humans we have a problem in that in our set-up selfish people can rapidly become so rich and powerful that we cannot easily punish them. And enough other selfish individuals are normalising and rewarding this behaviour that our disapproval and anger as a group don't seem to matter much. Classical liberalism was always about preventing the group from punishing individuals who display sociopathic levels of selfishness. By the way, sociopathy is defined as a pervasive and pronounced pattern of disregard for and deliberate violation of the rights of other people (viz slavery, genocide, and expropriation). 

There are many different views on the question of free will in the light of modern science. Many argue that because determinism is seen to apply at some levels of reality, it must apply at all levels. Determinists argue that there can be no free will in any meaningful sense. Morality, in this view, is not even a subject, because no one can be held culpable for actions they did not choose. Determinists frequently cite experiments by Benjamin Libet, as I explained in Freewill is Back on the Menu (11 March 2016); Libet's interpretation of his results was questioned by his colleagues at the time and has been quite thoroughly debunked now. Psychologists don't cite Libet, but many physicists still do - because of confirmation bias.

Others, myself included, hold that while determinism does apply at some levels, it does not apply at all levels, and that this allows for some freedom of will for animals. This is called compatibilism.

There are a dozen more variations on this question, but all of them call into question basic assumptions made by Enlightenment thinkers and particularly by liberals. If we call into question the very notion of freedom, then the ideology that deals with liberty loses all traction. What can liberty mean if no one is truly free? 

In fact, I believe this issue is clouded by confusion surrounding the meaning of free will. Most people seem to take it to be synonymous with contra-causal free will. But we've already ruled out contra-causal free will as a useful idea. No one ever had contra-causal free will.

At the very least, we can say that we experience ourselves making decisions. When called upon we formulate reasons for our actions. People around us hold us accountable for the decisions and the reasons we give for those decisions. But we are social animals whose behaviour is strongly influenced by our social milieu. So is there a better framework to discuss this? I think there is, and it emerges from the work of primatologist Frans de Waal.


The Evolution of Morality

Growing up we absorb a worldview—a complex web of beliefs (i.e., emotions) about the world and people and ourselves. We unconsciously absorb, through empathy, how others feel about the topics they are discussing and also about topics that are taboo. Many of us never question the basic assumptions we make because when we hear statements that agree with our belief we feel good about it and about ourselves. This is how we navigate the moral landscape.

In the language of John Searle, rather than consciously following moral rules, we develop unconscious competencies that guide our actions to be within the rules most of the time. We have agency, but in a prosocial animal it is delimited by what contributes to the survival of the group because that is how social species survive. All social animals have a dual nature as individuals and members of groups. 

We also, mostly unconsciously, modify our behaviour all the time based on ongoing social feedback. As social animals we are attuned, through empathy, to the disposition of other members of our group. And we also keep track of the network reciprocity amongst our group. We know, and love to discuss, who is sleeping with whom, who is in debt, who likes/hates their job, who has kids and what they are doing. This all creates a sense of belonging which is essential to good mental health in social mammals. Of course the modern industrialised world has disrupted this pattern on an unprecedented scale and we're still not sure what the result of that will be. But we have a sneaking suspicion that it is tied to the rise in mental health problems we are seeing across the industrialised world. 

The combination of empathy and reciprocity, which comes from the work on chimps and bonobos by Frans de Waal and his group, gives us the basis for the evolution of morality. The social lifestyle puts us in a situation where we know how other members of our group feel and we know the extent of our interrelationship with them: we know the extent of our obligations. From obligations come the ideas of rights and duties. Thus, morality evolved as a deontological dimension to social life. And from this we can derive notions of virtue; virtue is primarily fulfilling or going beyond the requirements of obligation. Similarly, consequentialist accounts rely on an understanding of the expectations that come with obligation. An outcome is not good if it harms others, but this assumes an obligation not to harm.

This framing of agency and decision making as part of being a social primate embedded in networks of mutual obligation gives us a much better sense of the kinds of decisions we have to make as social primates. Legacy concepts like free will and the classical view of reasoning seem to have little relevance here. We are both individuals and social. Choices are always emotional, always with reference to our milieu. We are not isolated, selfish, or rational. Indeed, "rational" really requires a completely new definition.

As organisms we aim for homeostasis; i.e., to maintain our bodies within the limits that make continued life possible. Societies also have something like homeostasis, a kind of dynamic equilibrium, or set of chaotic oscillations through a range of possibilities consistent with the continued existence of the group. But now we scale the group up to millions of people crammed into tiny spaces. And this defies our evolutionary adaptations, very often leaving us to navigate by our wits rather than relying on our natural sociability.

I want to finish this essay on reason with a word on those who seek to grab our attention and subvert our decision making processes.


Propaganda

Many political activists are still fixated on putting the facts before the people and letting rational self-interest do its work. They haven't realised how humans make decisions. I find it difficult myself. In trying to persuade people that liberalism has run its course and that we need a new socio-political paradigm based on mutual obligations, I'm mainly using facts. Of course I'm trying also to construct a narrative, but it's mainly for other people who do like facts and who might be persuaded by a factual narrative.

We already know that few liberals or neoliberals will be persuaded by the narrative I am relating here.  A proper cult does not crumble at the first hint of criticism and liberalism is a couple of centuries old now. I feel the frustration of this. I feel that I want to break out of the faux formalism of essay writing and get someone excited about a new world through some creative story writing. I write non-fiction because I find it valuable in many ways.

Those who have really internalised the reality that humans are not rational are the modern propaganda industries: journalism, advertising, public relations; spin doctors, speech writers, press secretaries, copy writers, lobbyists, etc. These are the people who know how we really make decisions and how to exploit that for profit or to gain power.

This is why the UK is doing a volte-face on Europe. Through a targeted campaign of disinformation, using millions of profiles illegally obtained from Facebook to create illegally funded attack ads on Facebook, the radical British nationalists hijacked the referendum and then exploited a very narrow majority of voters on the day (actually just a third of the electorate) to force us out of our most important international relationship, with our biggest trading partner and largest single export market, voiding trade deals with every major trading bloc. And all for what? So that a few British sociopaths could tell the rest of us what to do without interference from the sociopaths in the EU.

And even with all of these facts in the public domain, the process carries on with, if anything, even greater momentum. It really is completely mad.

The modern propaganda machine was helped by the sideways shift that psychologists took from psychotherapy to mass manipulation. They were led by Edward Bernays, Sigmund Freud's nephew and student, who used knowledge gained from psychology to orchestrate a campaign of manipulation to break through the American taboo on women smoking in the 1920s. He thus doubled the profits of American tobacco and condemned millions of women to death from cancer, emphysema, and other diseases associated with smoking. 

Why do the activists advocating for action on global warming and mass extinction have such a hard time getting their message across? At least partly because they erroneously believe it is simply a matter of putting the facts before the people and waiting for them to do the rational thing. But this has never worked, because reason does not work like this. We believe in such strategies for purely ideological reasons.

Against us are massed the propaganda corps of a hundred industry groups who employ top psychology and business PhDs to work in think tanks and lobby groups to target law-makers with disinformation.

Because we are working on out-of-date information we are extremely vulnerable to propaganda. Whole generations are now growing up saturated with propaganda.


Conclusion

We know that the classical account of reason is wrong. Evidence has been stacking up on this since the mid-1960s. Reading Mercier and Sperber's The Enigma of Reason profoundly shifted my understanding of reasoning and rationality. But I don't think I've internalised it yet.

The classical account of reason is hard to shift partly because of the ways in which it is wrong. It is persuasive precisely because the false impression it creates is one that we want to believe - we like thinking of ourselves that way. The truth is much less glamorous but, worse, we also have negative narratives about the truth. We feel strongly about reason and the supposed role that reason plays in our lives. And, for many of us, our aspiration to a cool, unemotional rationality still defines our identity. Many people, for example, admire Jordan Peterson because he is never emotional when under attack and he knows how to provoke emotional responses in other people. In the classical paradigm this means he is rational and his emotional opponents are irrational. And because rationality is explicitly linked to morality, he appears to have the moral high ground.

But look at this another way. Someone who is unemotional when attacked is, generally speaking, alienated from their emotions. If you train in martial arts you have to learn to suppress emotions in order to stay focussed and fight. Samurai undertook Zen meditation techniques the better to stay calm in combat; to be more effective killers.

We evolved emotions and the ability to read emotions in others to help us deal with intra-group conflicts. To conceal your emotional state gives you an advantage in a conflict. Being able to easily manipulate other people into expressing emotions makes for a strong contrast. One is saying, "I am in control of myself and that other person is not in control of themselves". The emotional person is under the control of hostile forces.

In the classical view, reasoning and thoughts are voluntary and under our control. We are free to the extent we can suppress our emotions and employ logic. Emotions, by contrast, are also called passions. A passion is something involuntary that overtakes you. Art depicting Jesus being crucified by the Romans is often called "The Passion of Christ". In this view, allowing yourself to be overcome by emotion is a form of weakness. And part of this narrative, of course, is that women, who are freer with their emotional displays precisely because they do not view social interactions as combat, are weak. This is the patriarchal argument that is used to oppress women.

I grew up hating soccer because of the emotional reactions of English players to scoring a goal - they would become visibly elated, hug each other, and run about wildly. In the 1970s, when the game was still played by amateurs, my heroes, the New Zealand rugby team, would never celebrate scoring against the opposition. The goal scorer would simply turn around and quietly walk back to their position, along with teammates. Scoring was a team effort and no individual could or would take credit. Showing off, let alone rubbing the opposing team's face in it, was deeply frowned on. That was my ideal. Soccer players seemed effete and lacked humility or dignity. The British do like to get in your face when they win. 

On the other hand, men's uncontrolled rage, often towards women, is justified as a form of righteousness. As a man, one may not lose control and cry, for example, but one may lose control and punch someone who has offended you. There is a trendy term for this dynamic, but I don't use it, because we have enough problems without the additional stigma of labels. 

Popular culture likes to imagine large external threats, be it aliens, zombies, gangs, or killer bees. And humans usually survive these potential catastrophes by combining our two strengths: individual genius and working together as a team. In the movies, someone figures out how to survive the crisis, they are charismatic enough to convince everyone to try it their way (perhaps after token resistance), and then everyone works together to implement the plan that liberates us from the threat. 

There is a reason for this trope. As smart social primates, this is how we survive: full stop. The smart ones amongst us come up with clever plans. The persuasive ones get everyone on board and organised. But then everyone pulls their weight. Except that in wild primates, the greater one's capacity as a leader, the more obligation one carries to the members who are led. 

However we came to the classical account of reason (and I suspect nefarious intent), we now know that it is wrong. A central pillar of liberalism is rotten and has to be replaced. Liberalism will have to change as a result. Liberty is certainly an admirable goal, but it has been used to avoid obligations and responsibilities. For example, the narrative of liberty has been used to continue to pollute our air, water, and land, because environmental legislation has been treated as an unjustified infringement on the free enterprise system. And yet, clearly, to poison the air I breathe or the water I drink is to deprive me of liberty.

The advisory body Public Health England told me in an email that they estimate between 28,000 and 36,000 deaths each year can be attributed to air pollution. Try to imagine a group of insurgents going around shooting 30,000 people per year, and what the government response would be. In 2018, 272 people were killed by assailants wielding knives and there is an ongoing public outcry. But 30,000 deaths from air pollution hardly raise an eyebrow. This has to change, too.

Humans are not rational. We are so not rational. And this has nothing to do with making good or bad decisions (or how we define good and bad). We all need to take this on board and start rethinking morality, society, politics, economics, and pretty much everything else. 


~~oOo~~






25 August 2017

Rationality

There's been quite a lot of talk of "meta-rationality" lately amongst the blogs I read. It is ironic that this emerging trend comes at a time when the very idea of rationality is being challenged from beneath. Mercier and Sperber, for example, tell us that empirical evidence suggests that reasoning is "a form of intuitive [i.e., unconscious] inference" (2017: 90); and that reasoning about reasoning (meta-rationality) is mainly about rationalising such inferences and our actions based on them. If this is true, and traditional ways of thinking about reasoning are inaccurate, then we all have a period of readjustment ahead.

It seems that we don't understand rationality or reasoning. My own head is shaking as I write this. Can it be accurate? It is profoundly counter-intuitive. Sure, we all know that some people are less than fully rational. Just look at how nation states are run. Nevertheless, it comes as a shock to realise that I don't understand reasoning. After all, I write non-fiction. All of my hundreds of essays are the product of reasoning. Aren't they? Well, maybe. In this essay, I'm going to continue my desultory discussion of reason by outlining a result from experimental psychology from the year I was born, 1966. In their recent book, The Enigma of Reason, Mercier & Sperber (2017) describe this experiment and some of the refinements since proposed.

But first a quick lesson in Aristotelian inferential logic. I know, right? You're turned off and about to click on something else. But please do bear with me. I'm introducing this because, unless you understand the logic involved in the problem, you won't get the full blast of the 50-year-old insight that follows. Please persevere and I think you'll agree at the end that it's worth it.


~Logic~

For our purposes, we need to consider a conditional syllogism. Schematically it takes the form:

If P, then Q.

Say we posit: if a town has a police station (P), then it also has a courthouse (Q). There are two possible states for each proposition. A town has a police station (P); it does not have a police station (not P or ¬P); it has a courthouse (Q); it does not have a courthouse (¬Q). What we are concerned with here is what we can infer from each of these four possibilities, given the rule: If P, then Q.

The syllogism—If P, then Q—in this case tells us that it is always the case that if a town has a police station, then it also has a courthouse. If I now tell you that the town of Wallop, in Hampshire, has a police station, you can infer from the rule that Wallop must also have a courthouse. This is a valid inference of the type traditionally called modus ponens. Schematically:

If P, then Q.
P, therefore Q. ✓

What if I tell you that Wallop does not have a police station? What can you infer from ¬P? You might be tempted to say that Wallop has no courthouse. But this would be a fallacy (called denial of the antecedent). It does not follow from the rule that if a town does not have a police station, then it also doesn't have a courthouse. It is entirely possible under the given rule that a town has a courthouse but no police station.

If P, then Q.
¬P, therefore ¬Q. ✕

What if we have information about the courthouse and want to infer something about the police station? What can we infer if Wallop has a courthouse (Q)? It turns out that we cannot infer anything. Trying to infer the first part of the syllogism from the presence of the second leads to false conclusions (affirmation of the consequent).


If P, then Q.
Q, therefore P. ✕

But we can make a valid inference if we know that Wallop has no courthouse (¬Q). If there is no courthouse and our rule is always true, then we can infer that there is no police station in Wallop. And this valid inference is of the type traditionally called modus tollens.

If P, then Q.
¬Q, therefore ¬P. ✓

So, given the rule and information about one of the two propositions P and Q, we can make inferences about the other. But only in two cases can we make valid inferences, P and ¬Q.

rule            given    inference    validity
If P, then Q.   P        Q            ✓
If P, then Q.   ¬P       ¬Q           ✕
If P, then Q.   Q        P            ✕
If P, then Q.   ¬Q       ¬P           ✓


Of course, there are other, even less logical, inferences one could make, but these are the ones that Aristotle deemed sensible enough to include in his work on logic. This is the logic that we need to understand. And the experimental task, proposed by Peter Wason in 1966, tested the ability of people to use this kind of reasoning.
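
Since this is easier to verify mechanically than to eyeball, here is a minimal sketch in Python (my own illustration, not anything from Aristotle or from Mercier and Sperber) that brute-forces the four patterns over every truth assignment. A pattern is valid just in case the conclusion holds in every world where both the rule and the given premise hold.

from itertools import product

def implies(a, b):
    # Material conditional: "if a then b" is false only when a is true and b is false.
    return (not a) or b

# The four patterns from the table above: (description, given premise, conclusion).
patterns = [
    ("P, therefore Q (modus ponens)", lambda p, q: p, lambda p, q: q),
    ("¬P, therefore ¬Q (denying the antecedent)", lambda p, q: not p, lambda p, q: not q),
    ("Q, therefore P (affirming the consequent)", lambda p, q: q, lambda p, q: p),
    ("¬Q, therefore ¬P (modus tollens)", lambda p, q: not q, lambda p, q: not p),
]

for name, given, conclusion in patterns:
    # Check the conclusion in every world where the rule and the premise both hold.
    valid = all(conclusion(p, q)
                for p, q in product([True, False], repeat=2)
                if implies(p, q) and given(p, q))
    print(name, "->", "valid" if valid else "invalid")

Running this prints "valid" only for modus ponens and modus tollens, exactly matching the table.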


~Wason Selection Task~

You are presented with four cards, each with a letter printed on one side and a number printed on the other. The visible faces show E, K, 2, and 7.

The rule is: If a card has E on one side, it has 2 on the other.
The question is: which cards must be turned over to test the rule, i.e., to determine if the cards follow the rule. You have as much time as you wish.
~o~

Wason and his collaborators got a shock in 1966 because only 10% of their participants chose the right answer. Having prided ourselves on our rationality for millennia (in Europe, anyway), the expectation was that most people would find this exercise in reasoning relatively simple. This startling result led Wason and subsequent investigators to pose many variations on this test, almost always with similar results.

Intrigued, they began to ask people how confident they were in their method before giving their solution. Despite the fact that 90% would choose the wrong answer, 80% of participants were 100% sure they had the right answer! So it was not that the participants were hesitant or tentative. On the contrary, they were extremely confident in their method, whatever it was.

The people taking part were not stupid or uneducated. Most of them were psychology undergraduates. The result is slightly worse than one would expect from random guessing, which suggests that something was systematically going wrong.

The breakthrough came more than a decade later when, in 1979, Jonathan Evans came up with a variation in which the rule was: if a card has E on one side, it does not have 2 on the other. In this case, the proportions of right and wrong answers dramatically switched around, with 90% getting it right. Does this mean that we reason better negatively?
"This shows, Evans argued, that people's answers to the Wason task are based not on logical reasoning but on intuitions of relevance." (Mercier & Sperber 2017: 43. Emphasis added)
What Evans found was that people turn over the cards named in the rule. This is not reasoning; but since it is predicated on an unconscious evaluation of the information, it is not quite a guess, either. That is why the success rate is worse than random guessing.

Which cards did you turn over? As with the conditional syllogism, there are only two valid inferences to be made here: Turn over the E card. If it has a 2 on the other side, the rule is true for this card (but may not be true for others); if it does not have a 2, the rule is falsified. The other card to turn over is the one with a seven on it. If it has E on the other side, the rule is falsified; if it does not have an E, the rule may still be true.

Turning over the K tells us nothing relevant to the rule. Turning over the 2 is a little more complex, but ultimately futile. If we find an E on the other side of the 2 we may think it validates the rule. However, the rule does not forbid a card with a 2 on one side from having any letter on the other, E or otherwise. So turning over the 2 does not give us any valid inferences, either.

Therefore, it is only by turning over the E and 7 cards that we can make valid inferences about the rule. And, short of gaining access to all possible cards, the best we can do is falsify the rule. Note that the cards are presented in the same order as I used in explaining the logic. E = P, K = ¬P, 2 = Q, and 7 = ¬Q.
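
For readers who like to see the logic run, here is a minimal sketch in Python (again my own illustration, not anything from Wason's or Evans' papers) that brute-forces the task: for each visible face it asks whether any possible hidden face could falsify the rule, since only such cards are worth turning over. It also runs Evans' negated rule, which shows why the matching strategy suddenly "works" there.

# Each card has a letter on one side and a number on the other.
# Visible faces: E, K, 2, 7. The hidden faces are drawn from small
# hypothetical alphabets, which is all the logic requires.
letters = ["E", "K"]
numbers = ["2", "7"]

def violates(letter, number, negated=False):
    # Original rule: if E on one side, then 2 on the other.
    # Evans' rule (negated=True): if E on one side, then NOT 2 on the other.
    if negated:
        return letter == "E" and number == "2"
    return letter == "E" and number != "2"

for negated, label in [(False, "original rule"), (True, "Evans' negated rule")]:
    worth_turning = []
    for visible in letters + numbers:
        hidden = numbers if visible in letters else letters
        # A card is informative only if some possible hidden face breaks the rule.
        if any(violates(visible, h, negated) if visible in letters
               else violates(h, visible, negated)
               for h in hidden):
            worth_turning.append(visible)
    print(label + ": turn over", worth_turning)

This prints E and 7 for the original rule, and E and 2 for Evans' negated rule. Under the negated rule, the logically informative cards happen to be exactly the ones named in the rule, so intuitions of relevance and logic coincide, and the success rate leaps.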

Did you get the right answer? Did you consciously work through the logic or respond to an intuition? Did you make the connection with the explanation of the conditional syllogism that preceded it?

I confess that I did not get the right answer, and I had read a more elaborate explanation of the conditional logic involved. I did not work through the logic, but chose the cards named in the rule. 

The result has been tested in many different circumstances and variations and seems to be general. Humans, in general, don't use reasoning to solve logic problems, unless they have specific training. Even with specific training, people still get it wrong. Indeed, even though I explained the formal logic of the puzzle immediately beforehand, the majority of readers would have ignored this and chosen to turn over the E and 2 cards, because they used their intuition instead of logic to infer the answer.


~Reasons~

In a recent post (Reasoning, Reasons, and Culpability, 20 Jul 2017) I explored some of the consequences of this result. Mercier and Sperber go from Wason into a consideration of unconscious processing of information. They discuss and ultimately reject Kahneman's so-called dual process models of thinking (with two systems, one fast and one slow). There is only one process, Mercier and Sperber argue, and it is unconscious. All of our decisions are made this way. When required, they argue, we produce conscious reasons after the fact (post hoc). The reason we are slow at producing reasons is that they don't exist before we are asked for them (or ask ourselves - which is something Mercier and Sperber don't talk about much). It takes time to make up plausible sounding reasons; we have to go through the process of asking, given what we know about ourselves, what a plausible reason might be. And because of cognitive bias, we settle for the first plausible explanation we come up with. Then, as far as we are concerned, that is the reason.

It's no wonder there was scope for Dr Freud to come along and point out that people's stated motives were very often not the motives that one could deduce from detailed observation of the person (particularly paying attention to moments when the unconscious mind seemed to reveal itself). 

This does not discount the fact that we have two brain regions that process incoming information. It is most apparent in situations that scare us. For example, an unidentified sound will trigger the amygdala to create a cascade of activation across the sympathetic nervous system. Within moments our heart rate is elevated, our breathing shallow and rapid, and our muscles flooded with blood. We are ready for action. The same signal reaches the prefrontal cortex more slowly. The sound is identified in the aural processing area, then fed to the prefrontal cortex, which is able to override the excitation of the amygdala.

A classic example is walking beside a road with traffic speeding past. Large, rapidly moving objects ought to frighten us because we evolved to escape from marauding beasts. Not just predators either, since animals like elephants or rhinos can be extremely dangerous. But our prefrontal cortex has established that cars almost always stay on the road and follow predictable trajectories. Much more alertness is required when crossing the road. I suspect that the failure to switch on that alertness after suppressing it might be responsible for many pedestrian accidents. Certainly, where I live, pedestrians commonly step out into the road without looking.

It is not that the amygdala is "emotional" and the prefrontal cortex is "rational". Both parts of the brain are processing sense data, but one is getting it raw and setting off reactions that involve alertness and readiness, while the other is getting it with an overlay of identification and recognition and either signalling to turn up the alertness or to turn it down. And this does not happen in isolation, but is part of a complex system by which we respond to the world. The internal physical sensations associated with these systems, combined with our thoughts, both conscious and unconscious, about the situation are our emotions. We've made thought and emotion into two separate categories and divided up our responses to the world into one or the other, but in fact, the two are always co-existent.

Just because we have these categories, does not mean they are natural or reflect reality. For example, I have written about the fact that ancient Buddhist texts did not have a category like "emotion". They had loads of words for emotions, but lumped all this together with mental activity (Emotions in Buddhism. 04 November 2011). Similarly, ancient Buddhist texts did not see the mind as a theatre of experience or have any analogue of the MIND IS A CONTAINER metaphor (27 July 2012). The ways we think about the mind are not categories imposed on us by nature, but the opposite, categories that we have imposed on experience. 

Emotion is almost entirely missing from Mercier and Sperber's book. While I can follow their argument, and find it compelling in many ways, I think their thesis is flawed for leaving emotion out of the account of reason. In what I consider to be one of my key essays, Facts and Feelings, composed in 2012, I drew on work by Antonio Damasio to make a case for how emotions are involved in decision making. Specifically, emotions encode the value of information over and above how accurate we consider it.

We know this because when the connection between the prefrontal cortex and the amygdala is disrupted, by brain damage, for example, it can disrupt the ability to make decisions. In the famous case of Phineas Gage, his brain was damaged by a tamping iron being driven through his cheek and out the top of his head. He lived and recovered, but he began to make poor decisions in social situations. In other cases, recounted by Damasio (and others), people with damage to the ventromedial prefrontal cortex lose the ability to assess alternatives like where to go for dinner, or what day they would like their doctor's appointment on. The specifics of this disruption suggest that we weigh up information and make decisions based on how we feel about the information.

Take also the case of Capgras Syndrome. In this case, the patient will recognise a loved one, but not feel the emotional response that normally goes with such recognition. To account for this discrepancy they confabulate accounts in which the loved one has been replaced by a replica, often involving some sort of conspiracy (a theme which has become all too common in speculative fiction). Emotions are what tell us how important things are to us and, indeed, in what way they are important. We can feel attracted to or repelled by the stimulus; the warm feeling when we see a loved one, the cold one when we see an enemy. We also have expectations and anticipations based on previous experience (fear, anxiety, excitement, and so on).

Mercier and Sperber acknowledge that there is an unconscious inferential process, but never delve into how it might work. But we know from Damasio and others that it involves emotions. Now, it seems that this process is entirely, or mostly, unconscious and that when reasons are required, we construct them as explanations to ourselves and others for something that has already occurred.

Sometimes we talk about making unemotional decisions, or associate rationality with the absence of emotion. But we need to be clear on this: without emotions, we cannot make decisions. Rationality is not possible without emotions to tell us how important things are, where "things" are people, objects, places, etc. 

In their earlier work of 2011 (see An Argumentative Theory of Reason), Mercier and Sperber argued that we use reasoning to win arguments. They noted the poor performance on a test of reasoning like the Wason task and added the prevalence of confirmation bias. They argued that this could be best understood in terms of decision-making in small groups (which is, after all, the natural context for a human being). As an issue comes up, each contributor makes the best case they can, citing all the supporting evidence and arguments. Here, confirmation bias is a feature, not a bug. However, those listening to the proposals are much better at evaluating arguments and do not fall into confirmation bias. Thus, Mercier and Sperber concluded, humans only employ reasoning to decide issues when there is an argument.

The new book expands on this idea, but takes a much broader view. However, I want to come back and emphasise this point about groups. All too often, philosophers are trapped in solipsism. They try to account for the world as though individuals cannot compare notes, as though everything can and should be understood from the point of view of an isolated individual. So, existing theories of rationality all assume that a person reasons in isolation. But I'm going to put my foot down here and insist that humans never do anything in isolation. Even hermits have a notional relation to their community - they are defined by their refusal of society. We are social primates. Under natural conditions, we do everything together. Of course, for 12,000 years or so, an increasing number of us have been living in unnatural conditions that have warped our sensibilities, but even so, we need to acknowledge the social nature of humanity. All individual psychology is bunk. There is only social psychology. All solipsistic philosophy is bunk. People only reason in groups. The Wason task shows that on our own we don't reason at all, but rely on unconscious inferences. But these unconscious (dare I say instinctual) processes did not evolve for city slickers. They evolved for hunter-gatherers.

It feels to me like we are in a transitional period in which old paradigms of thinking about ourselves, about our minds, are falling away to be replaced by emerging, empirically based paradigms that are still taking shape. What words like "thought", "emotion", "consciousness", and "reasoning" mean is in flux. Which means that we live in interesting times. It's possible that a generation from now, our view of mind, at least amongst intellectuals, is going to be very different.

~~oOo~~



Bibliography

Mercier, Hugo & Sperber, Dan. (2011) 'Why Do Humans Reason? Arguments for an Argumentative Theory.' Behavioral and Brain Sciences. 34: 57-111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.

Mercier, Hugo & Sperber, Dan. (2017) The Enigma of Reason: A New Theory of Human Understanding. Allen Lane.

See also my essay: Reasoning and Beliefs (10 January 2014)





20 July 2017

Reasoning, Reasons, and Culpability.

My worldview has undergone a few changes over the years. Not just because of religious conversion or obvious things like that. It has usually been a book that has shifted my perspective in an unexpected direction. Take, for example, Mercier and Sperber's book The Enigma of Reason: A New Theory of Human Understanding.

We all just assume that actions are explained by reasons. If actions are baffling then we seek out reasons to explain them. What is the reason that someone acted the way they did? Given a reason, we think the action has been explained. But has it? How?

Furthermore, when discussing someone's actions we assume that particular kinds of internal motivations are sufficient to explain the actions. We almost never consider external factors, like, say, peer pressure. It's not that we're not aware of peer pressure, but that we don't see it as a reason.

So, if person P does action A, we expect to find a simple equation: P did A for reason R. R is likely to be expressed as a desire to bring about some kind of goal G, call this R(G). So the calculus of our lives is something like this:

P did A for R(G)

But this is not how reasoning works and it is not how people decide to do things. Most decisions, even the ones that feel conscious are, in fact, unconscious. The decision-making machinery is emotional and operates below our conscious radar - the result that pops into consciousness is preprocessed and preformed. Essentially, it is what feels right, on an unconscious level.

Having decided, we may either just do it with a conscious sense of it feeling right (so-called "feeling types") and only produce reasons after the fact (post hoc) when asked; or we may first seek a reason (so-called "thinking types") and then act. Both kinds of reasons are post hoc - the decision to act comes first, then we come up with reasons to support that decision. The number of times that someone asks "why did you do that?" and you come up with nothing is a sign of this.

The most extreme examples of this occur in people with no memories due to brain damage. Oliver Sacks described the case of a man who, when asked "What are you doing here?", never knew, because he could not remember. But the part of his brain that still worked would conjure up a likely reason and, since it fit the criteria of a reason, that's what he would say. But he would not remember saying it and, asked again, might come up with another equally plausible answer. He was only ever accurate by accident. He was not consciously lying but, not understanding the deficit caused by his injury, was speaking whatever popped into his head.

We are very far from assiduous in generating and selecting reasons. For a start, we all suffer from confirmation bias. We typically only look for reasons to support and justify our decision. Ethics is partly about realising that our actions are not always justified and admitting that. Not only this, but we are also lazy. Once we come up with one reason that fits our criteria, we just stop looking. We typically take the first reason, not the best one, then, having settled on it, will defend it as the best reason.
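To make this lazy, satisficing search concrete, here is a minimal sketch in Python. It is purely illustrative and not Mercier and Sperber's actual model; every name and datum in it is hypothetical. The point it encodes is that the search stops at the first reason that fits our criteria, not the best one.

    # Minimal sketch (illustrative only; all names are hypothetical).
    # Reasons are generated after the decision; the search is lazy and
    # stops at the first candidate that fits our criteria.

    def first_acceptable_reason(candidates, acceptable):
        """Return the first reason that fits the criteria, or None."""
        for reason in candidates:
            if acceptable(reason):
                return reason  # stop looking once one reason fits
        return None

    # External factors like peer pressure don't count as reasons for us,
    # so the criteria screen them out before anything else happens.
    def acceptable(reason):
        return reason not in {"peer pressure", "habit"}

    decision = "bought the expensive phone"  # already made, unconsciously
    candidates = ["habit", "it was a bargain", "I compared every model"]
    print(decision, "because:", first_acceptable_reason(candidates, acceptable))
    # -> prints "it was a bargain": the first acceptable reason, not the best

Note that the order in which candidate reasons come to mind, not their quality, determines the answer; having settled on one, we then defend it as if it were the best.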

Of course, we can train ourselves to overcome the cognitive biases, but most of us are still bought into the paradigm of P did A for R(G). It's transparent. We don't see it. I know about it and I still don't usually see it. It's only when I'm being deliberately analytical that I can retrospectively see the nature of my reasoning. And it is not what we have taken it to be all these centuries. 

I've never been very convinced by so-called post-modernism. Post-modernists make the mistake that I would now call an ontological fallacy: they mistake experience for reality. But the mistake is so common amongst intellectuals that they can hardly be singled out. This idea about reasoning might well be the kind of epistemic break that would really constitute our either leaving modernity behind or, more likely, finally becoming truly modern. The idea that modernity represents a break with medieval superstition is also clearly not quite right, because our reasons are no better than superstition in most cases. 

And, of course, some of us are able to see more complex networks of cause and effect. We see political complexities, or sociological complexities, for example. These produce more sophisticated reasons, but even these tend to get boiled down into generalisations or interpreted from ideological points of view. And ideologies make sense to people because of reasons.

The whole 2010 UK general election was fought on the basis of a single idea: Labour borrowed too much money. This misrepresented the situation in a dozen different ways, but because it offered a reason for the UK's disastrous economic crash of 2008, and because Labour could not offer a similarly simple reason of their own, it won the day. A lot of the political right appears to be convinced that this explains everything. The whole world had the same economic problems, and economies are incredibly complex, but it all supposedly boils down to "Labour borrowed too much money". And this—this simplistic, fake fact—is widely considered to be plausible. The UK is leaving the EU for reasons. And so on. 

But here's the thing. Reasons, on the whole, do not explain behaviour. They are just post hoc rationalisations of decisions made unconsciously on the basis of the value we give to experiences and memories, which are encoded as emotions. The reasons you give for your own actions, let alone the reasons you give for mine, do not explain anything. And as I have said, we simply ignore some of the more obvious reasons that any social primate does what it does (because of social norms). It's not a matter of deliberate deception. After all, we all believe that the reasons we give sufficiently explain our actions and that we can accurately gauge the kinds of reasons that are applicable (and we believe this for reasons). The problem is more that we don't understand reasons or reasoning.

How does this affect the issue of culpability? 

Any student of Shakespeare will be familiar with the problem of people being puzzled by their own actions. Shakespeare might have been the first depth psychologist. But if we are discussing the issue of culpability, then things get really difficult. One could write a book on the actions for which Hamlet might be culpable and to what degree (probably someone has!). 

The whole notion of culpability has taken a beating, lately. Advocates for the non-existence of contra-causal freewill are persuasive because metaphysical reductionism is a mainstream paradigm of reasoning. One hopes that the flaws in such arguments will eventually be exposed—contra-causal freewill isn't relevant or interesting; structure is real; reductionism is less than half the story of reality; etc.—but until they are, discussions of culpability are likely to remain confused. 

Mercier and Sperber's argument about the nature of, and the relationship between, reasoning and reasons is a deeper challenge. We now know that even a sincere answer to the question "Why did you do that?" is a post hoc rationalisation, and that it is not sufficient to explain the action; yet very few of us are aware of this. Clearly, our will is always involved in deliberate actions, but we ourselves may not understand the direction our will takes. We generate reasons on demand because society has taught us to do so... for reasons. But at root, most of us are mystified by our own actions most of the time. 

Legal courts still represent a pragmatic approach to culpability. Did P factually do A? Yes or no? If yes, then punish P in the way mandated by the legislative branch of government. As readers may know, George Lakoff has analysed this dynamic in terms of metaphors involving debts and bookkeeping. If action A incurs a debt to society, then P is expected to repay it. We still largely operate on the basis that the best way to repay a social debt is to suffer pain, but we have created "more humane" ways to make people suffer that are, on the whole, less gross but also more drawn out than physical punishment. Indeed, we consider inflicting physical harm to be barbaric. And why? Oh, you know, for reasons.

If you're going to make someone suffer, it's better to inflict psychological suffering on them―through extended social isolation, for example, or enforced cohabitation with unsavoury strangers―than to inflict physical harm. Because of reasons. If my choice were between years of incarceration with criminals and being beaten senseless one time, I might well opt for the latter (well, I wouldn't, but some might). Quite a lot of people are beaten and raped in prison anyway, and a majority are psychologically damaged by the experience, so a one-off payment in suffering might make more sense. It's more economical. Just because you are squeamish about beating me, but not about psychologically torturing me by imprisoning me, doesn't make your squeamishness more ethical. You are still seeking to inflict harm on me in the belief that it will balance out my culpability for acting against the laws of society... for reasons.

Then again, if I am an Afghan, fighting for my homeland against a foreign invader, you might just choose to drop a bomb on me from 40,000 ft, killing me and my entire family, because of reasons.

What happens to justice when reasons are exposed as fraudulent? And they may as well be fraudulent because they're only relevant by accident. We see this happening all the time. The UK no longer has the death penalty; not because British people don't like killing (Britain has been almost constantly fighting wars it has initiated or encouraged for 1000 years!). Rather, we realised that we killed a few too many falsely convicted innocents. That means we have created a debt for which we ought to suffer. D'oh! 

We're for or against capital punishment for reasons. We vote left or right for reasons. We are for or against, this or that for reasons. We love, marry, fight, work, take on religious views and practices, choose our haircut, our friends, etc... for reasons. Good reasons! Sound reasons. Thought out reasons. Wait! We can explain. And you have to take our reasons seriously, because of... other reasons. Don't you see? It all makes sense... doesn't it? 

In other words, our whole lives are based on post hoc rationalisations of decisions we do not understand and cannot explain, but which we are convinced that we do understand and can explain. Not to put too fine a point on it, it's fucked up.

So, how confident should anyone be about their reasons? 

We so often seem very confident indeed (because of reasons), but if there is one other rational person who disagrees with us, then we ought to be at best 50% certain. If it's just a matter of reasons... then 50% seems optimistic, because chances are that neither party has any real idea of why they believe what they do. On most social matters one can usually find a dozen rational opinions based on reasons, and we believe our own reasons (for reasons), or we are persuaded of a different view for other reasons.
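To make the arithmetic explicit, here is a rough formalisation under the (contestable) equal-weight assumption, i.e., that equally rational, disagreeing parties deserve equal credence:

    % Rough upper bound on confidence, assuming equal weight for
    % equally rational but mutually incompatible views:
    \[
      P(\text{my view is right}) \le \frac{1}{n}
    \]

where n is the number of mutually incompatible views held by rational people. One disagreeing peer gives n = 2, hence at most 50%; a dozen rational opinions give n = 12, hence at most about 8%. And even that bound assumes we know why we believe what we believe, which, for the reasons given above, we mostly don't.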

What does any of this amount to?

And more to the point, how can we tell what is of value, if reasons are not a reliable guide?

I think Frans de Waal has got the right idea (for reasons). Ethics (i.e., social values) are based on empathy and reciprocity, capacities we and all social mammals evolved in order to make living in big groups possible and tolerable. It all builds from there. Other rational opinions are available, but for reasons, I like this one. I still have no idea what gives something an aesthetic value, but I do believe (for other reasons) that we experience that value as an emotional response. Again, other rational opinions are available.

I cannot help but think that my view, cobbled together from other people's views, makes more sense than any other view I've come across. But then, everyone thinks this already. So then the question is, how do some opinions become popular? And I think Malcolm Gladwell has some interesting things to say on that matter in The Tipping Point. In his terms, I'm a "maven", but not a persuader or connector. 


~~oOo~~

17 February 2017

Experience and Reality

"Our relation to the world is not that of a thinker to an object of thought"
—Maurice Merleau-Ponty, The Primacy of Perception and Its Philosophical Consequences.

Introduction

In this essay and some to follow, I want to look at an error that many philosophers and most meditators seem to make: the confusion of epistemology and ontology; i.e., the mixing up of experience and reality. This essay will outline and give examples of a specific version of this confusion in the form of the mind projection fallacy.

I agree with those intellectuals who think that we never experience reality directly. This is where I part ways with John Searle who, for reasons I cannot fathom, advocates naïve realism, the view that reality is exactly as we experience it. On the other hand, I also disagree with Bryan Magee's view that reality is utterly different from what we experience and that we can never get accurate and precise knowledge about it. He takes this view to be a consequence of transcendental idealism, but I think it's a form of naïve idealism.

The knowledge we get via inference is not complete, but we can, and do, infer accurate and precise information about objects. This makes a mind-independent reality seem entirely plausible and far more probable than any of the alternatives. So, we are in a situation somewhere between naïve realism and naïve idealism. 

This distinction between a mind-independent reality and the mind is not ontological, but epistemological. The set of reality includes all minds. However, the universe would exist even if there were no beings to witness it. The universe is not dependent on having conscious observers. So by "reality" I just mean the universe generally; i.e., the universe made up from real matter-energy fields arranged into real structures that have emergent properties, one of which is conscious states. And by "mind" I specifically mean the series of conscious states that inform human beings about the universe. 

What I don't mean is reality in the abstract. I'm deeply suspicious of abstractions at present. For the same reason, I avoid talking about conscious states in the abstract as "consciousness". Things can be real without there necessarily being an abstract reality. Reality is the set of all those things to which the adjective "real" applies. Things are real if they exist and have causal potential. Members of this set may have no other attributes in common. Unfortunately, an abstract conception of reality encourages us to speculate about the "nature of reality", as though reality were something more than a collection of real things, more than an abstraction. Being real is not magical or mystical.

I'm not making an ontological distinction between mental and physical phenomena. I think an epistemological distinction can be made because, clearly, our experience of our own minds has a different perspective to our experience of objects external to our body, but in the universe there are just phenomena. This is a distinct position from materialism, which privileges the material over the mental. What I'm saying is that what we perceive as "material" and "mental" are not different at the level of being.  

When we play the game of metaphysics and make statements about reality, they arise from inferences about experience. There are three main approaches to this process:
  • we begin with givens and use deduction to infer valid conclusions.
  • we begin with known examples and use induction to infer valid generalisations.
  • we begin with observations and use abduction to infer valid explanations.
We can and do make valid inferences about the universe from experience. The problem has always been that we make many invalid inferences as well. And we cannot always reliably tell valid from invalid.
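To keep the three patterns distinct, here is a toy sketch in Python; the premises and data are invented purely for illustration:

    # Toy illustration of the three inference patterns (invented data).

    # Deduction: from given premises to a conclusion they guarantee.
    givens = ["all humans drown if held underwater", "P is human"]
    conclusion = "P drowns if held underwater"  # valid only if givens are true

    # Induction: from observed cases to a generalisation they suggest.
    observations = ["the sun rose on day %d" % d for d in range(1, 13)]
    generalisation = "the sun rises every day"  # plausible, never guaranteed

    # Abduction: from an observation to the best available explanation.
    observation = "the grass is wet this morning"
    explanations = ["it rained overnight", "a water pipe burst"]
    best_explanation = explanations[0]  # most likely candidate, still fallible

    for mode, result in [("deduction", conclusion),
                         ("induction", generalisation),
                         ("abduction", best_explanation)]:
        print(mode, "->", result)

In each case the inference can be formally impeccable and still mislead us, because the premises, the sample, or the pool of candidate explanations all come from interpreted experience.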

For example, we know that if you submerge a person in water they will drown. That tells us something about reality. However, for quite a long time, Europeans believed that certain women were in league with the devil. They believed that witches could not be drowned. So they drowned a lot of women to prove they were not witches; and burned the ones who didn't drown. The central problem here is that witches, as understood by the witch-hunters, did not exist. The actions of some women were interpreted through an hysterical combination of fear of evil and fear of women, and from this witches were inferred to be real. It was a repulsive and horrifying period of our history in which reasoning went awry. But it was reasoning. And it was hardly an isolated incident. Reasoning very often goes wrong. Still. And that ought to make us very much more cautious about reasoning than most of us are.

One of the attractions of the European Enlightenment is that it promised that reason would free us from the oppression of superstition. This has happened to some extent, but superstition is still widespread. Confusions about how reason actually works are only now being unravelled, which means that the early claims of the Enlightenment were vastly overblown. If our views about the universe are formed by reasoning, then we have to assume that we're wrong most of the time, unless we have thoroughly reviewed both our views and our methods, and compared notes with others in an atmosphere of critical thinking, which combines competition and cooperation. The latter is science at its best, though admittedly scientists are not always at their best. 

Into this mix comes Buddhism with its largely medieval worldview, modified by strands of modernism. Buddhists often claim to understand the "true nature of reality"; aka The Absolute, The Transcendental, The Dhamma-niyāma, śūnyatā, tathatā, pāramārthasatya, prajñāpāramitā, nirvāṇa, vimokṣa, and so on. Reality always seems to boil down to a one-word answer. And this insight into "reality" is realised by sitting still with one's eyes closed and withdrawing attention from the sensorium in order to experience nothing. Or by imagining that one is a supernatural being in the form of an Indian princess, or a tame demon, or an idealised Buddhist monk, etc. Or by any number of other approaches, which have in common an attempt to develop a kind of meta-awareness of our experience: to experience ourselves experiencing.

It's very common to interpret experience incorrectly, as we know from the lists of identified cognitive biases and logical fallacies, each of which runs to over one hundred items. From these many problems I want to highlight one. When we make inferences about reality, we are biased towards seeing our conclusions, generalisations, and explanations as valid, and towards believing that our interpretation is the only valid interpretation. This is the mind projection fallacy.


The Sunset Illusion

An excellent illustrative example of the mind projection fallacy is the sunset. If I stand on a hill and watch the sunset, it seems to me that the hill and I are fixed in place and the sun is moving relative to me and the hill. Hence, we say "the sun is setting". In fact, we've known for centuries that the sun is not moving relative to the earth; instead, the hill and I are rotating about an axis that passes through the centre of the earth. So why do we persist in talking about sunsets?

The problem is that I have internal sensors that tell me when I'm experiencing acceleration: proprioception (sensing muscle/tendon tension), kinaesthesia (sensing joint motion and acceleration), and the inner ear's vestibular system (orientation to gravity and acceleration). I can also use my visual sense to detect whether I am in motion relative to nearby objects. A secondary way of detecting acceleration is the sloshing around of our viscera, creating pressure against the inside of our body.

My brain integrates all this information to give me accurate and precise knowledge about whether my body is in motion. And standing on a hill, watching a sunset, my body is informing me, quite unequivocally, that I am at rest.

I'm actually spinning around the earth's axis of rotation at ca. 1600 km/h, or about 460 m/s. That's about Mach 1.4! And because velocity is a vector (it has both magnitude and direction), moving in a circle at a uniform speed is acceleration, because one is constantly changing direction. So why does it not register on our senses? After all, being on a roundabout rapidly makes me dizzy and ill; a high-speed turn in a vehicle throws me against the door. It turns out that the acceleration due to going moderately fast in a very large circle is tiny. So small that it doesn't register on any of our onboard motion sensors. The spinning motion does register in the atmosphere and oceans, where it creates the Coriolis effect.
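To put a number on "tiny" (my own back-of-envelope figures, using equatorial values): the centripetal acceleration is

    % Centripetal acceleration at the equator (back-of-envelope):
    \[
      a = \frac{v^{2}}{r} \approx \frac{(465\ \text{m/s})^{2}}{6.37 \times 10^{6}\ \text{m}} \approx 0.034\ \text{m/s}^{2}
    \]

which is roughly 0.3% of g (9.8 m/s²): far too small, and far too constant, for our motion sensors to register.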

Everyone watching a sunset experiences themselves at rest and the sun moving. It is true, but counterintuitive, to suggest that the sun is not moving. Let's call this the sunset illusion.

I'm not sure where it comes from, but in the Triratna Order we often cite four authorities for believing some testimony: it makes sense (reason), it feels right (emotion), it accords with experience (memory), and it accords with the testimony of the wise. Before about 1650, seeing ourselves as stationary and the sun as moving made sense, felt right, accorded with experience, and accorded with the testimony of the wise. The first hint that the sunset illusion is an illusion came when Galileo discovered the moons of Jupiter in January 1610.

Even knowing, as I do, that the sunset illusion is an illusion, doesn't change how it seems to me because my motion senses are unanimously telling me I'm at rest. This is important because it tells us that this is not a trivial or superficial mistake. It's not because I am too stupid to understand the situation. I know the truth and have known for decades. But I also trust my senses because I have no choice but to trust them.

The sunset illusion is sometimes presented as a 50:50 proposition, like one of those famous optical illusions where whether we see a rabbit or a duck depends on where we focus. The assertion is that we might just as easily see the sun as still and ourselves as moving. This is erroneous. Proprioception, kinaesthesia, the vestibular organ, and sight make it a virtual certainty that we experience ourselves at rest and conclude that the sun is moving. It takes a combination of careful observation of the visible planets and an excellent understanding of geometry to upset the earth-centric universe. If some ancient cultures got this right, it was a fluke.

The sunset illusion exposes an important truth about how all of us understand the world based on experience. Experience and reality can be at odds.

And note that we are not being irrational when we continue to refer to the sun "setting". Given our sensorium, it is rational to think of ourselves at rest and the sun moving. It's only in a much bigger, non-experiential framework that the concept becomes irrational. For most of us, the facts of cosmology are abstract; i.e., they exist as concepts divorced from experience. Evolution has predisposed us to trust experience above abstract facts.


Mind Projection Fallacy

The name of this fallacy was coined by physicist and philosopher E.T. Jaynes (1989). He defined it like this:
One asserts that the creations of [their] own imagination are real properties of Nature, and thus, in effect, projects [their] own thoughts out onto Nature. (1989: 2)
I think it's probably more accurately described as a cognitive bias, but "fallacy" is the standard term. Also, instead of "imagination", I would argue that we should say "interpretation". The problem is not so much that we imagine things and pretend they are real, though this does happen, but that we have experiences and interpret them as relating directly to reality (naïve realism).

The sunset illusion tells us that reality is not always as we experience it. 

We all make mistakes, particularly these kinds of cognitive mistakes. We actually evolved in such a way as to make these kinds of mistakes inevitable. However, reading up on cognitive bias, I was struck by how some of the authors slanted their presentation of the material to belittle people. I don't think this is helpful. Our minds are honed by evolution for survival in a particular kind of environment, but almost none of us live in that environment any more. So if we are error-prone, it is because our skill-set is not optimised for the lifestyles we've chosen to live. 

This fallacy can occur in a positive and a negative sense, so that it can be stated in two different ways:
  1. My interpretation of experience → real property of nature
  2. My own ignorance → nature is indeterminate
David Chapman has pointed out that there has been considerable criticism of the approach Jaynes takes in the article I'm citing, and has summarised why. He suggests, ironically, that Jaynes suffered from the second kind of mind projection fallacy when it came to logic and probability. But the details of that argument about logic and probability are not relevant to the issue I'm addressing in this essay. It's the fallacy or bias that concerns us here. 


Interpreting Experience
A problem like the sunset illusion emerges when we make inferences about reality based on interpreting our experience. When we make deductions from experience to reality, they invariably reflect the content of our presuppositions about reality. For example, a given for most of us is "I always know when I am moving". In the sunset illusion, I know I am at rest because my motion sensors and vision confirm that it is so. The experience is conclusive: it must be the sun that is moving. My understanding of how the universe works and my understanding of my own situation as regards movement are givens in this case. We don't consciously reference them, but they predetermine the outcome of deductive reasoning. This means the deduction is of very limited use to the individual thinking about reality.

If I watch a dozen sunsets and they all have this same character, then I can generalise (inductive reasoning) that the sun regularly rises, travels in an (apparent) arc across the sky, and sets; all the while, I am not moving relative to the earth. What's more, I've experienced dozens of earthquakes in my lifetime, so I also know what it is like when the earth does move! From my experiential perspective, the earth does not move, but the sun does. Given our experience of the situation, this is the most likely explanation (abductive reasoning).

So here we see that a perfectly logical set of conclusions, generalisations, and explanations follows from interpreting experience, and yet it is completely wrong. I am not at rest, but moving at about Mach 1.4. The earth is not at rest. The sun is at the centre of our orbit around it, but it too is moving very rapidly around the centre of the galaxy. Our galaxy is moving relative to all other galaxies. The error occurs because our senses evolved to help us navigate through woodlands, in and out of trees, and swimming in water. And we're pretty good at this. When it comes to inferring knowledge about the cosmos, human senses are the wrong tool to start with!

A common experience for Buddhists is to have a vision of a Buddha during meditation. And it is common enough for that vision to be taken as proof that Buddhas exist. But think about it. A person is sitting alone in a suburban room, eyes closed, attention withdrawn from the world of the senses, their sense experience attenuated so as to focus on just one sensation. They undergo a self-imposed sensory deprivation. They've also spent a few years intensively reading books on Buddhism, looking at Buddhist art, thinking about Buddhas, and discussing Buddhas with other Buddhists. We know that sensory deprivation causes hallucinations. And someone saturated in the imagery of the Buddha is more likely to hallucinate a Buddha. This is no surprise. But does it really tell us that Buddhas exist independently of our minds, or does it just tell us that in situations of sensory deprivation Buddhists hallucinate Buddhas? 

The Buddhist who has the hallucination feels that this is a sign; it feels important, meaningful, and perhaps even numinous (in the sense that they feel they are in the presence of some otherworldly puissance). They are immersed in Buddhist rhetoric and imagery, as are all of their friends. As I have observed before, hallucinations are stigmatised, whereas visions are valorised. So if you see something that no one else sees, then your social milieu and your social intelligence will dictate how you interpret and present the experience. If you mention to your comrades in religion that you saw a Buddha in your meditation, you are likely to get a pat on the back and congratulations. It will be judged an auspicious sign. And all those people who haven't had "visions" will be quietly envious. If you mention it to your physician, they may well become concerned that you have suffered a psychotic episode. On the other hand, in practice, psychotic episodes are rather terrifying and chaotic, and not all hallucinations are the result of psychosis. 

Not only do we have the problem of our own reasoning leading us to erroneous inferences; we also have social mechanisms that reinforce particular interpretations of experience, especially in the case of our religiously inspired inferences. Our individual experience is geared towards a social reality. One of the faults of human thinking about reality is to assume that reality somehow reflects our social world. A common example is the nature of heaven. Many cultures see heaven as an idealised form of their own social customs, usually slanted towards male experiences and narratives. Medieval Chinese intellectuals saw heaven as an idealised Confucian bureaucracy, for example. If we take Christian art as any indication, then Heaven is an all-male club. The just-world fallacy probably comes about because we expect the world to conform to our social norms, in which each member is responsive to the others in a hierarchy where normative behaviour is rewarded and transgressive behaviour is punished.

So, given the way our senses work, given the pitfalls of cognitive bias and logical fallacies, given the pressure to conform to social norms, the mind projection fallacy can operate freely. As we know, challenging the established order can be difficult to the point of being fatal. And understanding the power of something like the sunset illusion is important. Facts don't necessarily break the spell. Yes, we know the earth orbits the sun. But standing on a hill watching the sunset, that is just not how we experience it (our proprioception and vision tell us a different story that we find more intuitive and credible, even though it is wrong). And this applies to a very wide range of situations where we are reasoning from experience to reality.


If I Don't Understand It...

The second form of this fallacy was rampant in 19th century scholarship. In the first form, one erroneously concludes that one understands something and projects private experience as public reality: mistaking the sunset as resulting from the movement of the sun, because our bodies tell us that we are at rest, leads to false claims about reality.

In the second case there is also a false claim about reality, but here it emerges from a failure to understand, combined with the assumption that this is because the experience or feature of reality cannot be understood. This problem is particularly acute for intellectuals, who are often over-confident about their ability to understand everything. These days it is less plausible but, 150 years ago, it was plausible for one intellectual to be well informed about more or less every field of human knowledge. So, if such an intellectual came across something they didn't understand, they deduced that it could not be understood by anyone. 

A common assertion, for example, is that we will never understand consciousness from a third-person perspective (leaving aside the problematic abstraction for a moment). Very often such theories are rooted in an ontological mind/body dualism, which may or may not be acknowledged. Many Buddhists who are interested in the philosophy of mind, for example, cannot imagine that we will ever understand conscious states through scientific methods. They argue that no amount of research will ever help us understand. So they don't follow research into the mind and don't see any progress in this area. On the other hand, they hold that through meditation we do come to understand conscious states and their nature. Many go far beyond this and claim that we will gain knowledge of reality, in the sense of a transcendent ideal reality that underlies the apparent reality that our senses inform us about. In other words, meditation takes us beyond phenomena to noumena.

Another common argument is that scientists don't understand 95% of the universe because they don't understand dark matter and dark energy. People take this to mean that scientists don't understand 95% of what goes on here on earth. But this is simply not true. Scale is important, and being ignorant at one scale (the scale that affects galaxies and larger structures) does not mean that we don't understand plate tectonics, the water cycle, or cell metabolism, at least in principle. The popular view of science often seems to be a caricature that owes more to the 19th century than the 21st. Criticism of science often goes along with an anti-science orientation and very little education in the sciences. 

The basic confusion in both cases is mistaking what seems obvious to us for what must be the case for everyone else, either positively or negatively. 


The Confusion

"It's not that one gains insight into reality, but that one stops mistaking one's experience for reality."

The basic problem here is a confusion of what we know about the world (epistemology) with what the world is (ontology). In short, we mistake experience for reality. And this problem is very widespread amongst intellectuals in many fields.

The problem can be very subtle. Another illuminating example is the idea that sugar is sweet. We might feel that a statement like "sugar is sweet" is straightforward. Usually, no one is going to argue with this, because the association between sugar and sweetness is so self-evident. But the statement is false. Sugar is not sweet. Sugar is a stimulus for the receptors on our tongues that register as "sweet". We experience the sensation of sweet whenever we encounter molecules that bind with these receptors. But sweet is an experience. It does not exist out in the world, but only in our own conscious states. Sugar is just one of many substances that cause us to experience sweet when they come into contact with the appropriate receptors on our tongues. Equally, there is no abstract quality of sweet-ness, despite the effortless ease with which we can create abstract nouns in English. Sucrose, for example, has nothing much in common with aspartame at a chemical level, and yet both stimulate the experience of sweet. Indeed, aspartame is experienced as approximately 200 times as sweet as sucrose, but this does not mean that it contains 200 times more sweetness. There is no sweet-ness. The experience of sweet evolved to alert us to the high calorific value of certain types of food, and the enjoyable qualities of sweet evolved to motivate us to seek out such foods. 

For Buddhists, this fallacy typically arises from experiencing altered states of mind in and out of meditation. Meditators may judge some altered states to be more real than others, causing them to divide phenomena into more real and less real. And they manage to convince people that this experience of theirs reflects a reality that ordinary mortals cannot see: a transcendent reality that is obscured from ordinary people. 

The problem is that an experience is a mental state; and a mental state is just a mental state. No matter how vivid or transformative the experience was, we must be careful when reasoning from private experiences (epistemology) to public reality (ontology), because we usually get this wrong. I've covered this in many essays, including Origin of the Idea of the Soul (11 Nov 2011) and Why Are Karma and Rebirth (Still) Plausible (for Many People)? (15 Aug 2015).

Most of us are really quite bad at reasoning on our own. This is because humans suffer from an inordinate number of cognitive biases and easily fall into logical fallacies. There are dozens of each and, without special training and a helpful context, we naturally and almost inevitably fall into irrational patterns of thought. The trouble is that we too often face situations where there is too much information and we cannot decide what is salient; or there is too little information and we want to fill the gaps. 

Our minds are optimised for survival in low-tech hunter-gatherer situations, not for sophisticated reasoning. The mind helps us make the right hunting and gathering decisions, but in most cases it's just not that good at abstract logic or reasoning. Of course, some individuals and groups are good at it. Those who are good at it have convinced us that it is the most important thing in the world. But, again, this is probably just a cognitive bias on their part. 


Conclusion

The whole concept of reason and the processes of reasoning are going through a reassessment right now. This is because it has become clear that very few people do well at abstract reasoning. Most of the time, we do not reason, but rely on shortcuts known as cognitive biases. A lot of the time our reasoning is flawed by logical fallacies. Additionally, we are discovering that most mammals and birds are capable of reasoning to some extent. 

In this essay, I have highlighted a particular problem in which one mistakes experience for reality. Using examples (sunset, visions, sweetness), I showed how such mistakes come about. Unlike others who highlight these errors, I have tried to avoid the implication that humans are thereby stupid. For example, I see the sunset illusion because my senses are telling me that I am definitely at rest, because they tune out sensations that are too small to affect my body. Social conditioning is a powerful shaping force in our lives, and visions are valuable social currency in a religious milieu. 

In terms of our daily lives, the sunset illusion and the sweetness illusion hardly matter. It's not as if these mistakes cost us anything. Such problems don't figure in natural selection because our lives don't depend on them. We know what we need to know to survive. Although our senses and minds are tuned to survival in pre-civilisation environments, we are often able to co-opt abilities evolved for one purpose to another one. 

But truth does matter. For example, when one group claims authority and hegemony based on their interpretation of experience, one way to undermine them is to point out falsehoods and mistakes. When the Roman Church in Europe was shown to be demonstrably wrong about the universe, the greater portion of its power seeped away into the hands of the Lords Temporal, and then into the hands of the captains of industry. For ordinary people, this led to more autonomy and better standards of living (on average). Democracy is flawed, but it is better than feudalism backed by authoritarian religion.

But as Noam Chomsky has said:

“The system protects itself with indignation against a challenge to deceit in the service of power, and the very idea of subjecting the ideological system to rational inquiry elicits incomprehension or outrage, though it is often masked in other terms.”

In subjecting Buddhism to rational inquiry, I do often elicit incomprehension or outrage. And sometimes it's not masked at all. There are certainly Buddhists on the internet who see me as an enemy of the Dharma, as trying to do harm to Buddhism. As I understand my own motivations, my main concern is to recast Buddhism for the future. I think the urge of the early British Buddhists to modernise Buddhism and, particularly, to bring it into line with rationality was a sensible one. However, as our understanding of rationality changes, so Buddhism will have to adapt to continue being thought of as rational. But we also have to move beyond taking Buddhism on its own terms and consider the wider world of knowledge. The laws of nature apply in all cases.

Whilst Buddhism is largely influenced by people who mistake experience for reality, it will be hindered in its spread and development. This particular error is one that we have to make conscious and question closely. Just because something makes sense, feels right, and accords with experience doesn't mean that it is true. The sunset illusion makes sense, but is wrong. It feels right to say that sugar is sweet, but it isn't. It accords with experience that meditative mental states are more real than normal waking states, but they are not. The testimony of the wise is demonstrably a product of culture, and varies across time and space.

~~oOo~~
