20 July 2017

Reasoning, Reasons, and Culpability.

My worldview has undergone a few changes over the years. Not just because of religious conversion or obvious things like that. It has usually been a book that has shifted my perspective in an unexpected direction. Take, for example, Mercier and Sperber's book The Enigma of Reason: A New Theory of Human Understanding.

We all just assume that actions are explained by reasons. If actions are baffling then we seek out reasons to explain them. What is the reason that someone acted the way they did? Given a reason, we think the action has been explained. But has it? How?

Furthermore, when discussing someone's actions we assume that particular kinds of internal motivation are sufficient to explain them. We almost never consider external factors like, say, peer pressure. It's not that we're unaware of peer pressure, but that we don't see it as a reason.

So, if person P does action A, we expect to find a simple equation: P did A for reason R. R is likely to be expressed as a desire to bring about some kind of goal G; call this R(G). So the calculus of our lives is something like this:

P did A for R(G)

But this is not how reasoning works, and it is not how people decide to do things. Most decisions, even the ones that feel conscious, are in fact unconscious. The decision-making machinery is emotional and operates below our conscious radar - the result that pops into consciousness is preprocessed and preformed. Essentially, it is what feels right, on an unconscious level.

Having decided, we may either just do it with a conscious sense of it feeling right (so-called "feeling types") and only produce reasons after the fact (post hoc) when asked; or we may first seek a reason (so-called "thinking types") and then act. Both kinds of reasons are post hoc - the decision to act comes first, then we come up with reasons to support that decision. The number of times someone asks "Why did you do that?" and you come up with nothing is a sign of this.

The most extreme examples of this occur in people with no memories due to brain damage. Oliver Sacks described the case of a man who, when asked "What are you doing here?", never knew, because he could not remember. But the part of his brain that still worked would conjure up a likely reason, and since it fit the criteria of a reason, that's what he would say. But he would not remember saying it and, asked again, might come up with another equally plausible answer. He was only ever accurate by accident. He was not consciously lying; not understanding the deficit caused by his injury, he was simply saying whatever popped into his head.

We are very far from assiduous in generating and selecting reasons. For a start, we all suffer from confirmation bias: we typically only look for reasons that support and justify our decisions. Ethics is partly about realising that our actions are not always justified, and admitting it. Not only this, but we are also lazy. Once we come up with one reason that fits our criteria, we just stop looking. We typically take the first reason, not the best one, then, having settled on it, defend it as the best reason.

Of course, we can train ourselves to overcome these cognitive biases, but most of us still buy into the paradigm of P did A for R(G). It's transparent. We don't see it. I know about it, and I still don't usually see it. It's only when I'm being deliberately analytical that I can retrospectively see the nature of my reasoning. And it is not what we have taken it to be all these centuries.

I've never been very convinced by so-called post-modernism. Post-modernists make the mistake that I would now call an ontological fallacy: they mistake experience for reality. But the mistake is so common amongst intellectuals that they cannot be singled out. This idea about reasoning might well be the kind of epistemic break that would really constitute our either leaving modernity behind or, more likely, finally becoming truly modern. The idea that modernity represents a break with medieval superstition is also clearly not quite right, because our reasons are no better than superstition in most cases.

And, of course, some of us are able to see more complex networks of cause and effect. We see political complexities, or sociological complexities, for example. These produce more sophisticated reasons, but even these tend to get boiled down into generalisations or interpreted from ideological points of view. And ideologies make sense to people because of reasons.

The whole 2010 UK general election was fought on the basis of a single idea: Labour borrowed too much money. This falsified the situation in a dozen different ways, but because it offered a reason for the UK's disastrous economic crash of 2008, and because Labour could not offer a similarly simple reason, it won the day. A lot of the political right appears to be convinced that this explains everything. So the whole world has the same economic problems, and economies are incredibly complex, but it all boils down to "Labour borrowed too much money". And this—this simplistic, fake fact—is widely considered plausible. The UK is leaving the EU for reasons. And so on.

But here's the thing. Reasons, on the whole, do not explain behaviour. They are just post hoc rationalisations of decisions made unconsciously on the basis of the value we give to experiences and memories, which are encoded as emotions. The reasons you give for your own actions, let alone the reasons you give for mine, do not explain anything. And as I have said, we simply ignore some of the more obvious reasons that any social primate does what it does (because of social norms). It's not a matter of deliberate deception. After all, we all believe that the reasons we give sufficiently explain our actions and that we can accurately gauge the kinds of reasons that are applicable (and we believe this for reasons). The problem is more that we don't understand reasons or reasoning.

How does this affect the issue of culpability? 

Any student of Shakespeare will be familiar with the problem of people being puzzled by their own actions. Shakespeare might have been the first depth psychologist. But if we are discussing the issue of culpability, then things get really difficult. One could write a book on the actions for which Hamlet might be culpable and to what degree (probably someone has!). 

The whole notion of culpability has taken a beating lately. Advocates for the non-existence of contra-causal free will are persuasive because metaphysical reductionism is a mainstream paradigm of reasoning. One hopes that the flaws in such arguments will eventually be exposed—contra-causal free will isn't relevant or interesting; structure is real; reductionism is less than half the story of reality; etc.—but until they are, discussions of culpability are likely to remain confused.

Mercier and Sperber's argument about the nature of, and the relationship between, reasoning and reasons is a deeper challenge. We now know that even when we get a sincere answer to the question "Why did you do that?", the reasons given are post hoc rationalisations that are not sufficient to explain any action, and very few of us are even aware of this. Clearly, our will is always involved in deliberate actions, but we ourselves may not understand the direction our will takes. We generate reasons on demand because society has taught us to do so... for reasons. But at root, most of us are mystified by our own actions most of the time.

Legal courts still represent a pragmatic approach to culpability. Did P factually do A? Yes or no? If yes, then punish P in the way mandated by the legislative branch of government. As readers may know, George Lakoff has analysed this dynamic in terms of metaphors involving debts and bookkeeping. If action A incurs a debt to society, then P is expected to repay it. We still largely operate on the basis that the best way to repay a social debt is to suffer pain, but we have created "more humane" ways to make people suffer that are, on the whole, less gross but also more drawn out than physical punishment. Indeed, we consider inflicting physical harm barbaric. And why? Oh, you know, for reasons.

If you're going to make someone suffer, it's better to inflict psychological suffering on them―through extended social isolation, for example, or enforced cohabitation with unsavoury strangers―than to inflict physical harm. Because of reasons. If my choice were between years of incarceration with criminals and being beaten senseless once, I might well opt for the latter (well, I wouldn't, but some might). Quite a lot of people are beaten and raped in prison anyway, and a majority are psychologically damaged by the experience, so a one-off payment in suffering might make more sense. It's more economical. Just because you are squeamish about beating me, but not about psychologically torturing me by imprisoning me, doesn't make your squeamishness more ethical. You are still seeking to inflict harm on me in the belief that it will balance out my culpability for acting against the laws of society... for reasons.

Then again, if I am an Afghan fighting for my homeland against a foreign invader, you might just choose to drop a bomb on me from 40,000 ft, killing me and my entire family, because of reasons.

What happens to justice when reasons are exposed as fraudulent? And they may as well be fraudulent, because they're only relevant by accident. We see this happening all the time. The UK no longer has the death penalty, not because British people don't like killing (Britain has been almost constantly fighting wars it has initiated or encouraged for 1000 years!). Rather, we realised that we had killed a few too many falsely convicted innocents. That means we have created a debt for which we ought to suffer. D'oh!

We're for or against capital punishment for reasons. We vote left or right for reasons. We are for or against this or that, for reasons. We love, marry, fight, work, take on religious views and practices, choose our haircut, our friends, etc.... for reasons. Good reasons! Sound reasons. Thought-out reasons. Wait! We can explain. And you have to take our reasons seriously, because of... other reasons. Don't you see? It all makes sense... doesn't it?

In other words, our whole lives are based on post hoc rationalisations of decisions we do not understand and cannot explain, but which we are convinced that we do understand and can explain. Not to put too fine a point on it, it's fucked up.

So, how confident should anyone be about their reasons? 

We so often seem very confident indeed (because of reasons), but if even one other rational person disagrees with us, then we ought to be at best 50% certain. If it's just a matter of reasons... then 50% seems optimistic, because chances are that neither party has any real idea of why they believe what they believe. On most social matters one can usually find a dozen rational opinions based on reasons, and we believe our own reasons (for reasons), or we are persuaded of a different view for other reasons.

What does any of this amount to?

And more to the point, how can we tell what is of value, if reasons are not a reliable guide?

I think Frans de Waal has got the right idea (for reasons). Ethics (i.e., social values) are based on empathy and reciprocity, capacities we and all social mammals evolved in order to make living in big groups possible and tolerable. It all builds from there. Other rational opinions are available, but for reasons, I like this one. I still have no idea what gives something an aesthetic value, but I do believe (for other reasons) that we experience that value as an emotional response. Again, other rational opinions are available.

I cannot help but think that my view, cobbled together from other people's views, makes more sense than any other view I've come across. But then, everyone thinks this already. So then the question is, how do some opinions become popular? And I think Malcolm Gladwell has some interesting things to say on that matter in The Tipping Point. In his terms, I'm a "maven", but not a persuader or connector. 
