23 August 2019

Bad Free Will Philosophy

Philosophy is an important activity. Ideally, philosophy helps us to make sense of the world, ourselves, and our place in the world. Unfortunately, philosophy, at least on the level that I engage with it, is plagued with unhelpful legacy concepts from the Victorian period. Victorian accounts of subjects like reason, consciousness, and free will are all anachronistic and contradicted by a weight of evidence. It's not clear that these terms offer any advantage as starting places for discussions about the world, ourselves, or our place in the world. Also, virtually all philosophy seems to be solipsistic, whereas we human beings are social animals and we make sense of our world in a social setting.

Free will is one of the most aggravating subjects to be interested in because the whole discussion is poorly framed: bad definitions, bad methods, and bad theoretical frameworks. Of course, the three coincide in some cases to make for spectacularly bad philosophy, but it only takes one to spoil the whole enterprise. In this essay, I'll walk through what seem to me to be the most egregious aspects of bad free will philosophy.

Bad Definitions

Almost no one starts off a discussion of free will by defining what they mean by free will. And, don't laugh, I'm not going to, either (well, maybe a bit, later on). It is seldom clear what any commentator means by free will, what kind of evidence they think is relevant to the discussion, or what they would consider a valid source of knowledge on the subject. And it gets worse, because not only is free will not defined, but neither are "free" or "will". Much of the time it's not even clear why we need to talk about free will.

Of course, after a while one may infer from what someone says what assumptions they are making, but this is a very inefficient way to communicate. Worst of all, after one has suffered through enough nonsense to collect sufficient information to triangulate what they actually mean, they usually mean some form of contra-causal free will.

Contra-causal free will is the view that our decisions are not caused by anything; this is the definition of free will almost universally assumed by physicists. And by "anything" we include everything physical, visceral, and social. So, for example, if our own emotions are involved in decision making, then we have no free will. Also, it doesn't count if the decision we make is unconscious. Our state of knowledge tells us that our emotions are involved in every decision, every choice, and every evaluative thought we have, because we encode the value or salience of information as feelings. And it seems very likely that all decisions rely on unconscious inferential processes. Ergo, physicists argue that we don't have free will, meaning, we don't have contra-causal free will. But so what? Contra-causal free will is a nonsense idea to start with.

Coming back to the problem of bad definitions, the people who are talking about contra-causal free will almost never use the words "contra-causal"; they may not ever have heard the words "contra-causal" (I hadn't, until recently). So, while they appear to be talking about the same thing as other people, they are not, and they probably don't really know what they are talking about, and don't know that they don't know.

A major problem with all these kinds of discussion is that people conclude what they believed at the outset. Deduction from axioms only reproduces the axioms in the end. Assume that we have free will and you can deduce that we must have it because at some point we will judge a proposition to be true on the basis of our belief in the axiom. Assume that we don't have free will and an equally valid line of reasoning will deduce that we cannot have it. This is a built-in flaw of deductive reasoning. We ought to know better by now, but one of the basic assumptions about free will debates is that we don't need to examine our starting assumptions before giving our opinion.

And the reason is obvious. By the time most people have given an accurate account of what they axiomatically believe about free will, it's apparent that they are not interested in having a discussion about it, they merely wish to assert a more or less elaborate belief system. Either that or, by spelling out their assumptions, they realise how stupid the subject is and give up before attempting to communicate it. Most of what makes it into the public domain is ipso facto stupid.

The most egregious examples of this are the ones that grant that I feel myself making decisions, but assert that because the equations that govern the movement of atoms are deterministic, that my decisions are an illusion. In other words, yes, decisions do get made, but we cannot think of them as decisions because that contradicts the model (in which the axiom is "we don't make decisions").

Moreover, the mythical "rational faculty" that is supposed to be the deciding faculty for free will really doesn't exist. This is explained in Mercier and Sperber's book The Enigma of Reason, which looks at the data on how people use reason and shows that we don't. At least, we don't use it for solving problems. 90% of people fail at simple tests of logic, though 80% of us state that we are 100% confident about our answer. All of us do better at solving problems in small groups. What we call "reason" is, in fact, used to propose reasons for things that have already happened. We make decisions using unconscious inference; then, when we need to know why, reasoning kicks in and produces a reason.

It would be helpful if everyone could spend some time identifying what they believe and why they believe it before contributing to a discussion.

Bad Methods

Almost everyone who still argues against free will relies at some point on the opinion of Benjamin Libet, which has been proven wrong by his peers. I comprehensively debunked Libet in a blog post called Free Will is Back on the Menu, so I don't really want to go over this ground again. Really, I suppose all I did was repeat the many ways in which other people, Libet's colleagues, debunked his opinion about his results. Libet wasn't exactly a fraud, he just misinterpreted the data based on a faulty model. The intellectual frauds are all the people (mainly physicists) on whom Libet's results exerted a powerful confirmation bias, and who have been uncritically repeating his opinion ever since, without ever looking at the literature within which it is embedded.

The data on human decision making ought to include all the tests, like the Wason Selection Task, that show we don't use reasoning to solve puzzles.

And again, if someone sets out to study decision making, but they take as axiomatic that there is no contra-causal free will, then they are much more likely to design experiments to show this. And again, so what? Contra-causal free will is not a useful way of thinking about human experience. 

Bad Theories

Almost everyone I've come across who denies free will does so either on the basis of a metaphysical commitment to reductionism or a metaphysical commitment to absolute being. So let's look at these.

Metaphysical Reductionism. 

Metaphysical reductionists believe that only the finest possible layers of the universe are real. The search for the nature of reality is the application of a conceptual microtome, slicing the universe so thinly that it cannot be sliced any thinner: atomic means "uncut, indivisible." Obviously, the atom is very cuttable, but we're stuck with calling atoms "atoms" even though our search for the truly atomic continues. This connection with the thinnest layer is why some people link quantum physics and reality.

What's more, they assert that the properties of the atomic entities that exist on that smallest scale are the defining properties of the whole universe. Thus, because they believe it is accurate to describe the universe on the smallest scale as deterministic, then everything, on every scale, is deterministic.

However, there are huge problems with this view. As Sean Carroll will explore in his new book and has talked about in several recent podcasts and various blog posts, we don't know what the world is like on that scale. Of course, we know how to manipulate the equations to predict what kinds of effects we can expect to manifest at a macro-level, but we have no idea what this connotes in terms of physical reality. How does the quantum Hilbert Space relate to reality? No one knows. We don't know what is real at this level and this is the level at which reductionists decide what is real. So... at present we know nothing about reality on those terms.

A majority of physicists have come out against the Copenhagen interpretation of the measurement problem, which in simple terms is the idea that the universe behaves one way when our back is turned and another way when we look at it, which is trickier than it sounds in a system where everything interacts with everything else. But they cannot agree on what does happen. Are there hidden variables that determine how the universe unfolds? Or does each quantum event cause the universe to split into different versions? Are there quantum pilot waves that push the particles around? No one knows. And at present, no one is sure whether we can know. Right now, there is an epistemic horizon: reality may exist beyond it, but we cannot say anything about it, and we don't know if we ever will.

Part of the epistemic problem is that we may be able to solve the quantum equations for a single hydrogen atom, but we cannot do so for a deuterium atom, not even in principle. Three particles in a quantum system make it impossible to provide a precise mathematical description. We have to introduce some pretty gross simplifying assumptions. These assumptions give answers that are pleasingly accurate and precise. When we're already unclear about what the unsimplified equations tell us about reality, how does adding a series of increasingly gross assumptions help get us in touch with reality? Adding simplifications to make the math work takes us further away from reality (if we take the reductionist view). Why is anyone in quantum physics talking about reality?

Here's the thing. Metaphysical reductionism is just a bad theory. It ignores the role that structure plays in the universe. It's all very well saying that water is really one atom of oxygen and two of hydrogen, but if you have a litre of water and atomise it, you now have no water. You cannot slake your thirst by drinking liquid oxygen (which boils at -183 °C) or liquid hydrogen (-253 °C) or any mixture of the two. If water is not real, over and above the existence of its component parts, then the whole category of "real" is nonsense.


Absolute Being.

The idea of absolute being manifests in many different ways and has a very long and varied history. It was very popular and became highly developed and differentiated in India. And it is still very popular in Advaita Vedanta circles.

In this idea we are all just manifestations of a larger entity which is characterised by absolute being: it transcends notions of time and space and causation (and all that other metaphysical stuff). Our individuality is an illusion, our "being" as separate from universal being is an illusion, and especially our sense of having free will is an illusion. Absolute being demands strict determinism. So the irony is that even if human beings have free will, God is wholly deterministic. 

However, there is no need to take seriously any theory of absolute being. They are all figments of our imagination. None of them rests on evidence or makes any testable predictions. Indeed, the very idea that a spatio-temporal being can experience the Absolute is nonsensical. This is why religieux have to keep making up ad hoc supernatural entities (like a soul or ātman) that are a little bit of the absolute in us, allowing us to bridge the unbridgeable gap between absolute and temporal. Nonsense compounded by more nonsense.

So much for the arguments against free will. However, rather than argue for some version of free will, I want to try to outline the kind of philosophical discussion I find useful. 

Is There A Way Out?

Back in 2016, I wrote a long three-part essay on reality called A Layered Approach to Reality. I was influenced mainly by Richard H. Jones and John Searle, but also by other philosophers and scientists. My small contribution has been a new way to think about the ancient philosophical problem of the Ship of Theseus. In the Layered Approach essay, I argued that reductionism is fine for discovering knowledge about substances, i.e., what the universe and things are made of. And I argued that a universe in which there is one kind of stuff is the only one that is consistent with all the observations and other theories of science. But this is less than half the story of the universe.

The basic stuff is made into a lot of other stuff, i.e., structures that persist over time, that are insensitive to swapping out identical parts, and which act as causal agents in ways their component parts alone cannot (like water dissolving salt). In other words, structures are real by any useful definition of that term. In February 2019, Sean Carroll interviewed James Ladyman on the subject of Reality, Metaphysics, and Complexity. Ladyman's philosophy is similar to what I've proposed in that he argues we have to treat persistent structure as real, but there are some differences between us as well. Listening to him wrestle with the status of numbers I wanted to shout, "Read John Searle's The Construction of Social Reality!" Anyway, I just wanted to point out that I'm not the only one. Incidentally, note that John Worrall's (1989) argument for structural realism is a different kettle of fish.

Any biologist will tell you that dissection can only reveal so much about an organism. You could sequence the entire genome, all the epigenetic info, and map all the genes to proteins and you'd still know nothing about how an organism behaves. You have to observe the living organism interacting with its environment as a system in order to appreciate that organism. Analysis and dissection are the methods of reductionism. And again, these are great for studying substances. It's just that if the object we wish to study is a structure, then reductionism is useless because the moment we dismantle a structure to find what it is made of, we cease to have a structure. 

So, we combine reductionism for understanding substance and anti-reductionism for understanding structures. Anti-reductionism is also sometimes called emergentism. George Henry Lewes (1817-1878) first referred to "emergent properties" of structures in 1875. An emergent property is a property of a complex object that is not possessed by any of its component parts alone or in simple combinations (lacking structure). Generally speaking, emergent properties are not predictable from the properties of the components.

John Searle's analysis of kinds of facts can help us understand how this relates to our daily life.

Kinds of Facts

My Searle-y explanation of the ancient problem of the Ship of Theseus illustrates the principle. Timber has certain intrinsic properties that are ontologically objective: they are real and don't depend on an observer, or we could say that they are true for all observers. Intrinsic properties don't allow a pile of timber to transport a hero across a sea. The timber has to be assembled in a particular way to create a range of new properties. The hull of a ship encloses a volume that has a net density that is much less than the density of the timber and less than the density of water. So a structure floats even if the building material does not. Thus we can build ships from steel which is 8 times as dense as water. Low density is an (emergent) property of the structure that its component parts do not possess. Similarly for the shape that makes a ship move easily through the water, and which resists sideways movement, and so on.
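The arithmetic behind this emergent property is easy to check. Here is a minimal sketch with purely illustrative numbers (the hull dimensions and plate thickness are my own hypothetical choices, not taken from any real ship): the material sinks, but the structure built from it floats.

```python
# How a material denser than water yields a structure less dense than water.
# Illustrative numbers only: a 10 m x 3 m x 2 m open-topped box hull built
# from 1 cm steel plate.
RHO_STEEL = 7850.0   # kg/m^3, roughly 8x the density of water
RHO_WATER = 1000.0   # kg/m^3

length, width, height, plate = 10.0, 3.0, 2.0, 0.01

# Steel volume is approximately plate area x thickness (bottom + four sides).
plate_area = length * width + 2 * (length * height) + 2 * (width * height)
steel_mass = plate_area * plate * RHO_STEEL

# The emergent property: the net density of the *structure* is its mass
# divided by the whole volume the hull encloses, most of which is air.
enclosed_volume = length * width * height
net_density = steel_mass / enclosed_volume

print(round(net_density, 1))   # far below 1000 kg/m^3, so the hull floats
```

The component property (density of steel) limits but does not determine the structural property (net density of the hull), which is the point of the ship example.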

Such a structure is then fit for transporting Theseus across the sea. Functions are observer-relative and require prior knowledge. A naive observer might look at a ship and conclude, instead, that it is a cistern for keeping water in. However, for a knowledgeable observer, the fact that a ship is a ship is epistemically objective. It is true for everyone who knows what a ship is.

Most discussions of this ancient problem centre on "identity" which is, at best, an ontologically subjective fact. I would argue that since identity is only apparent with prior knowledge, identity is likely to be epistemically subjective as well. The question "Is it the same ship?" has to be followed by the question "To which observer?". All the accounts I have read of the problem assume an unchanging observer with prior knowledge, which is nonsense, and why the problem presents as a paradox.

To come back to the relevant point, the timber has intrinsic properties that make it suitable for shipbuilding. But the ship qua structure also has unique intrinsic properties that are limited by, but not determined by, the properties of the components: the density of the building material does not determine the density of the ship's hull. Structures, in other words, are every bit as real as components.

Structures are real

In my essay about layered reality, I accepted the pragmatic premise that structures are real. But I also pointed out that emergent properties accumulate with complexity. Something as fiendishly complex as a biological cell has many layers of properties that cannot possibly be predicated of mixtures of its individual atoms. There are thousands of relatively simple chemical compounds as well as tens of thousands of complex polymers such as peptides, proteins, and nucleic acids.

As I say, we don't really know what subatomic reality looks like. But the atomic theory of matter is a very successful theory in that it explains a great deal and makes highly accurate predictions. Matter at the atomic scale (just beyond the quantum indeterminacy) is deterministic. The laws that govern matter give (relatively) simple answers: the way the universe evolves on that scale is described by relatively simple equations, and if we know the state at any given time, we can use the equations to determine its state at any arbitrary time.

But this very soon breaks down. As with quantum systems, macro systems quickly become too complex to calculate. If we consider the problem to be one of calculability, then it is, strictly speaking, an epistemic problem, and we call this view weak emergentism. In this view, the entire universe is still deterministic even if we cannot understand it well enough to predict it. Reductionists who dabble in emergentism (like Sean Carroll) tend to favour this kind of emergentism.

However, if emergent properties are real, if they result in more than just increasing complexity and actually produce wholly new properties, then we have a new ontology at each new level and this is strong emergentism. Reductionists argue for a single, fundamental, ontology combined with some necessary approximations to cope with complexity. Metaphysical antireductionists argue that only the universe considered as a whole, with everything affecting everything else all the time is real (this position is rare). I take a middle path: reductionism for substance, and antireductionism for structures. 

One complicating factor is that in non-linear systems (typically where a large number of components are interacting) predictability may fall to zero. And this happens quickly. A simple pendulum is entirely predictable. But add another degree of freedom halfway along, a pendulum hanging from the end of a pendulum, then the result is apparently chaotic and certainly unpredictable. But this does not make it non-deterministic. The system is still evolving according to patterns (which we call laws when we can codify them), it's just that the system is highly sensitive to changes in the initial conditions. The pattern of a double pendulum is too complex to be computable with any usefulness. The question is whether at some point the unpredictability becomes non-deterministic, i.e. not simply that we cannot determine the pattern from observation, but that the evolution of the system is not governed by simple laws at all. No one would argue that living cells do not change in ways that have patterns, but do such patterns as exist constitute determinism?
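The double pendulum makes a good worked example of "deterministic but unpredictable". Below is a minimal sketch using the standard equations of motion for an equal-mass, equal-length double pendulum, integrated with a hand-rolled Runge-Kutta step (the initial angles, step size, and run length are my own illustrative choices). Rerunning the identical initial state reproduces the final state exactly; nudging one angle by a nanoradian produces a divergence many orders of magnitude larger.

```python
import math

G, L, M = 9.81, 1.0, 1.0   # gravity, rod length, bob mass (both arms equal)

def derivs(state):
    """Equations of motion; state = (theta1, omega1, theta2, omega2)."""
    t1, w1, t2, w2 = state
    d = t1 - t2
    den = M * (3.0 - math.cos(2.0 * d))
    a1 = (-3.0 * G * M * math.sin(t1) - M * G * math.sin(t1 - 2.0 * t2)
          - 2.0 * math.sin(d) * M * (w2 * w2 * L + w1 * w1 * L * math.cos(d))) / (L * den)
    a2 = (2.0 * math.sin(d) * (w1 * w1 * L * 2.0 * M + G * 2.0 * M * math.cos(t1)
          + w2 * w2 * L * M * math.cos(d))) / (L * den)
    return (w1, a1, w2, a2)

def rk4(state, dt):
    """One fourth-order Runge-Kutta integration step."""
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (p + 2 * q + 2 * r + u)
                 for s, p, q, r, u in zip(state, k1, k2, k3, k4))

def run(theta1, seconds=15.0, dt=0.005):
    state = (theta1, 0.0, math.pi / 2, 0.0)   # released from rest
    for _ in range(int(seconds / dt)):
        state = rk4(state, dt)
    return state

a = run(math.pi / 2)            # baseline trajectory
b = run(math.pi / 2 + 1e-9)     # nudge one angle by a nanoradian
c = run(math.pi / 2)            # exactly the same start as the baseline

print(a == c)   # deterministic: identical starts agree exactly
print(max(abs(x - y) for x, y in zip(a, b)))   # chaotic: the nudge has grown enormously
```

The rules never change and nothing random happens; the unpredictability comes entirely from sensitivity to initial conditions, which is the distinction the paragraph above is drawing.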

The difference between a mass of unstructured matter and, say, a living cell, is vast. So vast that it opens the door to strong emergentism. And if matter organised into biological cells is not deterministic, then how much less so an organism composed of trillions of such cells, themselves structured into organelles, organs, and systems, all in multiple feedback loops. And as we now learn, all in meaningful relationships with our symbiotic microorganisms on the skin and in the gut.

Cutting Loose the Legacy of God

One might ask why we debate free will at all. It is, after all, a theological concept designed to make God seem less of a monster for having invented evil and suffering. We're under no obligation to the legacy philosophy and theology of the past. Indeed, the question of whether we have free will is not really the best place to start a discussion about morality. It doesn't even come into my long essay on the evolution of morality, for example. What kinds of questions might we really be interested in?

Mainly, as far as I can see, we're nowadays interested in the issue of culpability. It is through this issue that discussion of will has become naturalised in the secular world, i.e., in the absence of God, what is the basis for continuing with the idea that good people deserve to be rewarded and bad people deserve to be punished? In a sense, this is now the issue. Not God's big evil, but our petty human evil. But culpability admits of degrees. I discussed this to some extent in a 2015 essay called Why Killing is Wrong, and I'm working on a more nuanced version in an essay provisionally entitled Objective Morality (a title chosen to be provocative). I also touch on relevant issues in my recent essay We Need to Talk About Utilitarianism, which criticises the assumptions that utilitarians make and the way they address moral questions.

If I kill someone, the question is not "am I culpable", but "to what extent am I culpable?" My role in society may involve killing or allow it in certain circumstances (soldiers, police, doctors). As a citizen, I am allowed to defend myself, my loved ones, and my property and lethal force may sometimes be justified. And so on. There are many nuances. 

We know that decisions and choices are influenced by many factors, not least of which is our social environment. It's now many decades since social psychologists pointed out that assuming a person's behaviour is 100% due to their internal motivations is a fallacy (the fundamental attribution error). We are social animals, and much of our behaviour is influenced by what our group expects from us, or at least how we perceive their expectations. We have mutual obligations; sometimes these take the form of rights and duties. We're also subject to "priming", by which I mean that if we're having a bad day, for whatever reason, we make different decisions than if we're having a good day. It may even be that what we encounter in the moments before making a decision unconsciously influences the outcome.

Societies do best when there is political stability and citizens are prosperous. Too much stability and a society will stagnate, cease to innovate, and, when the time comes, fail to respond to changes in the environment. Too little stability and the society will become chaotic and fall apart from the inside out. So we consider everyone to be under mutual obligations. And in large societies, we formalise rights and duties in law codes, the oldest examples of which are almost as old as civilisation itself. No human being ever had absolute free will because we live, we exist, in a social network with mutual obligations. Any philosophy that ignores this aspect of humanity is worthless.


Discussing free will in a reductionist framework is filled with traps. For example, reductionists conclude that anything which is dependent on something else is not real, because it can be reduced to its components. And we've seen how badly physicists go wrong already: If Amy has six apples and Sheldon reduces them to a quark-gluon plasma in a super-apple collider and captures the plasma in a specially designed container that prevents any loss of matter or energy, how many apples does Amy have? None. Reductionists literally cannot see the forest for the trees. Or they cannot see the universe for the quantum fields.

One of the most common reductionist tropes is that human experiences are "just an illusion". It doesn't matter that you have a persistent sense of self, a lasting personality, are able to remember your life, and experience love. In a reductionist framework, it makes sense to say that free will is an illusion, because making decisions is a mental activity, and because everything that is involved in the decision-making process is complex and dependent on component parts. 

If we take an anti-reductionist approach to structure, the fact that an object or entity is complex and made of parts is not important as long as the structure persists over time. Of course, some reductionists also say that time is an illusion. Certainly, the way we measure time is somewhat arbitrary: we simply count the number of iterative processes or events that occur over the period of observation. Time measurements are arbitrary in this sense, but this does not mean that time is an illusion, far from it.

Time is a way of talking about the patterns of change that we perceive in the universe around us. Because we can retain information about previous states and compare them to the present, we can perceive change. Change is ubiquitous and unidirectional with respect to the second law of thermodynamics. This gives us the so-called "arrow of time", by which we mean that far in the past the universe was in a low-entropy state and the total entropy has been steadily increasing ever since. So time is also real. It doesn't matter that time is not absolute, because nowhere in my definition of real is there any reference to absolutes. Indeed, I'm inclined to argue against absolutes on principle. For example, we know that general relativity is wrong at the beginning of time (the big bang) because it predicts a universe of infinite density. That kind of absolute tells us we've made an error, no matter how good the equations are in less curved spacetime. Even if someone manages to prove beyond reasonable doubt that time is an emergent property of quantum fields (and it already seems likely that space is such an emergent property) it won't make time an illusion.

The problem here is that illusions are not causal. An illusion doesn't make a difference in the world because it cannot interact with the world. Thus, to say that free will is an illusion is to say that humans make no difference in the universe. This is not merely dismal fatalism, it's self-defeating. If humans make no difference, then it makes no difference what we believe and there is no reason to believe that we don't have free will. It is equally valid (at least) to believe that we do have free will. As a philosophy, it ought to lead to passivity, but it doesn't. People who don't believe in free will go on being active and making decisions; they just tell themselves a story about the experience of deciding that makes sense in a legacy/reductive framework, but doesn't in a more sensible framework. 

The same arguments occur for having a sense of self. Of course, self is not an entity; of course, it is generated by the brain, but to argue that our sense of self is not causal, that it makes no difference, is clearly ridiculous. Else why would so many people want to persuade us to stop believing in it? 

The (ill)logic of the Free Will Illusion

The argument is that free will is an illusion, i.e., that there is no free will, and that our apparent free will is not causal, i.e., it makes no difference in the world. But if it is not causal, why is it a problem? The answer is usually that our belief in free will (or self, or whatever the "illusion" is) is problematic in some way (usually it makes us unhappy). So free will is an illusion, but a belief, being a potential causal agent, is not an illusion. Indeed, in this argument, a belief is real and has causal potential. Beliefs make a difference in the world or they would not be a problem.

The same metaphysical reductionists who get so exercised about free will being an illusion often become apoplectic about people who hold religious beliefs, or even about those who continue to believe in free will. But if free will is an illusion and the world is deterministic, why does it matter what anyone believes? Indeed, if there is no free will then no one has a choice about what they believe, and trying to persuade them to change their mind is a wild contradiction in terms. If there is no free will then no one ever changes their minds, because that would require us to be free to do so.

The reductionist argument about free will being an illusion is not followed through to its logical conclusion by any of its proponents (that I know of). There is clearly a glaring contradiction in asserting, on the one hand, that "free will" (whatever we mean by it) is an illusion and, on the other, asserting that beliefs are persistent in time and causal (i.e., real). Because believing, willing, and selfing are all of the same kind; they are all forms of mental activity (and thus epistemically and ontologically subjective). If a belief is causal, then so is our will. Or if will is not causal, then neither are beliefs. You can't have it both ways.

It does matter what we believe and it matters what we do, if only to the people around us. Because of the latter, the reasons we discern behind our own actions also matter. Will, belief, and behaviour have to be seen in a social context. We need to be able to produce accounts of our behaviour (i.e., reasons) that make sense to those around us, more especially when our behaviour contravenes group norms. Morality evolved in, and only makes sense in, a social context. The broad parameters are limited by our biology, but our flexibility as a species allows for huge variety in mores and customs (and interpretative frameworks).


09 August 2019

We Need to Talk About Utilitarianism

Although there is some debate in universities over different approaches to ethics, the fact is that those of us living in developed societies have been steeped in utilitarianism all our lives, and this has been the case for generations. Utilitarian ideas have dominated the moral landscape since the late 19th Century. And utilitarianism was developed by the early liberals: utilitarianism is the moral philosophy of socio-economic liberalism. Economic liberals believe that trade will maximise utility, and thus morality, without any help. They just want to do business and remove barriers to doing business (and this tells us who they are). Social liberals believe that societies must intervene and take action to ensure maximised utility, and tend to have a more concrete and pragmatic approach. Liberals are not interested in equality of outcome, although social liberals pursue equality of opportunity. Economic liberals are ready to abdicate all decisions to the marketplace and financial necessity (including problems like global warming).

When I set out to describe liberalism, I noted that in its time it was a radical and progressive response to centuries of unquestioned absolutist rule: tyranny, not just over the body, but also over the mind. Thomas Hobbes was a 17th Century transitional figure in that he raised questions about when citizens owe allegiance to a king. Soon, however, John Locke was asserting that liberty was innate in a person and not something that a king or slaver could grant; they could only take it away. However, it is important to keep in mind that classical liberals were all willing to take liberty away from some classes of people. John Locke argued for enslaving prisoners of war and for the expropriation of land and resources from Native Americans. J. S. Mill campaigned for women's suffrage, but also for continued British rule in India. Thomas Jefferson argued against slavery but personally owned hundreds of slaves throughout his life.

However flawed the early articulators may have been, the concept of individual liberty took hold and fired up revolutionary fervour in France and the USA. It also captured the imaginations of the British intelligentsia and the bourgeoisie created by the Industrial Revolution. This, in turn, led to the assertion of individual liberty as a social, moral, and legal principle. At first, this really only applied to the elite: women, for example, were initially excluded en masse along with people of colour. Indeed, the only group seen as capable of exercising liberty were landowners, the very people who (literally) enslaved and exploited everyone else. Gradually, a new breed of liberal emerged that wanted to universalise liberty and to use the apparatus of the state to help individuals achieve liberty where it had been denied to them. New or social liberalism happened earlier in the UK than in the USA, where exclusionary classical liberalism was and is a much greater force.

The central irony, then, of classical liberalism was its espousal of individual liberty, while denying liberty to whole classes of people. The neoclassical liberals, the ones who espouse an extreme form of economic liberalism, have that same exclusionary mindset. They are not quite the same as libertarians whose attitude is every man for himself. Rather, the economic liberals seek to appropriate and consolidate wealth and power, which they rationalise with a combination of twisted Darwinism and utilitarianism.

Pulling down the idols of absolutist rule by priests and kings, however, created a whole new set of problems. This essay is about the liberal response to this change and why liberalism goes about it all wrong.

The Better Angels Argument
There will be readers already wanting to argue, along the same lines as Steven Pinker, that liberalism has in fact been very successful: people have more prosperity, freedom, and peace than ever before. I've already made the point that there is liberalism and liberalism. Classical liberals created a vast amount of wealth but distributed it very unevenly. They moved a predominantly agrarian workforce into urban/manufacturing jobs, and thence into service industries, but job security and working conditions peaked in the 1970s. Since then uncertainty has crept back into work in the richest countries. There has been a decline in overall poverty, but this is largely because the elite are exporting jobs to poor countries and grooming the world's poor to be the consumers of tomorrow.

It is doubtful that consumerism really does improve anyone's life. The major long term cost of consumerism is climate and ecological breakdown. No matter the positive impact that modern economic liberalism has had, it is about to be wiped out by climate change. We are already seeing the rise in the frequency and intensity of extreme weather events, both droughts and floods. We really don't know what is going to happen as we continue to heat the earth's atmosphere, but we can safely say that it won't be good for the majority of humanity (or non-human species). We are also seeing a collapse in the flying (i.e. pollinating) insect population across Europe. Economic liberalism is not the better angel of our nature. In all likelihood it has killed us via pollution, extreme weather, sea-level rise, and mass extinction.

The ideas at the heart of liberalism, especially "reason" and "self-interest", have led us to existential crises of unimaginable scope and scale. In retrospect our behaviour has been irrational and suicidal.

Yes, of course, classical liberalism has promoted trade between nations making war less likely. Pinker is right about that. But the staggering cost of this is now apparent. Even in the short term, government has been captured by wealthy special interest groups. Based on research they helped to fund, oil companies are redesigning their offshore rigs to cope with massive sea level rise, while at the same time lobbying governments to prevent effective responses to climate change by undermining the credibility of that same science. Worse, we have been fooled this way before, by Big Tobacco, who took the same approach in the mid-20th Century. But also by Big Finance leading up to the 2008 global financial crisis. The "big lie" is a term coined by Adolf Hitler in Mein Kampf (1925) to explain how Germany became Europe's whipping boy. Hitler's idea was developed as an active PR strategy by his propaganda minister, Joseph Goebbels. And now, for Big Business and politicians, the big lie is part of their stock in trade.

It is social liberalism, rather than economic liberalism, that has done more for expanding enfranchisement, reducing slavery, objecting to wars, and generally making people free; and it is social liberalism that is (belatedly) coming around to trying to prevent and mitigate climate and ecological breakdown.

Political Divisions

Social and economic liberals are to some extent at odds over the role of the state in society. Economic liberals want to minimise the scope and power of the state to allow for business people to make unrestricted profits. Social liberals are willing to accept a larger scope for the state in order to give the poor a hand up (note this is not a hand out). It is this last point that distinguishes social liberals from socialists.

A social liberal wants the individual to prosper. They see that systems sometimes place barriers in the way of individuals and are willing to use the power of the state to level the playing field. A classic response to systemic inequality is state-funded, standardised education. But even education is tailored to finding a job and being a productive member of society. For liberals, the ideal is the self-made person, especially if they have come from humble beginnings. Someone who worked hard and overcame obstacles to become a success in material terms. It's no coincidence that the American Dream is couched in these terms.

Socialists, by contrast, do not see progress in terms of the progress of individuals. Progress is really only progress if everyone benefits equally. Socialists want not (only) equality of opportunity, but equality of outcome. They are more inclined to solve inequality by actively redistributing wealth towards the poor and using the state apparatus to inhibit wealth accumulation. The socialist ideal is more about making a contribution towards the flourishing of the nation, helping people who cannot help themselves.

Despite the ongoing use of the terms, modern politics is not about left-wing and right-wing any more. It is about the clash between economic (classical) liberals and social (new) liberals, i.e., between the right and the centre-right. Actual socialists are rare in Europe these days and absent in the USA. President Trump is centre-right in his economic policies (trade barriers to "protect" American workers are a classic social liberal policy). But he is strongly authoritarian, nationalistic, and racist, which is falsely attributed to the far right, but is in fact independent of the left/right axis.

Utilitarianism encapsulates some of the ideals of liberalism, mostly classical/economic liberalism. It prioritises the individual and undervalues context, particularly the social nature of humanity. Utilitarianism was framed by the mercantile class and thus expresses the ideals of economic liberalism as though they are self-evident truths. Like liberalism, it was formulated by members of the educated elite who were (unconsciously) seeking justification for their behaviour on the world stage, i.e. brutal and rapacious empire.

How did we get to this point?

The Collapse of Idols

One of the most important roles of the ancient kings and prophets was as law givers and moral arbiters: The Code of Hammurabi, the Laws of Moses, the Dharma of Manu, the Analects of Confucius, and dozens of others. And one of the recurring anxieties of history (down to the present) is that in the absence of obedience to such laws human beings would simply run amok. In the Western tradition this anxiety is manifested in Thomas Hobbes' highly influential book Leviathan. As I noted (On Liberty and Liberalism), Hobbes lived through a time of war and socio-political turmoil and as a result described the natural state of people as war. In Hobbes' opinion, the life of man in the absence of a tyrant to rule over them was "solitary, poor, nasty, brutish, and short". He therefore argued that only a strong authoritarian leader (the eponymous Leviathan) could impose order and civility on us.

Unfortunately, this view became quite widespread amongst Enlightenment intellectuals. They saw themselves as a uniquely civilised elite surrounded by a sea of barbarians (the rest of us). They probably read too much idealised Roman history and utopian fiction. They seem also to have been infected with a secular version of the "God's chosen people" myth. In particular, the classical liberals saw themselves as having risen above the hoi polloi. They saw themselves as establishing a new kind of rational social order, with themselves at the apex, that would displace the hereditary aristocracy and the Church.

However, they also saw themselves as facing exactly the same problems: how to protect the wealth and justify the power gained through exploitation; how to subdue and control the barbarians outside the gates and avoid the fate of the Roman Empire. The minds of the elite have been obsessed with this ever since. So on one hand they were wrestling with the problem of the collapse of traditional authority and on the other they were very much interested in taking over, exploiting, and expanding that authority. All they needed was some kind of justification that the masses would accept.

The New Idols

One of the products of the early successes of the Enlightenment was hubris. Natural philosophers, or scientists as they came to be known, promoted the idea that the universe was a gigantic complex mechanism and that everything could be understood. At this point they did not know that the visible stars are part of a galaxy, or that it was one of approximately two trillion such galaxies, or that all the galaxies are flying apart at an accelerating rate. They were only just starting to get clues that matter could be divided and subdivided into very much smaller units.

The mechanistic universe followed predictable patterns and was governed by simple laws: F=ma, V=IR, 2H2+O2→2H2O and so on. French philosopher Auguste Comte envisaged an anthropocentric hierarchy of such laws. Physical laws would govern matter at the lowest scale and give rise to chemistry. Chemical laws would govern biology. Biological laws would govern the functions of minds, and laws of the mind would govern how societies function. This idea was taken up and developed by none other than J. S. Mill, hero of Liberalism and also one of the first generation of Emergentists. Mill was the first to describe matter as having emergent properties.

The idea here is that the world is governed by natural laws. And the key feature of such laws is that they don't require human intervention; they are universal, impersonal, and follow logical patterns that can be apprehended through the use of reason. Anyone can discover the natural laws for themselves, though in the myth only the elite had the required education and intelligence (women were still excluded from the elite). These laws became the new idols, and those who could discover them the new priests. The Copernican revolution was accompanied by a social revolution.

Many of the natural laws we take for granted were discovered by the classical liberals and their friends. The individual who could discern the law was at the heart of this new idolatry; he was Nietzsche's übermensch. Various threads of modernist thought exploited the new emerging dynamic giving rise to some new archetypes, or at least new manifestations of what Nietzsche called the Apollonian and Dionysian archetypes. Most obviously "the scientist" discerns laws through observing nature and applying reason and logic; and "the artist" lives in contact with nature and discerns deeper truths through paying attention to their own subjectivity and through self-expression. Even the protestant plays the game, cultivating a personal relationship with god and allowing that to guide their actions. Of course, Luther predates Hobbes by about 150 years, but arguably the loosening of the mental shackles imposed by the Catholic Church opened the way to other changes that loosened the shackles imposed by kings; and thus to the deposing of both.

We are not all as pessimistic as Hobbes. Still, being social animals, humans intuit that their lives must be governed by someone and many believe that they could be that person. Replacing the all powerful law givers with the idea of natural laws was a genuine breakthrough. Potentially, it shifted the power and authority to an abstract, but still natural, third party outside the group. In the early days, it seemed that natural laws would resolve all of our differences and conflicts. Heady stuff.

Of course this was hubris and it did not pan out as the Enlightenment figures hoped it would. There are limits to knowledge, some of which may turn out to be absolute. And the natural laws that might govern our lives, the social analogues of the formulas of physics and chemistry, never really emerged. The increasing complexity inherent in the hierarchy of sciences meant that the Enlightenment project foundered and was eventually rejigged into modern science with its emphasis on uncertainty and statistics, and our ongoing failure to understand quantum mechanics or to reconcile QM with relativity. The early promise of materialism faltered and it fell by the wayside.

What did happen was a swindle: the ideology of liberalism was passed off as a natural law.

With the classical liberal ideas of the individual (i.e. the individual, wealthy, educated, white man) as the chosen one of the new order and their individual liberty (from the oppression of the masses) as the sine qua non, came the emergence of a new form of morality, i.e., the Utilitarianism of Jeremy Bentham and J. S. Mill. The pursuit of happiness was enshrined in the US Declaration of Independence as a fundamental right, though it took some time to abolish slavery and longer still to enfranchise women and the descendants of former slaves. Utilitarianism is framed in universal terms, but its founders still saw humanity in Hobbesian terms (i.e., in need of a good tyrant). In particular, the expansion of European imperialism shifted the narrative from a God-given right to rule over man and beast towards naturalistic arguments about survival of the fittest and the lack of fitness of some people to rule over themselves.

Rational Self-interest

If you hold individual liberty to be an inalienable right and also that there are almost no justifications for infringing that right (as did Locke and Mill), then there is really only one viable moral arbiter: each individual must decide for themselves how to behave. This is the reductio ad absurdum of Protestantism. But when human nature is vicious, aggressive, and acquisitive something ought to stand in the way of one human simply killing another and taking their stuff.

The early liberals believed that men, and more especially educated white men like themselves, were in a position to rise above (Hobbesian) human nature through the use of reason and become their own moral arbiters. They partly did this by sending boys to private schools where education in the classics combined with beatings, humiliation, and peer pressure either shaped them into members of the elite or broke them. This is still the preferred route for the children of the elite, though methods have changed somewhat. Private school boys (and now girls) subtly learn that they are better than the masses and destined to be a limb of the modern polypod Leviathan, i.e., the State. Heads of the UK government are once again old Etonians as they were before social liberalism opened the door to common people like Margaret Thatcher.

As we have seen, the early liberals adhered to a view of reason as:
"a specific conscious mental process by which we apply logic to problems and arrive at knowledge of the truth, which then guides our decisions." (We Need to Talk About Reason)
If a citizen could be persuaded by reasoned arguments to follow some basically civilising prohibitions on barbaric behaviour (like murder and stealing), then there was really no need for anyone else to get involved. Lawful citizens ought to be free to go about their lawful business (the word "business" in this cliche is no accident). But this reveals another problem created by the destruction of the pre-modern idols. In religion there is a point to being obedient, i.e., salvation. If salvation is off the table, what is the motivation for being lawful?

In moving away from the lists of banned or taboo actions typical of religious morality, utilitarian liberals had a problem. What was the point of behaving yourself? Their fixation on the individual strictly limited the possible answers to this question. In the end, the best they could come up with was the lame idea that being good led to happiness.
“Happiness is the sole end of human action, and the promotion of it the test by which to judge of all human conduct” (J. S. Mill. Utilitarianism, X: 237).

For any individual, a moral life will consist of maximising happy outcomes; while for a society, morality becomes the greatest happiness for the greatest number. This is known as the greatest happiness principle. It is just as banal as it sounds. I'm sure most people think happiness is important, even if we do not agree that it is the "sole end" of human action. The founding fathers of America placed the pursuit of happiness alongside life and liberty as fundamental rights that all men have. But what exactly is happiness? In general, and against the grain of millennia of religious thinking, utilitarians argued that happiness is pleasure.

You know you are in the twilight zone when people who argue that humans are rational and make rational decisions at the same time argue that the sole end of human activity is an emotion and that the promotion of an emotion is the test of whether our rational decisions have been a success.

Any fan of Star Trek (original series) can see this contradiction in the character of Commander Spock, who consistently denies having any emotions although, because he is half human, he sometimes does display emotions to his acute embarrassment. The obvious question is raised from time to time. Why does Spock show preferences at all? It is, he claims, because it is logical. But what kind of logic is he talking about? Deductive? Inductive? Abductive? How can these guide us to, say, a moral decision? Doesn't it all depend on what we believe in the first place? And as Michael Taft says, "a belief is an emotion about an idea." Of course the TV show plays on the contradictions in Spock. His colleagues are constantly catching him out expressing and relying on emotions and teasing him about it. At which point he always becomes visibly annoyed. Another emotion. Later Star Trek writers took the idea of an emotionless man even further in the form of Commander Data (who was more obviously crippled by his lack of emotions). But let us return to the main theme of utilitarianism.

Jeremy Bentham stated the happiness = pleasure argument quite crudely but, starting with J. S. Mill, utilitarians have refined the idea. Mill, for example, argued that the mere quantity of pleasure an action stimulated could not account for the goodness of that action. Indeed, Bentham's theory sounds like a justification of hedonism rather than a moral doctrine. Mill introduced a dimension of the quality of pleasure, arguing that some pleasures are better than others. In a sense Mill was just reflecting the elitism of his day which allowed the refined hedonism of the ruling classes, but frowned on and repressed the simple pleasures of the working classes. This class discrimination persists in the UK despite the muddling of the classes. Middle class British people, especially, like to mock both the elite and the workers. They are our satirists, though they often seem to target (other) celebrities rather than politicians. They are educated enough to understand the exercise of power, but excluded from wielding it, and contemptuous of those who are compelled by it (including themselves).

In the manner of philosophers, some responded that if there is a distinction in the quality of the pleasure then it implies something other than pleasure is involved: some being for this other thing (whatever it was) and some being against it. And so on. Philosophers are trained to argue without any sense of needing to make a contribution to knowledge: the ultimate liberal art.

A central plank in the economic theory associated with classical liberalism, especially associated with Adam Smith and his interpreters, is that "markets" create an invisible hand that steers the economy. The market here is an abstraction from market places. The idea of the market is attractive because it is presented in the form of a natural law; one that can order our lives for us, maximising utility and thus happiness. Markets also ensure fairness (which is why Alan Greenspan was reluctant to prosecute white collar criminals in the finance industry).

Smith's idea of the market is based on a crude understanding of supply and demand, where supply and demand is presented as a "law" (it really is not a "law"). In this view humans only make rational, self-interested decisions; humans are motivated to maximise their utility (i.e. pleasure) through their participation in the economic system. In the myth of supply and demand, a producer responds to demand by making more or less of their product. The market informs the producer about the level of demand via the price that people are willing to pay. And knowing the price of a commodity is tacitly equated to sufficient knowledge to operate in the marketplace rationally. When the cost of production exceeds the price people are willing to pay, production will fall. None of which is true!
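The idealised model described above can be sketched in a few lines. This is my own illustration of the textbook "law", not anything drawn from Smith or Keen; the linear curves and the parameter values are assumptions chosen purely to show how the idealised story is supposed to work:

```python
# A sketch of the idealised "law" of supply and demand critiqued above.
# Curve shapes and numbers are illustrative assumptions, not real data.

def equilibrium(a: float, b: float, c: float, d: float) -> tuple[float, float]:
    """Equilibrium price and quantity for linear curves:
    demand Qd = a - b*p, supply Qs = c + d*p.
    Setting Qd = Qs gives p* = (a - c) / (b + d)."""
    p = (a - c) / (b + d)
    q = a - b * p  # at p*, demand equals supply by construction
    return p, q

price, qty = equilibrium(a=100, b=2, c=10, d=1)
print(price, qty)  # p* = 30.0, quantity = 40.0
```

In this toy world a single equation fixes the "right" price, which is exactly why the model is so seductive as a natural law. The essay's point stands: a real economy, with millions of producers, products, and consumers, does not reduce to these two lines crossing.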

By the mid-1970s (at the latest) mathematicians had shown that supply and demand was not, and could not be, lawful in that simplistic sense. Supply and demand, even on the micro scale, does not work (the reasons for this are spelled out in detail in Steve Keen's book Debunking Economics). What is worse, in order to fit this micro idea to the macro economy, i.e. the economy of a state considered as a whole, economists have had to make a series of assumptions, each of which assumes that the previous assumption is true. A macro economy typically consists of thousands if not millions of producers and products, multiple levels of suppliers, and millions of consumers, and none of it following the idealised supply and demand law. In order to make the equations of macro-economics work, the economist assumes that an economy consists of one producer selling one product directly to one consumer.

Philosophy is Bunk

Let's cut short all this frivolous philosophising and call bullshit on utilitarianism. It might still dominate our society, but utilitarianism is not true. Not only have Buddhists been saying so for 2,500-odd years, but the Positive Psychology movement have confirmed it through empiricism: the pursuit of pleasure does not lead to happiness. And it is not hard to see why.

If I have one digestive biscuit with a cup of tea that is pleasant. A second biscuit may still be pleasurable, but if I keep eating at some point the same combination of texture and sweetness becomes unpleasant, no matter how much tea I wash it down with. It is quite possible to make oneself sick from eating too many biscuits: pleasure turns to nausea. Any addict will tell you that as you become accustomed to a certain level of stimulation it normalises. To get pleasure from it, you either need more of it or the same amount more often. As many experimental psychologists have attested, we rapidly become habituated to pleasurable sensations so that there is a diminishing return from the simple-minded pursuit of pleasure. Even the more sophisticated hierarchies of pleasures are bunk.

But more than this, there is the existential truth that every experience ends, and usually quite quickly, because it is dependent on attention, and attention wanders. No matter how much you enjoy an orgasm, it is short lived and soon over and you are back to casting about for some other source of pleasure. The cessation of pleasure itself is unpleasant. If pleasure is happiness then no one can ever be truly happy.

It is difficult for a proponent of self-interest to formulate any coherent moral doctrine since morality is about how we treat other people. Anyone who treats other people only according to their own best interests would in practice be regarded as monstrous rather than moral. Societies very often shun selfish people. Morality is relational or it is meaningless. Even consequentialist or virtue ethics have to define what counts as good/bad in relation to some standard and that standard has to be relational.

Morality is Relational
Fortunately, we don't have to waste too much time on utilitarianism: the pursuit of pleasure makes us unhappy and the pursuit of self-interest makes both us and other people unhappy. At this point we could ask two questions: Firstly, is there a better definition of happiness that could rescue utilitarianism? Secondly, is there a better basis for morality? Ultimately, the answers to these questions are "no" and "yes", but I think it's interesting to dwell on the first question a little and here Buddhism has a small contribution to make.

What makes us happy?

Assuming that a human being has food, water, and shelter, what makes us happy is the company of other humans (and some domesticated animals). This is far and away the most important facet of wellbeing. We are social animals and we are happy when we are securely embedded in our social group. In general, our well-being is promoted by sublimating our own needs to serve the group.

Primate social groups are not utopian, and especially they are not a socialist utopia. Primate groups are all hierarchical and all primate groups are violent compared to human groups. They are held together by empathy and a keen sense of reciprocity (including the fear of violent retribution), but there are also well-liked individuals who form coalitions to dominate the group, though usually in a narrow sense. An alpha male chimp has primacy when it comes to mating with receptive females, but not in much else. He's also expected to lead the charge against leopards, get involved in all intra-group conflicts (on the side of the weaker party), and has to spend much more of his time grooming other chimps than any other member of the group.

Violence also plays a part in primate social hierarchies. Having members of the group who are big enough, strong enough, and aggressive enough to protect the group from predators like leopards, and who defend the territory in which they feed against neighbouring groups, requires social mechanisms to manage that capacity when it is not needed. And thus Frans de Waal has observed older male chimps intervening to prevent conflicts between others, defusing tension through physical contact and grooming. When things do get out of hand, the alpha male is often the one consoling the injured party and reintegrating them into the group. In Bonobos, females play the same role. Conflict cannot simply go on occurring because it breaks down the bonds that hold the group together and undermines the fitness of the group to survive.

So on one level what makes us happy is to be part of a healthy social group. Of course we are also individuals with individual goals. The anonymity afforded by living in groups of tens of thousands and even millions, gives us much more scope for individuality than living in a traditional village of ca. 150 people. We only share minimal mutual obligations with strangers and even if our actions are scrutinised there may be few consequences for transgressive behaviour compared to a more traditional small-scale setting. However, there is another answer to this question that we should consider.

Ego Dissolution

I cannot speak to this from personal experience, but there is a load of anecdote and an increasing amount of actual evidence that the experience of ego-dissolution opens up the possibility of a much deeper sense of satisfaction and well-being. Indian meditation techniques have been inducing ego-dissolution for millennia, as has the use of psychedelic drugs. Importantly, ego-dissolution is often accompanied by a greater sense of interconnectedness. It can be experienced, for example, as a weakening of the boundaries between one's body and the outside world; or as a sense of oneness; or of merging into a totality. How it is interpreted is partly determined by one's cultural conditioning.

Transcending the sense of being an individual seems to be a more satisfying state. Self-interest cannot have much meaning for someone who does not organise their experience around a sense of self.

There is an argument, still largely doctrinal, that it is the ego, which seeks to take ownership of experience, that causes unhappiness. Even a temporary experience of ego-dissolution opens up the possibility of being in the world without the grasping after experience that causes dissatisfaction. Ordinary experiences, not even especially pleasurable ones, become more satisfying, and effortlessly so.

However, it is doubtful whether ego dissolution is a realistic possibility for the general populace. The people having this shift in perception have always been a tiny minority.

The Angels of Our Nature

A new approach emerges from evolutionary perspectives on the ethology of social animals. I have written several long essays on this subject, beginning with The Evolution of Morality. This view argues that what we call morality is an emergent feature of the way social mammals, particularly social primates, live.

As Frans de Waal has noted, we share the same body plan and have all the same internal organs, including the endocrine system, as other mammals, so it would be weird if we did not experience the same emotions. Importantly, we seem to have at least two characteristics in common with other social mammals: the ability to experience empathy and a sense of reciprocity.

Empathy operates on many levels, the most basic of which is emotional contagion. When a monkey sees an approaching predator and gives a warning cry, the sound of the cry stimulates fear in the whole colony and sets them all in motion away from the threat. But at its most sophisticated level empathy allows us to use observations of the facial expressions, posture, and gestures of other group members to internally model—and thus experience—the internal states of other individuals. We do this with individuals that we interact with, but we can also understand interactions between other pairs or groups of individuals. We not only know the disposition of a given individual, but we know how they feel about different members of the group. This is vital for the functioning of the social group.

Reciprocity is the application of this ability to know the minds of the rest of our group to keeping track of the contributions the group have all made to each other. The levels of mutual grooming between individuals are important to chimps for example. If everyone sees Steve and Dave grooming each other a lot, then we can safely assume that the two of them will stick together in a fight. So if I want to pick a fight with Steve, I need to wait until Dave is otherwise engaged. But equally, if I'm angling to be alpha, then I know that if I groom one of the pair, the other might also join my coalition.

In my account of the evolutionary origins of morality, I argued that, far from being selfish, social mammals must err on the side of generosity. We can think of reciprocity as a network of feedback loops. I share with you and you share with me; I withhold from you and you withhold from me. If there were no bias towards generosity, the second, negative feedback loop would quickly reduce cooperation to zero, whereas social mammals are in fact highly prosocial and highly cooperative.
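This dynamic can be sketched with a toy simulation (my own illustration, not part of the original argument): two agents who, each round, return a fraction g of what they last received. Any g below 1, i.e. even slightly stingy reciprocation, drives cooperation towards zero, while exact or generous reciprocation sustains it.

```python
# Toy model of reciprocity as a feedback loop (illustrative assumption:
# each agent simply returns a fraction `g` of what it last received).
def simulate(g, rounds=50, start=1.0):
    """Final contribution level after repeated rounds of reciprocation."""
    a, b = start, start
    for _ in range(rounds):
        a, b = g * b, g * a  # each reciprocates the other's last gift
    return a

stingy = simulate(g=0.95)   # returning slightly less than received
matched = simulate(g=1.0)   # exact tit-for-tat reciprocation
print(f"stingy: {stingy:.3f}, matched: {matched:.3f}")
# prints: stingy: 0.077, matched: 1.000
```

Note that even exact reciprocation only holds cooperation steady; any loss or lapse tips it into decline, which is why a standing bias towards generosity (g slightly above 1, bounded by available resources) is needed to keep the loop positive.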

What's more, we also know from primate ethology and from anthropology that societies often punish selfishness. Jared Diamond recounts the story of a fisherman who one day decided that he wasn't going to share his catch. Not only did the community respond very negatively, but he also acquired a reputation for stinginess that continued to affect him long afterwards. A reputation for meeting one's mutual obligations within a social group is very important. After all, we evolved to be prosocial in order to better survive.

This approach to morality comes under the heading of deontology: it concerns rights, duties, and obligations. Sometimes deontology is caricatured as "rule following", but this is an over-simplification. We can still think of morality in terms of consequences (as the utilitarians did), but we understand that desirable and undesirable consequences are relative to our mutual obligations. Similarly, this does not prevent us from seeing morality as a matter of virtues, as long as we understand that a virtue is defined in terms of mutual obligations. Generosity is a common virtue, for example, and it is a virtue because it plays a vital role in creating and maintaining mutual obligations.

One might even argue that this view is also consistent with a particularist account of morality, i.e. one in which there are no moral rules and we take each situation as it comes. On this deontological approach, rules may not be easy to articulate or apply because our commitment to mutual obligation varies from group member to group member. We tend to have one set of rules for family and another for more distant group members. In other primate groups, familial relations are also important, though, as with us, one or the other sex will often leave home at sexual maturity and join another community (male chimps and female bonobos).

Cities and Megacities

As I have already observed, the limitation of this account of morality as evolved and based on mutual obligation is that the bulk of humans now live in urban settings in which we are surrounded by, and mainly interact with, strangers with whom we may have little or no sense of mutual obligation. According to Robin Dunbar's research on social group size and neocortex ratios, there is a physical limit to how many relationships of mutual obligation we can keep track of. Chimps live in groups of 30-50 while, other things being equal, humans tend to live in groups of around 150. But we also form looser groupings of around 500, 1500, 5000, and so on.

In a group of up to 150 we have a pretty good idea of the overall structure of mutual obligations amongst the group: we know who are friends or enemies or lovers; who is related to whom and how; we know whom to ask for help; and we know who is where in the social hierarchy. And so on. Beyond this we begin to take membership on trust. We rely more on external emblems of membership, such as personal adornment; these allow us to expand the circle within which we trust that an individual is likely to meet their obligations to us.

If I am from the large tribe who paint their faces with red ochre and I meet a stranger whose face is painted with red ochre I can assume that they will be likely to interpret mutual obligations in the same way that I do. I don't have to worry too much about a false flag operation, because we kill any outsider who attempts to adopt our emblems and we have subtle ways of assuring ourselves of the authenticity of membership (shibboleths). Mutual membership of a large tribe means we will probably speak the same language and have the same worldview. I can trust this person and make agreements with them with some assurance that they won't break the agreement. This is still a relatively small world. 

Beyond this, when dealing with strangers, we don't automatically have a relationship of mutual obligations. One of the main functions of governments (of all stripes) is the enforcement of contracts between strangers. And this brings us back to the need for laws that govern our behaviour. We need laws to govern behaviour, but not because Hobbes was right that our natural state is war. Rather, we may say that, in social primates, mutual obligation is only strongly experienced within one's social group at the 150 layer.

In a modern state, we grow up understanding that we have a mutual obligation to the state. We obey laws and pay our taxes and, in return, the state attempts to create an environment that is safe and stable, and the state provides certain services. Importantly, the state seeks to balance the rights and duties of players who have differing amounts of power, to prevent the exploitation of the weak (this is the classic alpha-primate role). So the state legislates the rights and duties of buyer and seller, landlord and tenant, employer and employee, and so on. Each state may have a different take on these rights and duties, and they may change over time, but the role is the same.

Philosophers like to use the fact that laws and conventions vary between societies and states to argue against seeing morality in a unified way. But arguments for moral relativism are not tenable when we look at the structure and function of laws. Yes, to some extent laws are arbitrary and changeable, but they always serve the same functions within a society. It is a classic case of emergent properties, in which the higher level (human society) is constrained but not determined by the lower level (primate ethology). By analogy, the fact that there are many different types of boat does not mean that boats don't float.

However, we may say that, under utilitarianism as an expression of economic liberalism and mercantilism, the rights of the rich and powerful tend to be protected ahead of those of the poor and vulnerable. I am still sometimes shocked at the difference in presumptions between New Zealand, where I grew up, and the United Kingdom, where I live. The presumption in favour of landlords and employers, for example, is much stronger in the UK. Although, to be fair, under the influence of neoliberalism this balance shifted during my time in New Zealand as well.

One of the features of the modern world is that morality is linked to socio-economic status in the minds of the ruling classes. To be wealthy is held to indicate superior moral qualities while, conversely, to be poor or without work is to be considered morally inferior. These days, accepting state assistance when out of work requires that one almost become a ward of the state. The state undertakes to oversee your redemption in the form of a return to productive work. And to do this it uses a mixture of rewards, punishments, and psychological "nudges", including a barrage of press releases from government departments on the moral qualities of those individuals who accept state help. This is social liberalism in operation: paternalistically trying to make you into an ideal individual.


Utilitarianism is ubiquitous as a moral theory across the English-speaking world. And yet its assumptions are demonstrably false, its goals are known to be unreachable, and the methods it proposes do not lead to those goals. Utilitarianism was supposedly Enlightenment rationalism's contribution to moral theory, but it turns out to be completely irrational.

It's not that at some point people abandoned irrational religion and dedicated their lives to rational pursuits. As we've seen, the classical notion of reason on which the moral theorising of Bentham, Mill, and Smith was based was a fantasy. Many intellectuals did abandon religion, and atheism is now the standard position for the English-speaking intelligentsia, but there is nothing rational about the beliefs they now profess with respect to morality. This is nowhere more apparent than in the BBC Radio 4 programme The Moral Maze, in which the same group of opinionated neoclassical liberal intellectuals argue with invited guests about the "morality" of some situation. Moral principles are never articulated, but utilitarianism is assumed throughout. The panellists adopt a position of superiority to their (often expert) guests and devise arguments against everything that is said. No wonder morality becomes a maze for the rest of us.


One problem is that not all social rules are moral rules. As Sangharakshita pointed out many years ago, for example, most of the rules in the Theravāda Vinaya have no moral significance at all and are merely a matter of etiquette. In her book Watching the English, the anthropologist Kate Fox describes the complex rules for queuing to buy a drink in an English pub, which differ significantly from the rules for queuing in other contexts. Generally speaking, the English take queuing very seriously, so doing it wrong can result in verbally expressed disapproval. Such mutually agreed rules of conduct—etiquette—both help to establish reasonable expectations and help to identify strangers. One of the reasons we may be stressed by immigrants is that they have not internalised our etiquette (as an immigrant I still struggle with this and cause stress for the locals, sometimes with a certain amount of delight on my part).

Why do we separate out moral rules and etiquette? Largely because of religion: moral rules are those which relate to soteriology. But since atheists have abandoned the notion of soteriology, why have we not abandoned talk of morals? Why do some decisions have moral connotations and others not? Why, for example, does editing a child's genome using CRISPR/Cas9 technology seem to be a moral issue, while using food to calm a child down (leading to obesity and the attendant health problems) does not? One is a matter of public debate and the other a matter of personal choice. Again, I think answers to such questions can be found in primate ethology. There seem to be rules of human conduct that are non-arbitrary and ubiquitous across human groups (such as the prohibition on killing a member of the group) and others that are arbitrary and vary without limit. And I think the deciding factor is the effect on the overall health of the group.


The question, then, is whether there is any alternative to the moral maze created by treating liberalism in its various forms as natural law, with utilitarianism as one expression of it. Or does rejecting this inevitably lead to objectionable moral relativism? I've hinted that I believe primate ethology and a structuralist approach offer us some relief. In this view, despite the plethora of human societies, each with its own rules, the purpose of having such rules is shared at some level of structure. As long as the rules accomplish the deeper purpose of binding the group and enabling cooperation, it does not matter what form those rules take at a higher level of organisation. Indeed, to some extent the forms that human societies take are constrained by our underlying membership of the set of social primates. And these constraints have some objective basis, i.e. empathy and reciprocity.

In my next essay in this "We need to talk about" series, I'm going to revisit this whole topic from the point of view of objective morality. It is often said that science cannot tell us how to behave, but I think this is now self-evidently inaccurate. Science, particularly the kind of Darwinian evolution articulated by Lynn Margulis, has been very influential on how I see morality, as has the primate ethology of Jane Goodall and Frans de Waal. Science disproves the validity of utilitarianism as a good basis for morality. Evolution and ethology have opened up a whole new way of thinking about what morality is, how we evolved to be moral, and what forms morality can or should take in human societies at different levels of organisation (as well as informing us as to the nature of those levels).


See also: Cooperation with high status individuals may increase one's own status https://phys.org/news/2019-08-cooperation-high-status-individuals.html

"The finding that status depends on cooperation provides insight into why human societies, particularly small-scale societies like the Tsimane, are relatively egalitarian compared to other primates," says von Rueden, joint lead author of the study. "Humans allocate status based on the benefits we can provide to others, often more than on the costs we can inflict. This is in part because humans evolved greater interdependence, relying on each other for learning skills, producing food, engaging in mutual defense and raising offspring."