
23 October 2015

Reality. Again.


There's a lot of talk about reality in Buddhism. Buddhists will often claim that meditation gives a person direct access to reality, or knowledge of reality. I've come to see that these claims are bunk. Part of the problem is that our Iron Age predecessors introduced a term, yathābhūta-ñāṇadassana, which is taken to mean "knowledge and vision of things as they are". Now these Iron Age predecessors were not seeking knowledge of reality by any definition of that word that might be relevant to a modern reader. They sought knowledge of the origins and ending of suffering, where suffering was primarily experienced through repeated rebirth into the world. They claimed to have knowledge of how the rebirth process works and the conditions which lead to suffering. At no point did they claim knowledge of reality as we understand that term.

One of the problems we have in the philosophy of science is that classical mechanics is a description of reality as we experience it, but quantum mechanics is not a description of any reality we could experience. Of course we can experience the classical consequences of quantum phenomena - the scattering pattern of the two-slit experiment, for example, is a classical consequence of the quantum phenomenon. But we do not experience the phenomenon inferred by quantum mechanics (a photon passing through both slits simultaneously and interfering with itself). All the clever people do is work out mathematically how to make the same result appear in their calculations. Sure the calculations are accurate, but they have an entirely uncertain relationship to reality. Given that we have an equation that is accurate at predicting the classical consequences of quantum phenomena, it is tempting to think we have a map of some hidden territory. But nothing could be less certain than this conclusion. No one understands the reality that quantum mechanics describes, however good they are at fiddling with the parameters to create classical consequences. Reality at that level is a black box and likely to remain so forever.
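For what it's worth, the classical side of this really is perfectly computable: the fringe pattern follows from simple wave superposition. Here is a minimal Python sketch; the wavelength, slit separation, and screen distance are illustrative values I've chosen, not figures from any particular experiment, and the single-slit diffraction envelope is ignored.

```python
import math

def two_slit_intensity(x, wavelength, slit_sep, screen_dist):
    """Relative intensity at position x on the screen, from superposing
    two coherent waves (far-field, small-angle approximation, ignoring
    the single-slit diffraction envelope)."""
    delta = slit_sep * x / screen_dist          # path difference
    phase = 2 * math.pi * delta / wavelength    # phase difference
    return (1 + math.cos(phase)) / 2            # normalised to [0, 1]

wavelength = 500e-9   # 500 nm light
slit_sep = 1e-4       # 0.1 mm between slits
screen_dist = 1.0     # screen 1 m away

# Bright fringes recur wherever the path difference is a whole wavelength.
fringe_spacing = wavelength * screen_dist / slit_sep  # 5 mm

print(two_slit_intensity(0.0, wavelength, slit_sep, screen_dist))  # 1.0, central bright fringe
print(two_slit_intensity(fringe_spacing / 2, wavelength, slit_sep, screen_dist))  # ~0, first dark fringe
```

The point being: predicting the classical consequence is a few lines of trigonometry; saying what the photon "really does" between the slits is another matter entirely.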

We have much the same problem with General Relativity. These days we wonder how stupid people must have been to think that the sun goes around the earth. But just by looking at the sun it is very difficult indeed to see that it does not. The ending of the geocentric worldview was not brought about by insights into the sun, but into the planets. It was the realisation that the planets orbit the sun, not the earth, that made us question the geocentric model. General Relativity tells us that there really is no force of gravity. The reality is that masses cause spacetime to curve in around them. We know from Newton's First Law of Motion that masses travel in a straight line unless some force acts upon them. So when we see an object moving in a curved path we naturally conclude that some force is acting on it. If we throw a ball it travels in a curve (a specific kind of curve known as a parabola) and falls to the earth. As one of my Buddhist teachers once said at a public meeting, "Gravity is a larger mass attracting a smaller one". That's completely wrong of course, and attracted gasps of horror from the Cambridge audience (there was more than one physics PhD in the room!). But it does describe the experience. Two tennis balls don't attract each other the way that the earth and a tennis ball do. The fact that the earth moves an infinitesimal amount towards the tennis ball is obscure because the effect is too small for us to measure, let alone see. Experience suggests that objects are attracted to the earth in a way that they are not attracted to each other.

That's just how it seems. But it is not the case at all. Henry Cavendish's cleverly designed torsion-balance experiment showed that small masses do attract each other gravitationally, but that the force is very tiny because the masses involved are so small. Even so, all this is incidental because Einstein's theory tells a different story. General Relativity tells us that the reason the ball follows a curved path is that spacetime is strongly curved near the surface of the earth. The ball is doing its best to obey Newton's First Law of Motion and travel in a straight line. What it "discovers" is that there are no straight lines near the earth. So the ball follows the curvature of spacetime, which happens to be in the shape of a parabola.
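The Newtonian version of this is easy to verify numerically. A minimal Python sketch (the launch speed and angle are arbitrary illustrative values) showing that a ball under constant downward acceleration traces exactly the parabola you get by eliminating time from the equations of motion:

```python
import math

def trajectory(v0, angle_deg, g=9.81, steps=100):
    """Positions of a projectile under constant downward acceleration,
    from the closed-form Newtonian solution."""
    angle = math.radians(angle_deg)
    vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
    t_flight = 2 * vy / g  # time to return to launch height
    return [
        (vx * t, vy * t - 0.5 * g * t ** 2)
        for t in (t_flight * i / steps for i in range(steps + 1))
    ]

# Eliminating t from x(t) and y(t) gives
#   y = x*tan(angle) - g*x^2 / (2*v0^2*cos(angle)^2)
# which is quadratic in x: a parabola. Check a point against the formula.
pts = trajectory(20.0, 45.0)
x, y = pts[50]
angle = math.radians(45.0)
y_formula = x * math.tan(angle) - 9.81 * x ** 2 / (2 * 20.0 ** 2 * math.cos(angle) ** 2)
print(abs(y - y_formula) < 1e-9)  # True: the flight path lies on the parabola
```

In Einstein's picture the same parabola appears for a different reason: it is the straightest available path through curved spacetime.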

Try as we might we cannot see spacetime. We know it must exist precisely because of things like light propagating through space, and the path of light being bent near masses (light is itself massless, so in Newtonian terms there should be no gravitational interaction at all). There is an enormous body of evidence which makes us quite certain that we have understood spacetime under normal circumstances. However, the theory itself must be incomplete because it breaks down at the Big Bang. The maths says that at the Big Bang, the dimensions of spacetime were all zero, implying infinite density. My many physics teachers over the years always emphasised that when your calculation produces an infinity, you have done something wrong and must go back and check your working. Reality does not contain infinities, or if it did then everything would be incomprehensibly different from how it is. Either way, we do not understand the Big Bang because it involves infinity.

No amount of meditation and insight is going to directly help us with these problems. One can imagine that meditation and insight might help a physicist or mathematician in their work, but, on the whole, the two projects are completely unrelated. The experience of rarefied mental states does not shed light on reality. So what does it shed light on? Experience. It ought to come as no surprise that what we gain insight into when we examine our experience, is experience itself. I'm more and more convinced that specific types of meditative experiences are what the Buddhists were aiming at. They come under the broad heading of emptiness. Of course, there are many kinds of experience that one can have in meditation. Some are incidental or spurious and others profound. But in terms of the liberating insights said to end rebirth or being (bhava) I am beginning to focus my attention on those states in which there is no content: no sense experience and no normal mental experience, and yet still some kind of experience. I first noticed this in 2008 in an essay called Communicating the Dharma:

Further there are sensations associated with desire (chanda), thinking (vitakka) and with the perceptions (saññā). Sensations are present in all the combinations of presence or absence of these three. When they are all absent something new arises that is simply described as stretching out for (āyāmaṃ) the attainment of the as-yet unattained (appattassa pattiyā), and finally there are sensations associated with this.
It is in these states of emptiness that one has a kind of transformative experience that reorganises the psyche and the relationship with sensory experiences. From the same essay:
The Buddha here is saying something quite profound - that if one looks beyond mundane everyday experiences, if one can put aside desire, intellectual twisting and turning, if one reaches beyond the normal scope of consciousness - then one finds not annihilation, but something as yet unattained. 
And I think it is this kind of experience that is being described or discussed in the Perfection of Wisdom texts. Consider, for example, this abstruse discussion between Subhūti and Śāriputra from the first chapter of the Aṣṭasāhasrikā-prajñāpāramitā (Chp. 1, para. 7, my translation).
Then indeed, Elder Śāriputra said this to Elder Subhūti, “Still, Elder Subhūti, does that mind which is without mind, exist?” 
That said, Elder Subhūti said this to Elder Śāriputra, “With respect to a state of being without mind (acittatā) can existence (astitā) or non-existence (nāstitā) be found or obtained?”
Śāriputra said, “This is not [the case], Elder Subhūti!” 
Subhūti said, “If, Elder Śāriputra, existence or non-existence are not found or obtained there in the state of being without mind, is the question, 'Does that mind which is without mind, exist?' appropriate for you Elder Śāriputra?”
When that was said, Elder Śāriputra said this to Elder Subhūti, “So what is this state of being without mind, Elder Subhūti?” 
Subhūti said, “Śāriputra, the state of being without mind (acittatā) is immutable (avikāra) and does not falsely distinguish (avikalpa) [between real and unreal].”
This is one of several passages in the Aṣṭa that are reminiscent of the Kaccānagotta Sutta (SN 12.15) in denying the applicability of the ideas of existence & non-existence or real & unreal in discussions about Buddhism. These concepts do not apply to the world of experience.

Following D T Suzuki and Edward Conze, we usually take this kind of self-negating language in the Perfection of Wisdom texts to be an attempt to confuse the rational mind, in accordance with Romantic anti-intellectualism. Romantics believe that ultimate truth comes from the inner spirit rather than from the intellect. The idea here is that by tying the rational mind up in riddles, the spirit can assert itself. Apart from the fact that Romantic interpretations are all dualistic and eternalistic, and thus grossly false by most Buddhist standards, this procedure is akin to banging one's head against a brick wall in pursuit of wisdom. Treating the entire Prajñāpāramitā literature as a gigantic koan is simply a mistake. Just because Suzuki and Conze were confused does not mean that confusion is the only possible response to these texts. 

On the other hand, if we assume that the context of the dialogue is two master meditators trying to articulate the experience of emptiness, in the sense of contentless meditative states, then we can stop banging our head against the wall. I don't claim to have unlocked the language of the text, but I am hopeful that abandoning Conze's awful translation and re-reading the text as though it makes sense will be fruitful. Compare this to my comments on Paul Harrison's work on the comprehensibility of the Vajracchedikā-prajñāpāramitā. Part of my optimism stems from several essays written about emptiness by my colleagues in the Triratna Order, one of which is available for public consumption (the others are embargoed as they are part of an in-house discussion in the Order; I am trying to encourage my colleagues to make their work more widely available). Satyadhana's essay The Shorter Discourse on Emptiness (Cūḷasuññatasutta, Majjhima-nikāya 121): Translation and Commentary, in the Western Buddhist Review, gives us a flavour of the discussion. It seems to me that there are important continuities that have yet to be explored, but which promise to shed a great deal of light on the intentions of the Prajñāpāramitā authors.

Buddhists often assume that because Quantum Mechanics and Emptiness are both confusing and reputedly profound, one must shed light on the other. I've done my best to debunk this fallacy in two previous essays (Erwin Schrödinger Didn't Have a Cat and Buddhism and the Observer Effect in Quantum Mechanics). The reality that we struggle to understand through the abstruse mathematics is not the same as the reality that we seek to understand through religious exercises. The mistake seems to rest on a misunderstanding of what the word "reality" means. Scientists and meditators use the word in ways that are almost entirely unrelated.

~~oOo~~

This essay is partly inspired by a series of essays on the blog a filosofer's thots, starting here: Bohr’s reply to EPR (Part I) spotted in the Twitter feed of @seancarroll.

18 September 2015

The Failure to Communicate Evolution


EVOLUTION IS IN THE NEWS a lot these days. Buzzy scientists, like waspish Richard Dawkins, make stinging attacks on Creationists, who respond in kind: The God Delusion versus The Dawkins Delusion. In the US something like a pitched battle is going on in some places, where creationists want to replace science in schools with a literal reading of the Bible.

When evolution is a self-evident fact, and I think it is, why are so many people unconvinced by it? Building on my work on the psychology of belief, I'd like to use the problem of communicating evolution, or more precisely the problem of failing to communicate evolution, as a case study.

In my essay Facts and Feelings I set out my take on Antonio Damasio's model of how we process new information. Presented with some new item of information, we evaluate the likelihood that it is true. As per Justin Barrett's theory of belief, discussed more recently, we make these decisions based on fit with existing non-reflective beliefs. In any situation we will usually have a range of facts (items of information we consider to be true) and we have to judge which information is relevant to the situation, and which information takes precedence in determining our course of action. I called this salience. Not everything that makes sense is salient; and not everything that is salient makes sense. 

A few hundred years ago in Europe, everyone knew that God created the world and this seemed to make sense to the vast majority. It was also deeply salient because the existence, omnipotence and omniscience of God were always important factors in understanding any situation and deciding how to act. The Church was the final authority on these matters and had adopted an earth-centric model of the universe. All the "heavenly" bodies, the sun, moon, planets and stars, orbited the earth. And then the situation began to change. Astronomers observed, for instance, that the orbits of the planets were very difficult to explain if they orbited the earth and simple if they orbited the sun instead. And the orbits were ellipses rather than perfect circles. They saw that some "stars", visible only with a telescope, orbited not the earth or the sun, but Jupiter (the moons of Jupiter). Old sureties began to break down. Scientific Empiricism started to come into its own. Knowledge based on closely observing the world began to supplant knowledge gained through abstract or theological speculations. Astronomers, using nothing but simple telescopes and patient observation, changed how we see the world and our place in it. Later, with more sophisticated telescopes, they introduced more paradigm changes. Now we know that our sun is an average, nondescript star in a fairly ordinary galaxy. One star out of 100 billion stars, in one galaxy out of 100 billion galaxies. Of course some of this knowledge is inferred. But the whole package has been observed so often that there can be no doubt that this is the case. It's as obvious a fact as that Cambridge is a town (population of about 120,000) in the United Kingdom, a country of population ca. 65 million.

A simple view of this change is that this shift in our understanding happened simply because the empirical knowledge was more true than theology. But my model suggests that it must also have been more salient to the people concerned. Why was astronomical knowledge more salient? I'm no great historian, but it seems to me that the Roman Catholic Church was starting to lose authority at around the same time. Martin Luther died in 1546. The key figures of the astronomical revolution were Nicolaus Copernicus (1473 – 1543), Tycho Brahe (1546 – 1601), Galileo (1564 – 1642) and Johannes Kepler (1571 – 1630). The concerns that led to the forming of Protestant churches probably helped to provide an environment in which the observations of astronomers would be taken more seriously. The world was changing in other ways as well. Christopher Columbus (1450 or 51 – 1506) and Hernán Cortés (1485 – 1547) were busy expanding the Spanish Empire and enriching Spain immeasurably around this same time, while Ferdinand Magellan's expedition circumnavigated the earth. This was also the time of Leonardo Da Vinci and Michelangelo, the beginning of European involvement in slavery, and so on. The Renaissance was in full swing, and along with it came the rediscovery of ancient Greek Humanism.

Truth is relatively simple considered alongside salience. What makes a truth salient is tied up with psychology, culture, and politics. I will argue that the problem of evolution is complex: the truth of it is not self-evident to many; there is massive competition in terms of salience; and there has been a failure of empathy in communicating evolution. 


Evolution

Empiricism, science, has progressed in leaps and bounds since the 17th Century and the telescope. One of the great milestones in the progress of knowledge about the world was the publication of On the Origin of Species by Charles Darwin in 1859. Of course this book did not, in point of fact, explain the origin of species, nor did it speak of "evolution", but Darwin subsequently did write about evolution and his name became synonymous with the theory. It was to be almost a century before a plausible theory emerged of the origin of the variation upon which natural selection worked. It came with the discovery of the structure of DNA by Crick, Franklin, Watson & Wilkins, and the subsequent identification of sections of DNA called genes, which encode the structures of proteins. With this, a new, more complete Darwinism was born, which explained both variation and natural selection at the level of genes. 

The theory which combines genetics with Darwinism is sometimes called NeoDarwinism (a term sometimes used pejoratively). NeoDarwinism is often referred to as The Theory of Evolution, but it really should be A Theory of Evolution. In fact I do not think it is the best explanation for the emergence of new species, nor is it a complete description of heredity and variation. Recent discoveries in epigenetics have forced a reconsideration of the NeoDarwinian account of genes. Genes are not passive carriers of information; rather, the genome as a whole actively responds to the environment. For example, the amount of food available in one generation can affect how genes are expressed in a subsequent one. Also the genome of our symbiotic microbiome contains far more genes than our own and can strongly affect our bodies, to the point where it has been called our "second genome". Study of the interactions between us and our symbionts has been slowed by the dominance of the NeoDarwinian view, which tends to see everything in isolation. This reduction of heredity to the "selfish gene" was what prompted me to refer to Richard Dawkins' popular explanation of genetics as "Neoliberalism applied to biology". In fact Neoliberalism is libertarian and utilitarian in character and these are both class-based ideologies. (See The Politics of Evolution and Modernist Buddhism).

In my view the best explanation of the origin of species is one with almost as long a pedigree but one which, though having greater explanatory power, is less fashionable. The Theory of Symbiogenesis is closely associated with the late Lynn Margulis, whose seminal 1967 paper, under her married name Lynn Sagan, On the Origin of Mitosing Cells (note the implied connection with Darwin in her title) showed that mitochondria were once free-living bacteria. However well known this idea is today, it was originally rejected by the mainstream, and Margulis's ideas were marginalised. Margulis saw evolution as "community ecology over time", as a process which included elements of competition and war amongst species or genes, but was primarily driven by elements of cooperation, symbiosis, and combination. I agree with her assessment that Darwinian evolution, with its basis in metaphors of war and later selfishness, appealed to male scientists more than Symbiogenetic evolution, which appeared too feminine.

However we describe the mechanism, it seems clear that species evolve from common ancestors, that all life currently found on earth has a common ancestry, and that the process of life evolving has occurred over thousands of millions of years. No other explanation can fit all the facts. And yet some religieux, particularly fundamentalist Christians, refuse to accept these facts. Some Christians maintain that the Bible is a factual account of the history of the Earth. Why is this belief so tenacious? How can such people refuse to believe in evolution? I think there are a number of reasons: weaknesses in the theories that leave loopholes; a failure to create appropriate salience; and a failure to establish an empathetic connection.


Loopholes

Theoretically, an infinite number of monkeys working over an infinite time span would eventually reproduce Tolstoy's novel, War and Peace, by accident. In any finite time, however, the probability of producing a novel by random typing is so small that it might as well be zero. But this deeply counter-intuitive idea is central to NeoDarwinism. In this view random mutations are the source of variability, and survival of the fittest weeds out variations which are not viable. It's as though we were to start with the children's book The Very Hungry Caterpillar, and introduce random typos and printing errors over a million printings. We don't expect War and Peace to emerge. We expect the text to become less and less comprehensible and eventually to become random gibberish. We expect this, and it is precisely what we observe happening in copying. Buddhist Sanskrit manuscripts being copied in Nepal are gradually becoming incomprehensible because the scribes cannot (or do not) properly error check. It so happens that the most recent manuscript of the Heart Sutra to be identified was discovered by me in a digitised collection from Nepal. This manuscript is rife with errors, omissions and additions. Over about 280 words in Sanskrit, my edition has 140 footnotes, so that on average every second word is problematic. As it is, the manuscript is only readable if we know what it ought to say. On its own it is already gibberish, though with enough surviving elements to identify the text it descends from. The second law of thermodynamics (entropy) tells us that all isolated systems become more disordered over time. This is what happens at the level of chromosomes and cells. They gradually lose coherence and become more disordered, so that replication errors give rise to cancers, for example. To date, replication errors in ageing cells have never been observed to give rise to rejuvenation.
Errors wreck the process of replication, and mutations are vastly more likely to give rise to errors than to viable code. We can call this the replication problem.
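This degradation is easy to make concrete with a toy simulation. In this Python sketch (the sample text, error rate, and number of generations are all invented for illustration), uncorrected copying errors steadily erode a text towards gibberish:

```python
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def copy_with_errors(text, error_rate, rng):
    """One generation of copying: each character has a small chance
    of being miscopied as a random character."""
    return "".join(
        rng.choice(ALPHABET) if rng.random() < error_rate else ch
        for ch in text
    )

def fidelity(original, copy):
    """Fraction of characters that still match the original."""
    return sum(a == b for a, b in zip(original, copy)) / len(original)

rng = random.Random(42)  # fixed seed so the run is repeatable
text = "the very hungry caterpillar ate through one apple"
copy = text
for _ in range(200):  # two hundred uncorrected copyings
    copy = copy_with_errors(copy, 0.01, rng)

# With no error-checking, fidelity decays towards the chance level.
print(round(fidelity(text, copy), 2))
```

This is precisely the fate of an uncorrected scribal lineage: drift towards noise, never towards a better text.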

So how does mutation drive improvements in the genome? The idea is that some small proportion of mutations enable an organism to better fit its environment. And we do see some adaptive variations. The classic example, from the textbooks of my biology studies, is the peppered moth, whose white population sometimes throws up a black individual. In the 19th century everything got covered in soot; white moths were obvious and eaten by birds, while the rare black variety survived and became the dominant type. Then in the 20th century there was a big clean-up and the situation reversed: the white moths came back because they were once again the better camouflaged. The argument is that the different versions of the gene for colour in the moths are the result of mutation and that environmental factors make one more adaptive than the other. 
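Selection acting on an already-existing variant is easy to model. Here is a toy Python version of the moth story; the survival rates are invented for illustration, and the model is deliberately crude (haploid, infinite population, ignoring dominance entirely):

```python
def next_frequency(p_black, survival):
    """One-generation update of the black-moth frequency under
    colour-dependent survival (haploid, infinite-population model)."""
    black = p_black * survival["black"]
    white = (1 - p_black) * survival["white"]
    return black / (black + white)

# Invented survival probabilities: birds eat the conspicuous moths.
sooty = {"white": 0.3, "black": 0.8}   # black is camouflaged on soot
clean = {"white": 0.8, "black": 0.3}   # white is camouflaged on lichen

p = 0.01  # the black variant starts rare
for _ in range(20):
    p = next_frequency(p, sooty)
print(round(p, 3))  # 1.0 - black dominates while everything is sooty

for _ in range(20):
    p = next_frequency(p, clean)
print(round(p, 3))  # 0.01 - and declines again after the clean-up
```

Notice what the model does and does not show: selection shuffles the frequencies of variants that already exist; it says nothing about where the black variant came from in the first place.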

In order for a mutation to be passed on, the individual carrying it must survive and breed. But the vast majority of mutations are deleterious (cancer-causing, for example), and to be passed on the mutation must occur in gametes (ova and sperm in animals). Even given the vast scales of time involved in evolution, this is all very counter-intuitive. The replication problem is a loophole. Any theory of evolution which allows for random mutation to be the driving force is just not convincing, because it is counter-intuitive. Presented with NeoDarwinism as The Theory of Evolution, plenty of intelligent and right-thinking people conclude that it is too unlikely to be credible. Lynn Margulis argued that while the NeoDarwinian account of evolution might account for variability within species, it did not account for the emergence of new species.

On the other hand, of course, we do see variability in genes. Such variations are apparent in humans, for example, and have formed the basis of the out-of-Africa hypothesis - the idea that all modern humans migrated from East Africa ca. 75,000 years ago to colonise every continent is partly based on tracing variations in genes in mitochondria and on the Y chromosome. But these variations are necessarily tiny and are not sufficient to define new species. The gene, or complex of genes, does the same job in all its variations. Despite quite widely varying physical features, there is presently only one species of humans on the planet, a rather unusual occurrence in the history of hominids. Which brings us to the next loophole, the problem of observing speciation.

The scientific literature on the emergence of new species is sparse, and often inconclusive. This is not helped by the fact that we have competing and contradictory definitions of what a species is. Summaries of this literature [1] produce what seems like a relatively small number of candidate cases where speciation seems to have occurred, but many of the examples are not due to the mutation of a gene, but to hybridization and polyploidy (mutation in whole chromosomes by doubling or tripling). Where two populations have diverged to the point of being unable to physically mate or produce viable offspring, it is usually the result of artificial stress placed differentially on two initially identical populations in a laboratory. In the wild, the London Underground Mosquito is thought to be a naturally occurring example. However, as Lynn Margulis notes with evident satisfaction (Symbiotic Planet, p.7-8), in an earlier, similar case with Drosophila fruit flies it was shown that what changed was not the organism, but its bacterial symbiont. Indeed, from Boxhorn's summary it is not always obvious what has caused the phenotypic change. In most cases of so-called speciation, no gene mutation has been identified, nor has anyone gone back to alter an identified gene in the original population to artificially produce a new species, though of course we have altered many genes in many different organisms. These would be minimal requirements for confirming that speciation was due to the mechanisms proposed by NeoDarwinians. Since very few people are interested in symbiosis, changes in, for example, gut bacteria are seldom investigated and cannot yet be ruled out in most of the promising cases. Given the centrality of speciation for the theory of evolution, there is surprisingly little research aimed at identifying and replicating the mechanisms of speciation.

Worse, the sources for these 'facts' are not freely available, and the vast majority of us are not qualified to assess how true they are, since they are couched in jargon that takes years to learn. Science journalism further muddies the water because it frequently opts for sensationalism over solid results. Journalistic standards are very much lower than those of scientific publications. And here the specific problem is that journalists repeatedly report variation as though it is speciation. And it is not. Such easily refutable speculations help to undermine the case for evolution, and help to make it seem less plausible to those who have a vested interest in a religious view. The lack of widely cited and well-replicated cases of speciation is a major failing of evolutionary science.

Another loophole left by NeoDarwinism we can call the incremental problem. This is the argument that something like the eye could not have evolved one step at a time because it is far too complex. This is partly a failure of imagination. We cannot imagine the steps required to go from a single light-sensitive cell to a complex eye with specialist organs like a lens, eye muscles, various fluids, specialised nerve cells and so on. The number of potential steps is enormous and the tiny variations which might accumulate are difficult to put together into a coherent picture. Big numbers are just abstract concepts for most people and have no real-life analogue: we struggle with geological time periods especially. "A million years" has more or less no meaning to most people. Thus the evolution of complex organs through random (undirected) mutations in genes is also counter-intuitive. 
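The standard NeoDarwinian reply to the incremental problem is that selection is cumulative rather than a single lucky draw. Richard Dawkins famously illustrated this with his "weasel" program, a version of which is sketched below in Python (the target phrase is Dawkins' own; the brood size and mutation rate are conventional illustrative choices, not his exact parameters):

```python
import random

TARGET = "METHINKS IT IS LIKE A WEASEL"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    """Number of characters already matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(parent, rate, rng):
    """Copy the parent, occasionally miscopying a character."""
    return "".join(
        rng.choice(ALPHABET) if rng.random() < rate else ch
        for ch in parent
    )

rng = random.Random(0)
parent = "".join(rng.choice(ALPHABET) for _ in TARGET)  # random gibberish
generation = 0
while parent != TARGET:
    generation += 1
    # Breed a brood of slightly miscopied offspring and keep the fittest.
    # Selection is cumulative: each generation's gains are preserved.
    brood = [mutate(parent, 0.02, rng) for _ in range(100)]
    parent = max(brood, key=score)

print(generation)  # far fewer generations than random typing would ever need
```

Whether such a demonstration actually persuades anyone is another matter; it addresses the mathematics of the intuition, not the intuition itself.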

So, even though I am educated in the sciences and have studied evolution, and even though I believe evolution to be self-evident, the details of how evolution works are far from clear to me. A good deal of the detail seems counter-intuitive as it is commonly explained. NeoDarwinism in particular seems a less plausible explanation of speciation than Symbiogenesis. A minor point in favour of Buddhism is that it does not conflict with the basic idea of evolution, even though the cosmology and cosmogony that many Buddhists cite is incompatible with a scientific worldview. On the other hand, for a Young Earth Creationist there are all these loopholes, all these weaknesses in the theories of evolution—the replication problem, the observation problem, and the incremental problem—that make it easy for them to shrug off evolution as a theory. And they have a strong emotional attachment to the competing story in the Bible that means that there is competition for what is most salient in the discussion of what life is and how it changes over time.


Salience

Scientists aim for objectivity. This makes sense. It allows us to get insights into reality by triangulating the observations of many observers. Each observer brings an element of subjectivity to the observation, but by combining the observations of many observers over repeated observations we can eliminate a good deal of what is due to subjectivity. If we observe dispassionately it makes the process more efficient. This approach is sustained in communicating science in official publications. The language is impersonal and favours passive constructions e.g. "the animal was observed to eat an apple." Just the facts. But contrary to the old saw, the facts do not speak for themselves. In ordinary life we rate the importance of information by the emotion that it elicits in us. Those of us who are excited by concepts and science are quite rare. Without any sense of how relevant these facts are, we struggle to assess their salience. We're even puzzled as to why scientists are excited by them and want such huge amounts of money to study them. Recently there's a trend towards funding research on the basis of how much revenue it will generate. I see this as a direct symptom of the failure to communicate the salience of research. Left to their own devices politicians fall back on what they do understand.

Now compare the way that fundamentalists communicate their version of events. The message is accompanied by strong emotions, and these are reinforced by communal rituals, and by peer networks. Preachers not only tell us the facts as they see them, but they communicate both verbally and non-verbally that this is the most important thing we have ever heard. The message is simple, clear, and repeated often; and it addresses our most fundamental questions about life and death. The religious message could not have more relevance. One's immortal soul is at stake. And for most people an immortal soul is an intuitive concept, unlike evolution.

It is not so hard to see why some people don't feel any real conflict over what to believe and reject the theory of evolution. It is communicated in such a way that it has little or no salience for them. It is not communicated in a way that demonstrates how important it is to know this. If a person does not value this kind of fact up front, they are not going to be converted by an appeal to intellect. But there is also a countervailing force. When we begin to unravel someone's religious faith we undermine their worldview in many ways. Not simply their view on God's role in creation, but their felt sense of God's presence; the importance of God's commandments in morality; the whole concept of the afterlife and how it will play out for the individual; the rationale and coping strategies for dealing with adversity; the sense of meaning and purpose that helps them deal with a life working in a bullshit job (and all that goes with that); and so on. It's not that they could simply give up believing in God and be better for it. We have no reason to think that undermining someone's faith would do anything but harm to them. The wholesale conversion of Westerners to atheism is no doubt a big subject for debate, but to my mind it has created generations of nihilists and hedonists, who threaten to undo much of the progress made since the European Enlightenment through short-termism and the individual pursuit of pleasure, wealth, and power without any thought for other people. That one of the main responses to this nihilism is a further retreat into Romanticism is not helpful either. I'm pretty sure that Neoliberalism is not better than Christianity as an ideology.

Even for the average atheist it can be hard to see why believing in evolution is important. Believing or not believing has little or no relevance to how we live our lives: how we work, shop, or play. It doesn't make us better people. It won't make us live longer or be more prosperous. There's no reason we should care about evolution. The reason any of us knows about it at all is that eggheads insist that we learn it at school. Which brings us to the third problem.


Empathy

One of the characteristics of the current public debate on religion is hostility. Many prominent atheists now embrace the sobriquet militant. As with the theory of evolution itself, the metaphor most often invoked in these discussions is war. What might have been a discussion or a dialogue, or a dance even, is now a battle. Wars are decided by annihilating the enemy or forcing the survivors to capitulate and lose everything. Verbal exchanges are not aimed at creating understanding, or even at communicating facts; they are aimed at taking positions, landing blows, undermining opposing positions, and destroying opposition. Stepping into this theatre of war carries with it the threat of attack. This metaphorical war sometimes erupts into literal conflicts, and in the USA not a few court cases. Not surprisingly, in situations where both sides are expressing considerable ill will, there is little actual communication.

There is a good-sized body of research on what makes for good communication and how to persuade people of your point of view. Indeed the study of rhetoric dates from ancient Greece. None of this research, nor even common sense, suggests that insulting your interlocutor or their beliefs is an effective strategy. Despite this, some of those who lead the secular charge in the war on religion completely ignore all of this valuable research and resort to insults and accusations. This issue is much more tense in the USA, where Christians themselves are more militant (having been mobilised to political awareness by the political right in the late 1970s). But I think scientists have to have the courage of their convictions. Why are the scientists not using science to inform their rhetoric? Could it be that they lack faith in science, or is it that they don't even consider that they might be poor at communicating? Do leading scientific secularists not observe the results of their actions, reflect, form hypotheses, and test them? They do not seem to, as far as I can tell. They preach to the converted and damn the heretics to hell, as it were.

The strategy of scientists, presenting people with a series of facts with no clear statement of values, leaves people cold. "Coldness" is part of an extended metaphorical dichotomy relating to our inner life: EMOTIONS ARE HOT; INTELLECT IS COOL. Rational arguments are cool, but purely intellectual people are often perceived as cold. Other phrases which draw on this metaphor are: "He is a cold fish", "She gave him the cold shoulder", "She was frigid". A cadaver is cold to the touch. Warmth is the characteristic of life: warm-blooded animals maintain their body temperature above ambient and thus radiate heat and feel warm to the touch. (The use of "hot" and "cool" in reference to Jazz is another story, one I'd love to go into sometime, but a digression too far for this essay.) In the Capgras delusion one can recognise loved ones (or, in one recent case, one's own reflection), in the sense of seeing and identifying all the details, but a brain injury prevents the connection of the visual details with the emotional response that typically goes with familiarity. The person with Capgras cannot understand the disconnection and typically confabulates a story that the loved one has been replaced by a replica.

On the whole human beings are not moved by bare facts. But it's worse than this. On the whole we see people who try to communicate solely in terms of facts extremely negatively: as cold, unemotional, uncaring, and inhuman. The whole point of the Mr Spock character in Star Trek was that his emotions were just below the surface and constantly threatened to burst out. And even when they did not, his apparent coldness highlighted his limitations in dealing with humans and acted as a contrast to the hot-blooded impulsiveness of Captain Kirk. They were a team that only really functioned well together. Conversely, people who emotionally communicate a clear sense of values can often get away with being completely irrational.

It's interesting that nature documentaries are a clear exception to this cold style of science communication. TV producers know that audiences are drawn into their work by drama and intrigue. The facts have to be woven into a narrative which creates an emotional resonance. David Attenborough is a master of this. His documentaries draw the audience in by portraying life as a drama with archetypal characters, enabling the audience to identify with the "characters". This was also part of the fascination with Jane Goodall's work on the chimps at Gombe Stream. Her approach of using names helped us to come into relationship with the chimps, to glimpse ourselves in their games, loves, and struggles. And perhaps this dramatic style is a hint to those who would communicate about evolution to a wider audience? We want to know, above all, why we should care about evolution.

I began writing this essay just after reading Richard Dawkins's book Unweaving the Rainbow. In the preface he evinces surprise that his book The Selfish Gene convinced people that he was a nihilist who saw no value in life (he describes people as machines). People apparently often ask him how he even gets out of bed in the morning with such a bleak outlook on life. Unweaving the Rainbow is his attempt to show that he is anything but a nihilist; that he is alive to the wonder and mystery of life and the poetry of the universe, and is fully convinced that we all should be awed and amazed simply to be alive. He tries to tell the reader that curiosity and fascination with life is what gets him out of bed in the morning. I suggest that part of the problem with The Selfish Gene as literature was that it was not consciously concerned with communicating a sense of values, though I would say that it did unconsciously communicate the values of Neoliberalism. I'm ambivalent at best about his writing and opinions, but no doubt Dawkins has values. However, these values are unspoken in much of his intellectual work, precisely because the academic ideal is to eliminate the emotional content of communication: the myth of the objective, dispassionate point of view. This has real value in the pursuit of science, but not in communicating with ordinary people. Unweaving the Rainbow appears to be trying to address this point, though I suspect, given the low profile the book has in his oeuvre, it is rather too oblique. Also, a good chunk of the book resorts to being rude about the very people he seems most to want to convert to his views: religious believers. He just can't seem to help himself. Whatever his merits as a genetic scientist, Richard Dawkins seems not to understand people very well.


Conclusion


We tend to blame religious people for their failure to embrace evolution. On the contrary, I say we can lay the failure to communicate evolution squarely at the door of scientists. They have the education and access to the resources, but they squander them. There's a movement in the UK to promote the public understanding of science which is doing great work. Choosing good communicators like David Attenborough, Jim Al-Khalili, or Alice Roberts to front TV shows and make public appearances is helpful because they humanise the communication. It doesn't hurt that some of them are very attractive as well as intelligent, but the key to their success seems to be their personal enthusiasm for, and ability to speak clearly on, their subject; and their ability to help us understand why what they are talking about matters.

The success of any communication between two people depends on there being empathy between them at the outset. If what we are trying to communicate is counter-intuitive then we have a difficult job to show why the idea is still plausible. If the people we are trying to communicate with have an emotional investment in some other explanation, then we can improve our chances by trying to understand their values and concerns and addressing them. None of this is rocket science. And the people doing the communicating are, after all, scientists.

As with Buddhism the process and ideals of science are, generally speaking, admirable in the abstract. But the people involved introduce an element of imperfection. The perfect instantiation of science or Buddhism has yet to arise. Tolerance is called for. Both of religious believers and of scientists, even if we do expect more of the latter. 

~~oOo~~


06 March 2015

Seeing Blue.

Where does blue begin and end?
There's a meme that seems to come around again and again on the internet: the idea that if a language has no word for a concept then that concept must be absent from that language. This naive reading has been applied to the colour blue, for example. Some people noticed that ancient European writers, particularly the ancient Greeks, had a limited colour palette in their writing. Indeed many modern languages are rather lacking in colour terms. Until the 1540s there was no word for the colour orange in English, which is why we call people with ginger hair "red heads". This does not mean that we could not distinguish the colour of blood from the colour of ginger hair. It only means that they were in the same colour category. And when we did name the colour orange, we named it after the fruit, not the other way around. However, it seems journalists love the idea that the Ancient Greeks could not see blue, and it lumbers around like a zombie eating brains: it gets knocked down, but is quite difficult to kill, and it reduces IQs.

Colour words do not correspond to objects or entities. Colours are broadly defined categories of perception. Categories are mental and linguistic structures that help us to organise how we perceive the world. We can use the category name to talk about all the members of a category at once without having to use tedious lists of inclusions and exclusions. This is usually possible because we interact with all members of a category in the same way. 

In George Lakoff's powerful model of thinking about categories, we define categories towards the middle of a taxonomical hierarchy and by relationship to a prototype. So dog seems like a "natural" category, whereas for everyday use mammal is too broad, including too many non-dog examples that need to be excluded, while spaniel is too narrow because it leaves out too many dog examples, like terrier. Dog as a category works because there are consistent ways that we interact with dogs that are common to all dogs and different from other common pets or wild animals. And also because this interaction is not something personal, but common to other people in our language group. Sometimes pet is a more convenient category: when renting out a house, for example. Though we think of categories as defined by forms or functions, one of the most important defining properties is how we interact, in fact or potentially, with the entities.

When we think of 'dog' as a category we will have an internalised prototype that defines the category, and we judge other entities to be members of the category to the extent that they resemble our prototype (this is an extension of Wittgenstein's 'family resemblances'). By definition some members may be more central and others more peripheral. Say our prototype is something like a German shepherd. We can acknowledge, as dogs themselves usually do, that both a chihuahua and a great dane are members of the category dog, despite their size. Similarly, though a long muzzle is typical, we can acknowledge that dogs with mutated skulls that give them a squashed look (boxers, pugs) are still dogs. On the other hand, despite being furry, carnivorous quadrupeds, no kind of cat is a member of the category dog. In Cambridge there is a couple who take their cat out on a lead. But even a cat on a lead is not a dog.

However, the prototype is not fixed or absolute. It is relative to many things, not least of which is how we interact with the category. With respect to dogs, a farmer or a hunter may think in terms of a working animal, a pet owner in terms of companionship, and so on. On the other hand in India dogs are often semi-domesticated urban scavengers - neither pets nor workers, but barely tolerable vermin. In some cultures dogs are seen as food. 

It's possible for there to be doubt about membership at the periphery. Is a wolf a member of the dog category? Is a fox? The wild dog is another peripheral case: it looks like a dog, but we interact with it as a wild animal (to which category it belongs with wolf and fox) rather than as pet or worker. There is no upper or lower limit on how many categories we employ or the extent to which they overlap. 

Shades of blue: navy, royal, cobalt, azure, sapphire, beryl, electric, sky, turquoise, cerulean, teal, cyan.
Our terms for colours are categories also. Typically for an English speaker the prototype for blue is the sky. This can get complicated because in England the sky is more often grey than blue, and when it is blue, it's often a very pale and washed-out blue compared to where I grew up (about 15 degrees of latitude closer to the equator, about 1000 ft above sea level, and with much less pollution). In some cultures lapis lazuli or the throat of a peacock is the prototype (the latter is important in India, for example).

Other languages, including many living languages, define their categories differently. And research has shown patterns in how languages categorise colours. Many languages, for example, put blue and green in one category. In ancient Chinese the word 青 qīng meant both blue and green, but also black. In this sense it appears to be similar to the Sanskrit śyāma, which can mean black, dark, or dark shades of blue or green. Used of people it refers to a dark complexion. So in fact Śyāma Tārā is not Green Tārā, but Dark or Swarthy Tārā, despite the fact that she is routinely depicted in bright hues.

Does this mean that those languages which lump blue in with other colours lack a concept of blue? Not necessarily, because even blue is a broad category. I can distinguish many shades of blue, from cyan to navy, but I don't have words for all these colours. Similarly I can distinguish many shades of green, from the almost yellow-green of new spring leaves to the dark blue-green of New Zealand jade. Think about all the distinctions of colours on a typical paint sampler that we have no words for, but for which arbitrary names have to be invented for marketing purposes. We also have at least one word for a colour that is simply made up: indigo. When Newton was describing the colours of the rainbows he created with prisms, he wanted there to be seven colours to fit in with an alchemical scheme, and so invented the colour indigo. What Newton called blue is what today we'd call cyan, and what he called indigo is a deep blue like ultramarine or cobalt blue. In fact most English speakers shown swatches of these colours would call them both blue.

As Lakoff explains in his book on categorisation, Women, Fire and Dangerous Things, those languages that have four colour terms will have black, white, red and one of either yellow, blue or green (p.25). Now it seems that Ancient Greek was a four colour language.
"Empedocles, one of the earliest Ancient Greek color theorists, described color as falling into four areas, light or white, black or dark, red and yellow; Xenophanes described the rainbow as having three bands of color: purple, green/yellow, and red." (Ancient Greek Color Vision)
This fits the pattern noticed by colour perception research. The Greeks used four colour terms: roughly white, black, red, and yellow. So when Homer uses the phrase "wine-dark sea" or describes the sky as "bronze", he is employing categories that are much broader than those we currently use in English. In fact modern English has eleven basic colour categories:
"black, white, red, yellow, green, blue, brown, purple, pink, orange and gray."
This does not stop us seeing blueish green, yellowish red, reddish purple and other colours for which we have no name or category. Categories are as broad as are useful to us. And often colours are difficult to categorise. Blue-green colours for example may appear to be in different categories to different people. But there is no evidence to suggest any anatomical differences between speakers of languages with four or less colour terms and those with eleven.
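The prototype model of categories described above lends itself to a small sketch. The Python below assigns a colour swatch to whichever category prototype it most resembles; the six-term palette and the RGB prototype values are my own invented placements for illustration, not data from the colour research discussed here:

```python
# A toy nearest-prototype colour categoriser. The prototype RGB values
# below are invented for illustration; real prototypes vary by speaker,
# culture, and context.
PROTOTYPES = {
    "black": (0, 0, 0),
    "white": (255, 255, 255),
    "red": (220, 20, 20),
    "yellow": (240, 220, 30),
    "green": (30, 160, 60),
    "blue": (30, 60, 200),
}

def categorise(rgb):
    """Assign a swatch to the category whose prototype it most resembles."""
    def distance(name):
        return sum((a - b) ** 2 for a, b in zip(rgb, PROTOTYPES[name]))
    return min(PROTOTYPES, key=distance)

# Newton's "blue" (roughly cyan) and his "indigo" (a deep blue) both
# land in the same broad English category.
print(categorise((0, 180, 220)))   # a cyan-ish swatch → blue
print(categorise((70, 70, 200)))   # a deep ultramarine-ish swatch → blue
```

Shrinking the palette to four terms, say by merging blue and green into one category, changes the boundaries but not the mechanism. That is the point: the category structure lives in the prototypes, not in the eye.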

Now colour perception is a feature of our particular sensory apparatus. We've seen recently, with the example of "that dress", how the background against which we see something and the colour of the light illuminating it affect how we perceive it. But vision does have an objective component, because the physiology of it is the same for everyone. Light of particular wavelengths hits our retina and activates patterns of the three (sometimes four) kinds of colour-sensing cone cells. Each kind of cell responds to a different range of wavelengths of light.

[Image: the relative spectral sensitivity curves of the S, M, and L cone cells.]
The peaks of these curves are in the same place in all humans. This means that where languages have the same colour terms they tend to agree on where in the spectrum the prototype for that category lies. I presume this has been true at least since the appearance of anatomically modern humans. Now of course turning the signals from our cone cells into the experience of colour is a process that happens in our brains. But it's not arbitrary. For people who are not colour blind, the blue cone cells respond to the same frequencies of light. If I shine light with a wavelength of 500 nm into your eyes, you'll perceive it in more or less the same way as every other human being, regardless of language and culture. Linking the experience to a word is a function of language, but the ability of language to translate the experience into words is always limited. People with four cone types describe a far more vivid palette of colours (What it's like to see 100 times the colors you see). Some animals have cones sensitive to different wavelengths. In particular, bees can see much shorter wavelengths, well into what we call the ultraviolet; while snakes can detect much longer wavelengths in the infrared (though not with their eyes).
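The claim that cone physiology anchors colour perception across cultures can be sketched with a toy model. The peak wavelengths below are commonly cited values for the human S, M, and L cones; the Gaussian shape and the 50 nm bandwidth are simplifying assumptions of mine, not measured response curves:

```python
import math

# Approximate peak sensitivities of the three human cone types (nm).
# The peaks are commonly cited values; the Gaussian shape and the
# bandwidth are simplifications for illustration only.
CONE_PEAKS = {"S": 420.0, "M": 534.0, "L": 564.0}
BANDWIDTH = 50.0

def cone_response(wavelength_nm):
    """Relative activation of each cone type for monochromatic light."""
    return {
        cone: math.exp(-((wavelength_nm - peak) / BANDWIDTH) ** 2)
        for cone, peak in CONE_PEAKS.items()
    }

# Light at 500 nm produces the same activation pattern in every
# normally sighted observer, whatever their language calls the colour:
# M strongest, then L, then S.
pattern = cone_response(500.0)
```

In this model the signal reaching the brain is fixed by physiology; only the category boundaries a language draws over it are negotiable, which is the point being made here.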

Now, the story goes that because some languages lack a word that corresponds to the English word blue, and treat what we call blue as a member of a broader colour category, the speakers of those languages could not see blue. This is like saying that because the English lack a word for schadenfreude they do not enjoy the misfortunes of others, whereas in fact laughing at the misfortunes of others is very popular here (it is perhaps the most important theme of English humour). So why does this suggestion keep surfacing?

The idea about the Greeks not being able to see blue can be traced to the 19th century British Prime Minister and amateur philologist William Gladstone. He published a long and highly regarded study of Homer's epics and noticed that Homer's colours did not match ours, the "wine-dark sea" being one of the best known examples (wine being reddish-purple in our language, a colour we never associate with the sea). Others joined in. More recently the idea that how we use language reflects how we perceive the world has been called Linguistic Relativism. It is also known as the Sapir-Whorf Hypothesis because theories about it were put forward (separately) in the early 20th century by the linguist Benjamin Lee Whorf and his teacher Edward Sapir (amongst others). Whorf in particular was interested in the way that grammar divides the world up into entities and activities. He discovered that some Indigenous American languages seem not to make the same kinds of distinctions. On the basis of this he hypothesised that these differences in grammar might affect how we see the world at a very deep level. How would the world appear to us, for example, if we did not divide it up into nouns and verbs? What if we only had verbs, and everything was seen as a process? Whorf asked whether the world really is divided up into objects at all.

Linguistic relativism comes and goes in the media. Every few years some journalist comes across Whorf or some other author and writes a piece about it. I should add that Whorf's essays make very good reading (they were collected into a book, Language, Thought and Reality, MIT Press, 1956). The "Greeks couldn't see blue" meme is a popular version of this, and one can find many variations on the theme on the internet, including a few other attempts to debunk it.

However, quite a bit of research has shown that because of the physical apparatus of seeing there is no room for relativistic effects in colour perception. All humans see colour in the same way, even though different languages categorise colours in different ways. Every (normally sighted) human being is capable of seeing millions of colours, most of which we don't have names for (which is where categories come in handy). And all this commonality is true of subsets with variations on the normal pattern: people with four cone types see similarly to each other; people who are red-green colour blind all see the same shades of grey; and so on. In other words the research disproves the idea that having no word for blue means one cannot see the colour blue. So basically the whole "can't see blue" thing comes down to a failure to read the research on colour vision.

Ironically if you do a simple image search on "Greece" the predominant colours in the results are white and blue, the colours of the modern Greek flag.

~~oOo~~


06 February 2015

Do We Have Freewill?

In the latter half of the 20th century a series of pioneering experiments by Benjamin Libet, a neuroscientist at the University of California, San Francisco, demonstrated a rather startling phenomenon. Libet was able to show that a conscious decision to flex one's wrist was preceded by brain activity which prepared to make the movement. It appeared that we decide unconsciously to make the move, the brain prepares to send the signal to move, and only then do we become conscious of having made a decision. This experiment and others like it have been interpreted by many as showing that freewill is "an illusion". In this essay I explore this argument and outline an important counter-argument by Patricia S. Churchland, Professor Emerita of Philosophy at UC San Diego. I also look briefly at the determinist argument that some physicists profess. Freewill is not a particularly interesting problem, but since a lot of people talk about it, this is my two cents' worth.


Is My Unconscious Part of 'Me'?

The first assumption to look at in the claims based on Libet is the idea that unconscious mental activity is somehow excluded from the freewill debate, even though it occurs in the same brain. But if my unconscious mental activity is not 'mine' then whose is it? The conclusion seems to be that when a decision is made unconsciously, even though it is our brain that makes the decision, the decision does not count as freewill. Churchland sees this as a manifestation of matter/spirit dualism that separates out reason as a function of spirit. As I explain in my essay on this metaphor, having associated reason with spirit (arguing that reason itself is the essence of being human), the metaphor entails seeing reason as "good" and the unconscious, being more closely related to matter, as "bad". Additionally, reason appears to be under our control and the unconscious does not. Indeed part of the power of the Libet results is that they show that reason is not under "our" control at all. It begins to look like a byproduct or an afterthought. However, the general view of reason is in desperate need of an overhaul.

I've gone over this material many times now: Damasio and others have shown that all decisions involve weighting of information via emotional resonances. In making a decision we defer to our emotions and find reasons afterwards (see Facts and Feelings). The practical demonstration of this is found in the advertising industry which, since the 1920s and the interventions of Edward Bernays, has appealed to desires rather than to reason when selling products and ideas. Bernays was able to apply his uncle Sigmund Freud's ideas to changing views. Most famously he convinced women to break the social taboo on women smoking by linking cigarettes with suffragettes. He did this by paying debutantes to pose smoking cigarettes during a parade and alerting the press, so that they published the pictures under headlines touting cancer sticks as "torches of freedom", and thus doomed several generations of women to horrible deaths from cancer and emphysema. (See Culture Wars, or The Society Pages.) Sometimes taboos are good! In addition I've repeatedly cited the argument by Mercier & Sperber that individuals are in fact terrible at reasoning (An Argumentative Theory of Reason). We almost always fall into bias or fallacy when trying to reason on our own. They argue that this is not the case in small groups, where different ideas can be kicked around and the group reasons collectively. Small groups are much better at reasoning.

So it appears that the idea that conscious reasoning is what defines humans is long past its use-by date. Any theory which even implicitly relies on this definition of reason ought to be discounted. Human beings make use of a range of faculties, including emotions and unconscious processes, to make all decisions. Nor is it true to say that sapience is restricted to humans. We have now documented self-awareness and tool making in a number of species. Somehow the antiquated idea about reason being our highest and defining faculty still gets invoked, but we ought to be very wary of it.


What Kind of Free Will are we Talking About?

Patricia Churchland makes a very important distinction about who means what by "freewill". Most philosophers and many scientists use freewill as a shorthand for "contracausal freewill". This is the kind of freewill described by Immanuel Kant. Churchland says contracausal freewill means that:
"... your decisions are not caused by anything at all—not by your goals, emotions, motives, knowledge, or whatever. Somehow, according to this idea, your will (whatever that is) creates a decision by reason (whatever that is)." (2013: 179; emphasis in the original)
When some scientist says, on the basis of Libet, that we have no freewill, this is what they appear to mean. They are arguing that we have no contracausal freewill, because conscious reason comes into play late in the decision process. Apart from the fact that this definition of freewill is counterintuitive and seems unlikely to non-philosophers, we've already undermined some of the key assumptions involved in it. As discussed above, Churchland sees that entailed in this view is the idea of a non-physical soul. By disconnecting the decision making process from our bodily processes (like emotions) and assigning it to "pure" reason, those who use this definition seem to be subscribing to a matter/spirit dualism in which reason is a function of spirit not of body. 

The more commonsense variety of freewill is less well defined, partly because, like many commonsense definitions, we use it efficiently without fussing over the meaning. To make us more comfortable with the fuzziness of the definition, Churchland invokes George Lakoff's ideas about categories being defined by relatedness to a prototype. In this view freewill is not an all-or-nothing proposition: some actions are more free than others. Some acts are more typical of freewill than others. And people are somewhat free to choose which actions best represent freedom, since categories are what we impose on experience to help organise it. Most people intuitively understand that sometimes we have more choice than at other times, or that sometimes people are compelled to choose one option even though in theory they have a choice. This recognition of degrees of freedom seems vital to any sensible theory of how we make choices, especially moral choices.

Churchland argues that:
"...if contracausal choice is the intended meaning, the claim that free will in that sense is an illusion is only marginally interesting, because nothing in the law, in child-rearing, or in everyday life depends in any significant way on the idea that free choice requires freedom from all causes." (184)
In other words the freewill that is being denied by philosophers is not very interesting because, being divorced from experience, it's hardly credible anyway. Churchland likens the claim that contracausal freewill is an illusion to announcing that alien abductions are not real. The response is, "So what?", "Who cares?" or "Duh!" Those who deny freewill on the basis of the Libet experiments are not saying anything interesting, though of course at first glance it appears to be a controversial thing to say so the media covers it and the meme gets spread. This whole section of the debate about freewill can safely be shelved with other legacy ideas from philosophy that are no longer relevant. The question is not "Are we free?", but "How free are we now and how free can we be?"


Self Control

Even if there is some doubt about what freewill means, Churchland argues that there is a related concept about which there can be no doubt: self-control. She points out that self-control, the over-riding of impulses to act, takes conscious effort. And in terms of morality, self-control is often just as significant as conscious choice. Morality is very frequently defined in terms of refraining from actions: "thou shalt not..." (in a Christian context) or "I choose to refrain from..." (in a Buddhist setting). Libertarian secularists often complain about religious morality as just being a bunch of rules, but it might be a natural consequence of self-control being a much clearer concept. And although our laws are profoundly influenced by religious models, there has been no significant move away from prohibitive rules even in secular (or nominally secular) countries. 

Most of being a good group member would appear to be inhibiting impulses that go against group norms. Any sociable animal must at times repress selfish impulses in order to benefit the group. Social animals, for example, prosper by sharing food sources in a way that solitary animals do not. Our motivations for exercising this impulse control vary: fear of reprisal, shame, habit, altruism, and generosity can all come into play. Or we may feel that the "law is an ass", or decide that a small breach of the rules will draw attention to a greater breach (civil disobedience to protest government corruption, for example). In other words we can be negatively motivated or positively motivated to follow established norms or to break them.

My reading of Churchland's account of the freewill debate is that for the most part it is poorly framed and thus does not produce interesting results. The reasons for considering contracausal freewill to be the best definition are no longer plausible if they ever were. It serves to confirm that the freewill debate, such as it is, is not particularly interesting. 


Making Moral Judgements

This is not to say that the matter of voluntary actions is unimportant. Social groups operate with norms and rules, and when enforcing those norms it is important to know why breaches happened. This is why most legal systems distinguish degrees in crimes like murder. A murder that is planned months in advance is seen as a worse crime than one committed in the heat of the moment; a calculated crime is more serious than an impulsive one. This is because consciously breaking the rules is a clear repudiation of those rules, and gives us serious doubts about the willingness of the person to return to lawfulness. Part of any calculation to commit a crime is usually elaborate planning to avoid detection and punishment. Even if the rule-breaker shows remorse, we have reason to distrust them in the future.

The crime of impulse however is more likely to be understood as a momentary lapse and to be treated more leniently if accompanied by suitable remorse and a willingness to admit fault. Those who plead guilty tend to get lighter punishments. However if someone is prone to repeated crimes of impulse then we tend to treat them like the person who does calculated crimes, because we cannot trust them to keep the rules.

If someone sets out to injure a person and that person inadvertently dies, this is less serious than if the assailant intended to kill, though it might still be considered murder depending on how we judge the risk involved: an attack with a weapon is more likely to kill than a fist-fight, for example. This situation can also be seen in the light of calculation and impulse. If someone is killed purely by accident, with no intent to harm, the killer may still be found culpable for depriving them of life, but the consequences may be less severe again. Neglecting a duty of care while doing an inherently dangerous activity, like driving a car, is still quite a serious crime. But if we were proceeding with due care and a pedestrian crossed the street without looking, was knocked down, and killed, then we are not culpable even though someone has died.

On the other hand, if we kill someone in the process of defending ourselves or our property, we may not be culpable at all, as long as the force we used is judged proportionate to the threat we faced. Police officers and soldiers are seldom held culpable for murder when they kill someone in the line of duty, even though the community may feel they should be held accountable. This is extremely controversial, but in a culture where murder is fairly routine, the enforcement of law comes with severe risk, and it is unreasonable to expect police to risk their lives when apprehending a suspect. Soldiers are not given carte blanche to kill: under the modern rules of war they may not purposefully kill civilians, for example, though this restriction is not universally recognised, especially in asymmetric wars where one side is far more powerful than the other. Soldiers may not only kill enemy combatants, but will be rewarded for doing so. In the Vietnam War, efficiency guru Alain Enthoven used the "body count" as a measure of how well the war was going (he was subsequently brought in to reorganise the British health service by introducing the "target culture").

People can be found not-guilty of even the most serious crimes if they do not have the ability to understand the consequences of their actions - either permanently or temporarily. We often detain such people purely on safety grounds. In making judgements about the severity of breaches of social norms we have to take many degrees of intentionality and self-control into account.

Thus an all-or-nothing freewill is not a very helpful instrument for thinking about morality. Moral judgements can be very complex indeed, and always take in the motivations and the underlying mental and emotional state of the perpetrator (and often of the victim as well). Contracausal freewill is wholly irrelevant to how our laws operate and to how common-sense morality operates (as already pointed out by Churchland).

As an aside, it is interesting that the baby boomer counter-culture seemed to be all about giving one's impulses free rein. From "free love" to "greed is good", sections of the post-war generations felt the need to stop restraining themselves and let it all hang out (as the saying goes). As it turns out, the backlash against this loosening of social restraints has been a far more significant social movement. Neolibertarianism was driven primarily by conservative business people. They wanted freedom from government control over their collective ability to do business, and conceived of this within strong social boundaries which restricted what was acceptable behaviour. The irony is that Neolibertarians are often authoritarian control freaks. They saw increasing liberalism and individualism as a threat to their way of life and took steps to take back control. Now, ironically, we struggle to pass laws to curb the excesses of those same business people, even in the face of global economic instability and catastrophic climate change. We can now talk openly about sex, and women have a great deal more social equality, but the businessmen own a great deal more of the wealth and have virtual control over governments. The ideology of the world's leaders is that nothing ought to restrain the creation of profits and that abstract markets are more efficient than governments (though the empirical evidence shows this to be untrue). Conservative elements in society still allow liberalism to make gains, such as same-sex marriage, but only where it has no consequences for the wealth of the wealthy. At the same time, the threat of terrorism continues to eat away at civil liberties and individual freedoms. So the disinhibition of the 1960s was a Pyrrhic victory.

The question of who is responsible for actions has been obscured to some extent by determinist scientists. The media has shown itself time and again to be highly irresponsible when reporting science: media companies are in the business of entertainment, so news streams are only secondarily about informing us and primarily about distraction and sensory stimulation. Scientists with a controversial message are more likely to get the oxygen of media attention than those with a more sober one. However, there is still an argument against freewill based on the view that the universe is deterministic. We turn now to this argument.


Are We Deterministic Robots?

The view that, because regularities in the universe can be framed as mathematical expressions, the universe must be deterministic is popular amongst physicists. In a deterministic system, if we had perfect knowledge of the starting conditions, the elements, and the rules, then we could perfectly describe the behaviour of the system indefinitely far into the future. This kind of Determinism was espoused, for example, by Stephen Hawking in The Grand Design:
"so it seems that we are no more than biological machines and that free will is just an illusion." (32)
Sean Carroll has also expressed the view that we're all machines that think. This argument is related to the one I was exploring with regard to the afterlife. Life is made up of atoms and we understand the behaviour of atoms, so we understand the basis upon which life exists, even if we don't quite understand all the processes of life yet. But whereas the claim about the afterlife was strictly limited to the persistence of information about the person after death as governed by the Second Law of Thermodynamics (Entropy always increases in a closed system), this claim about a deterministic universe is unlimited. The unlimited nature of the claim trips it up.

It is true that we understand the behaviour of atoms at the energy, mass, and length scales relevant to living things. But we also have to take into account the nature of complex systems. Even when a complex system is made up of simple elements following simple rules, its behaviour can be unpredictable: knowing the rules does not allow us to compute the outcome. When a system is made up of complex elements combining according to complex rules, with emergent properties at several different levels at once, then for any practical purpose that system is not deterministic. An economy or the weather is neither predictable nor, in any useful sense, deterministic.
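The point that simple rules can defeat prediction can be illustrated with a sketch of my own (not from the post): the logistic map, a one-line rule that is chaotic at r = 4, so two starting values differing by one part in a billion soon produce completely unrelated trajectories.

```python
# The logistic map x -> r*x*(1-x) is chaotic at r = 4: a rounding-error
# difference in the starting value is amplified until the two
# trajectories are unrelated, even though the rule is trivially simple.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        x0 = r * x0 * (1.0 - x0)
        xs.append(x0)
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.400000001)  # start differs by 1e-9

# Early on the two trajectories are indistinguishable...
print(abs(a[5] - b[5]))    # still around 1e-9
# ...but within ~50 steps they no longer track each other at all.
print(abs(a[50] - b[50]))
```

Knowing the rule perfectly does not help: any finite-precision knowledge of the starting value is eventually amplified into total ignorance.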

As far as life is concerned, we do not have perfect knowledge of the starting conditions, nor can we ever gain such knowledge. The same appears to be true of the universe as a whole: we can conjecture, but not have perfect knowledge. In fact, because of random quantum fluctuations in space-time, we can never be entirely sure about the elements in play. And the rules are sufficiently complex that to date no one understands them with anything like perfect knowledge (something acknowledged by Hawking, who goes so far as to say that he doubts we will ever have a unified set of equations for the universe). The mathematics describing a single sub-atomic particle interacting with all the known fields has yet to be solved; the candidate string-theoretic descriptions involve 6 or 7 extra dimensions of space that are themselves so small that they add nothing to the dimensionality we experience.

We can demonstrate the problem by considering a simple pendulum and then adding complexity. A simple pendulum swings in a plane, with one end fixed. Its behaviour follows a simple law: for small amplitudes (θ ≪ 1) the period is approximated by:

$$T \approx 2\pi\sqrt{\frac{L}{g}}$$

where L is the length of the pendulum and g is the acceleration due to gravity. For larger amplitudes the period is given more precisely by the series:

$$T = 2\pi\sqrt{\frac{L}{g}}\left(1 + \frac{1}{16}\theta_0^2 + \frac{11}{3072}\theta_0^4 + \cdots\right)$$

This is complicated, but in fact not difficult to solve to an arbitrary level of accuracy (the terms of the series quickly become vanishingly small). For most large clocks only one or two terms of the series are required for sufficient accuracy in calculations.
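As a numerical sketch of my own (not from the post), the small-angle formula and the first few series terms are easy to compare: for a one-metre pendulum released from 30° the correction amounts to under 2%.

```python
import math

def pendulum_period(length, theta0=0.0, g=9.81):
    """Pendulum period in seconds: the small-angle value 2*pi*sqrt(L/g),
    multiplied by the first terms of the large-amplitude series
    1 + theta0**2/16 + 11*theta0**4/3072 + ..."""
    t0 = 2.0 * math.pi * math.sqrt(length / g)
    correction = 1.0 + theta0 ** 2 / 16.0 + 11.0 * theta0 ** 4 / 3072.0
    return t0 * correction

# A 1 m pendulum: small-angle period is about 2.006 s; released
# from 30 degrees the series correction adds roughly 1.7%.
print(pendulum_period(1.0))
print(pendulum_period(1.0, math.radians(30.0)))
```

This is the sense in which the single pendulum is tractable: the higher terms shrink so fast that truncating the series costs almost nothing.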

Intuitively we might think that adding a joint to the pendulum halfway along its length, making in effect a pendulum attached to the end of another pendulum, would complicate matters, but not by much. In fact a double pendulum's motion is chaotic. Technically, if we precisely specify the starting conditions we can predict its motion, but only by calculating every moment forward from time = 0. For each moment in time the calculation gets longer, until it very quickly becomes too difficult a problem for all the computing power in the universe. If we start observing at an arbitrary time, we have almost no chance of calculating what will happen next. A double pendulum is still technically deterministic, because it is theoretically possible to know the starting conditions, the precise details of the system, and the rules that must be followed.
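To make the sensitivity concrete, here is a rough sketch of my own (not from the post): an equal-mass, equal-length double pendulum integrated with a simple Runge-Kutta scheme, using the standard textbook equations of motion. Changing the release angle by a billionth of a radian changes where the pendulum is twenty seconds later.

```python
import math

G, L, M = 9.81, 1.0, 1.0  # gravity, rod length, bob mass (equal arms)

def derivs(state):
    """Equations of motion for an equal-mass, equal-length double
    pendulum; state is (theta1, theta2, omega1, omega2)."""
    t1, t2, w1, w2 = state
    d = t1 - t2
    den = 2.0 * M + M - M * math.cos(2.0 * d)
    a1 = (-G * (2.0 * M + M) * math.sin(t1)
          - M * G * math.sin(t1 - 2.0 * t2)
          - 2.0 * math.sin(d) * M * (w2 ** 2 * L + w1 ** 2 * L * math.cos(d))
          ) / (L * den)
    a2 = (2.0 * math.sin(d)
          * (w1 ** 2 * L * (M + M) + G * (M + M) * math.cos(t1)
             + w2 ** 2 * L * M * math.cos(d))
          ) / (L * den)
    return (w1, w2, a1, a2)

def rk4_step(state, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = derivs(state)
    k2 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = derivs(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = derivs(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt / 6.0 * (p + 2 * q + 2 * r + t)
                 for s, p, q, r, t in zip(state, k1, k2, k3, k4))

def run(theta1, steps=20000, dt=0.001):
    """Integrate for steps*dt seconds from rest, both arms near horizontal."""
    state = (theta1, math.pi / 2.0, 0.0, 0.0)
    for _ in range(steps):
        state = rk4_step(state, dt)
    return state

a = run(math.pi / 2.0)
b = run(math.pi / 2.0 + 1e-9)  # release angle differs by a nanoradian
print(abs(a[0] - b[0]))        # after 20 s the inner angles have diverged
```

The two runs agree at first, but the nanoradian difference is amplified exponentially, which is exactly why measuring the state "precisely enough" is a fantasy.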

If we conceive of an atom as being connected to other atoms by forces, then a system of two atoms would be like a double pendulum with no fixed end, vibrating in three dimensions instead of two. The motions of these two atoms are chaotic and far harder to predict than those of a simple double pendulum, which were already virtually impossible.

Now consider that there are of the order of 10^100 atoms in the universe, and all of them are connected via forces to each other. And we need to keep in mind that atoms are themselves systems of smaller particles, which are again all interacting with all the other particles, and that fundamentally everything we see as particles and forces is simply vibrations of interacting fields that extend throughout the universe. Conceived of as a pendulum, the overall motion of the universe is essentially infinitely complex. Even if we could precisely define the first moment in the history of the universe (something we cannot yet do), by the second moment the vibrations in the various fields would be impossible to calculate. By the time particles appeared on the scene as an emergent property of the cooling universe, the system was already impossible to predict at the lowest scales. A system like this cannot be considered deterministic, even in theory.


What Kind of Ordered Universe Do We See?

So an obvious question is: why do we see ordered behaviour at all? The order we see emerging from this 3D pendulum with 10^100 moving parts is a matter of emergent properties appearing at different scales. Order, or quasi-order, appears in chaotic systems. Think of a hurricane: from space it looks like a relatively regular spiral, or a circle, even though at ground level it is chaotic. Also, the intensities of the forces involved fall off according to inverse-square or inverse fourth-power laws. In theory all fields extend throughout the universe, but the effects of forces are typically short range. Gravity is the only force with effects over very long ranges, and that is mainly because the masses involved in cosmological phenomena are unimaginably large.

The characteristic ordering (or quasi-ordering) we see depends on the scale we adopt. For example, 12 g of pure carbon (one mole) contains about 6 × 10^23 atoms. In a previous essay I pointed out that if each atom were one millilitre in volume, that carbon would fill the western Mediterranean Sea. The atoms are in motion, but the motions are many orders of magnitude smaller than a human eye can see, and when we look at this many atoms, the tiny motion of each atom is cancelled out by other atoms doing the opposite. Each atom is regular in a number of ways: each carbon atom has six protons and six electrons, and either 6, 7, or 8 neutrons (giving ¹²C, ¹³C, and ¹⁴C); the chemistry of carbon is very predictable and the shape of its molecules known very precisely. But a diamond, a single gigantic molecule of carbon atoms, does not behave like an individual atom. Crystals are macro-structures that exhibit different kinds of regularities than atoms do. Sit two diamonds together and they do not interact, do not behave as a system at all. Carbon macro-molecules have very different properties from individual carbon atoms: a carbon atom is highly reactive and can form millions of compounds, while diamond is one of the most inert naturally occurring substances.
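For a sense of the scale, a back-of-envelope check (my arithmetic, not the essay's) of how many atoms are involved:

```python
# One mole of carbon-12 weighs 12 g and contains Avogadro's number of atoms.
AVOGADRO = 6.022e23   # atoms per mole
MOLAR_MASS = 12.0     # grams per mole of carbon-12

atoms_per_gram = AVOGADRO / MOLAR_MASS
print(f"{atoms_per_gram:.1e} atoms per gram of carbon")  # ~5.0e22

# If each atom occupied one millilitre, a mole of carbon would occupy:
volume_m3 = AVOGADRO * 1e-6   # one millilitre is 1e-6 cubic metres
print(f"{volume_m3:.1e} cubic metres")
```

Even a gram of matter contains tens of sextillions of atoms, which is why the individual jiggling of any one atom is invisible at the scale we inhabit.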

Stephen Hawking wants us to believe that people are just complex machines, but this is not credible either. Perhaps at some absolute level of abstraction it is true, but not in any meaningful sense. The most complex machines we can make are still less complex than a single cell in our body. We are made of atoms, but millions of billions of billions of atoms, following complex rules; built up from another system of simpler components, also following complex rules, itself the visible manifestation of fields. We could not specify all the atoms of a person and predict what was going to happen next without first calculating every vibration in every field in the entire universe from the first moment in time. With all due respect, Hawking may be a good physicist, but he appears to be a poor philosopher. This may be why he also wrongly claims that philosophy is dead. There is nothing deterministic about a human being, which is why philosophy is very much alive (if not entirely well).

Nothing we know about the emergent properties of collections of septillions of atoms rules out freewill as an emergent property. Nor are consciousness, or for that matter life itself, ruled out as properties of these unimaginably complex systems. We are very far from having plumbed the depths of the complexity of the universe, despite the fact that the elements and the rules governing the system are quite clear. An analogy here is the chess board. There are 32 pieces on 64 squares and the game has clearly defined rules. We can calculate the theoretical number of different games, and the best computers are better than the best humans, and yet no recorded game has ever been the same as a previously recorded game. The difference is that our game has 10^100 pieces!
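The scale of the chess analogy can be made concrete with a rough Shannon-style estimate (my sketch, not a figure from the post): with roughly 35 legal moves per position and games of around 80 half-moves, the game tree comes out well above 10^120 possible games.

```python
import math

# Shannon-style estimate of the chess game tree: about 35 legal moves
# per position, and games of about 80 half-moves (plies).
BRANCHING = 35   # typical number of legal moves per position
PLIES = 80       # half-moves in a typical game

log10_games = PLIES * math.log10(BRANCHING)
print(f"roughly 10^{log10_games:.0f} possible chess games")
```

If 32 pieces on 64 squares generate a game tree that large, the behaviour of a "game" with 10^100 interacting pieces is not going to be exhausted by any calculation.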


So, Do We Have Freewill?

The answer to the freewill question appears to be the one ascribed to the Buddha in last week's sutta translation and commentary. We unquestionably have some choice, and at the very least we exercise self-control. Perhaps this is why the Buddhist precepts are phrased in terms of refraining from actions?

The arguments against freewill that have emerged recently in the scientific community are simply poor philosophy. As Mary Midgley (1979) has said:
"There is now no safer occupation than talking bad science to philosophers, except talking bad philosophy to scientists."
That so many scientists are poor philosophers is of course deeply unhelpful. Midgley had Richard Dawkins firmly in her sights in making this comment; she considered his metaphor of the "selfish gene" to be very poor philosophy indeed (as do I). To be fair, Dawkins and his followers thought Midgley completely misunderstood what he was getting at. From my point of view, Dawkins' idea is just a Neolibertarian reading of Darwinism. That's not science, and it's not really philosophy either; it's ideology. What's more, Neolibertarianism is rooted in the Utilitarian philosophy of Jeremy Bentham, which is rubbish philosophy since it fundamentally misunderstands human beings. Many of these behemoths of popular science are in fact quite poor at philosophy and have created a legacy of poor thinking (especially in the form of unsuitable metaphors) that will continue to haunt intellectuals for many years to come.

In many ways this debate about freewill is simply silly. It is a legacy of theological debates that were silly to start with. In order to deny freewill, one must make a choice; in order to argue against freewill, one must make a sustained effort. It is simply not credible. Of course one can choose not to believe in freewill, but that position is self-defeating. Anti-freewill campaigners must argue that they are compelled to believe what they do, which leaves them trying to explain why not everyone is compelled to the same conclusion. If we are not free, then we are apparently not free in a variety of different and conflicting ways. These differing conclusions are as powerful an argument against determinism as any.


~~oOo~~
Churchland, Patricia S. (2013) Touching A Nerve: The Self as Brain. W. W. Norton & Co. 
Midgley, Mary. (1979) 'Gene-juggling'. Philosophy. 54(210): 439-458.


See also:-
Metzinger, Thomas. (2013) 'The myth of cognitive agency: subpersonal thinking as a cyclically recurring loss of mental autonomy.' Frontiers in Psychology, 19 December 2013. doi: 10.3389/fpsyg.2013.00931. http://journal.frontiersin.org/article/10.3389/fpsyg.2013.00931/full