
14 May 2021

The Mind-Body Problem and Why It Won't Go Away

One doesn't have to spend a long time talking to people to discover that most of them subscribe to some form of mind-body dualism. Not in any formal way. No one is declaring "I am an ontological dualist". Rather, they find ideas like life after death and a mind that can be independent of the body to be intuitively plausible. These types of views appear to be common to people of all religions and, interestingly, to many people of no religion. It's a gut feeling that death is not the end, and a willingness to believe the dualism that this entails. Moreover, many of the people who are ambivalent seem to think that scientific explanations of the world have left the door open to this. The idea is that the afterlife cannot be proved one way or the other; it is beyond the scope of science.

Since virtually all philosophers and scientists now reject such ontological dualism, we have to wonder what's going on here. In this essay I will try to explain why dualism has such enduring appeal, why it continues to confound philosophers and scientists.

Popular culture effortlessly absorbs a philosophical or scientific explanation when it seems intuitive. For example, we use any number of expressions drawn from psychoanalysis—ego, neurosis, narcissistic, subconscious—in daily life without a second thought. Where an explanation is counterintuitive, popular culture simply ignores philosophers and scientists. A striking example of this is that I know plenty of people who still believe that you can catch a chill from being cold and wet; an idea rooted in the four humours theory of the 2nd century physician Galen, which links the qualities cold and wet to the phlegm humour.

So there is still a mind-body problem and it is non-trivial because the majority still find mind-body dualism intuitively plausible despite several centuries of powerful counter-argument and evidence. Any account of the mind-body problem needs to deal with this or it isn't useful. And yet such aspects of the problem are not even part of the philosophy curriculum. Rather, they are dealt with by a completely different academic department, psychology, as though belief is no concern of philosophers. Moreover, philosophers dismiss those who believe in dualism as cranks, idiots, or dupes.

As a rule of thumb, I contend that when a problem has been discussed without any resolution for many centuries we have to consider that perhaps we have framed it badly.


Alternative Approaches to Standing Problems

When I took up the problem of identity as reflected in the traditional dilemma of the Ship of Theseus, I realised—with help from John Searle—that the traditional framing of the problem effectively made it insoluble. This may have been unconscious when the problem was first posed, but there's no excuse for retaining this unhelpful approach.

John Searle's The Construction of Social Reality proposes a useful matrix for thinking about facts. On one axis is the objective-subjective distinction and on the other is the epistemic-ontological distinction. This gives us a grid of four different kinds of facts.

Ontologically objective facts concern the inherent features of an object that are independent of any observer. An example of this is that a screwdriver is made of metal and plastic or wood.

Epistemically objective facts concern statements that are true because we have prior knowledge. We know that the object is a screwdriver only if we have prior knowledge of modern building technology. But everyone who knows what a screwdriver is knows that this screwdriver is one.

Ontologically subjective facts concern statements that are true because of the observer's relationship with the object. Searle especially links this to functions. The function of a screwdriver is to turn screws. But unless you know what a screw is this doesn't make sense. Moreover the function is not inherent in the materials of the object. A function is something that humans impose on objects. The fact that a screwdriver is for turning screws is a real, but subjective fact.

Epistemically subjective facts exist only in the mind of the observer. For example, "this is my favourite screwdriver" is true for me, but you may have a different favourite screwdriver. And the difference does not invalidate either fact. There is no contradiction because the fact is relative to the individual.

With respect to the ship of Theseus, an ontologically objective fact is that the ship is made of timbers arranged in such a way that it floats and can move easily through the water. An epistemically objective fact is that this arrangement of timbers is called "a ship". An ontologically subjective fact is that the function of this ship is to ferry people across the ocean. And an epistemically subjective fact is that this ship belongs to Theseus, it is Theseus's ship.
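
For readers who find it easier to scan a concrete artefact than prose, here is the same grid as a tiny data structure. This is purely illustrative (Python, restating the ship facts from the paragraph above); the table is my own, not anything Searle himself offers.

```python
# Searle's two axes (ontological/epistemic x objective/subjective) yield four
# kinds of facts. The ship of Theseus example, restated as a lookup table.
ship_of_theseus_facts = {
    "ontologically objective":  "The ship is made of timbers arranged so that it floats and moves easily through the water.",
    "epistemically objective":  "This arrangement of timbers is called 'a ship'.",
    "ontologically subjective": "The function of this ship is to ferry people across the ocean.",
    "epistemically subjective": "This ship belongs to Theseus; it is Theseus's ship.",
}

for kind, fact in ship_of_theseus_facts.items():
    print(f"{kind}: {fact}")
```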

Traditionally we are supposed to ask, "Is it the same ship when all the timbers have been replaced?" And this generally ties us in knots. Some wish to say it is the same ship because the whole is unchanged, while some wish to say it is not the same ship because all the parts have changed.

My approach is to look at the different types of facts. For example, the ship is a ship at the start of the process of change and it is a ship at the end of the process. We can identify it throughout as a ship. So it has identity qua ship in the mind of any observer who knows what a ship is. This fact is epistemically objective. The ship can carry out its function throughout, so it has identity qua function, i.e. being an ocean-going passenger boat. This fact is ontologically subjective.

The problem here is that the identity of the ship is subjective: it exists in the mind of the observer, not in the object. If the observer believes it to be Theseus's ship then, to them, it is. If I have a different belief that may also be true and the difference does not necessarily invalidate either belief. The ontological status of the ship doesn't matter. It could be, and probably is, purely hypothetical.

The ship qua ship or qua ferry very obviously has identity over time (though I don't see this approach in the account of the problem that I have read). But the kind of identity we are being asked about when the question is framed as—Is it the same ship?—is subjective, i.e. it's not inherent in or to any ship.

The least interesting and least answerable questions are the ones that philosophers typically ask without delineating what they mean by identity, i.e. Is identity vested in the whole or the parts? The answer is that identity is in the mind of the observer. It is a belief about the ships. And as we know, belief amounts to having an emotion about an idea. Opinions are post hoc rationalisations of such emotions. And this means that the order of production is

feelings → beliefs → actions → reasons

Not the other way around.

There are two points here. The first is that philosophers can't afford to ignore how people actually think, proposing solutions in a social vacuum. They may be technically right, but if everyone ignores them, what is the point?

The other point is that philosophers are often wrong. The further back in history that we go, the greater the likelihood that philosophers are trapped in an unhelpful way of thinking about an issue. We don't have to accept the traditional way that philosophical problems are framed, especially when centuries of argument have not led to any resolution. If we can see a better way to think about the problem then we are free to adopt it and give the finger to philosophers.


Why We Still Have a Mind-Body Problem

Given the overwhelming consensus amongst academics and intellectuals for ontological monism, why do we still routinely encounter the mind-body problem? I've tried to argue that the mind-body problem would be better framed as the matter-spirit dichotomy. I think this is a more general statement of how people actually think about the mind-body problem. People tend to think of matter as cold, dull, hard, dense, lifeless; and by contrast spirit is warm, bright, immaterial, diaphanous, alive. The body is thus a special case of matter, in this view, because it is matter animated by spirit. Life was seen as something added to matter: an élan vital, or spark of life (such a view is termed vitalism).

If you have ever seen a corpse you know that it is very different from a living body. With reference to a living body, the corpse has shifted decisively towards the archetype of matter. The life has gone out of the person. The difference is what we conceptualise as spirit. Across many cultures, the ancients understood spirit as synonymous with breath. Terms such as spirit, animus, prāṇa, qi, and so on all mean "breath". In the Christian tradition this is epitomised by Yahweh breathing (spiritus) life into the clay body he fashioned for Adam. Adam's soul is the breath of God.

For the longest time, death was equated with the cessation of breathing. And before resuscitation methods were invented this was adequate. Once we realised that forcing air into the lungs of the "dead" person could revive them, we needed a new definition of death. Around the same time the function of the heart was discovered and the cessation of the heartbeat became the new definition. Then we learned how to restart hearts, and discovered brain waves, and the cessation of brainwave activity became the definition. Popularly, however, the cessation of breathing is still associated with death. Someone who has been resuscitated is said to have died and come back, and their experiences while their breath or heart was stopped are erroneously termed "near-death experiences" and treated as a source of knowledge about the afterlife. The fact that we continue to have such experiences is seen by some as proof that there is an afterlife.

Other types of experience can also be interpreted as the mind being independent of the body: lucid dreams, out-of-body experiences, dissociative experiences brought on by trauma, drugs, or physical injury (think of Jill Bolte Taylor's stroke). And we don't need to have one of these ourselves to find accounts of them plausible. Bronkhorst (2020) deals with how accounts of such experiences are transmitted by those who have not experienced them and become part of the public discourse. I also keep in mind this quote from The Ego Tunnel by Thomas Metzinger:

For anyone who actually had [an out-of-body experience] it is almost impossible not to become an ontological dualist afterwards. In all their realism, cognitive clarity and general coherence, these phenomenal experiences almost inevitably lead the experiencing subject to conclude that conscious experience can, as a matter of fact, take place independently of the brain and body. (p.78. Emphasis added)

The urge to dualism is really quite strong. It is matter-spirit dualism that keeps alive the possibility of an afterlife, and the desire for an afterlife that helps keep dualism alive. This is not something humans are likely to give up on soon, even though for many intellectuals life after death is simply not possible.

Another problem, which John Searle pointed out, is that materialism is still rooted in ontological dualism. Materialists still divide the world into two substances; the difference is that they assert that matter is real and mind is not real. Idealists do the same but assert that matter is unreal and mind is real. Even though a materialist may argue that mind is not real—that it is a mere epiphenomenon—they still tacitly concede a substantial difference between mind and matter. They still talk about two distinct substances, even if one is unreal. Lay people pick up on this kind of equivocation even if they can't put it into words.

This tells us that materialism is not an answer because it does not go far enough. If the thesis is idealism and the antithesis is materialism, then we need a synthesis of the two. One synthesis is genuine ontological monism which holds that there is no ontological distinction between mind and matter, that neither can be reduced to the other. In order to address the persistence of dualism we have to invoke epistemology.


Epistemic Pluralism

We can all observe that we have different inputs into our sensorium. I know the world of objects through sight, hearing, taste, smell, touch, temperature, kinaesthesia, etc. I know the world of mind through conscious mental activity and the appearance of pre-formed results of unconscious mental activity emerging into my awareness (intuitions, etc). In other words, even if we formally accept a monistic world in which mind and body are manifestations of a singular, unified reality, there is still an inescapable epistemic distinction between our knowledge of the world and our knowledge of mind.

It is this epistemic distinction that fuels the plausibility of the ontological distinction, especially in the light of out-of-body experiences and other altered states that give the vivid impression of mind independent of matter.

Most people, most of the time, suspend disbelief and proceed in daily life as naive realists. To do otherwise would be inefficient and potentially dangerous. Anyone can examine their experience and ponder the distinction between perception and reality. We all know that there is a difference because our perceptions lead us astray in minor ways quite often. For example, mistaking an object for a threatening agent (e.g. a predator or a dangerous defensive agent like a snake or spider), or getting a colour wrong because of the lighting or background. But note that I never make huge mistakes like perceiving my home to be in Cambridge, England, only to discover one day that in fact I am still in Auckland, New Zealand. Glitches on this scale are a sign of pathology. Moreover, minor glitches tend to resolve themselves quite quickly; we may mistake a stick for a snake at a glance, but this does not survive sustained attention. We usually recognise that the "snake" is a stick.

Of course there are abnormal perceptions. Colour-blindness, for example. One can live with colour blindness without too much danger, but one cannot safely pilot an aeroplane. With psychotic delusions the problem becomes more serious. If I perceive my children as demons and follow the urging of internal voices to kill them, the result is catastrophic for everyone involved.

Normal perception is quite reliable and where it is unreliable it errs on the side of protecting us from danger or it is trivial. And so, in daily life, we take perception as reality and most of the time this is fine. Keep in mind that humanity evolved over millions of years and attained the anatomically modern form about 200,000 to 300,000 years ago. For most of this time we were all naive realists and ontological dualists and we survived and thrived. There appears to be no evolutionary disadvantage to being an ontological dualist. It is even arguable that belief in an afterlife keeps us from despair over the fact that we all die, and that ontological dualism therefore gave believers some advantage.

The problem is that naive realism encourages us to reify experience, i.e. to consider that what we experience is reality without any intervening processes. And this means we have a tendency to reify the epistemic distinction between world and mind. Hence, so many of us find ontological dualism so plausible. However, this is just the default setting for human beings. It's not a conscious ideology. On the contrary it is only with sustained (and educated) effort that some of us are able to break away from the gravity well of naive realism and subsequent dualism and see the world anew.


Subjectivity

We know that our senses respond to a range of different stimuli, from visible light to physical vibrations to temperature differences to our own muscle tension. But all of these are turned into identical electrochemical pulses transmitted by nerve cells exchanging sodium and potassium ions across a semipermeable membrane, linked by synapses in which the signal is briefly carried by neurotransmitters. The point is that the signals that arrive in the brain are not distinguished by being of different kinds. They are only distinguished by where in the brain they arrive and the architecture of the brain. We are still arguing over the extent of the role of the brain in creating experience, but recently Lisa Feldman Barrett noted that the optic nerves account for only about 10% of the inputs to the primary visual cortex. Fully 90% of the inputs are from elsewhere in the brain. Vision must involve a considerable amount of self-stimulation. And presumably the other senses must be similar. Moreover, we see similar patterns of brain activity whether the subject is seeing something or imagining it. Vision and visualisation both use the same parts of the brain, which explains why hallucinations can be so compelling.

If we step back from this level of detail and simply take perception as we perceive it, then our "world" is made up from a variety of kinds of sensory stimulation: appearances, sounds, smells, tastes, tactile sensations, temperature differences, muscle tension, etc. And the characteristic of all of these is that they are objective to some extent. You and I may disagree on the pleasantness of an odour (an epistemically subjective fact) but we agree that there is an odour. And this agreement leads us to conclude that the odour exists independently of either of us. The smell is an ontologically objective fact. If the smell is the reek of methyl or ethyl mercaptan (the sulphur analogues of methanol and ethanol) then we may agree that it serves the function of making natural gas for cooking detectable by its odour (an epistemically objective fact).

The point is that for many of our senses there is some aspect of the information we have access to that is public and accessible to any observer, even if we disagree on some of the subjective facts. No one would ever argue that the pungent smell of ethyl mercaptan is not an odour. Even the synaesthete is aware of perceiving one sensory modality in terms of another. Synaesthesia is not a delusion.

Again, our awareness of mental activity is not like our awareness of the other senses. We may be able to use functional MRI to see enhanced blood flow in different parts of the brain correlating with some experience, but the content of our mental activity is not available to anyone else. Our mental sense is ontologically and epistemically subjective. In some ways, mental activity is analogous to digestion. We swallow food and it is digested within our body. The nutrients are absorbed by our gut and circulate in our blood. Those nutrients are not publicly available; they are contained within us. We can detect changes in blood flow or blood components, but this information does not permit my nutrients to nourish your body.

In this view, subjectivity is not such a mystery. The brain is an internal organ, housed within the skull, with the body as its interface with the world. Sense data come in, muscles move in response to signals from the brain (and to some extent from the spinal cord). It would make no more sense for mind to be public than it would for nutrition to be public. Inputs from the brain to the brain, i.e. from one part of the brain to another, are going to have a different flavour from those which come from outside the brain.


Conclusion

Despite advances in science and refinements in philosophy, we still routinely encounter the so-called mind-body problem. I've argued that this is so because there is a striking epistemic distinction in the sensory modes through which we experience mind and body, self and world, spirit and matter. We all have a tendency to reify this epistemic difference and treat it as a metaphysical difference. And this lends plausibility to the belief. We feel that self and world are quite different and thus we believe that they are, we take actions based on this belief, and we subsequently float reasons why we believe or why we acted in that way. This is the process:

feelings → beliefs → actions → reasons

Scientists and philosophers have decisively come down on the side of monism in their work, with a few holdouts that are not taken very seriously. The methods employed tell us that what seems intuitive and plausible is not the case. If we are interested in understanding the world as it is, then this is important.

Part of the problem is that many science communicators are still working with the classical theory of rationality: if you just present someone with the facts, they will change their minds. That is to say, we start with reasons and expect people to work backwards, against the flow, and change how they act, believe, and feel. And it doesn't work. Sadly, right-wing politicians have embraced the newer model and now spend all their time trying to manipulate how we feel, while left-wing politicians are still trying to make rational arguments.

On the other hand, there is no great disadvantage to being an ontological dualist. There appears to be no evolutionary disadvantage and there is no day to day disadvantage. When we combine the intuitive plausibility with the lack of any disadvantage for being wrong we get a persistent fallacy. Many of the dualists I know are simply not interested in metaphysical monism. To them it seems to lack salience, or if it is salient, then it is counterintuitive.

There is no getting around the fact that the audience for philosophy is human beings. If we ignore this and pursue truths in the abstract then we can easily become irrelevant to most people. Worse, many intellectuals fail to understand why their ideas don't take off and they blame the audience. As communicators, the responsibility lies with us to get our message across. We are making assertions and thus the burden of proof is on us. If we fail to get our message across, then we have to consider this our failure, not the failure of the audience. It is a poor teacher who blames the student.

As I write this, I am waiting to hear back from a conference organiser about a proposal to give a presentation. What I propose to do is tear down 2000 years of hermeneutics and exegesis and argue for an entirely new way of seeing things. I have outlined the reasons for doing this in ten peer-reviewed articles and dozens of essays here on my blog. At the same time as feeling confident in my conclusions, I am acutely aware that none of these articles has been cited. I think some of them have been read by some people, but as yet my work is either unknown or not considered salient. Heart Sutra articles still appear that are completely unaware of my articles. How to go about dismantling a familiar, and to some extent cherished, paradigm? If I had four hours I might present something like a coherent case. But the best case scenario is that I'll have one hour. At best I'll be able to gloss some of the main points. I doubt anyone who has not already read the relevant papers will even follow the argument, let alone be persuaded by it. And yet I have to try.

This is the kind of dilemma that philosophers face all the time in getting across new ideas. New paradigms seldom emerge fully formed and they are almost always resisted by the old guard. Max Planck quipped, perhaps a little unfairly considering history, that his field progressed one funeral at a time. In other words, as the old guard died they made space for new ideas.


Bibliography

Bronkhorst, Johannes. 2020. "The Religious Predisposition." Method and Theory in the Study of Religion 33(2): 1-41.

Metzinger, Thomas. 2009. The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books.



25 September 2015

The Complex Phenomenon of Religion



It's 25 years today since my father died. His death was one of the events that got me thinking about life, death, and all that. I dedicate this essay to:

Peter Harry Attwood (1935-1990).

Religion is sometimes portrayed as a simple phenomenon: a simple crutch for the weak, a "violent" control mechanism, and so on. Although these kinds of criticisms sometimes contain a grain of truth, in fact religion more generally is a complex phenomenon that emerges from the interaction of a number of qualities, characteristics, or abilities that humans possess. In this essay I will try to outline a set of minimal common features of all religions and link them to an evolutionary account of humans.

The diagram below attempts to summarise some of the key factors involved and to show how these factors interact to produce the basic phenomena of religion. However, any given religion may include many more elements and be considerably more complex than this summary suggests. At the end of the essay I will add a few comments about Buddhism as a religion and about what makes Buddhism distinctive (or not).




Religion seems to minimally involve supernatural agents, morality, and an afterlife. I have argued that belief in all these is "natural", by which I mean they are emergent properties of the way our brains work. I do not mean that these are necessarily accurate intuitions in the sense of being true. However, as ideas which have guided human behaviour they have been very successful in helping us go from being just another species of primate to the highly sophisticated cultures we live in today (and I include all present day human cultures in this). What follows is not a critique, but a description. There are possible critiques of every point, both of the conclusions of religieux and of the explanations I am proposing here. But I want to outline a story about religion without getting bogged down in the critique of it. In most cases I've made the critique previously.

Supernatural agents emerge from a combination of properties of the brain such as pareidolia (the propensity to see faces everywhere), agent detection, and theory of mind (Barrett; see also Why Are Karma and Rebirth Still Plausible?). Fundamental to the supernatural is ontological dualism and the matter/spirit dichotomy.

Theory of mind is tuned to make living in social groups feasible and means we tend to see other agents in human terms (anthropomorphism). Supernatural agents are human-like in their desires and goals, and counter-intuitive only in that they lack a physical body. Because this is minimally counter-intuitive it makes supernatural agents more interesting and memorable. Thus, human communities tend to be surrounded by a halo of supernatural agents. Lacking bodies, supernatural agents may possess associated abilities, such as the ability to move unhindered by physical obstructions, but they are often located in some physical object, such as a tree, rock, or home. Those who can bridge the two worlds of matter and spirit we call shamans. Of course, spirits also operate in both worlds; if spirits remained wholly in their spirit world they would be a lot less interesting. For some reason the spirit world seems inherently leaky. Shamans interpret and use knowledge gained from spirits to guide decision making in the material realm. Supernatural agents can become gods and when they do, shamans become priests.

Fundamental to this account of religion is the social nature of human beings. Any account of religion which rejects the social nature of humanity or demonises the basic structures and functions of human groups is simply uninteresting (so that is almost all psychology and most of social theory inspired by French philosophers). Unfortunately in this libertarian age there is a tendency to take a dismissive or critical stance on human groups. Social living undoubtedly involves compromises for the individual. But the evolutionary benefits massively outweigh any perceived loss of autonomy. What's more, human social groups look and work very much like other primate social groups. This has been apparent since Louis Leakey sent three young women into the field to study chimpanzees, gorillas, and orangutans, beginning in the 1960s. The most revealing of these studies was Jane Goodall's work on chimpanzees at Gombe Stream, which showed chimp groups to share many traits with human groups. As social animals our behaviour is tuned towards being a member of a group, as it is in all other social primates.

Robin Dunbar showed that the average size of group that a social animal generally lives in is correlated with the ratio of the volume of the neocortex to the rest of the brain. For humans this predicts an average group size of ca. 150, a figure for which there is now considerable empirical support. The Dunbar Number represents a cognitive limit, beyond which we cannot maintain knowledge of each member of a group, their roles in hierarchies, mating preferences, and past interactions, that is, the information we need to be a well informed group member. In practice humans typically organise themselves into units of about 15, 50, 150, 500, 1500, and so on, with groups of different sizes serving different functions and operating with differing levels of intimacy and knowledge. As well as collecting information through observation, we use theory of mind to infer the disposition of other group members. The smallest viable unit of humanity is probably the group of about 150.
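
To get a feel for why monitoring breaks down as groups grow, a back-of-the-envelope calculation is enough: the number of one-to-one relationships a member would need some knowledge of grows roughly as n(n-1)/2. The sketch below (Python, purely illustrative, using the group sizes mentioned above) is my own arithmetic, not Dunbar's regression.

```python
# Number of distinct one-to-one relationships in a group of n members.
# This is only a lower bound on the social information to be tracked (it
# ignores hierarchies, coalitions, and past interactions), but it shows how
# fast the bookkeeping grows past the ~150 range.
def pairwise_relationships(n: int) -> int:
    return n * (n - 1) // 2

for size in (15, 50, 150, 500, 1500):
    print(f"group of {size:>4}: {pairwise_relationships(size):>9,} pairwise relationships")
```

A group of 150 already implies over 11,000 pairwise relationships; a group of 1,500 implies over a million, which is one way of seeing why direct observation stops being a viable way of monitoring norms.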

Social living depends for its success on the active participation of all group members and on social norms. Norms are primarily there to help the group function effectively. But they may work indirectly, for example to help strengthen group identity: "We are the people who....". If social animals were, as economists claim, fundamentally selfish, then groups could not function. We are adapted to being cooperative. But there are temptations to freeload or break other group norms. Up to around the 150 number, groups maintain norms by simple observation. Everyone knows everyone else's business.

Anthropomorphism allows us to relate to non-human beings as part of our group. We also have the ability to empathise with strangers, though empathy evolved to help us understand the internal disposition of other individuals or small groups. Empathy is personal, which is why we humans still have trouble comprehending large-scale disasters without some personal point of connection. Jared Diamond has noted that in places like the highlands of New Guinea, where the population is almost at the maximum density for hunter-gatherer lifestyles and thus competition for resources is intense, tolerance of strangers is low (which is also true of other primate species). In many instances, strangers are killed on sight. However, surpluses and trade between groups make tolerance of strangers more feasible. Thus the factors which led to civilisations (i.e. much larger groupings) also facilitated tolerance of strangers. Ara Norenzayan has argued that religion with "Big Gods" was a major factor in enabling the large-scale cooperation implied by civilisation. Large groups mean that keeping track of each group member becomes more difficult. Monitoring compliance with behavioural norms starts to break down.

Social groups which perceive an active halo of supernatural beings incorporated into their daily lives may rely on these supernatural agents as monitors of group norms (Norenzayan). In which case the role of the shaman is also expanded. The beings involved in monitoring are likely to become more active and present. They may begin to play an active role, for example punishing transgressive behaviour. Because supernatural agents are already counter-intuitive in lacking physical bodies, they can easily evolve in this direction. Those involved in monitoring the social sphere have a tendency to become omnipresent (the better to see you) and, as a result, omniscient. Once they start dishing out punishments they can become omnipotent as well. Thus ordinary supernatural agents can become gods.

Once gods emerge they typically require more elaborate acknowledgement, rather like a dominant member of the tribe gets first preference in food and mates. A group may enact elaborate and costly rituals aimed at securing the cooperation of spirits and gods. Making sacrifices (in the sense of giving scarce resources) helps to encourage participation in group norms (see also Martyrs Maketh the Religion). Costly sacrifices bolster the faith of followers. Those who officiate at such ceremonies are likely shamans initially, but they become focussed on interpreting and enacting the will of the gods rather than spirits in general. In other words, they become priests. The prestige of priests rises with the prestige of the gods they serve. Along with sacrifice, priests may introduce arbitrary taboos that help define group identity. As Foucault noted, the power of the group or leaders to shape the subject is matched by the desire of the subject to be shaped. As members of a social species we make ourselves into subjects of power; or even into the kind of subjects (selves) that accept the compromises of social lifestyles. As social primates we evolved to participate in social groups with hierarchies. On the other hand, evolution no longer entirely defines us - we did not evolve to use written communication, for example (which is why writing is so much more difficult than talking).

We have a tendency to think in terms of reasons and purposes - teleology. In teleological thinking, things happen for a reason. We exist for a reason. The world exists for a reason. In modern life we often seek reasons in individual psychology. In the past other types of reasons included supernatural interference and magic. The stories we tell about these reasons for events become our mythology. Even so we are left with questions. If we are here for a reason, we want to know what it is (because it is far from obvious to most people). If following the group norms or the prescriptions of gods is supposed to make everything run smoothly, then why does it not? If gods are members of our tribe and can intervene to help us, why do they not?

Despite the emphasis on keeping group norms and associating this with the success of the group, life is patently unfair. We can be the very best group member, keep all the rules, and yet we still suffer misfortune, illness, and death. The world is unjust. But we tend to believe the opposite, i.e. that the world is just, that reasons make it so. If everything happens for a reason, then bad things also happen for a reason. But what could that reason possibly be? The meeting of injustice and teleology is extremely fruitful for religion, but before getting further into it we need to consider the afterlife.

The matter/spirit dichotomy seems to emerge naturally from generalising about human experience. Some people have vivid experiences of leaving their body, for example, which, at face value, would only be possible if the locus of experiencing is separate from the physical body. The very metaphors that we use to talk about aspects of lived experience tend to frame the matter/spirit dichotomy in a particular way. Matter is dull, lifeless, rigid. Spirit is light, lively, and infinitely flexible. Matter is low, spirit high. And so on (see Metaphors and Materialism). We understand life through vitalism: living beings are matter made flexible by an inspiration of spirit. Spirit in many languages is closely associated with the breath—spiritus, qi, prāṇa, ātman, pneuma—perhaps the most important characteristic of living beings in the pre-scientific world.

The greatest injustice seems to be that our breath leaves us, i.e. we die. All living beings act to sustain and maintain their own existence, their own life. Self-consciousness gives us the knowledge of the certainty of our own death. In a dualistic worldview, death occurs when the spirit leaves the body. The body returns to being inanimate matter (dust to dust). In this worldview, spirit is not affected by death in the same way as matter. Indeed spirit is not affected by death at all. Once the spirit leaves the body a number of post-mortem possibilities exist: hanging around as a supernatural agent; travelling to another world (to the realm of the ancestors for example, or to paradise); or taking another human form. The precise workings are specific to cultures, but all cultures seem to have an afterlife and the variations are limited to one or other of these possibilities.

Something interesting happens when we combine normative morality, teleological thinking, and the afterlife. If things happen for a reason, and one of the main reasons is our own behaviour, and there is injustice, then it stands to reason that our own behaviour is (potentially) a cause of injustice. We link behaviour to outcomes. And if everything happens for a reason it's hard to imagine the morally good not being rewarded and the morally wicked not being punished. And if something bad happens, then maybe we have transgressed in some way. In which case a shaman or priest must consult the unseen but all-seeing supernatural monitors (this is incidentally why the Buddha had to have access to this knowledge). This world, the material world composed primarily of matter, is manifestly unjust. By contrast, an afterlife is very much a world of spirit and, as the basic metaphors show, the world of spirit is the polar opposite of the world of matter. If the world of matter is unjust (and it is) then the world of spirit is by necessity just. The rules of the afterlife must be very different. Gods hold sway there, for example. Gods whose reason for being is to supervise the behaviour of humans. So it is entirely unsurprising that the function of an afterlife, in those communities which practice morality, is judgement of the dead. This happens in all the major religions, and dates back at least to the ancient Egyptian Book of the Dead.

Here we have, I think, all the major components of religion. And they emerge from lower-level, relatively simple properties of the (social) human mind at work. Thus religion is a natural phenomenon. It is not, as opponents of religion like to assert, something artificial that is superimposed on societies, but something that naturally emerges out of anatomically modern humans with a pre-scientific worldview living together. If chimps were only a little more like us, they too would develop like this. Neanderthals almost certainly had religion of a sort. The naturalness of religion predicts that every society of humans ought to have religion or something like it. And they do, except where people are WEIRD: Western, Educated, Industrialised, Rich, and Democratic. WEIRD people are psychological outliers from the rest of humanity. But WEIRD culture is built upon layers of religious culture, with Christianity superimposed on earlier forms of religion (and perhaps several layers of this). Again, for emphasis, the naturalness of religion does not mean that a religious account of the world is either accurate or precise. It is certainly successful, depending on how one measures success, but as a description of the world the religious view tends to be flawed, making it both inaccurate and imprecise.

Religious communities have some distinct advantages over non-religious communities in terms of sustaining group identity and encouraging cooperation.  The Abrahamic religions certainly have many millions of followers, and the followers of these religions have established a vast hegemony over most of the planet. On the other hand Christianity seems to be waning. Religious ideologies are giving way to political ideologies. Communism was one such that is also on the wane. Neoliberalism seems to have survived the near collapse of the world's economies to continue to dominate public discourse on politics and economics. Liberal Humanism seems to be a potent force for good still, though as we have seen it cannot be successfully linked to Neoliberal economics. 


Buddhism

There are those who argue that Buddhism is not a religion. This is naive at best, and probably disingenuous. Buddhism has all the same kinds of concerns as other religions, all of the main components outlined above—supernatural agents, morality, and an afterlife—and many of the secondary components as well. In many ways, Buddhism is simply another manifestation of the same dynamic that produces religious ideas and practices in other groups. Sure we have an abstract supernatural monitor, but karma does exactly the same job as Anubis, Varuṇa, Mazda, or Jehovah in monitoring behaviour. It's merely a quantitative difference, not a qualitative one. WEIRD Buddhists play down the halo of supernatural beings, but traditional Buddhist societies in Asia all have folk beliefs which involve spirits (e.g. Burmese nat) and many similar animistic beliefs, such as tree spirits (rukkhadevatā) are Canonical. 

David Chapman (@meaningness) and I had a very interesting exchange on Twitter a few days ago (storified). DC noted that some of those who are opposed to the secularisation of mindfulness training are concerned about disconnecting mindfulness from "Buddhist ethics". They seem to argue that the problem is that mindfulness without ethics is either meaningless or dangerous, or both. DC's point was that there was nothing distinctive about Buddhist ethics and that, in the USA at least, what masquerades as "Buddhist" ethics is simply the prevailing ethics of WEIRD North America. So to argue against mindfulness being taught separately from Buddhist ethics is meaningless. For example, Tricycle Magazine has run positive stories on Buddhists in the US military. If soldiers can be Buddhists, then Buddhist ethics really do have no meaning. Indeed there is nothing very distinctive about Buddhist ethics more generally, nothing that distinguishes Buddhist ethics from, say, Christian ethics. Sure, the stated rationale for being ethical is different, but the outcome is the same: love thy neighbour. (David has started his blog series on this: "Buddhist ethics" is a fraud).

Certainly Buddhism is not the only religion to use a variety of religious techniques for working with the mind, including concentration and reflection exercises. Meditation was a word in English long before Buddhism came on the scene (noted ca. 1200 CE). Arguably all the practices that we associate with Buddhism were in fact borrowed from other religions anyway (particularly Brahmanism and Jainism). According to Buddhism's own mythology, meditation was already being practised to a very high degree before Buddhism came into being. The Buddha simply adapted procedures he had already learned.

So is there anything about Buddhism as a religion that is distinctive? Some would argue that pratītya-samutpāda is distinctively Buddhist. However too many of us portray conditioned arising as a theory of cause and effect, or worse, a Theory of Everything. It is certainly a failure as the latter, and far from being very useful in the former role (the words involved don't even mean caused). Since almost everyone seems confused about the domain of application of this idea, one wonders whether Buddhists can lay claim to the theory at all. If Buddhists make pratītyasamutpāda into an ontology then pratītyasamutpāda would hardly seem to be Buddhist any longer. Nowadays, Buddhists all seem to think that having read about nirvāṇa or śūnyatā in a book makes one an expert on "reality".

DC and I tentatively agreed that any distinction that Buddhism might have is probably in the area of cultivating states in which sense-experience and ordinary mental-experience cease, what I would call nirodha-samāpatti or śūnyatā-vimokṣa etc. It is these states in particular that seem to promote the transformation of the mind that makes Buddhism distinctive. It's just unfortunate that we have so many books about these states, and so many people talking about them from having read the books (and writing books on the basis of having read the books), and so few people who experience such states. The thing that distinguishes Buddhism is something that only a tiny minority are realistically ever going to seriously cultivate, and probably only a minority of them are going to succeed in experiencing. So Buddhism in practice, for the vast majority, consists in beliefs and activities that are not distinctively Buddhist at all - loving your neighbours, communal singing, relaxation techniques, philosophical speculation, propitiation of supernatural agents, and so on.

And while some people are having awakenings, the level of noise through which they have to communicate is overwhelming. Buddhists have adopted so much psychological and psycho-analytic jargon that Buddhism as presented can seem indistinguishable from either at times. One gets the sense that today's "lay" Buddhism is closely aligned with the goals of psychologists. Not only this but we also get a lot of interference from pseudo-science, Advaita Vedanta, and home grown philosophies.

So, to sum up, religion is a natural phenomenon. It emerges from, is an emergent property of, a brain evolved for living in large social groups. A religious worldview makes sense to so many people, even WEIRD people, because it fits with our non-reflective beliefs about the world. Buddhism sits squarely in the middle of this as another religious worldview. But this does not mean that a religious worldview is accurate or precise, or that a secularised version of religion is an improvement on religion per se. Secularised versions of Buddhism are simply religion tailored for WEIRD people. It is more appealing to secularists who none the less feel that something is missing from their lives (because they are evolved to be religious). If Buddhism is distinctive, it is distinctive in ways that the vast majority of people will never have access to.

The main point I take from this is that religion is comprehensible. People who hold to religious views are comprehensible. While I think religious views are erroneous, I can see why so many people disagree, why religion remains so compelling for so many people. I can sympathise with them. And while I'm not an evangelist, it does make it easier for me to stay in dialogue with, for example, members of my family who are committed Christians. As with the problem of communicating evolution, part of the problem with religion remaining plausible is the sheer ineptitude of scientists as communicators - their remarkable ability to understand string theory, or whatever, seems to be matched by an astounding lack of insight into their own species. And philosophers, whose job it is to make the world comprehensible, have also largely failed. They fail both on the level of making new discoveries comprehensible and on the level of communicating why new discoveries are important. And when they fail, priests and other charlatans step into the gap, and that too is understandable.

~~oOo~~

References to particular works or thinkers that are not linked to directly can be checked in the bibliography tab of the blog. 

08 May 2015

What Can the Turing Test Tell Us?

Alan Turing's contribution to mathematics, cryptography and computer science was inestimable. Not only did he shorten World War Two, saving thousands of lives, he advanced us onto the path of digital computers. His suicide after being coerced into hormone treatment is a massive blot on the intellectual landscape in Britain. It is an enduring source of shame. Turing's work remained classified for decades because of the fear that war might break out again and knowing how to break the complex codes used by the Germans was too valuable an advantage to throw away. Nowadays, cryptography has advanced to the point where keeping Turing's work a secret no longer confers much advantage.

Turing was prescient in many ways. Not only did he set the paradigm for how digital computers work, but he understood that one day such machines might become so sophisticated that they were indistinguishable from intelligent beings. He was one of the first people to think seriously about artificial intelligence (AI). Thinking about AI led him to construct one of the most famous thought experiments ever proposed. The Turing Test is not only a way to distinguish intelligence, it is actually a way of thinking about intelligence without getting bogged down in the details of how intelligence works. For Turing and many of us, the argument is that if a machine can communicate in a way that is indistinguishable from a human being, then we must assume that it is intelligent, however it achieves this. It's a pragmatic definition of intelligence and one that leads to a practical threshold, beyond which all AI researchers wish to pass.

However, underpinning the test are some assumptions about communication, language, and intelligence that I wish to examine. The first is that all human beings seem to be considered good judges for the Turing Test. I think a good case can be made for considering this a false assumption. The second is the assumption that mere word use is how we define not only intelligence, but language. Both of these are demonstrably false. If the assumptions the test is built on are false, then we need to rethink what the test is measuring, and whether we still feel this is a sufficient measure of intelligence.


Turing Judges.

The idea of the Turing Test is that a person sits at a teletype machine that prints text and allows the operator to type text. The human and the test subject sit in different rooms and use the teletype machines to communicate. A machine can be said to pass the Turing Test if a human operator of the teletype cannot tell that the subject is not human. This puts word use at the forefront of Turing's definition of what it means to be intelligent.
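
As a minimal sketch of that setup (Python; the names and the single-subject framing are my own simplification, not Turing's formal imitation game), the point is that only typed words pass between judge and subject:

```python
from typing import Callable

# A subject is anything that maps a typed question to a typed reply:
# a person at another teletype, or a program.
Subject = Callable[[str], str]

def turing_session(subject: Subject, questions: list[str]) -> list[str]:
    """Run a text-only exchange: the judge's questions go in, the hidden
    subject's typed replies come out. Nothing else -- no voice, face,
    gesture, or posture -- is available to the judge."""
    return [subject(q) for q in questions]

# A trivially unconvincing subject.
chanting_bot: Subject = lambda q: "Oi, oi, oi, come on you reds!"

replies = turing_session(chanting_bot, ["Where did you grow up?", "What do you make of irony?"])
# The judge's verdict -- "was that a human?" -- made from the words alone, is the whole test.
```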

Human beings' use of language is indeed one of our defining features. Animals have faculties that hint at a proto-language facility, but no animal uses language in the sense that we do. At best animals show one or two of the target properties that define language. They might, for example, have several grunts that indicate objects (often types of predator), but no syntax or grammar. There has been significant interest in programs that sought to teach apes to use language either as symbols or gestures. But most of this research has been discredited. Koko the gorilla was supposedly one of the most sophisticated language users, but her "language" in fact consisted of rapidly cycling through the repertoire of signs, with the handler picking the signs that made most sense to them. In other experiments subtle cues from handlers told the animals what signs to use. More rigorous experiments show that chimps can understand some language, particularly nouns, but then so can grey parrots, some dogs, and other animals. Crucially, they don't use language to communicate. In fact a far more impressive demonstration of intelligence is the ability of crows to improvise tools to retrieve food, or the coordinated pack hunting of aquatic mammals like orca and dolphins. So animals do not use language, but are nonetheless intelligent.

Humans are all at different levels when it comes to language use. Some of us are extraordinarily gifted with language and others struggle with the basics. The distinctions are magnified when we restrict language to just written words. This restriction alone is doubtful. Language as written language, even if used for a dialogue, is only a small part of what language use consists of. A great deal of what we communicate in language is conveyed by tone of voice, facial expression, hand gestures, or body posture. Those people who can use written language well are rare. So a Turing judge is not simply distinguishing a machine from a human, but is placing a machine on a scale that includes novelists and football hooligans. What happens when the subject responds to any question by chanting "Oi, oi, oi, Come on you reds!"? Intelligence, particularly as measured by word use, is not a simple proposition.

The Turing Test using text alone would be more interesting if we could define in advance what elements would convince us that the generator of the text was human. To the best of my knowledge this has never been achieved. We don't know what criteria constitute a valid or successful test. We just assume that any generic human being is a good judge. There's no reason to believe that this is true. As I've mentioned many times now, individuals are actually quite poor at solo reasoning tasks (See An Argumentative Theory of Reason). Reason does not work the way we thought it did. Mercier & Sperber have argued that at least one of the many fallacies that we almost inevitably fall prey to—confirmation bias—is a feature of reason, rather than a bug. M&S argue that this is because reason evolved to help small groups make decisions and those who make proposals think and argue differently to those who critique them. On this account, any given individual would most likely be a poor Turing judge.

Human beings evolved to use language. Almost without exception, we all use it without giving it much thought. Certain disorders or diseases may prevent language use, but these stand out against the background of general language use: from the Amazon jungles to the African veldt, humans speak. The likelihood is that we've been using language for tens of thousands of years (See When Did Language Evolve?). But writing is another story. Writing is unusual amongst the world's languages, in that only a minority of living languages are written, or were before contact with Europe. Writing was absent from most of the Americas, from the Pacific, and from Australia and New Guinea. The last two have hundreds of languages each. Unlike speaking, writing is something that we learn with difficulty. No child spontaneously begins to communicate in writing. Writing co-opts skills evolved for other purposes. And as a consequence our ability to use writing to express ourselves is extremely variable. Most people are not very good at it. Those who are, are usually celebrated as extraordinary individuals. Writers and their oeuvre are very important in literary cultures.

So to choose writing as the medium of a test for intelligence is an extremely doubtful choice. We don't expect intelligent human beings to be good at writing. Many highly intelligent people are lousy writers. We don't even expect people who are gifted speakers to be good at writing, which is why politicians do not write their own speeches! Writing is not a representative skill. Indeed it masks our inherent verbal skill.

In fact it might be better to use another skill altogether, i.e. tool making. A crow can modify found objects (specifically bending wire into a hook) to retrieve food items. Another important manifestation of intelligence is the ability to work in groups. Some orca, for example, coordinate their movements to create a bow-wave that can knock a seal off an ice floe. This is a feat that involves considerable ability at abstract thought, and they pass this acquired knowledge on to their offspring. The ability to fashion a tool or to coordinate actions to achieve a goal is at least as interesting a manifestation of intelligence as language is.


Language and Recognition.

My landlady talks to her cats as though they understand her. She has one-sided conversations with them. She explains to them narratively when their behaviour causes her discomfort, as though they might understand and desist (they never do). She's not peculiar in this. Many people feel their pets are intelligent and can understand them even if they cannot speak. Why is this? Well, at least in part, it's because we recognise certain elements of posture in animals corresponding to emotions. The basic emotions are not so different in our pets that we cannot accurately understand their disposition: happiness, contentment, excitement, tiredness, fear, anger, desire. With a little study we can even pick up nuances. A dog that barks with ears pinned back is saying something different to one that has its ears forward. A wagging tail or a purr can be a different signal depending on circumstances. A lot of it has to do with displays of and reception of affection.

Intelligence is not simply about words or language. Depending on our expectations the ability to follow instructions (dogs) or the ability to ignore instructions (cats) can be judged intelligent. The phrase emotional intelligence is now something of a cliché, but it tells us something very important about what intelligence is. A dog that responds to facial expressions, to posture and tone of voice is displaying intelligence of the kind that has a great deal of value to us. Some people value relationships with animals precisely because the communication is stuck at this level. A dog does not try to deceive or communicate in confusingly abstract terms. An animal broadcasts its own disposition ("emotions") without filtering and it responds directly to human dispositions. Many people would say that this type of relationship is more honest.

There's a terrible, but morbidly fascinating, neurological condition called Capgras Syndrome. In this condition a person can recognise the physical features of other humans, but their ability to connect those features with emotions is compromised. Usually when we see a familiar face there is an accompanying emotion that tells us what our relationship with the person is. If we feel disgust or anger on recognition, then we know them to be enemies, perhaps dangerous, and we act to avoid or perhaps confront them. If the emotion is joy or love, then we know it's a friend or loved one. In Capgras the emotional resonance is absent. With loved ones the absence of that emotion is so strange that the most plausible explanation often seems to be that these are mere replicas of the loved ones, or lookalikes. The lack of emotion in response to a known face can be incapacitating, in the sense that it disrupts every existing relationship. In the classic novel The Echo Maker, by Richard Powers, the man with Capgras is able to recognise and respond to his sister's voice on the telephone, but does not feel anything when he sees her. The same is true for his home and even his dog. The only way he can explain it is that they are all substitutes cleverly recreated to fool him. Only he isn't "fooled", which creates a nightmarish situation for him.

The problem, then, with the Turing Test is that it is rooted in the old Victorian conceit that reason is our highest faculty. Reason was, until quite recently, considered to float above the mere bodily processes of emotion. In other words, it was very much caught up in Cartesian mind/body dualism and the metaphors associated with matter and spirit (See Metaphors and Materialism). Reason is associated, by default, with spirit, since it seems to be distinct from emotion. We now know that nothing could be further from the truth. Cut off from emotions, our minds cannot function properly: we cannot make decisions, cannot assess information, and cannot take responsibility for our actions. The Turing Test assumes that intelligence is an abstract quality, separable from the body. But this assumption is demonstrably false.


What Kind of Intelligence?

I've already pointed out that language is more than words. I've expanded the idea of language to include the prosody, gesture, and posture associated with the words (which, as we know, shape the meaning of the words). An ironic eyebrow lift can make words mean something quite different from their face value. The ability to use and detect irony depends on non-verbal cues. This is why, for example, irony seldom works on Twitter. Text tends to be taken at face value, and attempts at irony simply cause misunderstanding. This is true in all text-based media. In the absence of emotional cues we are forced to try to interpolate the disposition of our interlocutor. Getting a computer to work with irony would be an interesting test of intelligence!

Indeed, trying to assess the internal disposition of the hidden interlocutor is a key aspect of the Turing Test. Faced with a Turing Test subject, I suspect that most of us would ask questions designed to evoke emotional responses. This is because we intuit that what makes us human is not the words we use, but the feelings we communicate. Someone who acts without remorse is routinely described as "inhuman". In most cases humans are not good at making empathetic connections using text, which is why text-based online forums seem to be populated with borderline, if not outright, sociopaths. It's the medium, not the message. Personally, I find that doing a lot of online communication produces a profound sense of alienation and brings out my underlying psychopathology. Writing an essay, however, is a far more productive exercise than trying to hold a dialogue in text. Even the telephone, with its limited frequency range, is better for communicating, because tone of voice and inflection communicate enough to establish an empathetic connection.

So if a computer can play chess better than a human being (albeit with considerable help from a team of programmers) then that is impressive, but not intelligent. The computer plays well because it does not feel anything, does not have to respond to its environment (internal or external), and does not have any sense of having won or lost. It has nothing for us to relate to. Similarly, even if a computer ever managed to use language with any kind of facility, i.e. if it could form grammatically and idiomatically correct sentences, it would probably still seem inhuman because it would not share our concerns and values. It would not empathise with us, nor us with it. 

I suppose that in the long run a computer might be able to simulate both language and an interest in our values so that in text form it might fool a human being. But would this constitute intelligence? I think not. A friendly dog would be more intelligent by far. Which is not to say that such a computer would not be a powerful tool. But we'd be better off using it to predict the weather or model a genome than trying to simulate what any of us, or any dog, can do effortlessly.

An argument against this point of view is that our minds are tuned to over-estimate intelligence or emotion in the objects we see: we see faces in clouds and agency in inanimate objects. So an approximation of intelligence would not have to be all that sophisticated to stimulate the emotions that make us judge something intelligent. For example, in movies robots are often given a minimal ability to emote in order to make them sympathetic characters. The robot, Number 5, in the film Short Circuit has "eyebrows" and an emotionally expressive voice, and this is enough for us to empathise with it. So perhaps we will be easily fooled into believing in machine intelligence. But this only shows that simulating intelligence is not very impressive, because people are easily fooled.

This point is brilliantly made in the movie Blade Runner. The Voight-Kampff test is designed to distinguish "replicants" from humans based on subtle differences in emotional responses; the replicants are otherwise indistinguishable from humans. The test of Rachael is particularly difficult because she has been raised to believe she is human (the logic of the movie breaks down to some extent, because we never learn why Deckard persists in asking a hundred questions if Rachael is answering satisfactorily). Ridley Scott has muddied the waters further by suggesting that the blade runner, Deckard, is himself a replicant, though based on the original story and the context of the film this seems an unlikely twist.

So there are two major problems here: what makes a good Turing test; and who makes a good Turing judge. The whole set-up seems under-defined and poorly thought out at present. My impression is that passing the Turing Test as it is usually specified is a trivial matter that would tell us nothing about artificial intelligence or humanity that we do not already know.


Conclusion

It seems to me that we have many reasons to rethink the Turing Test. It is rooted in a series of assumptions that are untenable in light of contemporary knowledge. As a test for intelligence it no longer seems reasonable. For one thing, the way that it defines intelligence is far too limited. The definition of intelligence it uses is rooted in Cartesian Dualism, which sees intelligence as an abstract quality, not rooted in physicality, not embodied. And this is simply false. Emotions, as felt in the body, play a key role in how we process information and make decisions.

As much as anything, our decision on whether an entity is intelligent will be based on how we feel about it, how interacting with it feels to us. We will compare the feeling of interacting with the unknown entity to how it feels to interact with an intelligent being. And until it feels right, we will not judge that entity intelligent.

In Turing's day we simply did not understand how decision making worked. We still thought of abstract reasoning as a detachable mental function, unrelated to being embodied. We still saw reason as the antithesis of emotion. Now we know that emotion is an indivisible part of the process. We must now consider that reason itself may not have evolved for seeking truth, but merely for optimising decision making in small groups. At the very least, the lone teletype operator needs to be replaced by a group of people, and mere words must be replaced by tasks that involve creativity and cooperation. A machine ought to show the ability to cooperate with a human being to achieve a shared goal before being judged "intelligent". The idea that we can judge intelligence at arm's length, rationally and dispassionately, has little interest or value any more. We judge intelligence through interaction, physical interaction as much as anything.
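
To make that a little more concrete, here is a minimal sketch of what an interaction-based trial might look like. Everything in it (the grid task, the scoring rule, the names) is invented purely for illustration, not a real benchmark: several judges each work with a candidate agent on a shared task and score the cooperation itself, not a transcript.

# A minimal sketch (not a real benchmark) of a cooperation-based trial:
# judges work *with* a candidate agent on a shared task and score the
# interaction rather than a conversation. All names here are hypothetical.

import random

GOAL = (4, 4)  # shared goal on a small grid

def step_towards(pos, target):
    # Move one square towards the target; this is the candidate's whole "policy" here.
    x, y = pos
    tx, ty = target
    if x != tx:
        x += 1 if tx > x else -1
    elif y != ty:
        y += 1 if ty > y else -1
    return (x, y)

def cooperative_trial(judge_move, candidate_move, max_turns=20):
    # Judge and candidate alternate moving a shared token towards GOAL.
    # Crude cooperation score: fewer turns to reach the goal = better coordination.
    pos = (0, 0)
    for turn in range(max_turns):
        mover = judge_move if turn % 2 == 0 else candidate_move
        pos = mover(pos)
        if pos == GOAL:
            return 1.0 - turn / max_turns
    return 0.0

def human_judge(pos):
    # A judge who sometimes moves unhelpfully, as real partners do.
    return step_towards(pos, GOAL) if random.random() > 0.2 else pos

if __name__ == "__main__":
    random.seed(1)
    scores = [cooperative_trial(human_judge, lambda p: step_towards(p, GOAL))
              for _ in range(5)]  # five judges, one trial each
    print("cooperation scores:", scores)
    print("judged 'intelligent'?", sum(scores) / len(scores) > 0.5)

The point of the sketch is only that the judgement emerges from working together on something, rather than from reading answers at arm's length.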

As George Lakoff and his colleagues have shown, abstract thought is rooted in metaphors deriving from how we physically interact with the world. Our intelligence is embodied and the idea of disembodied intelligence is no longer tenable. As interesting as the idea may appear, there is no ghost in the machine that can be extracted or instantiated and maintained apart from the body. Any attempts to create disembodied intelligence will only result in a simulacrum, not in intelligence that we can recognise as such.

Buddhists will often smugly claim this as their own insight, though most Buddhists I know are crypto-dualists (most believe in life after death and karma, for example). I've argued at length that the Buddha's insight was into the nature of experience and that he avoided drawing ontological conclusions. Thus, although we read the texts as a critique of doctrines involving souls, the methods of Buddhism were always different from the methods of Brahmanism. The Brahmins sought to experience the ātman as a reality, and from the Upaniṣadic descriptions the ātman could be experienced as a sense of oneness or connection with everything in the world (oceanic boundary loss). The Buddhists deconstructed experience itself to show that nothing in experience persists; therefore, if there were a soul, we would either have to experience it at all times or never be able to experience it at all, and since we start off not experiencing it, no permanent soul can ever be experienced (which is not a comment on whether or not such a soul exists!). Therefore, the experiences of the Brahmins are of something other than ātman. Only after Buddhists had started down the road of misguided ontological speculation did this become an opinion about the existence of a soul. So the superficial similarities between ancient Buddhist and modern scientific views are an accident of a philosophical wrong turn on the part of Buddhists. They got it partly right by accident, which is not really worth being smug about.

History shows that we must proceed with real caution here. Our Western views on intelligence have been subject to extreme bias in the past and this has led to some horrific consequences for those people who failed our tests for completely bogus reasons. We must constantly subject our views on intelligence to the most rigorous criticism and scepticism we are capable of. Our mistakes in this field ought to haunt us and make us extremely uncomfortable. This is yet another reason why tests for intelligence ought to require more interactivity. If we do create intelligence we need to know we can get along with it, and it with us. And we know that we have a poor record on this score.

The Turing Test has not been updated to take account of what we now know about ourselves. The test itself is anachronistic. The method is faulty because it is based on a faulty understanding of intelligence and decision making. We are not even asking the correct question about intelligence. With all due respect to Alan Turing, he was a man of his time, a glorious pioneer, but we've moved on since he came up with this idea and it has had its day.


~~oOo~~

See also: Why Artificial Intelligences Will Never Be Like Us and Aliens Will Be Just Like Us. (27 June 2014)

27 June 2014

Why Artificial Intelligences Will Never Be Like Us and Aliens Will Be Just Like Us.

"Yet across the gulf of space, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded this earth with envious eyes, and slowly and surely drew their plans against us."

Artificial Intelligence (AI) is one of the great memes of science fiction, and as our lives come to resemble scifi stories ever more, we can't help but speculate about what an AI will be like. Hollywood aside, we seem to imagine that AIs will be more or less like us, because we aim to make them like us. And as part of that, we will make them with affection for, or at least obedience to, us. Asimov's Laws of Robotics are the best-known expression of this. And even if AIs end up turning against us, it will be for understandable reasons.

Extra-terrestrial aliens, on the other hand, will be incomprehensible. "It's life, Jim, but not as we know it." We're not even sure that we'll recognise alien life when we see it. Not even sure that we have a definition of life that will cover aliens. It goes without saying that aliens will behave in unpredictable ways and will almost certainly be hostile to humanity. We won't understand their minds or bodies, and we will survive only by accident (War of the Worlds, Alien) or through Promethean cunning (Footfall, Independence Day). Aliens will surprise us, baffle us, and confuse us (though hidden in this narrative is a projection of fears both rational and irrational).

In this essay, I will argue that we have this backwards: in fact, AI will always be incomprehensible to us, while aliens will be hauntingly familiar. This essay started off as a thought experiment I was conducting about aliens and a comment on a newspaper story on AI. Since then, it's become a bit more topical as a computer program known as a chatbot was trumpeted as having "passed the Turing Test for the first time". This turned out to be a rather inflated version of events. In reality, a chatbot largely failed to convince the majority of people that it was a person despite a minor cheat that lowered the bar. The chatbot was presented as a foreigner with poor English and was still mostly unconvincing.

But here's the thing. Why do we expect AI to be able to imitate a human being? What points of reference would a computer program ever have to enable it to do so?


Robots Will Never Be Like Us.

There are some fundamental errors in the way that AI people think about intelligence, errors that will begin to put limits on their progress if they haven't already. The main one is that they don't see that human consciousness is embodied. Current AI models tacitly subscribe to a strong form of Cartesian mind/body dualism: they assume that a mind can be created without a body.

There's now a good deal of research to show that our minds are not separable from our bodies. I've probably cited four names more than any other when considering these issues: George Lakoff, Mark Johnson, Antonio Damasio, and Thomas Metzinger. What these thinkers collectively show is that our minds are very much tied to our bodies. Our abstract thoughts are voiced using metaphors drawn from how we physically interact with the world. Their way of understanding consciousness posits the modelling of our physical states as the basis for simple consciousness. How does a disembodied mind do that? We can only suppose that it cannot.

One may argue that a robot's body is like a human body, and that an embodied robot might be able to build a mind like ours through its robot body. But the robot is not using its brain primarily to sustain homeostasis, mainly because it does not rely on homeostasis for continued existence. And even other mammals don't have minds like ours. Because of shared evolutionary history, we might share some basic physiological responses to gross stimuli that are good adaptations for survival, but their thoughts are very different because their bodies, and particularly their sensory apparatus, are different.

An arboreal creature is just not going to structure its world the way a plains dweller or an aquatic animal does. Is there any reason to suppose that a dolphin constructs the same kind of world as we do? And if not, then what about a mind with no body at all? Maybe we could communicate with dolphins, with difficulty and a great deal of imagination on our part. But with a machine? It will be "Shaka, when the walls fell." For the uninitiated, this is a reference to a classic first-contact scifi story. The aliens in question communicate in metaphors drawn exclusively from their own mythology, making them incomprehensible to outsiders; except Picard and his crew, of course (there is a long, very nerdy article about this on The Atlantic website). Compare Dan Everett's story of learning to communicate with the Pirahã people of Amazonia in his book Don't Sleep, There Are Snakes.

Although Alan Turing was a mathematical genius, he was not a genius of psychology, and in my opinion he made a fundamental error in his Turing Test. Our Theory of Mind is tuned to assume that other minds are like ours. If we can conceive of any kind of mind independent of us, then we assume that it is like ours. This has survival value, but it also means we invent anthropomorphic gods, for example. A machine mind is not going to be at all like us, but that doesn't stop us from unconsciously projecting human qualities onto it. Hypersensitive Agency Detection (as described by Justin L. Barrett) is likely to mean that even if a machine does pass the Turing Test, we will have overestimated the extent to which it is an agent.

The Turing Test is thus a flawed model for evaluating another mind because of limitations in our equipment for assessing other minds. The Turing Test assumes that all humans are good judges of intelligence, but we aren't. We are the beings who see faces everywhere and can get caught up in the lives of soap opera characters and treat rain clouds as intentional agents. We are the people who already suspect that GIGO computers have minds of their own because they break down in incomprehensible ways at inconvenient times, and that looks like agency to us! (Is there a good time for a computer to break?). The fact that any inanimate object can seem like an intentional agent to us disqualifies us as judges of the Turing Test.

AIs, even those with robot bodies, will sense themselves and the world in ways that will always be fundamentally different to us. We learn about cause and effect from the experience of bringing our limbs under conscious control, i.e. by grabbing and pushing objects. We learn about the physical parameters of our universe the same way. Will a robot really understand in the same way? Even if we set them up to learn heuristically through electronic senses and a computer simulation of a brain, they will learn about the world in a way that is entirely different to the way we learned about it. They will never experience the world as we do. AIs will always be alien to us.


All life on the planet is the product of 3.5 billion years of evolution. Good luck simulating that in a way that is not detectable as a simulation. At present, we can't even convincingly simulate a single-celled organism. Life is incredibly complex; a 1:1 million scale model of a single synapse is enough to demonstrate that.


Aliens Will Be Just Like Us.

Scifi stories like to make aliens as alien as possible, usually by making them irrational and unpredictable (though this is usually underlain by a more comprehensible premise - see below).

In fact, we live in a universe with limitations: 96 naturally occurring elements with predictable chemistry, four fundamental forces, and so on. Yes, there might be weird quantum stuff going on, but in bodies made of septillions of atoms (upwards of 10²⁴ of them), we'd never know about it without incredibly sophisticated technology. On the human scale, we live in a more or less Newtonian universe.
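
As a rough check on that figure (my arithmetic, not anything in the original argument), a Fermi estimate treating the body as mostly water puts the count around 10²⁷, comfortably "septillions":

# Back-of-the-envelope estimate of atoms in a ~70 kg human body.
# Assumes the body is roughly water-like: every 18 g (one mole of H2O)
# contains 3 moles of atoms. Illustrative figures only.

AVOGADRO = 6.022e23          # atoms per mole
body_mass_g = 70_000         # ~70 kg adult
molar_mass_water = 18.0      # g/mol
atoms_per_molecule = 3       # H2O

moles_of_molecules = body_mass_g / molar_mass_water
atoms = moles_of_molecules * atoms_per_molecule * AVOGADRO
print(f"~{atoms:.1e} atoms")  # ~7.0e27, i.e. thousands of septillions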

Life as we know it involves exploiting energy gradients and using chemical reactions to move stuff where it wouldn't go on its own. The gaps in our knowledge still technically allow for vitalistic readings of nature, but none of this removes the limitations imposed on life by chemistry: elements have strictly limited behaviour, the basics of which can be studied and understood in a few years. It takes a few more years to understand all the ways that chemistry can be exploited, and we'll never exhaust all of the possibilities of combining atoms in novel ways. But the possibilities are comprehensible, and new combinations have predictable behaviour. Many new drugs are now modelled on computers as a first step.

So the materials and tools available to solve problems, and in fact most of the problems themselves, are the same everywhere in the universe. A spaceship is likely to be made of metals. Ceramics are another option, but they require even higher temperatures to produce and tend to be brittle. Ceramics sophisticated enough to do the job suggest a sophisticated metal-working culture in the background. Metal technology is so much easier to develop. Iron is one of the most versatile and abundant metals: other mid-periodic table metallic elements (aluminium, titanium, vanadium, chromium, cobalt, nickel, copper, zinc, etc) make a huge variety of chemical combinations, but for pure metal and useful alloys, iron is king. Iron alloys give the combination of chemical stability, strength-to-weight ratio, ductility, and melting point to make a spaceship. So our aliens are most likely going to come from a planet with abundant metals, probably iron, and their spaceship is going to make extensive use of metals. The metals aliens use will be completely pervious to our analytical techniques.

Now, in the early stages of working iron, one needs a fairly robust body: one has to work a bellows, wield tongs and hammer, and generally be pretty strong. That puts a lower limit on the kind of body that an alien will have, though the strength of gravity on the alien planet will vary this parameter. Very gracile or very small aliens probably wouldn't make it into space, because they could not have got through the blacksmithing phase to more sophisticated metalworking techniques. A metalworking culture also means an ability to work together over long periods for quite abstract goals, like the creation of alloys composed of metals extracted from ores buried in the ground. Thus, our aliens will be social animals by necessity. But simple herd animals lack the kind of initiative that it takes to develop tools, so our aliens won't be herd animals like cows or horses. And with too little social organisation, the complex tasks of mining and smelting enough metal would be impossible. So, no solitary predators in space either.

The big problem with any budding space program is getting off the ground. Gravity and the possibilities of converting energy put further practical limitations on what is possible. Since chemical reactions are going to be the main source of energy, and these are fixed, gravity will be the limiting factor. The payload cannot be so massive that lifting it is too costly or simply impossible, yet it must be large enough to fit a being in (a being at least the size of a blacksmith). If the gravity of an alien planet were much higher than ours, it would make getting into space impractical. Advanced technology might theoretically overcome this, but with technology one usually works through stages: no early stage means no later stages. If the gravity of a planet were much lower than ours, then its density would make large concentrations of metals unlikely. It would be easier to get into space, but without the materials available to make it possible and sustainable. Also, the planet would struggle to hold enough atmosphere to remain livable in the long term (as with Mars). So alien visitors are going to come from a planet similar to ours and will have solved similar engineering problems with similar materials.
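
To put a rough number on how punishing gravity is for a chemical rocket, here is a minimal sketch using the Tsiolkovsky rocket equation. This is my illustration, not something the argument above relies on, and the delta-v and exhaust-velocity figures are approximate; the point is that the propellant fraction grows exponentially with the delta-v a planet demands.

import math

def mass_ratio(delta_v_kms, exhaust_velocity_kms):
    # Tsiolkovsky rocket equation: m0/mf = exp(delta_v / v_e)
    return math.exp(delta_v_kms / exhaust_velocity_kms)

VE = 4.4  # km/s, roughly the exhaust velocity of a hydrogen/oxygen engine

cases = [("Earth, to low orbit", 9.4),
         ("hypothetical planet needing twice Earth's delta-v", 18.8)]
for label, dv in cases:
    r = mass_ratio(dv, VE)
    print(f"{label}: mass ratio {r:.0f}, propellant fraction {1 - 1/r:.0%}")
# Earth: ratio ~8, ~88% propellant; the heavier world: ratio ~72, ~99% propellant.

A launch vehicle that has to be 99% fuel before staging tricks are even considered is close to the edge of what chemistry allows, which is the sense in which gravity is the limiting factor.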

Scifi writers and enthusiasts have imagined all kinds of other possibilities. Silicon creatures were a favourite for a while. Silicon (Si) sits immediately below carbon in the periodic table and has similar chemistry: it forms molecules with a similar fourfold symmetry. I've made the silicon analogue (SiH4) of methane (CH4) in a lab: it's highly unstable and burns quickly in the presence of oxygen or any other moderately strong oxidising agent (and such agents are pretty common). The potential for life using chemical reactions in a silicon substrate is many orders of magnitude less flexible than that based on carbon and would, of necessity, require the absolute elimination of oxygen and other oxidising agents from the chemical environment. Silicon tends to oxidise to silicon dioxide (SiO2) and then become extremely inert. Breaking down silicon dioxide requires heating it to its melting point (around 1,700°C) in the presence of a powerful reducing agent, like pure carbon. In fact, silicon dioxide, or silica, is one of the most common substances on earth, partly because silicon and oxygen themselves are so common. The ratio of these two elements is related to the fusion processes that precede a supernova, and again is dictated by physics. Where there is silicon, there will be oxygen in large amounts, and they will form sand, not bugs. CO2 is also quite inert, but it does undergo chemical reactions, which is lucky for us, as plants rely on this to create sugars and oxygen.
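
For reference, the two reactions alluded to here are standard textbook chemistry (my rendering, not quoted from any particular source):

\[
\mathrm{SiH_4 + 2\,O_2 \longrightarrow SiO_2 + 2\,H_2O} \quad \text{(silane burns on contact with oxygen)}
\]
\[
\mathrm{SiO_2 + 2\,C \longrightarrow Si + 2\,CO} \quad \text{(carbothermic reduction of silica, at very high temperature)}
\]

The first is why silicon "life" would need an oxygen-free environment; the second is why, once silica has formed, it takes an industrial furnace to undo it.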

One of the other main memes is beings of "pure energy", which are, of course, beings of pure fantasy. Again, we have the Cartesian idea of disembodied consciousness at play. Just because we can imagine it, does not make it possible. But even if we accept that the term "pure energy" is meaningful, the problem is entropy. It is the large-scale chemical structures of living organisms that prevent the energy held in the system from dissipating out into the universe. The structures of living things, particularly cells, hold matter and energy together against the demands of the laws of thermodynamics. That's partly what makes life interesting. "Pure energy" is free to dissipate and thus could not form the structures that make life interesting.

When NASA scientists were trying to design experiments to detect life on Mars for the Viking mission, they invited James Lovelock to advise them. He realised that one didn't even need to leave home. All one needed to do was measure the composition of gases in a planet's atmosphere, which one could do with a telescope and a spectrometer. If life is going to be recognisable, then it will do what it does here on earth: shift the composition of gases away from the thermodynamic and chemical equilibrium. In our case, the levels of atmospheric oxygen require constant replenishment to stay so high. It's a dead giveaway! And the atmosphere of Mars is at thermal and chemical equilibrium. Nothing is perturbing it from below. Of course, NASA went to Mars anyway, and went back, hoping to find vestigial life or fossilised signs of life that had died out. But the atmosphere tells us everything we need to know.
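
Here is a toy version of Lovelock's criterion, my own sketch rather than his method as published, and the composition figures are rounded approximations: flag an atmosphere in which chemically incompatible gases, an oxidiser like O2 and a reduced gas like CH4, persist together, since that only makes sense if something keeps topping them up.

# Toy version of Lovelock's atmospheric test: flag an atmosphere that holds
# mutually reactive gases far from chemical equilibrium. Percentages are
# rounded approximations for illustration only.

ATMOSPHERES = {
    "Earth": {"N2": 78.0, "O2": 20.9, "Ar": 0.93, "CO2": 0.04, "CH4": 0.00019},
    "Mars":  {"CO2": 95.0, "N2": 2.8, "Ar": 2.0, "O2": 0.17},
}

OXIDISERS = {"O2"}
REDUCED_GASES = {"CH4", "H2", "NH3"}

def looks_alive(composition, threshold=1e-6):
    # True if an oxidiser and a reduced gas coexist above trace levels.
    has_oxidiser = any(composition.get(g, 0) > threshold for g in OXIDISERS)
    has_reduced = any(composition.get(g, 0) > threshold for g in REDUCED_GASES)
    return has_oxidiser and has_reduced

for planet, gases in ATMOSPHERES.items():
    print(planet, "-> disequilibrium (possible biosphere):", looks_alive(gases))
# Earth -> True (O2 and CH4 coexist); Mars -> False (close to equilibrium).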

So where are all the alien visitors? (This question is known as the Fermi Paradox, after Enrico Fermi, who first asked it.) Recall that, as far as we know, the limit of the speed of light applies invariably to macro objects like spacecraft - yes, theoretically, tachyons are possible, but you can't build a spacecraft out of them! Recently, some physicists have been exploring an idea that would allow us to warp space and travel faster than light, but it involves "exotic" matter that no one has ever seen and that is unlikely to exist. Aliens are going to have to travel at sub-light speeds, and this would take subjective decades. Because of Relativity, time passes more slowly on a fast-moving object, so centuries would pass on their home planet. Physics is a harsh mistress.
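
To make "subjective decades, centuries at home" concrete, here is a quick calculation with illustrative figures of my own choosing (a 200-light-year trip at 0.99c; the argument above doesn't depend on these particular numbers):

import math

def trip_times(distance_ly, speed_fraction_of_c):
    # Coordinate time (home planet) and proper time (on the ship), in years.
    coordinate_years = distance_ly / speed_fraction_of_c
    gamma = 1 / math.sqrt(1 - speed_fraction_of_c ** 2)
    proper_years = coordinate_years / gamma
    return coordinate_years, proper_years

home, ship = trip_times(distance_ly=200, speed_fraction_of_c=0.99)
print(f"home planet: {home:.0f} years, ship: {ship:.0f} years")
# ~202 years pass at home while the crew ages only ~29 years.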

These are some of the limitations that have occurred to me. There are others. What these point to is a very limited set of circumstances in which an alien species could take to space and come to visit us. The more likely an alien is to get into space, the more like us they are likely to be. The universality of physics and the similarity of the problems that need solving would inevitably lead to parallelism in evolution, just as it has done on Earth.


Who is More Like Us?

Unlike in scifi, the technology that allows us to meet aliens will be strictly limited by physics. There will be no magic action at a distance on the macro scale (though, yes, individual subatomic particles can subvert this); there will be no time travel, no faster-than-light travel, no materials impervious to analysis, no cloaking devices, no matter transporters, and no handheld disintegrators. Getting into space involves a set of problems that are common to any being on any planet that will support life, and there is a limited set of solutions to those problems. Any being that evolves to be capable of solving those problems will be somewhat familiar to us. Aliens will mostly be comprehensible and recognisable, and will do things on more or less the same scale that we do. As boring as that sounds, or perhaps as frightening, depending on your view of humanity.

And AI will forever be a simulation that might seem like us superficially, but won't be anything like us fundamentally. When we imagine that machine intelligences will be like us, we are telling the Pinocchio story (and believing it). This tells us more about our own minds than it does about the minds of our creations. If only we would realise that we're looking in a mirror and not through a window. All these budding creators of disembodied consciousness ought to read Frankenstein; or, The Modern Prometheus by Mary Shelley. Of course, many other dystopic or even apocalyptic stories have been created around this theme; some of my favourite science fiction movies revolve around what goes wrong when machines become sentient. But Shelley set the standard before computers were even conceived of, even before Charles Babbage designed his Difference Engine. She grasped many of the essential problems involved in creating life and in dealing with otherness (she was arguably a lot more insightful than her ne'er-do-well husband).

Lurking in the background of the story of AI is always some version of Vitalism: the idea that matter is animated by some élan vital which exists apart from it; mind apart from body; spirit as opposed to matter. This is the dualism that haunts virtually everyone I know. And we seem to believe that if we manage to inject this vital spirit into a machine, the substrate will be inconsequential, that matter itself is of no consequence (which is why silicon might look viable despite its extremely limited chemistry, or a computer might seem a viable place for consciousness to exist). It is the spirit that makes all the difference. AI researchers are effectively saying that they can simulate the presence of spirit in matter with no reference to the body's role in our living being. And this is bunk. It's not simply a matter of animating dead matter, because matter is not dead in the way that Vitalists think it is, and nor is life consistent with spirit in the way they think it is.

The fact that such Vitalist myths and Cartesian Duality still haunt modern attempts at knowledge gathering (and AI is nothing if not modern), let alone modern religions, suggests the need for an ongoing critique. And it means there is still a role for philosophers in society despite what Stephen Hawking and some scientists say (see also Sean Carroll's essay "Physicists Should Stop Saying Silly Things about Philosophy"). If we can fall into such elementary fallacies at the high end of science, then scientists ought to be employing philosophers on their teams to dig out their unspoken assumptions and expose their fallacious thinking.

~~oOo~~