04 July 2014

Is Experience Really Ineffable?

What could this possibly be?


There's an old story from India that seems to crop up everywhere. In Buddhist literature it is found in the Udāna (Paṭhamanānātitthiya sutta) and possibly elsewhere. The story goes that a group of men blind from birth (jaccandhā) are rounded up and asked to participate in an experiment. They are told "this is an elephant" (‘ediso, jaccandhā, hatthī’ti) and each allowed to touch part of it. Asked to describe "an elephant", they assert that it is like a pot (the man who felt the elephant's head), a winnowing basket (ear), a ploughshare (tusk), a plough (trunk), a granary (body), a pillar (foot), a mortar (back), a pestle (tail), or a brush (tip of the tail).

The parable is supposed to illustrate a principle something like "a little knowledge is a dangerous thing". It says that we get a hold of part of something and claim to know everything, but we're like the blind men who don't see the big picture. The parable ends there, but it has to because the story would fall apart if it didn't. A while ago I noticed that a physicist, whose blog I read, had this as his Twitter profile bio:
If the blind dudes just talked to each other, they would figure out it was an elephant before too long. @seanmcarroll 
I bloody love this! I'm so sick of smug religious platitudes and I really love it when someone slam dunks one. Sean is responding to the way the story is typically told, in which the blind men have to identify an unknown animal. But, as I say, in the Buddhist version the "blind dudes" are told "this is an elephant" and have to describe it. The difference is not crucial.

Part of the reason I love Sean's comment is that I stood right next to an elephant when I was in India in 2004. It was on the road near Kushinagar, where the Buddha is supposed to have died. Elephants are big, smelly animals. If you got a lot of people crowding around an elephant to touch it, the thing would fidget at the least, and probably shuffle its feet. As a herbivore, an elephant not only eats a lot, it shits a lot. Many times a day. Chances are it dropped a big load of dung while being examined. Maybe it grumbled in low tones. The elephant's mahout would have kept up his constant patter: an elephant will do as it's told, but it needs a lot of reminding not to just wander off in search of food. And if you'd grown up in India in the time of the texts you'd know exactly what an elephant was like, sight or no sight. No conferring necessary. 

And this is the problem with so many of these smug little parables. We who tell or read these stories are supposed to be much cleverer than those people who are in the cross hairs. But the story itself is... (shall we say) unsophisticated. How naive do we have to be to take this tripe seriously? 

Even so, Sean Carroll has put his finger on something very important about knowledge that is all too often left completely out of philosophical accounts. We don't live in perpetual isolation from other people. We communicate with them incessantly. Blind men are not unable to communicate simply because they can't see. 

In the story the elephant is standing still, it makes no sound, has no smell and the blind men get one touch and no chances to confer, and seem to have been kept in isolation for their whole lives. How is this reasonable? It is a poor story designed to make a presupposition sound plausible. Why does everyone nod sagely when they hear this rubbish? Why do they congratulate themselves on not being like the stupid men in the story? The story is self-defeating - it displays the very attitude it is supposed to guard against. To a scientist it's a ludicrous scenario. Scientists work by comparing their observations and coming up with a theory which will explain them all. If the blind men were scientists they'd want to compare notes, to repeat the experiment with another animal and see what happened. If they were presented with various animals at random could they identify which were elephants? And so on. 


The Tennis Match.

When I read philosophers of mind talking about subjectivity, I find myself experiencing cognitive dissonance. Of course we can argue about the ontological status of the objects behind our experiences: do they exist, do other people exist? But take the case of a tennis match before a crowd of some 10,000 people. What we observe is that heads turn to follow the ball. They do not turn at random, they do not turn in an uncoordinated way. 10,000 people's heads turn in unison, at the same time, at the same speed, and they do so without any connection between the people. Are those 10,000 people really having a completely different experience? Would they really struggle to describe why they were turning their heads to follow the ball?

True, each person would have had a unique perspective on the ball, but there is considerable overlap. Different people might have supported different players. Some might be elated that their player won, others dejected that their player lost. Does the fact that they had different emotional responses to the experience of watching a ball get batted back and forth mean that they saw an entirely different event? Surely it does not.

If we go to a concert with like-minded friends, afterwards we can talk coherently about what we've seen and experienced during the show. We don't usually find that we heard Arvo Pärt while our friends heard Metallica. We hear the same music. We might have noticed different nuances. My friend might have noticed an out-of-tune French horn, while I was oblivious. Our attention to the details will depend on many factors, but we see and hear the same performance and can talk coherently about it afterwards. If my friend found a particular passage moving and describes it to me, I may well have responded differently, but I can relate to my friend's account with empathy. Or I might have been moved but not understood why, and when my friend articulates their experience I will suddenly experience understanding and know exactly what they mean.

If I go to a comedy film and find myself laughing along with a few hundred other people, am I truly cut off from them in my own little bubble? Robin Dunbar (of Dunbar's Number fame) has shown that we are about 30 times more likely to laugh at a film when we watch it with four people than when we are alone. Laughter is very often a shared experience. Dunbar hypothesises that shared laughter is a sublimation of primate grooming behaviour. Physical grooming in the large group sizes that human beings live in (facilitated by our large neocortex-to-brain ratio) would take up too much time, so we laugh, dance, and sing together, which has a similar physiological effect to physical grooming. See Dunbar's new book Human Evolution (highly recommended).

Thus it seems to me that characterising each person as being in an impenetrable bubble is not accurate. For a social animal like a human being, a good part of our experience is shared.


Private Experience vs Public Knowledge

It's sometimes said that our subjective experience is entirely private. But I don't think the examples above would be possible if this were true. So am I now a proponent of morphogenic fields? No! We know about the emotional state of another person through the various cues that the other uses to broadcast their state: facial expression, posture, tone of voice, direction of gaze, etc. We take these cues and use them to build an internal model: if I were to make my own face and body take on the configuration of the other person's face and body, how would that feel? And this is surprisingly accurate. Indeed, we very often go one step further and adopt the posture of the other in solidarity. Less dominant individuals will adopt the body language of dominant individuals, and so on.

Human beings are capable of mentalising to a much greater extent than other animals. So, for example, Shakespeare wrote a story in which he has us believe that Iago convinces Othello that he (Iago) believes that the love Desdemona feels for Cassio is mutual (and we the audience can understand the first-person perspective of each character and how they see all the others). We understand our own minds from a first-person perspective. We and many other animals are aware that other individuals also have a first-person perspective that is just like ours. This is second-order mentalising. But we humans can take this inference to a whole new level. On average humans can manage fifth-order mentalising: for example, we (1) might think that he (2) thinks that she (3) thinks that they (4) believe the proponent (5) is a liar. But in order to write such a story the author must be able to stretch to at least one extra order; they must be able to put themselves in our shoes as we take in the story. This is part of why Shakespeare is a remarkable writer: he has an extraordinary ability to see other points of view. The best storytellers place us inside the head of another human being and allow us to experience the world from their point of view. It's a remarkable gift!

We can easily comprehend the inner world of another person, especially if their identity is shaped by the same cultural factors as ours, but even with humans of very different cultures we succeed to a large degree. The capacity is not present in very young children but develops by about age five. When the capacity does not develop, as in Asperger's Syndrome, it can be very painful to know that other people have inner lives but not to have easy access to them. It can be a source of considerable anxiety. This is not to say that people who cannot assess the inner states of other people don't have inner lives themselves. They do.

One of the interesting features of the Buddhist tradition is that it seems to be understood that knowledge follows from experience. Far from being ineffable, for example, the Spiral Path texts suggest that from the experience of liberation (vimutti) comes the knowledge of liberation (vimuttiñāṇa). I've noted in the past that Richard Gombrich makes this distinction also. The experience itself might be ineffable, but having had that experience we can say what it is like to have had it. We can say a lot about how the experience changed us, about how we feel about other things now that we've had that experience. And this is why early Buddhist texts are full of descriptions of what it is like to have had the experience of bodhi.

In a recent talk at the University of Cambridge philosopher John Searle made an interesting distinction between ontology and epistemology (Consciousness as a Problem in Philosophy and Neurobiology). He said:
"The ontological subjectivity of the domain [of consciousness] does not prevent us from having an epistemologically objective science of that domain".
So conscious experience is ontologically subjective. Our first person perspective is internal to our own mind. By contrast molecules, mountains and tectonic plates are ontologically objective, they undoubtedly exist independently of our minds. If I say "Van Gogh is a better painter than Gauguin" that is an epistemologically subjective statement. It's something I think I know, but it is an aesthetic judgement that others may disagree with. However if I say "Van Gogh died in France", then this is epistemologically objective - it's knowledge that is external to me, something that everyone knows and there is no disagreement over.



Searle says that the argument that we can never study the mind scientifically mixes up ontology and epistemology. This is a fallacy of ambiguity. We regularly use our ontological subjectivity to create a class of phenomena about which we can then make statements that are epistemologically objective. There are many examples of this kind of phenomenon. Searle gives the examples of money, property, government, and cocktail parties.

Computation (2+2=4) is another ontologically subjective phenomenon about which we can make epistemologically objective statements. If I have two bananas and you give me two more, then objectively I have four bananas. As a written statement this is epistemologically objective, despite the fact that as a mental operation perceiving bananas, counting and addition are entirely subjective. Despite the subjective nature of these mental operations, there is no barrier to you having objective knowledge of what's just happened in my mind.

Searle uses the example of a falling object. If you drop a pen onto the floor it follows a path described by a mathematical function: d = ½gt² (where g is the acceleration due to gravity, t is time, and d is distance). But nature does not do computation. The pen is simply a mass that travels through space. And close to the earth, space is bent by the mass of the earth (the pen's mass also bends space, but not nearly as much, because the effect is proportional to the mass involved). The effect looks just like a force of attraction. And that effect is described by the equation given above. But the universe doesn't calculate the distance. Calculation, computation, is purely subjective. Nevertheless, the statement d = ½gt² gives us objective knowledge (it allows us to subjectively make objectively accurate predictions); it's independent of our point of view.
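Searle's point can be made concrete: the calculation lives in us (or in our machines), not in the pen. A minimal sketch, taking g ≈ 9.81 m/s² and ignoring air resistance:

```python
# The universe doesn't compute d = 1/2 * g * t^2; we do. The calculation
# is our subjective operation, yet it yields objectively accurate
# predictions about where the pen will be.

G = 9.81  # acceleration due to gravity near Earth's surface, m/s^2

def fall_distance(t: float) -> float:
    """Distance (in metres) a dropped object falls from rest in t seconds."""
    return 0.5 * G * t ** 2

for t in (0.5, 1.0, 2.0):
    print(f"after {t} s: {fall_distance(t):.3f} m")
```

The pen, of course, does none of this arithmetic; it just falls.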

Thus, according to Searle, the argument that the subjectivity of consciousness precludes any objective knowledge of it, is simply a logical fallacy that stems from confusing ontology and epistemology. And this means that consciousness is not ineffable in the way that some Buddhists argue that it is.

I would add to this that it's now possible, by stimulating individual neurons, to provoke experiences. We discovered this during surgery on the brain. In some forms of brain surgery the patient remains conscious. If a tumour is in a delicate place, the surgeon may want the patient to report what happens when a particular part of the brain is stimulated, so as to avoid damaging a crucial function. What patients report under these conditions is entirely dependent on which part of the brain is being stimulated, at times on which particular neuron: the results can be memories, sensory hallucinations (the illusion of sensory stimulation produced by direct neuron stimulation), motor activity, and so on. One could spend hours trawling through the results of a search for "awake during brain surgery". It's fascinating.


Conclusion

We need to think critically about parables that smack of platitude. Are they telling us something important, or are they, as in the case of the blind men and the elephant, simply religious propaganda that in fact blinds us to greater truths? The whole arena of discussion about consciousness is fraught with difficulty. If Searle is right, then there is widespread confusion over epistemology and ontology (which is one of the problems that plagues Buddhist philosophy too). Thinking clearly under these conditions can be exceedingly difficult.

It's true that an elephant, like any complex object of the senses, is a beast of many parts. It does have an ear like a winnowing basket, tusks like ploughshares, a trunk like a plough, a body like a granary, a leg like a pillar, a back like a mortar, a tail like a pestle, and the tip of its tail is like a brush. Ears, tusks, trunk, legs, body, and tail all contribute to the animal we call "elephant". If we know what an elephant looks like, we know we're looking at one from the slightest clue. Hence the picture accompanying this essay. I don't expect any of my readers to have any difficulty in identifying the elephant in the picture from its legs alone, even if they've never seen a real elephant.

We need not be like the blind men in the story and remain ignorant. We don't live in isolated bubbles. If we just compare notes on experience, we come to a collective understanding. Even if there were plausibly a dozen people blind from birth in Sāvatthī, and even if plausibly they had never before had any experience of an elephant, the conversation they had would have revealed the bigger picture. In a sense this is what is implied by Mercier & Sperber's account of reasoning: reasoning is something we do together, and on our own we're rather poor at it (see An Argumentative Theory of Reason). There's no a priori reason why we cannot compare notes, share knowledge, and come to a greater understanding. And even if the domain is subjective, by comparing notes we discover that there are similarities which allow us to gain objective knowledge of that subjective domain.

I know some people like to play up the differences and discontinuities, but that story on its own is incomplete and partial. It's the kind of thing the elephant story warns us about. We always only have partial knowledge. Claims to full or ultimate knowledge are far more likely to come from religieux than scientists. Yes, experience is subjective, but this does not mean we can have no objective knowledge about experience. We can and do have partial objective knowledge about experience - else I could not expect anyone to read these words and find them meaningful. To my mind, religious stories like the elephant parable just get in the way of understanding.


~~oOo~~

27 June 2014

Why Artificial Intelligences Will Never Be Like Us and Aliens Will Be Just Like Us.

"Yet across the gulf of space, minds that are to our minds as ours are to those of the beasts that perish, intellects vast and cool and unsympathetic, regarded this earth with envious eyes, and slowly and surely drew their plans against us."

Artificial Intelligence (AI) is one of the great memes of science fiction, and as our lives come to resemble scifi stories ever more, we can't help but speculate what an AI will be like. Hollywood aside, we seem to imagine that AIs will be more or less like us because we aim to make them like us. And as part of that, we will make them with affection for, or at least obedience to, us. Asimov's Laws of Robotics are the most well-known expression of this. And even if they end up turning against us, it will be for understandable reasons.

Extra-terrestrial aliens, on the other hand, will be incomprehensible. "It's life, Jim, but not as we know it." We're not even sure that we'll recognise alien life when we see it. Not even sure that we have a definition of life that will cover aliens. It goes without saying that aliens will behave in unpredictable ways and will almost certainly be hostile to humanity. We won't understand their minds or bodies, and we will survive only by accident (War of the Worlds, Alien) or through Promethean cunning (Footfall, Independence Day). Aliens will surprise us, baffle us, and confuse us (though hidden in this narrative is a projection of fears both rational and irrational).

In this essay, I will argue that we have this backwards: in fact, AI will always be incomprehensible to us, while aliens will be hauntingly familiar. This essay started off as a thought experiment I was conducting about aliens and a comment on a newspaper story about AI. Since then, it's become a bit more topical, as a computer program known as a chatbot was trumpeted as having "passed the Turing Test for the first time". This turned out to be a rather inflated version of events. In reality, the chatbot failed to convince the majority of its judges that it was a person, despite a minor cheat that lowered the bar: it was presented as a foreigner with poor English, and even then it was mostly unconvincing.

But here's the thing. Why do we expect AI to be able to imitate a human being? What points of reference would a computer program ever have to enable it to do so?


Robots Will Never Be Like Us.

There are some fundamental errors in the way that AI people think about intelligence that will begin to put limits on their progress, if they haven't already. The main one is that they don't see that human consciousness is embodied. Current AI models tacitly subscribe to a strong form of Cartesian mind/body dualism: they believe that they can create a mind without a body. 

There's now a good deal of research to show that our minds are not separable from our bodies. I've probably cited four names more than any other when considering these issues: George Lakoff, Mark Johnson, Antonio Damasio, and Thomas Metzinger. What these thinkers collectively show is that our minds are very much tied to our bodies. Our abstract thoughts are voiced using metaphors drawn from how we physically interact with the world. Their way of understanding consciousness posits the modelling of our physical states as the basis for simple consciousness. How does a disembodied mind do that? We can only suppose that it cannot.

One may argue that a robot's body is like a human body, and that an embodied robot might be able to build a mind like ours through its robot body. But the robot is not using its brain primarily to sustain homeostasis, mainly because it does not rely on homeostasis for continued existence. And even other mammals don't have minds like ours. Because of shared evolutionary history, we might share some basic physiological responses to gross stimuli that are good adaptations for survival, but their thoughts are very different because their bodies, and particularly their sensory apparatus, are different. 

An arboreal creature is just not going to structure their world the way a plains dweller or an aquatic animal does. Is there any reason to suppose that a dolphin constructs the same kind of world as we do? And if not, then what about a mind with no body at all? Maybe we could communicate with dolphins, with difficulty and a great deal of imagination on our part. But with a machine? It will be "Shaka, when the walls fell." For the uninitiated, this is a reference to a classic first-contact scifi story, the Star Trek: The Next Generation episode "Darmok". The aliens in question communicate in metaphors drawn exclusively from their own mythology, making them incomprehensible to outsiders, except Picard and his crew, of course (there is a long, very nerdy article about this on The Atlantic website). Compare Dan Everett's story of learning to communicate with the Pirahã people of Amazonia in his book Don't Sleep, There Are Snakes.

Although Alan Turing was a mathematical genius, he was not a genius of psychology. And he made a fundamental error in his Turing Test, in my opinion. Our Theory of Mind is tuned to assume that other minds are like ours. If we can conceive any kind of mind independent of us, then we assume that it is like us. This has survival value, but it also means we invent anthropomorphic gods, for example. A machine mind is not going to be at all like us, but that doesn't stop us from unconsciously projecting human qualities onto it. Hypersensitive Agency Detection (as described by Justin L Barrett) is likely to mean that even if a machine does pass the Turing Test, then we will have overestimated the extent to which it is an agent.

The Turing Test is thus a flawed model for evaluating another mind because of limitations in our equipment for assessing other minds. The Turing Test assumes that all humans are good judges of intelligence, but we aren't. We are the beings who see faces everywhere and can get caught up in the lives of soap opera characters and treat rain clouds as intentional agents. We are the people who already suspect that GIGO computers have minds of their own because they break down in incomprehensible ways at inconvenient times, and that looks like agency to us! (Is there a good time for a computer to break?). The fact that any inanimate object can seem like an intentional agent to us disqualifies us as judges of the Turing Test.

AIs, even those with robot bodies, will sense themselves and the world in ways that will always be fundamentally different to us. We learn about cause and effect from the experience of bringing our limbs under conscious control, i.e. by grabbing and pushing objects. We learn about the physical parameters of our universe the same way. Will a robot really understand in the same way? Even if we set them up to learn heuristically through electronic senses and a computer simulation of a brain, they will learn about the world in a way that is entirely different to the way we learned about it. They will never experience the world as we do. AIs will always be alien to us.


All life on the planet is the product of 3.5 billion years of evolution. Good luck simulating that in a way that is not detectable as a simulation. At present, we can't even convincingly simulate a single-celled organism. Life is incredibly complex, as a 1:1 million scale model of a synapse demonstrates.


Aliens Will Be Just Like Us.

Scifi stories like to make aliens as alien as possible, usually by making them irrational and unpredictable (though this is usually underlain by a more comprehensible premise - see below).

In fact, we live in a universe with limitations: 96 naturally occurring elements with predictable chemistry; four fundamental forces; and so on. Yes, there might be weird quantum stuff going on, but in bodies made of septillions (a septillion is 10²⁴) of atoms, we'd never know about it without incredibly sophisticated technology. On the human scale, we live in a more or less Newtonian universe.

Life as we know it involves exploiting energy gradients and using chemical reactions to move stuff where it wouldn't go on its own. The gaps in our knowledge may still technically allow for vitalistic readings of nature, but nothing removes the limitations that chemistry imposes on life: elements have strictly limited behaviour, the basics of which can be studied and understood in a few years. It takes a few more years to understand the main ways that chemistry can be exploited, and we'll never exhaust all the possibilities of combining atoms in novel ways. But the possibilities are comprehensible, and new combinations have predictable behaviour. Many new drugs are now modelled on computers as a first step.

So the materials and tools available to solve problems, and in fact most of the problems themselves, are the same everywhere in the universe. A spaceship is likely to be made of metals. Ceramics are another option, but they require even higher temperatures to produce and tend to be brittle; ceramics sophisticated enough to do the job suggest a sophisticated metal-working culture in the background. Metal technology is so much easier to develop. Iron is one of the most versatile and abundant metals: other mid-periodic-table metallic elements (aluminium, titanium, vanadium, chromium, cobalt, nickel, copper, zinc, etc.) make a huge variety of chemical combinations, but for pure metal and useful alloys, iron is king. Iron alloys give the combination of chemical stability, strength-to-weight ratio, ductility, and melting point needed to make a spaceship. So our aliens are most likely going to come from a planet with abundant metals, probably iron, and their spaceship is going to make extensive use of metals. And the metals aliens use will be entirely amenable to our analytical techniques.

Now, in the early stages of working iron, one needs a fairly robust body: one has to work a bellows, wield tongs and hammer, and generally be pretty strong. That puts a lower limit on the kind of body that an alien will have, though the strength of gravity on the alien planet will vary this parameter. Very gracile or very small aliens probably wouldn't make it into space, because they could not have got through the blacksmithing phase to more sophisticated metalworking techniques. A metalworking culture also means an ability to work together over long periods for quite abstract goals, like the creation of alloys composed of metals extracted from ores buried in the ground. Thus, our aliens will be social animals by necessity. But simple herd animals lack the kind of initiative that it takes to develop tools, so our aliens won't be as thoroughly social as cows or horses. Too little social organisation, on the other hand, and the complex tasks of mining and smelting enough metal would be impossible. So, no solitary predators in space either.

The big problem with any budding space program is getting off the ground. Gravity and the possibilities of converting energy put more practical limitations on the possibilities. Since chemical reactions are going to be the main source of energy, and these are fixed, gravity will be the limiting factor. The payload must not be so massive that launching it is too costly or simply impossible, yet it must be large enough to fit a being in (a being at least the size of a blacksmith). If the gravity of an alien planet were much higher than ours, it would make getting into space impractical; advanced technology might theoretically overcome this, but with technology one usually works through stages, and no early stage means no later stages. If the gravity of a planet were much lower than ours, then its density would make large concentrations of metals unlikely. It would be easier to get into space, but without the materials to make it possible and sustainable. Also, the planet would struggle to hold enough atmosphere to make it long-term livable (like Mars). So alien visitors are going to come from a planet similar to ours and will have solved similar engineering problems with similar materials.
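The exponential cost of fighting gravity can be sketched with the Tsiolkovsky rocket equation (not discussed above, but the standard way to quantify the point). The figures below are illustrative round numbers: the best chemical propellants manage exhaust velocities of roughly 4.4 km/s, and reaching low Earth orbit demands a delta-v of roughly 9.4 km/s.

```python
from math import exp

# Tsiolkovsky rocket equation: launch mass / final mass = exp(delta_v / v_e).
# Chemical exhaust velocity v_e is fixed by chemistry, so the mass ratio
# a planet demands grows exponentially with the delta-v its gravity imposes.

V_EXHAUST = 4.4  # km/s, roughly the best chemical propellants manage

def mass_ratio(delta_v: float) -> float:
    """Launch mass per unit of final (payload plus structure) mass."""
    return exp(delta_v / V_EXHAUST)

print(f"Earth-like planet (9.4 km/s):  mass ratio ~{mass_ratio(9.4):.0f}")
print(f"Twice the delta-v (18.8 km/s): mass ratio ~{mass_ratio(18.8):.0f}")
```

Doubling the required delta-v squares the mass ratio, which is why a modestly heavier planet quickly makes chemical rockets hopeless.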

Scifi writers and enthusiasts have imagined all kinds of other possibilities. Silicon creatures were a favourite for a while. Silicon (Si) sits immediately below carbon in the periodic table and has similar chemistry: it forms molecules with a similar fourfold symmetry. I've made the silicon analogue of methane (CH4), silane (SiH4), in a lab: it's highly unstable and burns quickly in the presence of oxygen or any other moderately strong oxidising agent (and such agents are pretty common). The potential for life using chemical reactions in a silicon substrate is many orders of magnitude less flexible than that based on carbon, and would of necessity require the absolute elimination of oxygen and other oxidising agents from the chemical environment. Silicon tends to oxidise to silicon dioxide (SiO2) and then become extremely inert. Breaking down silicon dioxide requires heating it to around its melting point (roughly 1,700°C) in the presence of a powerful reducing agent, like pure carbon. In fact, silicon dioxide, or silica, is one of the most common substances on earth, partly because silicon and oxygen themselves are so common. The ratio of these two elements is fixed by the fusion processes that precede a supernova, and so again is dictated by physics. Where there is silicon, there will be oxygen in large amounts, and they will form sand, not bugs. CO2 is also quite inert, but does undergo chemical reactions, which is lucky for us, as plants rely on this to create sugars and oxygen.

One of the other main memes is beings of "pure energy", which are, of course, beings of pure fantasy. Again, we have the Cartesian idea of disembodied consciousness at play. Just because we can imagine it, does not make it possible. But even if we accept that the term "pure energy" is meaningful, the problem is entropy. It is the large-scale chemical structures of living organisms that prevent the energy held in the system from dissipating out into the universe. The structures of living things, particularly cells, hold matter and energy together against the demands of the laws of thermodynamics. That's partly what makes life interesting. "Pure energy" is free to dissipate and thus could not form the structures that make life interesting.

When NASA scientists were trying to design experiments to detect life on Mars for the Viking mission, they invited James Lovelock to advise them. He realised that one didn't even need to leave home. All one needed to do was measure the composition of gases in a planet's atmosphere, which one could do with a telescope and a spectrometer. If life is going to be recognisable, then it will do what it does here on earth: shift the composition of gases away from the thermodynamic and chemical equilibrium. In our case, the levels of atmospheric oxygen require constant replenishment to stay so high. It's a dead giveaway! And the atmosphere of Mars is at thermal and chemical equilibrium. Nothing is perturbing it from below. Of course, NASA went to Mars anyway, and went back, hoping to find vestigial life or fossilised signs of life that had died out. But the atmosphere tells us everything we need to know.
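Lovelock's test can be caricatured in a few lines of code. The compositions below are illustrative round figures (mole fractions) and the thresholds are arbitrary; the point is the logic: a strong oxidiser (oxygen) coexisting with a reducing gas (methane) means something is constantly replenishing one or both.

```python
# Toy version of Lovelock's atmospheric test for life: oxygen and methane
# react away on short timescales, so finding both in quantity means the
# atmosphere is being held away from chemical equilibrium.

EARTH = {"N2": 0.78, "O2": 0.21, "CO2": 0.0004, "CH4": 1.8e-6}
MARS = {"CO2": 0.95, "N2": 0.027, "O2": 0.0013, "CH4": 0.0}

def looks_alive(atmosphere: dict) -> bool:
    """Flag a chemically disequilibrated atmosphere: O2 and CH4 together."""
    return atmosphere.get("O2", 0) > 0.01 and atmosphere.get("CH4", 0) > 1e-7

print("Earth:", looks_alive(EARTH))  # the O2/CH4 mix gives life away
print("Mars: ", looks_alive(MARS))   # at equilibrium; nothing perturbs it
```

A real analysis would compute the full thermodynamic equilibrium rather than check one gas pair, but the principle is the same: you can run it on a spectrum taken from home.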

So where are all the alien visitors? (This question is known as the Fermi Paradox after Enrico Fermi, who first asked it.) Recall that as far as we know, the limit of the speed of light invariably applies to macro objects like spacecraft - yes, theoretically, tachyons are possible, but you can't build a spacecraft out of them! Recently, some physicists have been exploring an idea that would allow us to warp space and travel faster than light, but it involves "exotic" matter that no one has ever seen and is unlikely to exist. Aliens are going to have to travel at sub-light speeds. And this would take subjective decades. And because of Relativity, time passes more slowly on a fast-moving object; centuries would pass on their home planet. Physics is a harsh mistress.
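The time-dilation arithmetic behind "subjective decades, centuries back home" is simple enough to sketch. The distances and speeds below are illustrative assumptions, and the calculation ignores acceleration and deceleration phases:

```python
import math

def travel_times(distance_ly: float, speed_c: float):
    """Return (home_years, ship_years) for a trip of distance_ly
    light-years at a constant fraction speed_c of the speed of light.
    Simplifying assumption: no acceleration/deceleration phases."""
    home_years = distance_ly / speed_c            # elapsed time in the home frame
    gamma = 1.0 / math.sqrt(1.0 - speed_c ** 2)   # Lorentz factor
    ship_years = home_years / gamma               # proper time for the travellers
    return home_years, ship_years
```

A 100-light-year trip at 0.9c takes about 111 years in the home frame but only about 48 subjective years aboard: decades for the crew, while well over a century passes at home.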

These are some of the limitations that have occurred to me. There are others. What these point to is a very limited set of circumstances in which an alien species could take to space and come to visit us. The more likely an alien is to get into space, the more like us they are likely to be. The universality of physics and the similarity of the problems that need solving would inevitably lead to parallelism in evolution, just as it has done on Earth.


Who is More Like Us?

Unlike scifi, the technology that allows us to meet aliens will be strictly limited by physics. There will be no magic action at a distance on the macro scale (though, yes, individual subatomic particles can subvert this); no time travel; no faster-than-light travel; no materials impervious to analysis; no cloaking devices; no matter transporters; and no handheld disintegrators. Getting into space involves a set of problems that are common to any being on any planet that will support life, and there is a limited set of solutions to those problems. Any being that evolves to be capable of solving those problems will be somewhat familiar to us. Aliens will mostly be comprehensible and recognisable, and do things on more or less the same scale that we do. As boring as that sounds, or perhaps as frightening, depending on your view of humanity.

And AI will forever be a simulation that might seem like us superficially, but won't be anything like us fundamentally. When we imagine that machine intelligences will be like us, we are telling the Pinocchio story (and believing it). This tells us more about our own minds than it does about the minds of our creations. If only we would realise that we're looking in a mirror and not through a window. All these budding creators of disembodied consciousness ought to read Frankenstein; or, The Modern Prometheus by Mary Shelley. Of course, many other dystopian or even apocalyptic stories have been created around this theme; some of my favourite science fiction movies revolve around what goes wrong when machines become sentient. But Shelley set the standard before computers were even conceived of, before Charles Babbage had even designed his Difference Engine. She grasped many of the essential problems involved in creating life and in dealing with otherness (she was arguably a lot more insightful than her ne'er-do-well husband).

Lurking in the background of the story of AI is always some version of Vitalism: the idea that matter is animated by some élan vital which exists apart from it; mind apart from body; spirit as opposed to matter. This is the dualism that haunts virtually everyone I know. And we seem to believe that if we manage to inject this vital spirit into a machine, the substrate will be inconsequential, that matter itself is of no consequence (which is why silicon might look viable despite its extremely limited chemistry, or a computer might seem a viable place for consciousness to exist). It is the spirit that makes all the difference. AI researchers are effectively saying that they can simulate the presence of spirit in matter with no reference to the body's role in our living being. And this is bunk. It's not simply a matter of animating dead matter, because matter is not dead in the way that Vitalists think it is, nor is life consistent with spirit in the way they think it is.

The fact that such Vitalist myths and Cartesian Duality still haunt modern attempts at knowledge gathering (and AI is nothing if not modern), let alone modern religions, suggests the need for an ongoing critique. And it means there is still a role for philosophers in society despite what Stephen Hawking and some scientists say (see also Sean Carroll's essay "Physicists Should Stop Saying Silly Things about Philosophy"). If we can fall into such elementary fallacies at the high end of science, then scientists ought to be employing philosophers on their teams to dig out their unspoken assumptions and expose their fallacious thinking.

~~oOo~~