29 July 2016

A Layered Approach to Reality. Part II.2


In this part of the essay I continue to explore how different layers of description of the world relate to each other, but begin to look for finer detail. I particularly want to draw out how far descriptions from one scale can legitimately be used to comment on another. This is because a favoured strategy of metaphysical reductionists is to argue that, because classical physics is deterministic, the whole universe is deterministic. This eliminates in one fell swoop everything that makes human beings interesting, e.g. consciousness, intention, imagination, and relationships. In this view freewill, morality, and aesthetics are also eliminated. Could there be a more depressing and uninspiring vision of humanity? So here I want to show that, given the dynamics of our levels of description, metaphysical reduction simply does not apply, and that we can find much better and much more interesting ways to talk about the world.

  • Lower level descriptions are more general and apply more widely to the universe as a whole; higher level descriptions are more specific and apply more narrowly to subsets of the universe.
  • Lower level descriptions are more susceptible to being described in mathematics; higher level descriptions require the use of narrative.

Comte's original hierarchy of science was based on levels of generality and complexity. This basic insight holds.

Lower level descriptions ideally say very precise things about the building blocks of the universe that apply at all times and places. The current fundamental descriptions we have are not complete. We don't know, for example, how matter and energy look inside a black hole or at the moment of the Big Bang. However, we do know, for example, that all electrons have the same mass and charge; they all interact with other forms of matter and energy in identical ways. Electrons can be in a variety of states of spin and energy that are precisely specified. So when dealing with one electron at a time, we can be very accurate and precise in our descriptions, but we are almost never dealing with one electron at a time. In bulk matter we are typically dealing with on the order of 10^20 electrons at a time, at a minimum. And all of those electrons are interacting with each other and everything around them, all the time. In which case we have to generalise.
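To get a sense of the numbers, here is a minimal back-of-envelope sketch in Python. The sample (a one milligram droplet of water) and the round-figure constants are illustrative assumptions of mine, not measurements from the essay.

    # Rough estimate of how many electrons are in a tiny sample of bulk matter.
    # Assumption: a 1 milligram droplet of water (H2O), 10 electrons per molecule.
    AVOGADRO = 6.022e23          # molecules per mole
    MOLAR_MASS_WATER = 18.0      # grams per mole
    ELECTRONS_PER_MOLECULE = 10  # 2 from hydrogen + 8 from oxygen

    sample_grams = 1e-3          # 1 milligram
    molecules = sample_grams / MOLAR_MASS_WATER * AVOGADRO
    electrons = molecules * ELECTRONS_PER_MOLECULE

    print(f"molecules: {molecules:.1e}")   # ~3.3e19
    print(f"electrons: {electrons:.1e}")   # ~3.3e20, i.e. ~10^20 already in a milligram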

Because building blocks are identical and follow lawful or law-like patterns over time, their behaviour is susceptible to being expressed as mathematical equations. But once we make structures from building blocks this becomes difficult to sustain. As structures get more complex and their behaviour is less patterned and less predictable, it is more difficult to express that mathematically without introducing simplifying assumptions or generalising. Once structures are not identical, then mathematics begins to lose its usefulness as a descriptive tool because the equations become too complex or there are too many equations to solve simultaneously. Also in the real world there is non-local coherence: everything depends on everything else. Everything evolves together, which is difficult, if not impossible to model. 

As we go up levels of structure to more complex entities we find disjunctions. A cell is made up of molecules which are similar to one another. For example, a cell uses a certain range of phospholipid molecules to make its outer membrane, each of which is broadly similar. But each cell is unique and no two cell membranes are identical. Of course cells from the same organism are very similar, but not identical in the way that the lipid molecules in its membranes are similar, and very far from the way that the atoms that make up the molecules are identical. Most cells produce tens of thousands of proteins, each with a complex structure and unique function. A mathematical model of a cell would require tens of thousands of equations to describe the role of each of these complex molecules.

Once the units of a structure cease being identical it becomes impractical to express their behaviour in mathematical form without introducing simplifying assumptions. Very often the assumptions are of the nature that non-identical things can be treated as identical for the purposes of approximation. In the case of the different descriptions of a gas (in Part II.1 of this essay) the lower level description pays attention to individual molecules and produces many equations or equations with many variables. The higher level theory treats the gas as a single entity and requires a small set of simple equations. The higher level description describes the behaviour of the gas on a higher level of organisation, without reference to its nature as a collection of molecules. The ideal fluid is infinitely fine grained, so a real gas made of molecules only approximates a fluid. But the approximation in this case is better than the limits of our ability to measure such differences. However the equations can be accurate at very coarse grained levels, such as describing the flow of traffic on a motorway, at some cost to precision.

The behaviour of an organism, by contrast, has to be described in terms of narratives. Where it is modelled mathematically, as in Game Theory or Economics, gross over-simplifications and even false assumptions must be made to make the equations manageable. In economics (which supposedly describes the economic behaviour of human beings) the scaling up of the mathematical law of supply & demand from one product to the level of a whole economy requires the assumption that the economy has exactly one product and one customer (See Debunking Economics). A gas physically approximates an ideal fluid. Hence the simplified mathematics is useful. In the case of economics a post-hoc rationalisation is required to make the equations work. The ideal to which human behaviour is supposed to approximate, i.e. the "rational consumer", is based on a vast misunderstanding of human beings. This is why economic forecasting is so infamously unreliable compared to genuine scientific models.

  • Lower level descriptions can only produce generalisations about higher levels of description, not complete descriptions.
This is one of the most important conclusions about levels. The idea that lower levels determine higher levels is known as supervenience. However it is apparent that supervenience is not a good fit for how our descriptions work. Lower level descriptions circumscribe possibilities above; they place broad limits on what is possible through structure, but they do not specify which of the possibilities will manifest.

For example, the behaviour of atoms is deterministic, so chemistry is deterministic on the atomic level. Which chemistry actually takes place in a chemical system is determined by many factors: temperature, pressure, concentration, availability of reactants, external energy sources. Some of these factors are themselves emergent properties that belong to the domain of chemistry. These higher level factors are not necessarily deterministic (because they are emergent and have properties that are not determined by the lower level). So determinism on a lower level does not necessarily translate to determinism on a higher level, because emergent properties are causally effective on the level they emerge at. 

The atomic description of chemistry tells us what chemistry is possible, and can allow us to analyse what chemistry has taken place retrospectively. But if we set up a complex mixture of organic compounds, we cannot predict exactly which reactions will take place or to what degree they will consume ingredients. All we can do is make generalisations about what is possible, or about the probabilities of various outcomes, and then observe what actually did happen.

This is partly because in any complex system we do not have perfect knowledge. The more complex the units of structure, the less predictable they are. Any given atom is completely predictable to the limits of our ability to measure. A person can be wildly unpredictable. There are still limits to what a person can do based on parameters set by the fact of being made of septillions of atoms arranged in hierarchical structures and living in the kind of universe we do; as well as limits from the chemical, biological and psychological levels. Superman or Harry Potter will never be realities. But humans cannot be described in mathematical terms. Attempts to do so are inaccurate because the assumptions required to make us fit into mathematical expressions are, on the whole, false or falsifying. 

This means that we ought not even to expect lower level descriptions to determine what happens at higher levels. As we go up levels, mathematical rigour falls away and uncertainty creeps into our description. At some point the use of narrative takes over and becomes more useful.

From lower level knowledge we can only generalise about higher levels. Emergent properties cannot be guessed at, even with perfect knowledge of the lower level.

  • The further apart levels are, the more general the generalisations.
Because structure contributes to the picture, a description of building blocks becomes less and less relevant to higher level structures as we go up the levels. Building materials allow us to generalise about the bounds of architecture, but atoms are too far removed from the domain of architecture to have much to tell us. This is despite the clear contribution to the physical properties of building materials that atoms make. When we leap over levels, the contribution a lower level description makes is drastically reduced.

To put it another way, if level one allows us to generalise about level two; and level two allows us to generalise about level three; we would expect that level one could only generalise about level three in the broadest terms, i.e. generalisations about generalisations. We are made of atoms, but so what? The fact that we are made of proteins, phospholipids, and nucleic acids is more apposite to our behaviour since mountains and stars are also made of atoms. The fact that we are made of cells more so still. Atoms place limits on how proteins can form, but do not determine which proteins our cells actually use, or explain why some cells use different proteins. Amongst other things this is because of natural selection, a high level process/description. The properties of atoms don't directly contribute to our behaviour.

There's still no way to smuggle in the supernatural as an emergent property; we still cannot break the laws of physics. But neither are we completely defined by the lowest level of description (or reality). Of course physics and chemistry are the backdrop for all life (and all other bulk matter), but the backdrop is not the play. Nor is the stage. Nor even the actors. The play combines all three in motion and is greater than any of its components.

  • If structure reductionism works at all, it becomes less plausible or applicable the further up the hierarchy.
It is doubtful that any structure reductions work, but for the sake of argument we can allow that some structures might be reducible. Some might (or do) argue, for example, that all of chemistry can be comprehended by physics. The argument does not seem to work for biological organisms. Organisms are real structures in their own right. It's not at all clear that organisms can be reduced to chemistry, even in principle, because chemistry does not describe the causal potential of organisms as structures. That has to be done on the level of biology, or higher.

So if we find structure reductionism plausible at all, it is more likely to be plausible when trying to collapse one low level to another lower level. At higher levels the plausibility drops off precipitously; and in biology it is not plausible at all. So the plausibility of reducing chemistry to physics does not generalise to all levels. Processes or descriptions from lower levels do not explain or determine higher levels.

  • Higher level descriptions usually say nothing about lower levels.
Downward causality is a major problem area. Some emergentists argue that higher level structures can causally affect lower level structures. It's certain that human societies causally affect the individual humans in them, but this may be because we have misunderstood the levels. The individual human is not so much a distinctive level of organisation as they are a simplifying assumption. As the Nigerian proverb goes, "It takes a village to raise a child." In other words when describing these levels we don't go from individual cell to individual organism, we go from cells to species. The individual isn't a level on their own, they are part of a level which includes their social circumstances. Indeed non-local coherence applies on higher levels also. A person's behaviour only makes sense in a social and physical context.

However plausible downward causation seems on a social level, it seems implausible at any lower level. Atoms do not influence the properties of quarks, etc. If there is any downward causality, then it must be short range. A human society may well influence a human individual living in that society, but a human society cannot influence the cells that make up a human being.

Although philosophers argue about it, I don't see downward causation as essential to structure antireduction. There may well be examples of it, but this does not force us to generalise. If lower level properties do not propagate up the hierarchy then downward causation, if it exists, may be a domain or level specific property. Since it is clearly not present at the lower levels it may even be an emergent property.

As we will see in the next instalment of this essay, causation itself is a problematic concept.

  • A lower level description cannot specify or anticipate an autonomous higher level property.
Chemistry allows for life to emerge; it does not specify how it will emerge or ensure that it does. If, for example, the proponents of the Warm Alkaline Hydrothermal Vent Theory of the origins of life manage to create a laboratory model which leads to self-sustaining, self-replicating chemical reactions within cells that constitute a metabolism, they will still not have demonstrated that this is how life actually got started on earth. Chemistry only provides us with a general description of the possibilities of biological processes. It aims at a comprehensive general account of interactions between atoms and molecules. If this were not true then we would already know exactly how life got started, because we understand chemistry and even biochemistry extremely well. Perfect knowledge of chemistry would still not get us closer to the solution. It might even be unhelpful, because it would include more possible routes to life and still not provide us with a means of choosing between them.

So when people declare that they simply cannot see how to get from brain to mind as an emergent property as though this is a valid argument against antireductionist approaches to mind, we ought to just reply "Duh!". The nature of emergence is that it's not clear how any given structure is going to behave or what properties it might have based on studying the level below it. The higher up the levels we go the less clear things will be because the elements are complex. This will be a problem for research on the mind which focusses solely on the brain. It's not that lower levels tell us nothing about emergent properties, but without a study of the mind's properties on the level of the mind, we won't get the full picture. We've hardly begun to systematically study the mind or the brain.

Buddhists will sometimes claim to have been studying the mind for 2500 years. However, Buddhist theories of mind are thoroughly mixed with myth, legend, and mysticism. Some of the traditional observations are interesting starting points, but it remains to be seen how well they stand up to proper scrutiny, or what proper scrutiny will look like. I suspect that disentangling Buddhist ideas of the mind from the obfuscating cultural elements might be more effort than starting from scratch.

The inherent limitations of descriptions at different scales are what require us to adopt levels of description. Thus, at the outset of considering this theory, we do not expect that lower level descriptions will suffice to explain higher level features or properties. Indeed, we expect that as the scale increases, lower level descriptions will cease to be applicable.

The rules of chess specify the board, the pieces, how the board is set up, and how the pieces move. But those rules do not specify how any particular game plays out. Higher level concepts such as strategy, positional considerations, or material advantage also contribute to the decisions players make.
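The point can be made concrete with a small sketch, assuming the third-party python-chess library (my illustration, not something from the essay). The rules completely determine which moves are legal at every turn, yet two games played under identical rules diverge almost immediately.

    # Two short games under identical rules, choosing legal moves at random.
    # Requires the third-party python-chess package (pip install chess).
    import random
    import chess

    def random_game(seed, max_moves=20):
        rng = random.Random(seed)
        board = chess.Board()        # the standard starting position the rules specify
        moves = []
        while not board.is_game_over() and len(moves) < max_moves:
            move = rng.choice(list(board.legal_moves))  # any legal move will do
            moves.append(board.san(move))
            board.push(move)
        return moves

    print(random_game(seed=1))
    print(random_game(seed=2))   # same rules, a completely different game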

And this leads us to one of the most important corollaries of this approach to describing reality.

  • Properties, such as determinism, do not propagate upwards through layers.
Bulk matter does not possess any of the properties of individual atoms. The behaviour of individual atoms is completely deterministic, but put together a few septillion of them, arranged in a hierarchy of structures, and their behaviour may or may not be deterministic. If what we make from matter is a planet, then determinism may still hold, even if the scale and complexity makes precise description difficult. If on the other hand we make an organism, then neither the properties nor the behaviour of the organism are determined by the properties of the atoms.

In the chess analogy the rules of the game are simple and entirely deterministic. But because decisions on how to move are determined partly by what moves are possible and partly by strategy, position, and material, the game itself is not deterministic. Nothing about the rules of the game favours a defensive over an attacking strategy, for example. Either may be successful according to what defines success in chess. Nor must a game inevitably play out to the bitter end. Once a player becomes convinced they will lose, they may resign. This is allowed by the rules, but not determined by them.

  • Arguments about freewill, aesthetics, values, morality or other high level properties based on fundamental physics are incoherent.
Freewill, aesthetics, values, and morality are all concepts that only make sense at the level of human beings, since they rely on the high level property of self-aware consciousness. I've previously argued that the individual is not the best way to think about humanity. Any description of an individual human inevitably ignores non-local coherence and contains simplifying assumptions that are not valid. People come in groups. So the domain of these particular properties is the human society. All the statements and generalisations about layers apply: lower level theories may generalise, but not determine or specify; the more layers between the domains the more general and less applicable are the descriptions, and so on. Because of this, it is incoherent to say that, because physics is deterministic, there is no freewill. And this is before we are reminded that Libet's conclusions have been debunked (See Freewill is Back on the Menu).

Metaphysical reductionists ignore all the layers, collapsing them down to the one at the bottom. They argue that what appear to be new properties at higher levels are either aggregates of lower level properties or illusions. In practice this does not work, so they have to argue in two stages: 1) that reductionism is possible in principle and 2) because it is possible in principle, we can treat it as the actual case. The fact that in practice such reductions simply don't work is ignored. And ignoring the failure of a theory in practice in favour of a theory in principle is not physics, it is metaphysics, and not even good metaphysics. Structure reductionism is not a scientific theory, it is a secular metaphysical conviction (very like a religious conviction).

On the other hand reactions to metaphysical reductionism can also be unproductive. Romantics confuse us by denying that reason plays a role in these domains. In the Romantic view all that matters is the present moment, emotion, and aesthetics. Relativists try to undermine decision making processes by reducing the salience of all facts to the lowest common denominator. Neoliberals argue that self-interest is the only applicable principle in the conduct of human affairs. Neoliberalism becomes a kind of metaphysical worldview, so we get NeoDarwinism and the "selfish gene", which is Neoliberalism (if not Ayn-Randism) applied to biology. Nor are these ideologies mutually exclusive. But none of them is realistic. Nor do these few examples exhaust the variety of approaches to understanding the world. Psychoanalysis has been elevated to a worldview these days. As have Marxism, Feminism, and so on.

  • The Universe has no purpose, but human beings may do.
One of the problems with a reductionist ideology is that it leads to nihilism. There is no design to things, no greater goal for the universe. The universe just is, it just unfolds. There are patterns to how it unfolds, but no direction. The universe has no purpose to fulfil and no plan for us. It makes no sense to talk about purpose in relation to the universe. Physical laws are just mechanistically going through the motions that were initiated at the big bang or perhaps earlier.

But this applies to a particular domain. It's a story about physics. It's many layers removed from human experience. There is no reason to presume that the domain of physics can inform our own human sense of purpose. In fact having outlined the relations between layers of description, there is reason to presume that it does not inform our lives. The properties of physics do not propagate upwards through the hierarchy. So the lack of teleology in physics need not apply further up the hierarchy.

Living things, for example, do seem to have a purpose, to wit, staying alive and propagating themselves. Persistence of life is the purpose of living things. It's not a conscious purpose, nor should this statement be misconstrued as animism ("everything is alive") or panpsychism ("everything is conscious"). A pattern for organisms of all kinds is that they carry out actions which have persistence as a consequence. Every organism consistently does this. Life persists and has persisted in an unbroken chain for roughly 3.5 billion years. On the other hand life has evolved into myriad forms, not because it is aiming at something, but because it expands to fill ecological niches as they become available, exploits new metabolic pathways, and responds to changes in the environment. So the persistence is not necessarily aimed at some goal, it is persistence itself which is the goal. 

For self-aware creatures, and mainly I'm speaking about humans here, this purpose is, or at least can be, conscious. We generally know that we are performing actions aimed at our own persistence. We even consciously plan how to optimise our lives. This distinction in purpose is part of what creates a new level for human beings. Our purpose is to some extent shared with all life. We share basic purposes related to staying alive: seeking water, food, shelter, a mate, etc. But we also have the ability to reflect on our lives and desire them to be lived in service to some ideal or in the context of some larger purpose. In fact most of us feel the need for this. It is possible that this ability and the resultant need can be explained in terms of functions we evolved; but it cannot be reduced to the properties of our parts or properties we share with other animals.

Because we are talking about two very different levels of description, it is not incoherent to speak of a universe without purpose and human beings with purpose. The dynamics of layers mean that we expect differences like this. It might even make sense to speak of a hierarchy of purposes, though I'm not arguing that it does.

  • Our approach to "levels" is anthropocentric.
The idea of levels in science has always been linked to increasing specificity with respect to human beings. Those sciences that aim to specify what human beings are and how they work are at the top of the hierarchy,  while those that only say general things about the world are at the bottom. 

This isn't necessarily problematic. Our search to understand the universe is complemented by the quest to understand our place in it. Trying to understand our relation to the universe makes sense on the level of being human. It's a high-level oriented activity. And as such it will most likely produce narratives rather than mathematics. It will not determine the lower levels.

However, we can also see life and sentience as offshoots of the mainstream of layers of complexity. From chemistry, the mainstream continues up through geology to cosmology, i.e. planets, solar systems, galaxies, clusters, and finally the universe as a whole, with the possibility of a multiverse (which to date is post hoc metaphysical speculation that helps solve some equations).

Life, as far as we know, occurs only on planet earth. Science fiction aside, the types of chemical compounds and reactions that can result in stable macro-molecules that can reliably replicate are quite restricted. As are the kinds of energy gradients that can drive the process. And while basic consciousness and even intelligence are quite widespread in the animal kingdom, our level of intelligence is orders of magnitude higher. While proto-language and other proto- features can be found in other species, none have the full range of fully developed mental faculties that we have. And for all we know, we may be unique in the universe (and given the laws of physics even if there were another intelligent form of life we would probably never know it).

So we are a unique outcome of the unfolding of the world. And as such we deserve close study. 

  • We are still stumped by how mind fits into this picture.
My examples so far are drawn from physical science or analogies to the physical world for a good reason. I don't understand mind. A lot of people claim to understand mind. A load of speculative books with titles like Consciousness Explained have been written. But none of them does explain consciousness. At best they produce a plausible speculation on how the mind might work. On the other hand some people claim we'll never be able to explain mind. Someone whose professional reputation and career are predicated on them understanding the mind cannot afford to admit ignorance. So a variety of speculative theories are produced to fill the gap. It's hard to know what to take seriously. Some of my touchstone intellectuals are Oliver Sacks, Antonio Damasio, Joseph Le Doux, Thomas Metzinger, George Lakoff, Mark Johnson, Sean Carroll, Robin Dunbar, Justin Barrett, Richard H. Jones, and John Searle (from fields including neuroscience, linguistics, physics, evolutionary psychology, and philosophy).

One of the main problems in understanding this domain might be terminology. For example, John Searle has pointed out (1992) that if we reject mind/body dualism, we cannot continue to talk about "mental properties" and "physical properties", or "mental causation" and "physical causation", as though they are ontologically different. By rejecting dualism we are rejecting the ontic distinction between mind and body. The reason the language problem persists is that dualism is intuitive and non-dualism is counter-intuitive. We have kept using the familiar terminology of dualism without really thinking about the implications of this. As Searle says:
"The vocabulary is not innocent, because implicit in the vocabulary are a a surprising number of theoretical claims that are almost certainly false." (1992: 14)
It ought to be possible to make an epistemic distinction in how we understand different types of experience without implying an ontic difference. But the mind-body language we are forced to use almost inevitably implies a substantive difference, if not to us, then to the majority of our readers. Ironically Searle notes that he is accused of being both a materialist and a dualist when he is neither.

If one says "the mind is the brain", the assumption is that one is a physicalist. Physicalism is routinely defined by (tacitly) dividing the world into mental and physical and discarding the mental. Physicalism is not a unified mind-body view; it is a dualist view in which one side of the dualism is disposed of or ignored. Idealism disposes of the body instead, but still rests on prior dualism. A truly monist approach would not be able to make an ontological distinction between mind and body and thus could not reduce one to the other. However, as with unified properties like matter and energy; or space and time, we may make an epistemological distinction.

Another observation from Searle is that the subjective/objective distinction has an epistemic sense and an ontic sense. An important corollary of this for meditators is that we may well find that the distinction falls away at certain stages of the path, but this does not mean that it does not exist! What goes on in someone's mind is still subjective and private, even if they are awakened. We are too eager to reason from private knowledge to public reality and it seldom works. So if someone says that they can no longer distinguish themselves as a distinct person, this does not change everyone's point of view, it only changes one person's point of view. A change in the way we understand or experience the world does not change reality. I no more have access to arhat Daniel Ingram's mind than he has access to mine.

If we are going to understand mind then, according to this view, we need to study the mind qua embodied mind. We cannot make an ontic mind-body distinction, even if an epistemic distinction is useful. For the moment we do not understand how the brain produces subjectivity, but it does happen in the brain. There is no other option.



~ Conclusions to Part II. ~

This (for me) newly discovered way of looking at the world, which combines substance reductionism and structure antireductionism, seems like an extremely promising way to make progress in understanding life, the universe, and everything. Within this framework we make ontic distinctions and epistemic distinctions. Our structure antireductionism has to be epistemic. My view is that collective empirical realism—the investigation of patterns of experience, combined with comparing notes—strongly implies ontic antireductionism, i.e. reality itself has a hierarchy of structures which force us to adapt our descriptions of reality at different scales. Collective empirical realism does not give us metaphysical certainty.

Some may feel that ontic structure antireductionism goes too far, that we cannot be sure about other minds, but that leaves them with the challenge of explaining experience in the absence of mind-independent objects. To date I find those explanations uninteresting and unconvincing. 

If there is a transcendental reality then there is no way for us to know anything at all about it. No knowledge is possible and therefore no discussion is possible. Speculation about transcendental realities is pointless. I'll say no more on this subject.

We have a serious legacy problem in philosophy and in Buddhism. Ontological dualism pervades our language and often our thinking. Some varieties of monist ontologies are based on cut-down dualism rather than genuinely transcending dualities. We can posit no ontic mind/body (i.e. mental/physical) distinction, however useful an epistemic distinction might be. In Buddhism we have descriptions designed for one domain routinely applied across the board; and we have the false axiom that existence = permanence. Buddhism also employs metaphysical reductionism.

The beauty of this combined approach is that we can have our cake and eat it. We can fully acknowledge and embrace Naturalism without thereby capitulating to metaphysical reductionism or metaphysical determinism. The laws of physics are never broken, but they allow for a vastly wider scope than is admitted by reductionists or determinists. Nor are we forced to deny the existence of such interesting human qualities as freewill, purpose, or consciousness. We can allow for novelty, complexity, and even mystery without opening the door to supernaturalism, teleology, vitalism or any of the other problematic non-scientific explanations of events. 

If we adopt this as the background against which Buddhism has to make sense, then it certainly places limits on which aspects of the tradition we can retain and which we cannot. But it leaves open possibilities for genuine knowledge obtained through our practices, with the caveat that not perceiving a subjective/objective distinction does not mean that such a distinction does not exist.

However there is an important area of description to address before attempting a synthesis and this is causality, which will be the subject of the fourth instalment of this essay.

~~oOo~~

22 July 2016

A Layered Approach to Reality. Part II.1


What I aim to do in the rest of this essay is make some observations and generalisations about the relations between levels of description. A "description" is a way of talking about reality, though I have scientific theories at the forefront of my mind. The focus is on description rather than reality itself, though my view is that our descriptions are based on objective knowledge that we have inferred about a mind-independent, though entirely natural, reality through critical observation and comparing notes. What I call Collective Empirical Realism is similar in some ways to Object-Oriented Ontology. It seems that Realism is in the process of making a comeback in academic philosophy (for what it is worth). In this sense, one could read this essay as an attempt at a naturalist ontology, though that reading is not compulsory. The levels of description seem to me to reflect the way that reality is organised into hierarchical levels of structure. I don't say this with any metaphysical certainty, but I think the picture is accurate.

This idea of a layered hierarchy of structure based on a fundamental substance seems to me to be far more productive than either reductionism or antireductionism considered on their own. The problem being that reductionism tries to ignore structure; while antireductionism tries to ignore substance. Acknowledging both allows for a much more interesting discussion.

While I think the ontological implications of this approach are natural and intuitive, we are on much safer ground discussing the epistemology and methodology of this layered hierarchical approach to the world. In two parts I will now outline some properties of hierarchical layers of description and draw out corollaries that I think are important for how we understand ourselves and our world.


~ Key Points About Levels of Description ~


Part 1.
  • Descriptions of the universe typically apply within a few orders of magnitude and break down as scale changes in either direction, requiring new descriptions.
  • This happens because structures are real.
  • Hence, descriptions of the universe are necessarily layered and there is a hierarchy of sciences.
  • Levels of description are mostly autonomous.
  • What applies to one layer does not necessarily apply to another.
  • Therefore, we cannot collapse levels of description, even in principle.
  • The higher the level of description a scientist is working at, the less interest they have in structure reductionism.
Part 2.
  • Lower level descriptions are more general and apply more widely to the universe as a whole; higher level descriptions are more specific and apply more narrowly to subsets of the universe.
  • Lower level descriptions are more susceptible to being described in mathematics; higher level descriptions require the use of narrative.
  • Lower level descriptions can only produce generalisations about higher levels of description, not complete descriptions.
  • The further apart levels are, the more general the generalisations produced.
  • If structure reductionism works at all, it becomes less plausible or applicable the further up the hierarchy one is working.
  • Higher level descriptions usually say nothing about lower levels.
  • A lower level description cannot specify or anticipate an autonomous higher level property. 
  • Properties, such as determinism, do not propagate upwards through layers of description.
  • Arguments about freewill, values, morality or other high level properties based on fundamental physics are incoherent because they use the language of a different level.
  • Our approach to "levels" is anthropocentric.
  • We are still stumped by how mind fits into this picture, which means that any conclusion about the ontology of mind is premature.

~ Commentary ~

  • Descriptions of the universe typically apply within a few orders of magnitude and break down as scale changes in either direction, requiring new descriptions.
Scale is vitally important in any account of the world, precisely because of the discontinuities that it causes in our descriptions. The observable universe covers about 60 orders of magnitude from the Planck length to the furthest galaxy. In energy, the range is over 100 orders of magnitude.
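As a rough check on that figure, here is a one-line calculation, assuming the standard values for the Planck length (~1.6 × 10^-35 m) and the radius of the observable universe (~4.4 × 10^26 m).

    # How many orders of magnitude separate the Planck length from the observable universe?
    import math

    planck_length = 1.616e-35     # metres
    observable_radius = 4.4e26    # metres, roughly 46 billion light years

    print(round(math.log10(observable_radius / planck_length)))   # ~61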

The incompleteness of a physical theory is almost always brought to light by extending the scales of reality that can be observed (i.e. scales of mass, length, and energy). The inventions of the telescope and the microscope, at roughly the same time in the early 17th Century, both produced revolutions in our understanding of the universe at scales beyond which our human senses work. Increasingly sophisticated versions of each have brought us to a number of watersheds that changed our view of the world we live in and our place in it. New discoveries about the mechanics of the universe are still being made at the extremes: distant galaxies and very high energy particle collisions.

The importance of scale is overlooked by many people searching for similarities between traditional Buddhist knowledge and science knowledge. The knowledge gained from doing Buddhist practices is psychological and thus high level. Importantly this means it is completely unrelated to the science of substance or any of the science of structure at the level of biology and below. For example, there is no connection, and can be no connection, between traditional Buddhism and quantum mechanics. Buddhist theories about living things are also unrelated to modern theories of biology. However, we may find correlations with higher-level scientific theories from the domains of psychology and sociology. Worse, many Buddhist doctrines are metaphysical speculations, i.e. things we believe to be true, but for which there is and can be no evidence. Like our supposed theory of causation, which I will deal with in a separate essay. There is no way to correlate speculative metaphysical doctrines from Medieval India with modern scientific knowledge. Myths that might have served a purpose in the Iron Age do not necessarily serve that purpose now. Times change.

  • This discontinuity happens because structures are real.
The substance of the universe is a relatively straightforward problem. The world is made of fields. But matter is also made into things. When matter is made into things, those things have properties that cannot necessarily be explained in terms of the lower level properties. In many cases, complex wholes (or systems) are greater than the sum of their parts. And importantly, many structures bestow an apparent causal potential on complex objects. So reducing everything down to the simplest level is not a way to explain the world, because important aspects of the world are left out of reductionist accounts. Particularly life and consciousness. 

Allowing for real structure alongside real substance makes for a much richer view of the world without some of the problems of substance antireduction (dualism, pluralism) or metaphysical reductionism. It also explains why we need levels of description and allows for descriptions to be autonomous: we can talk about chemistry in terms of atoms without having to reduce everything to quantum fields. Being sensitive to scale, we can treat atoms as real on the atomic scale, not as substance, but as complex objects with a real structure. 

The reality of structure is where I disagree with metaphysical reductionists and probably with most Buddhists. Buddhists labour under two misapprehensions:
  1. Existence = permanence. This axiom of Buddhist doctrine is demonstrably false and has confused Buddhists for centuries (see Buddhism and Existence); and 
  2. Metaphysical reductionism; the whole is only the sum of its parts. This is apparently required to eliminate any essence (ātman), but it eliminates any structure as well. 
The trouble seems to be the supernatural. If we eliminate the supernatural, we are left with the natural world in flux, but with persistent (though not permanent) real structures that in no way support the idea of a soul. Indeed, the Naturalist critique of the soul is far more powerful and comprehensive than the Buddhist critique.

I also disagree with Sean Carroll on this issue. Carroll's Poetic Naturalism has many attractive features and I largely agree with him. However he makes an awkward distinction between what he calls weak emergence and strong emergence. These labels more or less align with epistemic structure antireductionism and ontological structure antireductionism. Carroll is enthusiastic about epistemic structure antireductionism. The "poetic" part of Poetic Naturalism refers to the fact that our various descriptions are "stories" that we tell about the universe. Stories about the world apply to layers that are autonomous, with the caveat that they cannot break the laws of physics. 
"Something is 'real' if it plays an essential role in some particular story of reality that, as far as we can tell, provides an accurate description of the world within its domain of applicability." (Carroll 2016: 111). 
That is to say that for Carroll a "story" is an epistemological statement related to what we know about the universe and how we express it. However, he also contrasts "real" with "fundamentally real", which shows that his ontological commitment is to reductionism.

In Carroll's discussion of what he calls "strong emergence", he suggests it is characterised by downward causation: the whole affecting the parts. But ontological structural antireductionism does not seem to require downward causation. It does require two things: firstly, that the whole is greater than the sum of its parts, i.e. higher level properties are real in an ontic sense rather than an epistemic sense; and secondly that complex objects are causally potent on their own level. The molecule need not affect the behaviour of its own atoms, but it surely does affect the behaviour of other molecules.

It is difficult to pin Carroll down on this point because he uses "ontology" synonymously with "story" or "description", but he defines it in epistemological terms as a form of knowledge. Ontology ought to be used more strictly to refer to what exists, not to stories about what exists, which is the domain of epistemology. His ontology is that quantum fields are fundamentally real. So although he cheerfully admits to the reality of atoms, for example, he clearly believes that they are not fundamentally real. In other words he does not accept the fundamental reality of atoms, but he does accept the necessity of descriptions which contain atoms. This is fine as far as it goes, but I think we can go further.

  • Hence, descriptions of the universe are necessarily layered and there is a hierarchy of sciences.
This approach to describing reality has inherent discontinuities related to scale, which seem to be related to discontinuities in reality caused by structuring. That is to say that because structures can have unique properties, we find that we are forced to develop level-specific descriptions. These descriptions are autonomous, in that they do not depend on lower levels, though they cannot contradict them or break the fundamental laws of physics.

So, for example, we use physics to describe the structure of the atom; but chemistry to describe the behaviour of the various atoms interacting; biology to describe matter organised into living cells; and psychology to describe the functioning of our minds. Even if we were to stipulate that it is possible to use lower level descriptions, it is almost never practical. Low level descriptions take on enormous unwieldy complexity when applied to higher levels. This renders them useless in practice, and arguably useless in principle as well. Quantum field theory is utterly useless in the field of psychology. Atomic theory is useless in sociology.

For all that metaphysical reductionism dominates public discourse on science, our universities have departments of chemistry, biochemistry, geology, microbiology, zoology and so on. And this is unlikely to change. 

In other words, despite the rhetoric associated with metaphysical reductionism, in practice science is not reduced to physics. Science is mostly about higher level, non-fundamental patterns. The typical scientist is not a physicist and probably not a metaphysical reductionist. All the different sciences and many of the subdivisions produce descriptions of the world that are domain or scale specific and deal with real phenomena.

The argument over whether this epistemology accurately reflects an ontology (whether what we talk about as real is in fact real) is far from settled and may never find a consensus, though for the purposes of discussing the dynamics of levels of description it does not matter how the mapping of description onto reality works. Though of course we can say that it does work; often to a level of accuracy and precision that is set by the limits of our ability to measure things or the apparent limits of the physical universe. 

  • Levels of description are mostly autonomous.
We can use the description of a volume of hydrogen gas as an example. We can describe this using different levels of description. We might opt for a fine grained (lower level) description, by specifying the number of hydrogen molecules and the position and velocity of each. This would be a demanding computation, but would accurately describe the gas and its behaviour. Or we might opt for a coarse grained (higher level) description and treat the gas as a fluid by specifying the density, temperature, and pressure of the gas. This can be done with a few simple equations known as the gas laws.

If we choose the coarse grained description we don't have to reference individual hydrogen molecules at all. All fluids behave the same way. If we do choose a coarse grained description, we do so without reference to a fine grained description. If we say that gas is characterised by density, temperature, and pressure, these are high level concepts that need not reference lower level descriptions such as the number of molecules or their individual velocities.

This example of a volume of gas is one that Sean Carroll chooses in The Big Picture (2016). It has what turns out to be an unusual feature. One level maps directly onto another: the number of molecules in the volume of gas corresponds to the density; the mean kinetic energy of the molecules corresponds to the temperature. Carroll probably chose this example because the coarse-grained model is reducible to the fine-grained model. It allows him to demonstrate the value of higher level descriptions while still subscribing to a fundamental reality.
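For readers who want the mapping spelled out, here is a minimal sketch for an ideal monatomic gas: the standard kinetic theory relations (3/2)kT = (1/2)m⟨v²⟩ and P = nkT take us from fine-grained facts (how many molecules there are and how fast they move) to the coarse-grained variables (density, temperature, pressure). The particular numbers, helium in a one litre box, are illustrative assumptions.

    # Micro-to-macro mapping for an ideal monatomic gas (illustrative numbers).
    K_B = 1.380649e-23        # Boltzmann constant, J/K
    MASS_HE = 6.646e-27       # mass of a helium atom, kg

    n_molecules = 2.7e22      # molecules in the box
    volume = 1e-3             # box volume in m^3 (one litre)
    mean_sq_speed = 1.87e6    # <v^2> in m^2/s^2 (illustrative)

    number_density = n_molecules / volume
    temperature = MASS_HE * mean_sq_speed / (3 * K_B)     # (3/2) k T = (1/2) m <v^2>
    pressure = number_density * K_B * temperature         # ideal gas law: P = n k T

    print(f"T ~ {temperature:.0f} K, P ~ {pressure/1000:.0f} kPa")   # ~300 K, ~112 kPa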

However, in most cases it is not possible to see how the descriptions at different levels are related. Levels do not typically map onto each other so well (and it seems more likely to happen at lower levels than at higher levels). We can describe an organism, for example, from a (relatively coarse grained/higher level) biological perspective or a (relatively fine grained/lower level) chemistry perspective, but we cannot simply map one level onto the other in this case. There is no equivalent of the rule that says that the mean kinetic energy of the gas molecules equates to the temperature of a volume of gas considered as a whole. This is partly because complexity increases as we go up the levels, structure is built upon structure, and complexity of elements promotes complexity in relations between elements and between levels involving complex compounds.

A corollary of this is that no one theory of emergence is going to describe how emergent properties manifest across levels. Scale affects emergence as well.

  • What applies to one layer does not necessarily apply to another.
The lack of transitivity is most striking as we move from sub-atomic particles to bulk matter. Quantum mechanics very precisely tells us about properties and behaviour at the sub-atomic level. And it does this in terms of the probability that a vibration in a field will have a particular value for a variety of parameters (position, momentum, spin, energy, and so on). Electrons, for example, do not "orbit" a nucleus, but form a cloud of probable locations. If we "look" at any given time, i.e. if the electron interacts with another particle, we will "see" the electron at one location. And because of the Uncertainty Principle, the more precisely we specify the position, the less we know about its momentum (i.e. speed and direction of movement). The process of interaction involves the exchange of a virtual particle representing the expression of a force, and application of force (which includes any observation) changes the value of the electron's parameters.

Objects visible to the human eye do not behave like subatomic particles. At all. A baseball being pitched towards a batter does not spread out, take all possible paths to the plate, and adopt a location only when the batter swings (that would make it a very different game indeed!). Nor does looking at the baseball change its speed, direction, or spin. The motion of a baseball is classical. We can precisely know both the position and momentum of the baseball at any time. Bulk matter does not obey quantum mechanics, and sub-atomic particles do not obey classical mechanics. To be sure there is a transition area. Scientists have forced a fairly large single molecule (a 60 carbon atom "bucky ball") to behave like a quantum wave; but this is at the limit and requires a great deal of tinkering with conditions. No object visible to the human eye will ever behave like a sub-atomic particle. In which case, describing a baseball in terms of quantum mechanics is pointless.
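A quick calculation shows why. The de Broglie wavelength, λ = h/(mv), sets the scale at which wave-like behaviour becomes relevant; the baseball figures below (145 g at about 40 m/s) are illustrative assumptions.

    # Compare de Broglie wavelengths: lambda = h / (m * v)
    H = 6.626e-34              # Planck constant, J*s
    M_ELECTRON = 9.109e-31     # electron mass, kg

    def de_broglie(mass_kg, speed_ms):
        return H / (mass_kg * speed_ms)

    print(f"electron: {de_broglie(M_ELECTRON, 2.2e6):.1e} m")  # ~3e-10 m, atomic scale
    print(f"baseball: {de_broglie(0.145, 40.0):.1e} m")        # ~1e-34 m, absurdly small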

But something like this is true of all the transitions between levels. Molecules, on the whole do not behave like atoms. Organisms do not behave like molecules. Self-conscious organisms do not behave like non-sentient organisms. And so on. The fact that we require different descriptions is not something we arbitrarily impose on reality in order to make it more manageable. Levels necessarily emerge from the patterns of behaviour of the world on different scales and impose different descriptions on us.

A corollary of this is that any Grand Unified Theory of the universe will be practically useless. The first level of emergent structure will cause it to break.

  • Therefore, we cannot collapse levels of description, even in principle.
To be clear, I accept that we can always reduce substance from any level down to a more fundamental level. The world is made of fields. However, what we see when we look at the world depends on the scale we look at; at any given scale, the organisation of those fields makes the world look different.

Minimally, epistemological antireductionism agrees that even if we can reduce everything in principle, that in practice the computations are so difficult as to be impractical for any computer less complex than the universe itself. Ontological antireductionism denies that reductionism works, even in principle; it would say that the organism, for example, is greater than the sum of its parts and must always be studied as an organism. 

As far as I can see, we have to accept that levels of description will always be necessary, even if we argue that our descriptions don't map onto reality. But in my view, the levels of description are imposed on us by reality and so reduction, even in principle, doesn't work for structure. 

  • The higher the level of description a scientist is working at, the less interest they have in structure reductionism.
The obsession with reductionism is very much a thing for physicists and neuroscientists. When Professor Brian Cox is talking on his BBC radio show, The Infinite Monkey Cage, he often says things like, "It's all just physics". One can hear the other scientist guests on the show wince at this.
Since physicists work in the lower regions of the hierarchy they tend not to see the really difficult cases of emergence that falsify metaphysical reductionism.

Biologists, by contrast, are seldom interested in reductionism because organisms, once reduced, become very much less interesting.

The real surprise is that so many neuroscientists, working at the interface of biology and psychology, are hardcore metaphysical reductionists. A good number of them, for example, deny that there is any such thing as consciousness. They deny that we have subjective experiences, an inner life, or any kind of mental state. No hope, joy, fear or any of that. Understanding why this is so will prove the rule (in the old sense of testing it).

The difficulty is three-fold. Firstly, few scientists distinguish structure from substance in a useful way. The reductive program in physics has been incredibly successful and it would be irrational to reject it. But without a proper distinction between substance and structure, they end up adopting reductionism across the board. It becomes a metaphysical stance. Even if this makes their attitude to structure irrational (which it does), the powerful logic of substance reductionism overwhelms any other concern. This is a good example of a powerful belief changing the salience of other information, particularly counter-factual information (as outlined in my theory of religious beliefs over a number of essays).

Secondly, consciousness is currently an intractable problem. When your career depends on saying "I know", admitting that "I don't know" is career suicide. No one ever gets published explaining that the problem is too difficult for them. Better to publish nonsense and survive, than to not publish and perish. So academia has produced a whole raft of nonsensical theories that eliminate consciousness. And it has produced a smaller number of nonsensical theories that embrace ontological dualism or pluralism. And a bunch of theories in the middle that seem quite interesting, but which struggle to reach the evidential barrier that would make them truly plausible. Long time readers will know that I favour the explanations being developed by representationalists such as Antonio Damasio and Thomas Metzinger. These seem to me to be the most interesting avenues for exploration. In a fascinating recent development Metzinger's research group is offering 10 scholarships for PhDs in neuroscience/philosophy combined with intensive training in mindfulness meditation over three years. The idea being that the candidates will combine third person and first person investigations of their experience (The Mental Autonomy Project - applications close 15 Sept 2016).

Thirdly, the field of neuroscience is bedevilled by legacy terminology. Consciousness, for example, turns out to be largely unconscious. We mostly still talk about mind as an entity rather than a process. And despite the fact that the mainstream long ago abandoned Cartesian Dualism, we still often read about, for instance, the difficulties of explaining mental causation versus physical causation. If we are not dualists, why are we still making this distinction? The point was made, quite forcefully, by John Searle in 1992, but we seem to be stuck using the terminology developed under dualism. So of course there is confusion.

The upshot is that neuroscientists are the exception to this rule because they lack the proper tools for thinking about the problem, and the problem itself seems insoluble in the terms that it is presented. However, generally speaking biologists and psychologists go through their whole career with no reference to quantum field theory. And why would they? Such lower level descriptions have little or no contribution to make at these higher level structures - quantum mechanics simply does not apply on the macro-scale. 


~ Conclusion of Part II.1. ~

In this part of the essay I've been making some broad generalisations about layers of description and about how the layers relate to each other. Part II.2, which follows, will start to draw out some finer details about the relationships that have important consequences for how we use scientific knowledge in philosophy (and in Buddhism). It should be finished by next week (29 July).

~~oOo~~


Searle, John R. (1992). The Rediscovery of the Mind. MIT Press.

Continued by Part II.2.

15 July 2016

A Layered Approach to Reality. Part I.

This essay will come in several parts. In Part I, I outline an approach to knowledge about the world that combines substance reductionism and structure antireductionism. I try to show how this combination provides for a much richer and more realistic discussion about the nature of the world. Importantly, I outline why our descriptions of reality form a series of hierarchical layers. Whether this antireductionism applies only to descriptions of the world (epistemology) or to how things really are (ontology) is an important question (raised by Nagel 1998). On this question, I have argued for a strong correlation between scientific knowledge and reality in my previous essay, Buddhism and the Limits of Transcendental Idealism (1 Apr 2016). While we cannot know the world directly, comparing notes on experience allows us to reliably infer objective knowledge about the world, i.e. the world that exists independently of our senses or knowledge. Of a world beyond comprehension, I believe it is best to remain silent. However, most of what I will say here is about descriptions of the world, and thus mostly applies to epistemology or what we can know.

Having established that this is a valuable approach, in Part II (currently in two instalments) I outline the dynamics of this layered, hierarchical approach to talking about reality. In particular I develop a critique of those who would use the properties of low level descriptions (generally speaking physics) to comment on higher level issues such as freewill or morals. I try to show why mixing up layers leads to incoherence, so that physics cannot inform our understanding of freewill or morality, except through metaphor. The use of physics metaphors at higher levels is counter-indicated by the naive tendency to relentlessly reify all such metaphors.

Finally, as a separate issue, I will tackle how causation fits into this model as an emergent property. Causation is absent from fundamental descriptions of the physical world, which simply describe a world evolving according to certain patterns. David Hume's conclusion that we can observe sequences of events, but not causation as a specific kind of event, still holds true. However, as an emergent property causation is still important, if only because the way we come to understand causation is via acts of will and it therefore informs our definitions of freewill.

The position I'm going to outline in this essay draws very heavily on the book Analysis and the Fullness of Reality by writer, translator, and philosopher, Richard H. Jones (2013). However, I also draw on lectures and books by physicist and science communicator, Sean Carroll (2010a, 2010b, 2012, 2013a, 2013b, 2013c, 2016a, and 2016b). My interest in this subject is lifelong, but a recent turning point was reading a series of blog posts on The Brains Blog by William Jaworski (2016b, 2016c, 2016d, 2016e) which discuss the thesis of his book, Structure and the Metaphysics of Mind (2016a). It was Jaworski that serendipitously led me to Jones; and Jones that unlocked the issue that I have been thinking about in one way or another for more than forty years now. Some essays on mechanistic models of cognition that appeared on Notes from Two Scientific Psychologists (2016a, 2016b, 2016c, 2016d, 2016e) have given me food for thought while considering the relations between layers.

On the whole I am not concerned here with ancient and traditional theories of the world, but with how we see the world in the 21st Century. However, having defined a kind of philosophical "space", I will mention how traditional Buddhist descriptions of the world fit into this space.

In this essay I take the terms "reality", "world", and "universe" to be synonymous and interchangeable. 


Levels of Description.

The idea that our descriptions of the universe are affected by scale can probably be traced to the inventions of the telescope and microscope in the early 17th Century. These two tools gave us the ability to see the world on different scales for the first time. This ignores myths and other speculations, which are not descriptions of the world per se, but what some people thought the world might be like. They were the science fiction of their day. The insights about scale began to be formalised in the 19th Century. In The Philosophical Considerations on the Sciences and the Scientists (1825), Auguste Comte arranged the branches of natural philosophy into a hierarchy based on relative levels of generality and complexity. In modern terms he saw the hierarchy (bottom to top) as physics, chemistry, biology, psychology, and sociology. At the time, this hierarchy was widely considered to reflect reality, in the sense that each science produced accurate knowledge of the world.

In 1843 John Stuart Mill, in his System of Logic (see Mill 1868), described two distinct types of causation: additive causes and combinatory causes. These are my terms for what he called homopathic and heteropathic causes respectively. Additive causes produce effects on the same level of the hierarchy, characterised by aggregation of existing properties and epitomised by the additive quality of forces. By contrast, combinatory causes produce effects (at least) one level up, characterised by novel combinations that are more than the sum of their parts and epitomised by atoms combining to form molecules with new properties. In 1875 George Lewes referred to the effects of combinatory causes as emergent. This idea of causes having emergent effects is central to modern philosophy of science, and yet there is little consensus on what emergent means, nor on the consequences of accepting emergence as a feature of reality. As we will see, some scientists and philosophers still reject the notion of emergence, while for others it is the only way to understand the world.

European natural philosophers had inherited a simple but rich ontology of four elements: earth, water, fire, and air. Everything in the universe was made of these four elements combining in various ways. They also had a bias towards the view that everything has a cause, sometimes called the Principle of Sufficient Reason (after Leibniz and Spinoza). The principle of sufficient reason survives in a raft of trite aphorisms of the kind, "Everything happens for a reason". The ancients had at various times sought to reduce these four elements to one, seeking a "first cause", but no theory of a single, original substance ever achieved consensus. The failure was inevitable, but instructive for us. It highlights a deep desire for simplicity and unity that continues to guide our search for knowledge. A simple universe is a predictable universe; a predictable universe is a survivable universe. So arguably we evolved to find simple explanations that worked (or rules of thumb). Ideally we would like everything to be reduced to a single cause. One of the enduring appeals of monotheism and a reason that monotheism dominates the world of religion is that it gives us precisely that: a single, first cause and a reason for everything.

Something similar happened in Buddhist India. Early Buddhists, possibly influenced by proto-Nyāya philosophy, adopted a simple but rich ontology with four or six elements, though the elements in this case were qualities or processes rather than substances. They also adopted a model of mental functioning, i.e. pratītya-samutpāda or conditioned co-production, that was then applied to karma and rebirth, and to the physical world. This is not a theory of causation, since causation per se is never discussed, but a theory of when causation occurs, i.e. effects arise only in the presence of the necessary and sufficient conditions. How they arise is unclear, which is a major problem for modern Buddhists trying to defend ancient doctrines. The various Abhidharma projects introduced different roles for conditions in the causation process, though these seem to have been aimed at preserving the doctrine of karma in the light of a conflict between it and dependent arising, i.e. the problem of action at a temporal distance (See for example: Action at a Temporal Distance in the Theravāda. 22 Aug 2014)

Back in Europe, as elemental theory gave way to modern atomic theory, it started to look like we would require a very rich fundamental ontology, with dozens of kinds of atom (the chemical elements) replacing the four elements. In other words, the quest for simplicity or unity seemed to be going in the wrong direction. But the atom was not, as the name suggests, indivisible. As natural philosophers, now called "scientists", began to unravel first the atom and then the atomic nucleus, a new sparse ontology beckoned. The discovery that all atoms are made of identical electrons, neutrons, and protons structured in different ways once again pointed to a sparse ontology, if not a unity. As high-energy physics progressed and other particles began to be discovered, we were once again faced with an expanding ontology.

At around the same time as matter was being dissected, the four fundamental forces were being identified. Later, scientists began to find ways of conceptually unifying the fundamental forces. Then energy and matter were conceptually unified, and finally forces were understood as an exchange of particles. Again the quest was for simplicity and unity. Efforts towards unity have been disrupted by the discovery that there is more mass in the universe than we can see (dark matter) and that something is causing galaxies to accelerate away from each other (dark energy, though this may be the vacuum energy).

Currently we think of "science" as describing an ontological monism in which there is one kind of stuff that has a matter/energy duality when seen at certain levels. The most plausible theory is that the world is made up from a relatively large number of quantum fields (one for each type of particle in the standard model) whose vibrations and interactions make up the world. Most physicists would argue that quantum fields are a single class of stuff with different varieties, rather than a raft of different kinds of stuff, so that science counts as a substance monism. The standard models are as yet incomplete at the extreme ends of the scales of mass, energy, and length, though the dynamics of the middle ground of scale, occupied by humans, are mapped out in principle. In terms of substance we are made of atoms, whose behaviour at everyday scales is governed almost entirely by gravity and electromagnetism, with the nuclear forces confined to holding nuclei together and to radioactive decay.

Popular histories of science often highlight this move towards the fundamental and the efforts of scientists towards unifying descriptions of the substance of the universe. However, throughout the history of science, including the present, lawful or law-like behaviour has been described at higher levels. In fact these high-level descriptions and methods make up the bulk of what contemporary scientists are concerned with. A few examples are: the gas laws relating temperature, pressure, and volume of gases (of which Boyle's law is the earliest); Ohm's law relating voltage, resistance, and current; the laws of fluid mechanics; the periodic table of elements; evolution by natural selection; and the identification and classification of living things (taxonomy). All these examples are attempts to describe high-level properties without first reducing them to fundamental physics. Most of science happens at these non-fundamental levels. Thus we need to re-evaluate the idea that hardcore reductionism is the acme of all science. It is not.

We can think of a "level" of description as a Lakoffian category. George Lakoff defines a category as based on resemblance to a prototype. With minimal resemblance, membership of a category might be marginal and any given complex item might be a member of more than one category. The categories that seem natural or intuitive to human beings tend to fall in the middle of hierarchies of taxonomy, i.e. they are fairly general, not too specific, but not all inclusive. This seems to be governed by how we perceive and physically interact with the world (Lakoff 1987).

Each of the main branches of science has a typical method or range of methods for interacting with the world. For example, chemistry is a distinct level because of the ways that chemists go about knowledge seeking, i.e. the analysis and synthesis of chemical compounds. Chemists explain things in terms of the interactions of atoms without reference to quarks or quantum fields. Though they work in bulk, their explanations feature idealised situations in which just enough of each kind of atom is present (i.e. abstract chemical equations like 2H2 + O2 → 2H2O). Chemists treat atoms and molecules as autonomous with respect to lower levels, and tend to ignore higher level problems (which are the provinces of biologists, geologists, materials scientists, and engineers). Treating atoms as autonomous works very well in practice. One could try to describe chemistry in terms of quantum physics, but it would be very cumbersome indeed, and it is doubtful whether it could cope with the effects of scaling up to the macro scale. Of course physics has made important contributions to our detailed knowledge of how individual atoms react with each other and the properties of molecules, but when the scale is micrograms to kilograms, chemistry is not only accurate and precise, but very much simpler than quantum mechanics. The further up the hierarchy we go, the more cumbersome and unwieldy mathematics and physics become as descriptive modes.
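As a rough indication of just how cumbersome, consider the non-relativistic Hamiltonian that a quantum description of even a single small molecule has to start from, with electrons indexed i, j and nuclei indexed A, B:

\[
\hat{H} = -\sum_{i}\frac{\hbar^{2}}{2m_{e}}\nabla_{i}^{2}
          -\sum_{A}\frac{\hbar^{2}}{2M_{A}}\nabla_{A}^{2}
          -\sum_{i,A}\frac{Z_{A}e^{2}}{4\pi\varepsilon_{0}\,r_{iA}}
          +\sum_{i<j}\frac{e^{2}}{4\pi\varepsilon_{0}\,r_{ij}}
          +\sum_{A<B}\frac{Z_{A}Z_{B}e^{2}}{4\pi\varepsilon_{0}\,R_{AB}}
\]

The terms are the kinetic energies of the electrons and nuclei, followed by electron-nucleus attraction, electron-electron repulsion, and nucleus-nucleus repulsion. For one molecule of water this already couples 13 particles (10 electrons and 3 nuclei) and has no exact solution; the chemist simply writes H2O, which is precisely why the chemical level of description is so much more tractable.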


Reductionism

The approach of breaking things down and explaining them solely in terms of the properties of a lower (and preferably the lowest) level is called reductionism. Reductionism has had its successes in the investigation of what things are made of. And this success has prompted many scientists who sought fundamental laws to assert that everything would one day be reduced to a single theory of everything. They acknowledge that the universe is complex, but assert that it is an ultimately comprehensible mechanism made from identifiable components, with identifiable properties, taking part in identifiable processes. Where something is not obviously reducible, or even obviously not reducible, some scientists and philosophers claim that it is still reducible in principle. This commitment to reductionism in principle I refer to as metaphysical reductionism. It is a commitment to reductionism as a metaphysical principle, i.e. a principle which is held to be true, but which is not accessible to empiricism and cannot therefore be tested.

The proponent of metaphysical reductionism is unconcerned by the lack of apparent progress on intractable problems, the absence of tested hypotheses, or the lack of any viable method for achieving reduction in practice. They have faith in reductionism as a metaphysical principle. So even if individual cases (such as the mind) resist all efforts at reduction, there is no reason to doubt that, at some point in the future, the reduction will be achieved. Or else, ignoring all evidence to the contrary, they claim that the reduction has already taken place. If this fails, then the metaphysical reductionist can fall back on the argument that the entity in question doesn't exist (this strategy is known as eliminativism). If the mind, for example, is an illusion, then it does not need to be reduced to a lower level description, because it has been reduced to nothing.

Metaphysical reductionism is similar, in many respects, to a religious commitment. It distorts the field of salience around ideas in much the same way that religious beliefs do: counterfactual information, if it is acknowledged at all, is simply dismissed as not-salient to the issue at hand. Presenting the metaphysical reductionist with contradictory evidence may even reinforce their belief as it does with religieux. The existence of structures with irreducible features, which ought to falsify metaphysical reductionism, is dismissed because the commitment to reduction in principle overrides the salience of any counterfactual information. So metaphysical reductionism is not a belief that one can simply talk a person out of.

One of the problems with metaphysical reductionism is a tacit simplifying assumption. How the universe works, according to Quantum Field Theory, is that the whole universe is in a prior state, all the laws of physics apply, and the universe evolves into a subsequent state according to the pattern outlined by the laws of physics. The laws apply to the whole universe at once with everything interacting with everything else. Reductionists make the simplifying assumption that we can sensibly talk about single particles without reference to any other particles or fields. We can for example solve the wave equation for a single particle, or even for a single hydrogen atom, which tells us what is most likely to happen next for any given state that the particle or atom is in. Although this can allow us to approximate results for a subset of reality, it is not realistic to treat particles in isolation because particles and atoms interact.
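To make the single-particle case concrete: for the one electron of a hydrogen atom, the time-independent Schrödinger equation

\[
-\frac{\hbar^{2}}{2m_{e}}\nabla^{2}\psi - \frac{e^{2}}{4\pi\varepsilon_{0}r}\,\psi = E\,\psi
\]

can be solved exactly, giving the familiar energy levels E_n ≈ -13.6 eV / n². Add just one more electron (helium) and there is no exact solution; add the roughly 10^20 interacting particles of a speck of bulk matter and solving the equations directly is unthinkable, which is exactly where the simplifying assumption does its work.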

To be able to reduce things in principle means that we would have to solve the equations for the whole universe obeying all the laws of nature (including the laws we don't yet know). A computer powerful enough to do this computation for our universe would look exactly like our universe. For all intents and purposes it would be our universe. Reductionists forget that they have made the simplifying assumption without which their theory would not be practical. They forget that scale matters. Just because it is possible to use approximations like the wave function of a single particle does not automatically mean that the theory scales up from individual particles to structures. It's not at all clear that QFT does practically scale up to the entire universe, but we know for certain that, as currently formulated, it does not account for gravity.

In previous essays I've noted that Buddhism, though philosophically pluralistic, tends towards metaphysical reductionism as well: reducing phenomena to dharmas; beings to their skandhas; the world to the four great elements (mahābhūtāḥ); and so on. Early Buddhists explicitly argued, for example, against collocations of skandhas being considered any more than the sum of their parts. The "being" cannot be found in any skandha individually, nor in the skandhas collectively. Beings are merely aggregates of components with no properties that are not attributable to their parts. This is partly because Buddhists equated existence with permanence. Buddhist "components" are qualities and processes rather than substances, which might offer a slightly more sophisticated view of reality were it not for the criterion of permanence and the saturation of Buddhism with mysticism with respect to qualities and processes. Even though the Abhidharma multiplied the number of fundamental dharmas, making the ontology pluralistic, they still saw analysis as the only method for seeing things as they really are. Later, a more holistic approach to observing the mind would emerge, variously called mahāmudra, dzogchen, etc., though this is still combined with reductive theories of the mind as having an essence to which it can be reduced via these techniques. The fact that Buddhism tends to metaphysical reductionism is ironic given how hostile some Buddhists are to the reductionism of physicists. This is just one of many unacknowledged antinomies in Buddhism. A further irony is the tendency to reduce all science to "materialism".

Reductionism has been very successful in helping us to understand what the world is made of. However, what things are made of is not the end of ontology; there is also what things are made into. And this leads us to consider the opposite of reductionism.


Antireductionism

John Stuart Mill's prime example of combinatory causes was the way that atoms combine to create molecules with unique properties, i.e. where the properties of the molecule are not simply the additive properties of the atoms that make them up. In his example, sodium-chloride (NaCl, i.e. table salt), which forms white crystals, is nothing like either of its components: the caustic, yellow-green gas chlorine and the soft, reactive, silvery metal sodium. Sodium-chloride must be studied as sodium-chloride, not as a mixture of chlorine and sodium.* We have to consider that sodium-chloride exists as a compound in its own right, not simply as an aggregate or mixture of chlorine and sodium. Sodium-chloride has new properties that are not the sum of the properties of sodium and chlorine atoms. Sodium-chloride acts like sodium-chloride, not like a mixture of sodium and chlorine. Thus, following Jones (2013) and Carroll (2016a), we can say that sodium-chloride is real: it exists and cannot be reduced to its elements. We can now explain these features in terms of the combined wave functions of the elements, but this does not remove sodium-chloride from the picture. An explanatory or descriptive reduction is not the same as a substance reduction. Even explained in terms of QFT, sodium-chloride is still real.
* Chemistry in 1843 was considerably less sophisticated than it is now. I'm maintaining the simplification because it still illustrates the principle. Dissolved in water sodium-chloride dissociates into sodium (Na+) and chloride (Cl-) ions, but as a substance we still have to think of sodium-chloride as a compound rather than a mixture in order to understand its properties. This basic distinction is still fundamental to chemistry.
Where living things are concerned, an organism can be dissected and analysed, but in order to be understood, it must be studied as a living whole. Organisms, like sodium-chloride, are real and irreducible. This counter to reductionism is sometimes called emergentism, but I will refer to it as antireductionism. Antireductionism is as important in the history of science as reductionism is, but because some prominent scientists have adopted reductionism as a metaphysical position, the importance of antireductionism is overlooked. When it comes down to it, most scientists are actually working in an antireductionist framework.

In his book The Big Picture Sean Carroll observes that:
“It’s not possible to specify the state of a system by listing the state of its subsystems individually. We have to look at the system as a whole, because different parts of the system can be entangled with one another.” (2016: 100. Emphasis added)
It seems that we are forced into antireductionism. Entanglement means that systems of particles behave differently from a collection of unentangled particles. Here entanglement is not some new substance, or some new fundamental force, but a feature of how particles interact and form systems. All entanglement adds to the system is structure. And this feature of systems seems to hold at all levels: parts are capable of interacting in ways that force us to consider systems as a whole rather than simply adding up their parts.
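A textbook illustration of what Carroll is pointing at: two particles (qubits) prepared in the entangled Bell state

\[
|\Phi^{+}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|00\rangle + |11\rangle\bigr)
\]

cannot be written as a product of two single-particle states. Each particle considered on its own is in a completely indeterminate (maximally mixed) state; everything definite about the pair resides in the correlation between the two, i.e. in the structure of the system rather than in its parts.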

Just to be clear, the result of this structure antireductionism is not to deny the fundamental reality of the stuff from which the universe is made. The universe is composed of fields. But it is composed into structures whose properties, particularly causal properties, cannot be fully explained in terms of fundamental theories. Ultimately sodium-chloride is fields, but it is also a white crystalline substance. It is the structures that we are saying are real. No new fundamental substance has come into existence. But one cannot season food with sodium metal and chlorine gas.

If we are to understand reality then we cannot limit our study to what things are made of, but must extend it to what things are made into. And the stuff that matter is made into can be made into other stuff that has still more unique properties. In fact the focus of most of science is at this end of things. This is why our universities have departments of chemistry, geology, geochemistry, biology, biochemistry, genetics, microbiology, and so on. In practice, science does not reduce to physics. And a sizeable majority of scientists have no interest in reducing science to physics, but operate entirely at higher levels. Such scientists certainly use analysis of parts as a tool for understanding a complex whole, but the fact that an object can be analysed as parts, does not mean that it can be reduced to parts. This is particularly true of organisms. Dissecting a frog does give us insights into its physiology, but in order to understand the frog we have to see it alive, going through its life-cycle, performing all its functions. No amount of dissection or physics will give us these insights. But more than this we have to see the frog in its ecological niche, in a network of relationships with all the other species in its environment. Even in principle we will always need to consider some structures, such as organisms, as irreducibly real.

This leads to the important conclusion that science does not reduce to physics, either in practice or in principle. Systems, wholes, and structures are inherent aspects of reality that must be accounted for in any theory of the world or philosophy. Metaphysical reductionism simply cannot explain the universe and it does not.


Conclusion

I've tried to show that metaphysical reductionism fails as a philosophy. Instead we have to take different approaches to substance and structure. Substance does ultimately boil down to fields, and each layer of substance can be understood in terms of lower levels: the world is composed of fields. But this account is incomplete, because the substance is also organised into structures that have irreducible properties - properties which are no longer apparent once we start to dissect the structure. The differences that scale makes mean that we end up with a reality that has emergent layers.

In terms of the old Buddhist simile, once we take the chariot apart it ceases to be a chariot. But when put together, the chariot performs unique functions that are not an aspect of any of the parts individually, or of the parts in a jumble. Structure makes an irreducible contribution. Assemble a chariot and hitch it to a horse and you have a devastating war machine that allowed Indo-Europeans to conquer huge amounts of territory; or a way for a farmer to move much larger amounts of produce to market. These functions are also part of reality. Similarly, although we find no soul in a dissected human being, a human being is nonetheless a structure capable of remarkable things. Buddhist thinkers were fatally hampered by the axiom that "real" meant permanent and unchanging. Once we realise that this axiom is not true or accurate, and drop it to allow for temporary or contingent existence, then we can have a more sensible discussion about the role of structure in the universe.

Metaphysical reductionism not only hampers progress in science, it hampers attempts to communicate the results of science. It seems to actively alienate parts of the audience for science (which is everyone). In a previous essay I labelled the failure to effectively communicate evolution to the general public as one of the greatest failures of science to date (The Failure to Communicate Evolution. 18 Sept 2015). Though it was not clear to me at the time I wrote that essay, one of the central causes of this failure has been the adoption of metaphysical reductionism, and the reactions to this at times aggressive stance. Proponents of metaphysical reductionism are perceived as dogmatic, pompous, and arrogant. And for good reason! A full account of the failures and wrong views of metaphysical reductionism would take me too far from my purpose in writing this. On the other hand, those who have reacted to metaphysical reductionism have tended to over-react. As Richard Feynman is supposed to have said:
"Philosophers say a great deal about what is absolutely necessary for science, and it is always, so far as one can see, rather naive, and probably wrong."
Very often those who reject metaphysical reductionism resort to caricatures of science and scientists, employing strawman arguments to the effect that all scientists are metaphysical reductionists. Ironically, for a reaction against reductionism, the idea that one can treat all scientists as metaphysical reductionists is itself a reductionist strategy! The reactions to the problem are at least as problematic as the problem itself. Neither side has the philosophical or moral high-ground, though both sides act as though they do. And thus they fail to communicate. Science may not reduce to physics, but nor should scientists be reduced to metaphysical reductionists. 

One major problem for philosophy has been the failure to explain the Hard Problem. Unfortunately, clever people tend to think that because they cannot imagine how something might emerge, nobody can. So a philosopher like David Chalmers concludes from his inability to imagine a solution to the Hard Problem in a framework of substance reductionism that it cannot be solved and that we must turn to substance dualism or, worse, panpsychism. The obvious conclusion from an inability to imagine a solution to a problem is "I don't understand", not "This cannot be understood". Certainly, the Hard Problem remains unsolved, but we need to be cautious about defining a problem as insoluble before we fully understand the problem.

Generally speaking, we can best study what the universe is made of using the methods and theories of reductionism. On the other hand, what the stuff is made into requires a different approach that is synthetic rather than analytic; oriented towards wholes rather than parts; and that acknowledges the value of levels of description; i.e. antireductionism. At least some aspects of higher levels seem to be irreducible features of reality and approaching them through metaphysical reductionism is thus counter-productive. Our descriptions of reality are discontinuous across scale. This means that different descriptions, even different kinds of description are required at different scales. Different sciences continue to be viable and productive despite not employing reductive methods or interpretations.

The rest of this essay will be concerned with the relationships between levels of description.

Part II.1 - published 22 July 2016.
Part II.2  - published 29 July 2016.
Part III.  - not yet published

~~oOo~~


Bibliography

Carroll, Sean. (2010a). The Laws Underlying The Physics of Everyday Life Are Completely Understood. Preposterous Universe [blog]. 23 Sept 2010. http://www.preposterousuniverse.com/blog/2010/09/23/the-laws-underlying-the-physics-of-everyday-life-are-completely-understood/

Carroll, Sean. (2010b). Seriously, The Laws Underlying The Physics of Everyday Life Really Are Completely Understood. Discover. 29 September 2010. http://blogs.discovermagazine.com/cosmicvariance/2010/09/29/seriously-the-laws-underlying-the-physics-of-everyday-life-really-are-completely-understood/

Carroll, Sean. (2012). The Particle at the End of the Universe. Dutton.

Carroll, Sean. (2013a). The Particle at the End of the Universe. [Address to the Royal Institution, London]. http://richannel.org/the-particle-at-the-end-of-the-universe--talk

Carroll, Sean. (2013b). Poetic Naturalism. [Address to Second Oxford Miniseries: Is ‘God’ Explanatory? 9-11 January 2013, St Anne’s College, Oxford]. https://www.youtube.com/watch?v=xv0mKsO2goA

Carroll, Sean. (2013c). The World of Everyday Experience, In One Equation. Preposterous Universe [blog]. http://www.preposterousuniverse.com/blog/2013/01/04/the-world-of-everyday-experience-in-one-equation/

Carroll, Sean. (2016a). The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. Dutton.

Carroll, Sean. (2016b). Fear Of Knowing. NPR. 15 May 2016. http://www.npr.org/sections/13.7/2016/05/15/478143589/fear-of-knowing

Jaworski, William. (2016a). Structure and the Metaphysics of Mind: How Hylomorphism Solves the Mind-Body Problem. Oxford University Press.

Jaworski, William. (2016b). The Hylomorphic Mind: Part 1. The Brains Blog. 8 May 2016. http://philosophyofbrains.com/2016/05/08/the-hylomorphic-mind-part-1.aspx

Jaworski, William. (2016c). The Hylomorphic Mind: Part 2. The Brains Blog. 9 May 2016. http://philosophyofbrains.com/2016/05/09/the-hylomorphic-mind-part-2.aspx

Jaworski, William. (2016d). Hylomorphism and Mind-Body Problems. The Brains Blog. 10 May 2016. http://philosophyofbrains.com/2016/05/10/hylomorphism-and-mind-body-problems.aspx

Jaworski, William. (2016e). Hylomorphism and Emergence. The Brains Blog. 11 May 2016. http://philosophyofbrains.com/2016/05/11/hylomorphism-and-emergence.aspx

Jones, Richard H. (2013). Analysis & the Fullness of Reality: An Introduction to Reductionism & Emergence. Jackson Square Books.

Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.

Lewes, George Henry. (1875). Problems of Life and Mind. Vol. 2. London: Kegan Paul, Trench, Trübner. https://archive.org/details/problemsoflifemi01leweiala

Mill, John Stuart. (1868). A System of Logic, Ratiocinative and Inductive; Being a Connected View of the Principles of Evidence and the Methods of Scientific Investigation. 2 Vols., 7th ed. London: Longman, Green, Reader and Dyer. [First published 1843.] https://archive.org/stream/asystemoflogic01milluoft#page/410/mode/2up/search/chemical

Nagel, Thomas. (1998). Reductionism and Antireductionism. In Gregory R. Bock and Jamie A. Goode (eds.), The Limits of Reductionism in Biology. (Novartis Foundation Symposium 213): 3–10.

Wilson, Andrew D. and Golonka, Sabrina. (2016a). Mechanisms and Models of Mechanisms (#MechanismWeek 1). Notes from Two Scientific Psychologists. 20 Jun 2016. http://psychsciencenotes.blogspot.co.uk/2016/06/mechanisms-and-models-of-mechanisms.html

Wilson, Andrew D. and Golonka, Sabrina. (2016b). Cognitive Models Are Not Mechanistic Models (#MechanismWeek 2). Notes from Two Scientific Psychologists. 21 Jun 2016. http://psychsciencenotes.blogspot.co.uk/2016/06/cognitive-models-are-not-mechanistic.html

Wilson, Andrew D. and Golonka, Sabrina. (2016c). Do Dynamic Models Explain? (#MechanismWeek 3). Notes from Two Scientific Psychologists. 22 Jun 2016. http://psychsciencenotes.blogspot.co.uk/2016/06/do-dynamic-models-explain-mechanismweek.html

Wilson, Andrew D. and Golonka, Sabrina. (2016d). Ecological Mechanisms and Models of Mechanisms (#MechanismWeek 4). Notes from Two Scientific Psychologists. 23 Jun 2016. http://psychsciencenotes.blogspot.co.uk/2016/06/ecological-mechanisms-and-models-of.html

Wilson, Andrew D. and Golonka, Sabrina. (2016e). Mechanisms for Cognitive and Behavioural Science (#MechanismWeek 5). Notes from Two Scientific Psychologists. 24 June 2016.

