
02 May 2025

Ψ-ontology and the Nature of Probability

“The wave function is real—not just a theoretical thing in abstract mathematical space.”
—Sean Carroll, Something Deeply Hidden.

Harrigan & Spekkens (2010) introduced the distinction between theories that take the quantum wave function to be real (Ψ‑ontic) and those that take it only to provide us with knowledge (Ψ‑epistemic). One needs to know that the quantum wavefunction is notated as Ψ (Greek capital Psi), which is pronounced like "sigh". So Sean Carroll's oft-stated view—"the wave function is real"—is a Ψ‑ontic approach.

Harrigan & Spekkens seem not to have foreseen the consequences of this designation, since a Ψ-ontic theory is now necessarily a Ψ-ontology, and one who proposes such a theory is a Ψ-ontologist. Sean Carroll is a great example of a Ψ-ontologist. These terms are now scattered through the philosophy of science literature.

Still, Carroll's insistence that, fundamentally, "there are only waves" is part of what sparked the questions I've been exploring lately. The problem, as I see it, is that the output of the wave function is a "probability amplitude"; or, over all possible solutions, a probability distribution. What I would have expected in any Ψ-ontology is that the Ψ-ontologist would explain, as a matter of urgency, how a probability distribution, which is fundamentally abstract and epistemic, can be reified at all. In a previous essay, I noted that this didn't seem possible to me. In this essay, I pursue this line of reasoning.


Science and Metaphysics

I got interested in science roughly 50 years ago. What interested me about science as a boy was the possibility of explaining my world. At that time, my world was frequently violent, often chaotic, and always confusing. I discovered that I could understand maths and science with ease, and they became a refuge. In retrospect, what fascinated me was not the maths, but the experimentation and the philosophy that related mathematical explanations to the world and vice versa. It was the physically based understanding that I craved.

As an adult, I finally came to see that no one has epistemic privilege when it comes to metaphysics. This means that no one has certain knowledge of "reality" or the "nature of reality". Not religieux and not scientists. Anyone claiming to have such knowledge should be subjected to the most intense scrutiny and highest levels of scepticism.

While many physicists believe that we cannot understand the nanoscale world, those few physicists and philosophers who still try to explain the reality underlying quantum physics have made numerous attempts to reify the wavefunction. Such attempts are referred to as "interpretations of quantum mechanics". And the result is a series of speculative metaphysics. If the concept of reality means anything, we ought to see valid theories converging on the same answer, with what separates them being the extra assumptions that each theory makes. After a century of being examined by elite geniuses, we not only don't have a consensus about quantum reality but each new theory takes us in completely unexpected directions.

At the heart of the difficulties, in my view, is the problem of reifying probabilities. The scientific literature on this topic is strangely sparse, given that all the metaphysics of quantum physics relies on reifying the wave function, and several other branches of physics rely on statistics (statistical mechanics, thermodynamics, etc.).

So let us now turn to the concept of probability and try to say something concrete about the nature of it.


Probability

Consider a fair six-sided die. If I roll the die it will land with a number facing up. We can call that number the outcome of the roll. The die is designed so that the outcome of a roll ought to be a random selection from the set of all possible outcomes, i.e. {1, 2, 3, 4, 5, 6}. By design the outcomes are all equally likely (this is what "fair" means in this context). So the probability of getting any single outcome is ⅙ or 0.16666...

By convention we write probabilities such that the sum of all probabilities adds up to one. The figure ⅙ means ⅙th of the total probability. This also means that a probability of 1 or 0 reflects two types of certainty:

  1. A probability of 1 tells us that an outcome is inevitable (even if it has not happened yet). That a rolled die must land with one face pointing upwards is reflected in the fact that the probabilities of the six possible outcomes add to 1.
  2. A probability of 0 tells us that an outcome cannot happen. The probability of rolling a 7 is 0. 

We can test this theory by rolling a die many times and recording the outcomes. Most of us did precisely this in high school at some point. As the number of rolls grows, the real distribution of outcomes will tend towards the ideal distribution.

In the case of a six-sided fair die, we can work out the probabilities in advance based on the configuration of the system because the system is idealised. Similarly, if I have a fair four-sided die, then I can infer that the probability of each possible outcome {1, 2, 3, 4} is ¼. And I can use this idealisation as leverage on the real world.

For example, one can test a die to determine if it is indeed fair, by rolling it many times and comparing the actual distribution with the expected distribution. Let us say that we roll a six-sided die 100 times and for the possible states {1, 2, 3, 4, 5, 6} we count 10, 10, 10, 10, 10, and 50 occurrences.

We can use statistical analysis to determine the probability of getting such an aberration by chance. In this case, we would expect this result once in ~134 quadrillion trials of 100 throws. From this we may infer that the die is unfair. However, we are still talking probabilities. It's still possible that we did get that 1 in 134 quadrillion fluke. As Littlewood's law says:

A person can expect to experience events with odds of one in a million at the rate of about one per month.

In the end, the only completely reliable way to tell if a die is fair is by physical examination. Probabilities don't give us the kind of leverage we'd like over such problems. Statistical flukes happen all the time.
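Incidentally, the comparison of observed and expected counts is easy to run for yourself. Here is a minimal sketch in Python using a chi-squared goodness-of-fit test; the method is my assumption, and the "1 in ~134 quadrillion" figure above may derive from a different calculation:

    # Compare observed counts against the uniform expectation with a
    # chi-squared goodness-of-fit test.
    from scipy.stats import chisquare

    observed = [10, 10, 10, 10, 10, 50]  # counts from 100 rolls
    result = chisquare(observed)         # expected: 100/6 per face

    print(f"chi-squared = {result.statistic:.1f}")  # 80.0
    print(f"p-value = {result.pvalue:.2e}")         # vanishingly small

A p-value this small says the observed counts would be wildly improbable if the die were fair; as noted, though, it still cannot prove that the die is unfair.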

These idealised situations are all very well. And they help us to understand how probability works. However, in practice we get anomalies. So, for example, I recorded the results of 20 throws of a die. I expected to get roughly 3.33 of each outcome and got:

  1. 2
  2. 3
  3. 5
  4. 1
  5. 6
  6. 2

Is my die fair? Actually, 20 throws is not enough to be able to tell. It's not a statistically significant number of throws. So, I got ChatGPT to simulate 1 million throws and it came back with this distribution. I expect to see ≈166,667 of each outcome.

  1. 166741
  2. 167104
  3. 166479
  4. 166335
  5. 166524
  6. 166817

At a million throws we see the numbers converge on the expectation value (≈166,667). However, the outcomes of this trial vary from the ideal by up to ~0.3%. And we cannot know in advance how much a given trial will differ from the ideal. My next trial could be wildly different.
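Such a simulation is easy to reproduce. Here is a minimal sketch in Python; the exact counts will, of course, differ on every run:

    import random
    from collections import Counter

    # Simulate a million rolls of a fair six-sided die and report how
    # far each count deviates from the ideal N/6.
    N = 1_000_000
    counts = Counter(random.randint(1, 6) for _ in range(N))

    for face in sorted(counts):
        deviation = 100 * (counts[face] - N/6) / (N/6)
        print(f"{face}: {counts[face]} ({deviation:+.2f}% from N/6)")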

Also it is seldom the case in real-world applications that we know all the possible outcomes of an event. Unintended or unexpected consequences are always possible. There is always some uncertainty in just how uncertain we are about any given fact. And this means that if the probabilities we know add to 1, then we have almost certainly missed something out.

Moreover, in non-idealised situations, the probabilities of events change over time. Of course, probability theory has ways of dealing with this, but they are much more complex than a simple idealised model.

A very important feature of probabilities is that they all have a "measurement problem". That is to say, before a roll of my fair six-sided die the probabilities all co-exist simultaneously:

  • P(1) = ⅙
  • P(2) = ⅙
  • P(3) = ⅙
  • P(4) = ⅙
  • P(5) = ⅙
  • P(6) = ⅙
Now I roll the die and the outcome is 4. The probabilities now "collapse" so that:

  • P(1) = 0.00
  • P(2) = 0.00
  • P(3) = 0.00
  • P(4) = 1.00
  • P(5) = 0.00
  • P(6) = 0.00

This is true for any system to which probabilities can be assigned to the outcomes of an event. Before an event there are usually several possible outcomes, each with a probability. These probabilities always coexist simultaneously. But the actual event can only have one outcome. So it is always the case that as the event occurs, the pre-event probabilities collapse so that the probability of the actual outcome is 1, while the probability of the other possibilities falls instantaneously to zero.
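The bookkeeping involved is trivial, which is rather the point. A sketch:

    import random

    # Before the event: all six probabilities coexist simultaneously.
    probs = {face: 1/6 for face in range(1, 7)}

    # The event occurs and has exactly one outcome...
    outcome = random.choice(list(probs))

    # ...so the distribution "collapses": 1 for the actual outcome,
    # 0 for every unrealised possibility.
    probs = {face: 1.0 if face == outcome else 0.0 for face in probs}
    print(outcome, probs)

Nothing physical happens to the probabilities here; we have simply updated our description to reflect the outcome.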

This is precisely analogous to descriptions of the so-called Measurement Problem. The output of the Schrödinger equation is a wave function whose squared amplitude gives a set of probabilities, and these behave in exactly the way I have outlined above. The position of the electron has a probability at every point in space, but the event localises it. Note that the event itself collapses the probabilities, not the observation of the event. The collapse of probabilities is real, but it is entirely independent of "observation".

Even if we were watching the whole time, the light from the event only reaches us after the event occurs and it takes an appreciable amount of time for the brain to register and process the information to turn it into an experience of knowing. The fact is that we experience everything in hindsight. The picture our brain presents to our first person perspective is time-compensated so that it feels as if we are experiencing things in real time. (I have an essay expanding on this theme in the pipeline)

So there is no way, even in theory, that an "observation" could possibly influence the outcome of an event. Observation is not causal with respect to outcomes because "observation" can only occur after the event. This is a good time to review the idea of causality.


Causation and Probability

Arguing to or from causation is tricky since causation is an a priori assumption about sequences of events. However, one of the general rules of relativity is that causation is preserved. If I perceive event A as causing event B, there is no frame of reference in which B would appear to cause A. This is to do with the speed of light being a limit on how fast information can travel. For this reason, some people like to refer to the speed of light as the "speed of causality".

Here I want to explore the causal potential of a probability. An entity might be said to have causal potential if its presence in the sequence of events (reliably) changes the sequence compared to its absence. We would interpret this as the entity causing a specific outcome. Any observer that the light from this event could reach would interpret the causation in the same way.

So we might ask, for example, "Does the existence of a probability distribution for all possible outcomes alter the outcome we observe?"

Let us go back to the example of the loaded die mentioned above. In the loaded die, the probability of getting a 6 is 0.5, while the probability of all the other numbers is 0.1 each (and 0.5 in total). And the total probability is still 1.0. In real terms this tells us that there will be an outcome, and it will be one of six possibilities, but half the time, the outcome will be 6.

Let's say, in addition, that you and I are betting on the outcome. I know that the die is loaded and you don't. We roll the die and I always bet on six, while you bet on a variety of numbers. And at the end of the trial, I have won the vast majority of the wagers (and you are deeply suspicious).
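This scenario is also easy to simulate. A minimal sketch, in which I model your betting as a uniformly random guess (my assumption; the strategy is not specified above):

    import random

    faces = [1, 2, 3, 4, 5, 6]
    weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]  # the die is loaded towards 6

    rolls = random.choices(faces, weights=weights, k=1000)

    my_wins = sum(r == 6 for r in rolls)                       # I always bet on 6
    your_wins = sum(r == random.choice(faces) for r in rolls)  # you guess at random

    print(f"my wins:   {my_wins} / 1000")    # ~500 on average
    print(f"your wins: {your_wins} / 1000")  # ~167 on average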

Now we can ask, "Did the existence of probabilities per se influence the outcome?" Or perhaps better, "Does the probability alone cause a change in the outcome?"

Clearly if you were expecting a fair game of chance, then the sequence of events (you lost most of the wagers) is unexpected and we intuit that something caused that unexpected sequence.

If a third person were analysing this game as a disinterested observer, where would they assign the causality? To the skewed probabilities? I suppose this is a possible answer, but it doesn't strike me as very plausible that anyone would come up with such an answer (except to be contrarian). My sense is that the disinterested observer would be more inclined to say that the loaded die itself—and in particular the uneven distribution of mass—was what caused the outcome to vary so much from the expected value.

Probability allows us to calculate what is likely to happen. It doesn't tell us what is happening, or what has happened, or what will happen. Moreover, knowing or not knowing the probabilities makes no difference to the outcome.

So we can conclude that the probabilities themselves are not causal. If probabilities diverge from expected values, we don't blame the probabilities; rather, we suspect some physical cause (a loaded die). And, I would say, if the probabilities of known possibilities are changing, then we would also expect that to be the result of some physical process, such as unevenly distributed weight in a die.

My conclusion is this generalisation: Probabilities do not and cannot play a role in causation.

Now, there may be flaws and loopholes in the argument that I cannot see. But I think I have made a good enough case so far to seriously doubt any attempt to reify probability which does not first make a strong case for treating probabilities as real (Ψ‑ontic). I've read many accounts of quantum physics over 40 years of studying science, and I don't recall seeing even a weak argument for this.

At this point, we may also point out that probabilities are abstractions, expressed in abstract numbers. And so we next need to consider the ontology of abstractions.


Abstractions

Without abstractions I'd not be able to articulate this argument. So I'm not a nominalist in the sense that I claim that abstractions don't exist in any way. Rather, I am a nominalist in the sense that I don't think abstractions exist in an objective sense. To paraphrase Descartes, if I am thinking about an idea, then that idea exists for me, while I think about it. The ideas in my mind are not observable from the outside, except by indirect means such as how they affect my posture or tone of voice. And these are measures of how I feel about the idea, rather than the content of the idea.

I sum up my view in an aphorism:

Abstractions are not things. Abstractions are ideas about things.

An important form of abstraction is the category, which is a generalisation about a collection of things. So for example, "blue" is a category into which we can fit such colours as: navy, azure, cobalt, cerulean, indigo, sapphire, turquoise, teal, cyan, ultramarine, and periwinkle (each of which designates a distinct and recognisable colour within the category). Colour categories are quite arbitrary. Pāli and Ancient Greek each have only four colour categories (aka "basic colour terms"): blue and green are both lumped together in the category "dark". The word in Pāli that is now taken to mean "blue" (nīla) originally meant "dark". English has eleven colour categories: red, orange, yellow, green, blue, purple, pink, brown, black, white, and grey. To be clear, ancient Indians and Greeks had the same sensory apparatus as we do. And with it, the ability to see millions of colours. It's not that they couldn't see blue or even that they had no words that denoted blue. The point is about how they categorised colours. See also my essay Seeing Blue.

In this view, probability is an abstraction because it is an idea about outcomes that haven't yet occurred. Probability can also reflect our ideas about qualities like expectation, propensity, and/or uncertainty.

When we use an abstraction in conversation, we generally agree to act as if it behaves like a real thing. For example probability may be "high" or "low", reflecting a schema for the way that objects can be arranged vertically in space. The more of something we have, the higher we can pile it up. Thus, metaphorically HIGH also means "more" and LOW means "less". A "high" probability is more likely than a "low" probability, even though probability is not a thing with a vertical dimension.

This reflects a deeper truth. Language cannot conform to reality, because we have no epistemic privilege with respect to reality. Reality can be inferred to exist; it cannot be directly known. In fact, "reality" is another abstraction: an idea about things that are real. Language need only conform to experience, and in particular to the shared aspects of experience. In this (nominalist) view, "reality" and "truth" are useful ideas, for sure, as long as we don't lose sight of the fact that they are ideas rather than things.

The use of abstractions based on schemas that arise from experience allows for sophisticated discussions, but introduces the danger of category errors, specifically:

  • hypostatisation: incorrectly treating abstract ideas as independent of subjectivity; and
  • reification: incorrectly treating abstract ideas as having physical form.

Treating abstract ideas as if they are concrete things is the basis of all abstract thought and metaphor. Treating abstract ideas as concrete things (without the "if" qualification) is simply a mistake.

Abstractions are not causal in the way that concrete objects are. They can influence my behaviour, for example, at least in the sense that belief is a feeling about an idea and thus a motivation for actions. But abstractions cannot change the outcome of rolling a die.

Since probability is expressed in numbers, I just want to touch on the ontology of numbers before concluding.


Numbers

The ontology of numbers is yet another ongoing source of argument amongst academic philosophers. But they are known to avoid consensus on principle, so we have to take everything they say with a grain of salt. Is there a real disagreement, or are they jockeying for position, trolling, or being professionally contrarian?

The question is, do numbers exist in the sense that say, my teacup exists? My answer is similar to what I've stated above, but it's tricky because numbers are clearly not entirely subjective. If I hold up two fingers, external observers see me holding up two fingers. We all agree on the facts of the matter. Thus numbers appear to be somewhat objective.

We may ask, what about a culture with no numbers? We don't find any humans with no counting numbers at all, but some people do have very few terms. In my favourite anthropology book, Don't Sleep, There Are Snakes, Daniel Everett notes that the Pirahã people of Brazil count "one, two, many" and prefer to use comparative terms like "more" and "less". So if I held up three fingers or four fingers, they would count both as "many".

However, just because a culture doesn't have a single word for 3 or 4 doesn't mean they don't recognise that 4 is more than 3. As far as I can tell, even the Pirahã would still be capable of recognising that 4 fingers is more than 3 fingers, even though they might not be able to easily make precise distinctions. So they could put 1, 2, 3, 4 of some object in order of "more" or "less" of the object. In other words, it's not that they cannot count higher quantities, it's only that they do not (for reasons unknown).

There is also some evidence that non-human animals can count. Chimps, for example, can assess that 3 bananas is more than 2 bananas. And they can do this with numbers up to 9. So they might struggle to distinguish 14 bananas from 15, but if I offered 9 bananas to one chimp and 7 to the next in line, the chimp that got fewer bananas would know this (and it would probably respond with zero grace since they expect food-sharing to be fair).

We can use numbers in a purely abstract sense, just as we can use language in a purely abstract sense. However, we define numbers in relation to experience. So "two" is the experience of there being one thing and another thing of the same kind. 1 + 1 = 2. Two apples means an apple and another apple. There is no example of "two" that is not (ultimately) connected to the idea of two of something.

In the final analysis, if we cannot compare apples with oranges, and yet I still recognise that two apples and two oranges are both examples of "two", then the notion of "two" can only be an abstraction.

Like colours, numbers function as categories. A quantity is a member of the category "two", if there is one and another one, but no others. And this can be applied to any kind of experience. I can have two feelings, for example, or two ideas.

A feature of categories that George Lakoff brings out in Women, Fire, and Dangerous Things is that membership of a category is based on resemblance to a prototype. This builds on Wittgenstein's idea of categories as defined by "family resemblance". And prototypes can vary from person to person. Let's say I invoke the category "dog". And the image that pops into my head is a Golden Retriever. I take this as my prototype and define "dog" with reference to this image. And I consider some other animal to also be a "dog" to the extent that it resembles a Golden Retriever. Your prototype might be a schnauzer or a poodle or any other kind of dog, and is based on your experience of dogs. If you watch dogs closely, they also have a category "dog" and they are excellent at identifying other dogs, despite the wild differences in physiognomy caused by "breeding".

Edge cases are interesting. For example, in modern taxonomies, the red panda is clearly not a bear, though 19th-century naturalists thought it similar enough to be classed with the bears. Edge cases may also be exploited for rhetorical or comic effect: "That's no moon", "Call that a dog?" or "Pigeons are rats with wings".

That "two" is a category becomes clearer when we consider edge cases such as fractional quantities. In terms of whole numbers, what is 2.01? 2.01 ≈ 2.0 and in terms of whole numbers 2.0 = 2. For some purposes, "approximately two" can be treated as a peripheral member of the category defined by precisely two. So 2.01 is not strictly speaking a member of the category "two", but it is close enough for some purposes (it's an edge case). And 2.99 is perhaps a member of the category "two", but perhaps also a member of the category "three". Certainly when it comes to the price of some commodity, many people put 2.99 in the category two rather than three, which is why prices are so often expressed as "X.99".

Consider also the idea that the average family has 2.4 children. Since "0.4 of a child" is not a possible outcome in the real world, we can only treat this as an abstraction. And consider that a number like i = √-1 cannot physically exist, but is incredibly useful for discussing oscillating systems, since e^(iθ) = cos θ + i sin θ describes a circle.
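The circle is easy to see numerically (a quick sketch using Python's standard cmath module):

    import cmath

    # e^(iθ) always lies on the unit circle: its modulus is exactly 1,
    # whatever the value of θ.
    for theta in (0.0, 0.5, 1.0, 2.0, 3.0):
        z = cmath.exp(1j * theta)
        print(f"θ = {theta}: e^(iθ) = {z:.3f}, |e^(iθ)| = {abs(z):.3f}")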

Numbers are fundamentally not things, they are ideas about things. In this case, an idea about the quantity of things. And probabilities are ideas about expectation, propensity, and/or uncertainty with respect to the results of processes.


Conclusion

It is curious that physicists, as a group, are quick to insist that metaphysical ideas like "reality" and "free will" are not real, while at the same time insisting that their abstract mathematical equations are real. As I've tried to show above, this is not a tenable position.

A characteristic feature of probabilities is that they all coexist prior to an event and then collapse to zero except for the actual outcome of the event, which has a probability of 1.

Probability represents our expectations of outcomes of events, where the possibilities are known but the outcome is uncertain. Probability is an idea, not an object. Moreover, probability is not causal, it cannot affect the outcome of an event. The least likely outcome can always be the one we happen to observe.

We never observe an event as it happens, because the information about the event can only reach us at the speed of causality. And that information has to be converted into nerve impulses that the brain then interprets. All of this takes time. This means that observations, all observations, are after the fact. Physically, observation cannot be a causal factor in any event.

We can imagine a Schrödinger's demon, modelled on Maxwell's demon, equipped with perfect knowledge of the possible outcomes and the precise probability of each, with no unknown unknowns. What could such a demon tell us about the actual state of a system or how it will evolve over time? A Schrödinger's demon could not tell us anything, except the most likely outcome.

Attempts by Ψ-ontologists to assert that the quantum wavefunction Ψ is real lead to a diverse range of mutually exclusive speculative metaphysics. If Ψ were real, we would expect observations of reality to drive us towards a consensus. But there is a profound dissensus about Ψ. In fact, Ψ cannot be observed directly or indirectly, any more than the probability of rolling a fair six-sided die can be observed.

What we can observe, tells us that quantum physics is incomplete and that none of the current attempts to reify the wavefunction—the so-called "interpretations"—succeeds. The association of Ψ-ontology with "Scientology" is not simply an amusing pun. It also suggests that Ψ-ontology is something like a religious cult, and as Sheldon Cooper would say, "It's funny because it's true." 

Sean Carroll has no better reason to believe "the wavefunction is real" than a Christian has to believe that Jehovah is real (or than a Buddhist has to believe that karma makes life fair). Belief is the feeling about an idea.

Probability reflects our uncertain expectations with respect to the outcome of some process. But probability per se cannot be considered real, since it cannot be involved in causality and has no independence or physical form.

The wave function of quantum physics is not real because it is an abstract mathematical equation whose outputs are probabilities rather than actualities. Probabilities are abstractions. Abstractions are not things, they are ideas about things. The question is: "Now what?" 

As far as I know, Heisenberg and Schrödinger set out to describe a real phenomenon, not a probability distribution. It is well known that Schrödinger was appalled by Born's probability approach and never accepted it. Einstein also remained sceptical, considering that quantum physics was incomplete. So maybe we need to comb through the original ideas to identify where it went off the rails. My bet is that the problem concerns wave-particle duality, which we can now resolve in favour of waves.

~~Φ~~


Bibliography

Everett, Daniel L. (2009) Don’t Sleep, There Are Snakes: Life and Language in the Amazon Jungle. Pantheon Books (USA) | Profile Books (UK).

Harrigan, Nicholas & Spekkens, Robert W. (2010). "Einstein, Incompleteness, and the Epistemic View of Quantum States." Foundations of Physics 40: 125–157.

Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.

05 August 2016

There is No Cause & Effect.

"All philosophers, of every school, imagine that causation is one of the fundamental axioms or postulates of science, yet, oddly enough, in advanced sciences such as gravitational astronomy, the word 'cause' never occurs… The law of causality, like much that passes muster among philosophers, is a relic of a bygone age, surviving, like the monarchy, only because it is erroneously supposed not to do harm." – Bertrand Russell (1917: 132).

If you take the state of the universe at any moment in time and you apply all the laws of physics, you get the universe at the next moment in time. Cause & effect, you might think, but you would be wrong.

The laws of physics describe patterns of evolution over time at different levels, but they don't include an account of causation and don't require a separate causal process. Although Russell understood this as early as 1917, causality is still integral to most philosophical discussions of reality. John Searle constantly uses causality to define the relation between mind and brain, for example. Many Buddhists and philosophers of Buddhism notoriously, and erroneously, refer to pratītya-samutpāda as a "theory of causality". For the average lay person, the idea that there is no cause and effect is profoundly counter-intuitive.

In this 4th essay in my series on reality, I will try to show why there is no cause & effect, and try to explain why, contrarily, cause & effect seems so natural and intuitive. In doing so, I'm trying to clarify the idea for myself and to see if I can make a convincing argument for it. I'm not sure whether I have succeeded.


A Very Brief History of Cause & Effect

In his 2003 paper, John D. Norton, Professor in the Department of History and Philosophy of Science at the University of Pittsburgh, briefly describes the trajectory of Western thinking about causality. Aristotle described four kinds of causality: material, efficient, final, and formal (for more on this see Stanford Encyclopedia of Philosophy entry Aristotle on Causality). The mechanistic philosophies of the 17th Century reduced these four causes to one: the efficient cause; i.e., “the primary source of the change or rest”. It was around this time that final causes, i.e., teleology, went out of fashion in Western philosophy, though final causes are still prevalent in some forms of modern Buddhism which see the world evolving towards perfection as defined in some scripture. In any case, when we speak of causality these days we mean only Aristotle's efficient cause.

However, this notion of efficient causality was further weakened, in 1843, by John Stuart Mill (whom we met in Substance & Structure, 5 June 2016), who was concerned to strip out the purely metaphysical elements of the definition. Norton, citing the 8th edition of Mill's A System of Logic (1872):
All that remained was the notion that the cause is simply the unconditional, invariant antecedent: "For every event there exists some combination of objects or events, some given concurrence of circumstances, positive and negative, the occurrence of which is always followed by that phenomenon" (§2).
As Norton says, at this point causation has been reduced to mere determinism. But determinism itself soon received a killing blow with the advent of quantum theory in the 1920s. Norton offers the example of the radium isotope ²²¹Ra, which has a half-life of roughly 30 seconds. During any given length of time there is no way to tell which atoms of ²²¹Ra will emit an α-particle to become radon (²¹⁷Rn), but on average, in any given period of 30 seconds, half of the atoms in a sample will do so. In the mainstream of quantum theory, determinism is largely replaced by probability, though there are interpretations of the theory that are deterministic (e.g., Everett's Many Worlds Interpretation). Relativity also introduces indeterminism, as parts of spacetime can become isolated from others (because of the limitation on the speed of light). Norton concludes,
"This means that we can always find circumstances in which the full specification of the present fails to fix the future." (Norton 2003: 5).
This ought to have been the end of determinism as a philosophy of science, though curiously it was not, and many physicists continue to assert determinism as a feature of our universe. It is the determinism of the physical world that, according to some scientists, denies that we have freewill. The example of Newtonian indeterminacy cited by Norton, now known as Norton's Dome, has excited a lot of critical comment. Most commentators argue along lines that the system Norton describes is not Newtonian, and therefore his argument fails, but he has at least highlighted that the definition of a Newtonian system was ambiguous and needed clarification.
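Returning to the radium example for a moment, Norton's point is easy to make concrete in code. In this sketch (the numbers are illustrative), the law fixes the statistics exactly while saying nothing at all about which atoms decay:

    import random

    # Each atom decays independently with probability 1/2 per half-life
    # (~30 s for Ra-221). Which atoms decay is left undetermined.
    atoms = 1_000_000
    for window in range(1, 6):
        atoms = sum(1 for _ in range(atoms) if random.random() >= 0.5)
        print(f"after {window * 30} s: ~{atoms} atoms remain")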

But even if Newtonian indeterminism is not real, it's clear that determinism has, at best, a limited reach. Determinism breaks down in many different circumstances and this takes us back to Hume's 18th Century observation that we never see causation, per se. At best, we see consistent correlation. And from Hume we come to Kant. As Kant puts it, causality is an a priori judgement that we add to perception in order to make sense of it. Norton is broadly sympathetic to this approach.
"... in appropriately restricted circumstances our science entails that nature will conform to one or other form of our causal expectations... The causes are not real in the sense of being elements in our fundamental scientific ontology; rather in these restricted domains the world just behaves as if appropriately identified causes were fundamental" (Norton 2003: 13).
In other words, cause & effect is a useful approximation to reality in the way that Newtonian descriptions of gravity are a useful approximation. We need to look at this conclusion in more detail.


Why Is There No Cause & Effect?

In Sean Carroll's recent book, The Big Picture, he notes that "concepts like 'cause' and 'effect' appear nowhere in Newton's equations, nor in our modern formulations of the laws of nature" (2016: 63). If we take the Second Law of Motion as an example, it is often interpreted as saying that if we apply a force to a mass it will accelerate, i.e.:

F = ma

This appears to say that forces cause masses to accelerate. In fact, it codifies an empirical observation derived from watching masses accelerate. It doesn't tell us how the force works, i.e., how force causes acceleration; it only tells us that if we know the mass and the acceleration we can work out the magnitude of the force, and it helps us define units for force (i.e., kg·m·s⁻², or newtons). The force could be gravity, an explosion, the lift of an aircraft wing, a magnetic field, or the gentle pressure of photons from the sun, and the Second Law still applies. If F can be literally any kind of force, then the law cannot be telling us how force causes acceleration, because different forces work differently. While F = ma seems to imply causality, it is in fact a statement about the magnitude of effects, not about their cause, or even their nature.

Another way of looking at this is that the equation works in any combination: m = F/a or a = F/m. Given the magnitude of any two values, we can work out the magnitude of the third. Force is not prioritised in this equation.
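Written out, the three rearrangements are symmetrical; nothing in the mathematics marks force as the cause (a trivial sketch):

    def force(m: float, a: float) -> float:
        return m * a      # F = ma

    def mass(F: float, a: float) -> float:
        return F / a      # m = F/a

    def acceleration(F: float, m: float) -> float:
        return F / m      # a = F/m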

As Danny Hillis says, we tend to think of force as contingent, because the prototypical "force" is us exerting mechanical force through our bodies (using "prototype" in the sense employed by Lakoff 1987). Our intuitive world is the one described by Aristotle, in which matter is stationary unless something, prototypically a human being, acts on it by exerting a mechanical force (grab, pull, push, etc). I'll return to this naive view in more detail below. We now know that everything in the universe is in motion, so Aristotle was wrong and our intuition is wrong. At the lowest level we know about, matter is motion: by which I mean matter is vibrations in quantum fields. I hasten to add that we have no sensory access to these fields and they do not match up with Hindu-derived hippy talk of the world being made of vibrations. It's a whole different vibe.

Newton's Law of Gravitation
In the case of gravity, it makes intuitive sense to say that if we jump off a building then gravity makes us fall down because it exerts a force on us. Experientially, it feels like something is pulling us down when we fall. In this view, gravity is the reason we fall, in the same way that our will is the reason we reach out and grab something. In fact, the so-called "force of gravity" is just a story we tell about a regularity we observe in the universe in a particular domain. It is like centrifugal force, an apparent force that turns out to be something else. Centrifugal force is a manifestation of the property of matter called inertia. Gravity is so regular that we can make it into a mathematical equation, as Newton, Laplace, and Einstein did.

However, Newton himself did not accept the idea of an invisible force acting at a distance:
"[T]hat one body may act upon another at a distance through a vacuum, without the mediation of anything else, by and through which their action and force may be conveyed from one to another, is to me so great an absurdity, that I believe no man, who has in philosophical matters a competent faculty of thinking, can ever fall into it. Gravity must be caused by an agent acting constantly according to certain laws… " (Newton 1692-3 cited in Norton 2003: 4).
[Image: two-dimensional representation of a gravitational field]
Pierre-Simon, Marquis de Laplace took Newton's theory of gravity as a force and reformulated it in terms of a theory of a gravitational field. Seen this way, there is a field of gravitational potential around every particle of matter that has a value at every point in space. The value is a vector; i.e., it has magnitude and direction. The direction is always towards the centre of mass and the magnitude drops off in inverse proportion to the square of the distance, i.e. as the distance increases in the series 1, 2, 3, 4... the force experienced is 1/1, 1/4, 1/9, 1/16... So, although it literally has a value at every point in space, the magnitude of the gravitational field quickly becomes negligible. Thinking of gravity in terms of a field extending throughout space gets around the action at a distance problem. A mass has potential energy in this field and wants to minimise that potential, which it does by converting potential into kinetic energy, moving towards the centre of the field. Hence, masses tend to clump together around the centre of mass or, if they have enough kinetic energy, go into orbit around that centre.
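A minimal sketch of the field picture (the masses and distances here are arbitrary):

    G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

    def field_magnitude(M: float, r: float) -> float:
        # Magnitude of the gravitational field of mass M at distance r.
        return G * M / r**2

    # The field falls off as 1/r^2: at distances 1, 2, 3, 4 the relative
    # magnitudes are 1, 1/4, 1/9, 1/16.
    for r in (1.0, 2.0, 3.0, 4.0):
        print(r, field_magnitude(1.0, r) / field_magnitude(1.0, 1.0))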

Einstein took this idea one step further. He said that the key manifestation of the gravitational field is to bend spacetime. Any mass moving through spacetime in the gravitational field of another mass is deflected from a straight line to follow a curved path, and the closer it gets the more spacetime curves, which accounts for the motions of masses relative to each other better than a field theory alone. This theory makes many predictions that have been confirmed by observation, though it does not work when gravity is very strong, such as inside a black hole. So, when we jump out of a plane we follow a straight line through curved spacetime. In this sense gravity does not make us fall but, travelling through spacetime warped by gravity, our path is consistent with falling and with gravity causing that fall.

Carroll says that if you take the position and velocity of two masses at one point in time and apply the laws of physics, then you will have the position and velocity at another point in time. Everything in the universe is in motion all at once. If we observe these motions over time, we notice patterns. The laws of physics are the stories we tell about patterns we see in the universe (sometimes as mathematical equations, sometimes as narratives). The universe itself just follows these patterns. Nothing about the patterns requires us to frame the story as A causes B. The universe simply evolves from one state to the next, always in motion. We can describe that evolution at various levels, but at the most fundamental level, nothing is causing anything.


Why Is Cause & Effect Intuitive?

Despite the lack of any cause behind the patterns of evolution of the universe, locally, the patterns are often consistent with causation. Because of this it's not until we begin to closely examine events and seek their universal features that doubts begin to arise. David Hume (1711 – 1776) was the first philosopher to call cause & effect into serious doubt. Hume observed that when we closely study the world, we see a sequence of events, but we never see a separate event called "causation". As Carroll puts it, "different moments in time... follow each other, according to some pattern, but no one moment causes any other" (63).

I said that our prototype for the category of causation is the exertion of mechanical force through our own bodies (Cf Lakoff 1987 on categories). We can trace this back to our earliest experiences of interacting with the world. When we are born, we are helpless. We cannot focus our eyes or control our limbs. Soon, however, our eyes start working and we are able to look around us, to direct our gaze. And not long after that we can start reaching for and grabbing stuff in our pudgy little hands. We unconsciously interpret this as our will causing our movements. Our desires and aversions set us in motion. We move towards and grab the stuff we want; move away from and fend off the stuff we don't want. Our early experiences of interacting with the world are profoundly formative for how we think about the world. And the type of belief we form in this way Justin Barrett (2004) has called non-reflective. In my exposition of Barrett's views I described non-reflective beliefs:
"They arise from assumptions about the way the world works, automatically generated by the unconscious functioning of our various mental tools (especially categorisers and describers). We often don't even think about non-reflective beliefs, to the point where we may not know that we have a belief. And non-reflective beliefs are transparent to us, which is to say that we are not aware of the process by which we come to have a non-reflective belief." - Why Are Karma and Rebirth (Still) Plausible (for Many People)? 14 Aug 2016.
These kinds of non-reflective beliefs are the basis of our understanding of cause & effect and this is how we come to have a metaphysical a priori judgement as described by Kant.

The experience of exerting our will to make things happen is one that we all have. Much of our waking life is concerned with willing, followed by gazing, grabbing and moving around (and more sophisticated versions of these as we grow into adults). And from this experience we generalise a rule of thumb to help us navigate the world: if something is happening in our environment, then we assume that something or someone is causing it to happen. This is a pretty good rule of thumb for someone living as a hunter-gatherer. I imagine that erring on the side of caution helps to keep us alive where the cost of being wrong might be inconsequential and being right means being able to avoid a predator or competitor. Here I'm adapting a similar argument made by Barrett (2004) about agency detection in the environment. This rule of thumb becomes an abstract principle: every event is caused. This is called the Principle of Sufficient Reason. The label is associated with Leibniz and Spinoza who formalised the idea, but the principle itself seems likely to have existed at least as long as there have been human beings. In summary, we assume that events must have causes, because the prototype of our "event" category is our own willed interactions with the world. Hence, our naive ontologies have cause & effect in them and, hence, it survives into some quite sophisticated ontologies as well. 

Another thing that makes cause & effect seem plausible is the arrow of time. In what is called the Past Hypothesis, we say that the early observable universe had extremely low entropy, by which we mean that the stuff in the universe was smooth and evenly distributed (this idea is supported by evidence from the cosmic microwave background). Because there are vastly more disordered states than ordered ones, ordered states overwhelmingly evolve into disordered states rather than the other way around. It is highly likely that I'll drop my coffee mug and it will break. It is extremely unlikely, though technically not impossible, that a broken cup will stick itself back together and leap into my hand. In practice, however, we never see spontaneous reordering on this scale. Because disorder accumulates over time, the flow of events seems to us to be moving in a single definite direction. Since some classes of events always precede other classes of events, and because of how we understand the category "event", we want to say that the prior event causes the later. But this is mainly an artefact of how we perceive and understand the world. We cannot get around Hume's basic insight into sequences and the fact that we never see causality.
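The counting argument here is easy to illustrate with a toy model, using coin flips to stand in for physical microstates (my analogy, not Carroll's):

    from math import comb

    # For 100 coins, count the arrangements ("microstates") of the most
    # ordered macrostate (all heads) and the most disordered (a 50/50 split).
    N = 100
    print(comb(N, 0))   # 1 way to be perfectly ordered
    print(comb(N, 50))  # ≈ 1.01e29 ways to be maximally mixed

With overwhelmingly many more ways to be disordered than ordered, a system wandering among its states is almost certain to move from order towards disorder, and almost never the reverse.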

So cause & effect seems intuitive. It is, in fact, a good rule of thumb for human beings trying to navigate through the world as it appears to our senses. But cause & effect is not fundamental. I'm not saying that it never makes sense to talk about cause & effect. It's still a good rule of thumb, but the laws of physics hold. Lower levels do not determine higher levels, but they do constrain them. So a supernatural explanation of an event does not work because it requires us to break the laws of physics (See There is No Life After Death, Sorry). A narrative involving cause & effect does not break the laws of physics. We can say that gravity makes us fall down when we jump out of a plane. It works as an approximation of how the motion appears to us. It is a domain specific explanation that is a good approximation within its domain. It will be all most people need. But it is not applicable at other levels.


Cause & Effect and Religion

If we stipulate the intuitive proposition that everything has a cause as an axiom, then something interesting happens when the cause is not obvious or is invisible. There are some mysterious events the cause of which we cannot see or are unsure about. I've previously discussed why supernatural causes are so plausible for many people. The principle of sufficient reason is another explanation for this: if everything has a cause and we cannot see all the causes of all the events we witness, then there must be unseen causes. Once we allow for natural unseen causes, then supernatural unseen causes are only minimally counter-intuitive (MCI; cf. Barrett 2004). MCI events or features are interesting and memorable (think of cartoons with talking animals), so they actually promote belief in the supernatural.

Another intuitive proposition that many people take as axiomatic—that the world is just or fair—interacts with sufficient reason in interesting ways. If we take a just world as axiomatic, and everything has a cause, then the fact that the world is just must also have a cause. So it is intuitive that something causes the world to be just. Many religions anthropomorphise this cause as a god and some combine this supernatural feature with others in a single God. Buddhists and Zoroastrians have an impersonal cause underlying the just world. Buddhists call it karma.

I've previously pointed out that a just world combined with the observation of prevalent injustice contributes to afterlife beliefs. Dualism allows for disembodied mind, which opens the door for an afterlife, but the just world axiom is what makes it seem natural. Injustice in a just world seems to demand a post-mortem balancing of the books.

Beginning with cause & effect and a small number of other axioms that most people intuitively hold to be true, we can derive most of the key religious ideas. And the complex is self-sustaining in religious thinking. Piecemeal attempts to undermine this complex are more or less doomed to fail, because, like any network, it has multiple redundancy. If we manage to sow doubt on one aspect with counterfactual information, the combination of other intuitively believed aspects will prevent the doubt from taking hold or spreading. A simplistic frontal assault will simply fail because the structure of the belief system is robust.

Which brings us to the question of how Buddhists see cause & effect.


Cause & effect in Buddhism

It's almost axiomatic to say that Buddhism is all about cause & effect. Except that it isn't. Many scholars and Buddhists will tell you that at the heart of Buddhist doctrine is a law of causality. But this isn't true, either. Make of this widespread and popular inaccuracy what you will, but there is no theory of cause & effect in Buddhism.

The doctrine of pratītyasamutpāda is not about causality, it is about presence. What it says is: the necessary condition being present, we will see the consequence of it; that condition being absent, we will not see the consequence. In fact, if we compare this with J. S. Mill's view on causation, above, we would have to conclude that this is a form of determinism, though I would not insist on this comparison.

Just as with Newton's law of motion, pratītyasamutpāda is a generalisation about presence comparable with the generalisation about force contained in the formula F = ma. We do not believe that rebirth, moral retribution, mental states, or life and death are all caused in the same way, by the same mechanism. As with the case of the magnitude of the force, here we see a very broad generalisation about a variety of processes, in which, if causation is happening, then it must be happening in very different ways. Although it is de rigueur to use mechanically literal translations of pratītyasamutpāda along the lines of "dependent origination" or "conditioned co-production", it would make more sense to call this the principle of presence. 

The principle of presence probably started out as a general observation on how mental activity appears to happen: when sense object, sense faculty, and sense cognition are present, there will be a sensation (vedanā). As I write this, as I have done many times before, a new question occurs to me. If sensations occur only in the presence of all three factors, how can we know about the individual factors? The theory assumes that objects, faculties, and cognitions can be the objects of knowledge which is distinct from sensation and not subject to prapañca (which is the next step in the chain in this model). Some form of naive realism seems to be at work here.

In any case, the arising of experience seems to be the ideal domain of application for the principle of presence. And if we stop to think about it, we will see that this says nothing whatever about causation. There is no attempt to explain how the elements of perception cause sensations. When the conditions are present, sensations just happen.

The principle of presence is so general that Buddhists were able apply it to everything, including human affairs and the workings of the universe (as they understood the universe). As an ontology, the principle of presence is vague and short on detail. It doesn't tell us much. However, it still plays an important role in Buddhist methods. When we are investigating how our minds construct our world ("world" here being a metonym for "world of sensate experience") it is helpful to see how the presence of certain factors results in the presence of other factors; and how, by eliminating certain factors (particularly our sense of selfhood), we have a very different experience of the world. 

The other main place we Buddhists invoke cause & effect is in the domain of morality; however, the moral principle that roots Buddhism is not cause & effect, but action & consequence. Cause & effect is usually invoked here in the naive sense of a fundamental principle. But as we've seen, cause & effect is not fundamental. The universe does not evolve through cause & effect, but we perceive that it does, for the reasons outlined above. However, even if our naive view was accurate, morality would still not be an example of cause & effect. Our actions do have moral consequences, but they do not cause those consequences.

The principle of consequences, or karmavāda, is a fiendishly complex subject because of the huge variety of mutually conflicting theories about how it works (see Karma and Rebirth: The Basics, 6 May 2016). The basic idea is that the character of our intentions (in terms of kuśala/akuśala) in this life determines where we will be reborn in the next. How our actions cause rebirth is never discussed. The fact that karma causes rebirth is the subject of a series of increasingly speculative metaphysical narratives, but none of them survives modern scrutiny.

Most modern discussions of karma present it as something very different from this. In particular, they try to decouple karma from rebirth. In modern terms, karma is transformed into a simplistic theory of social relations: if we behave nicely, people will be nice to us; and if we behave nastily, people will be nasty to us. This is fine, as far as it goes. As a rule of thumb for getting on with people, it's not wrong. But it isn't some great or profound insight into social relations and by disconnecting karma from rebirth we have decisively broken with the Buddhist tradition (I'm for this, but it ought to be admitted).

As has become obvious, neither traditional nor modern versions of karma amount to a moral theory or system (See also David Chapman's series of essays on Buddhist Ethics). Traditionally, Buddhisms have focussed on pragmatic training regimes rather than on ethical systems. Which, again, is fine as far as it goes. As training regimes, the precepts are quite helpful in setting us up to have the experience of egolessness, at which point morality is not a matter of external systems and theories, but emerges naturally: ego itself, according to the awakened, is the condition and occasion for unskilfulness. It is common to see the promise of happiness associated with the practice of the precepts. In my experience, any causality points the other way: if I am happy I will tend to be more skilful than if I am unhappy. The trap for Buddhists is that the secondary function of karma is to explain our present mood. If I am unhappy, and prone to unskilfulness, then I must have been unskilful in the past. Karma always boils down to a blame game, however much Buddhists resist this conclusion.

Unfortunately, sets of precepts don't equip us for dealing with the complexities of morality in modern life and most of us are never going to experience egolessness, even temporarily. Which is where David Chapman's ideas based on Robert Kegan's developmental model come in. I'm sorry to say that I've been too busy to find time to look into this material myself as yet. But it's on my list.

In response to the claim that cause & effect is not found in Buddhism, some people will argue that in this case "cause & effect" is a metaphor. They will say that they are mapping a source domain onto a target domain in order to say something abstract about the target domain (or at least they will say something that means this). This is a possibility, but as I have said, metaphors that cross levels or domains of description are prone to catastrophic misunderstanding. They only work when everyone is aware that the proposition is metaphorical. In the case of the principle of presence, I'm sure that most of us do not take it as metaphorical. We reify (or realify) the metaphor; i.e., we take the metaphor as real. And this is a mistake. It leads to confusion. It is better to just not go there. Buddhism is not about cause & effect. That is the simplest way of stating it.

How modern Buddhists got so mixed up about the tradition is a long story, something that David Chapman goes into in his essays. Modern influences began to creep in quite early and have accumulated gradually, so that they have not always been noticed, even when they have entirely replaced traditional ideas. Without a detailed study of historical doctrines, the modern incorporations can be difficult to spot. Of course, the evidence of our texts and cultures suggests that Buddhism has always been syncretistic, always absorbing ideas, attitudes, and practices from outside the tradition, i.e., from the local milieu. Hence the national character of Buddhism in various countries or cultural regions.


Conclusions

Fundamentally, there is no cause & effect. The universe evolves along lines that contain regularities and patterns that make certain kinds of events predictable when we know the conditions preceding them. We don't know why. But human beings, because of our developmental path, perceive the world in terms of a template based on the exertion of will through the application of mechanical force via our bodies. For us, cause & effect seems like a natural and intuitive principle. From this we generalise to the principle of sufficient reason: things happen for a reason. But they don't. Shit just happens.

We sometimes meet the reductio ad absurdum argument that if the principle of sufficient reason is wrong; i.e., if we argue against the idea that things happen for a reason, then it's all just random chaos. And since the world isn't just random chaos, then everything happens for a reason. But this is a false dichotomy in which both propositions are wrong. It's not so much a matter of finding a middle way, as ignoring two wrong arguments in favour of paying attention to what the world is really like.

This is clearly bad news for those who deal in bullshit platitudes such as "trust the universe", or "things happen for a reason", or "God has a plan". The universe evolves according to a logic of its own that is deeply counter-intuitive to most people; i.e., causeless and aimless, but regular and (somewhat) predictable. However, just because the universe has no purpose, doesn't mean that human beings don't. Similarly, even if, fundamentally, the universe evolved in a deterministic way (with neither cause nor aim), it would not mean that humans do things without reasons. It would not change the fact that certain types of human behaviour are conducive to good social relations and others are not. Different levels have different, autonomous, features.

Cause & effect will continue to seem natural and intuitive to most people. And it will provide us with some useful narratives about how our world works (in the same way that Newtonian physics is still useful: accurate enough and precise enough). Useful, but not definitive or fundamentally true. There's an important logic to cause & effect that still makes sense, because our understanding of it stems from our embodied condition, our embodied minds.

One of the conclusions that I draw is that we cannot trust anyone preaching metaphysical certainty, whether it be theists who attribute everything to God, economists who attribute prosperity to abstract markets, politicians attributing security to aggressive foreign policy, or Buddhists peddling liberation. Another important conclusion is that the truth is frequently counter-intuitive. We have to make allowance for this. If we really want to understand the world, then we cannot rely on intuition alone, or perhaps even at all. What seems right may, in fact, just be some kind of cognitive bias or logical fallacy, like cause & effect. So, although we find meaning on human terms, we also have to be highly sceptical when presented with pre-packaged meaning. We need to systematically investigate what "on human terms" signifies and what other terms are available.

~~oOo~~


Bibliography

Barrett, Justin L. (2004) Why Would Anyone Believe in God? Altamira Press.

Carroll, Sean. (2016). The Big Picture: On the Origins of Life, Meaning, and the Universe Itself. Dutton.

Cartwright, Jon. (2016). Quantum of Solitude. New Scientist. 16 July 2016. Online [subscription required] https://www.newscientist.com/article/mg23130820-200-collapse-has-quantum-theorys-greatest-mystery-been-solved/

Hillis, W. Daniel. (2014). What Scientific Idea is Ready for Retirement? cause & effect. The Edge. https://www.edge.org/response-detail/25435

Johnson, Mark. 1987. The Body in the Mind: The Bodily Basis of Meaning, Imagination and Reason. University of Chicago Press.

Jones, Richard H. (2013). Analysis & the Fullness of Reality: An Introduction to Reductionism & Emergence. Jackson Square Books.

Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.

Norton, J. D. (2003). "Causation as Folk Science." Philosophers' Imprint 3(4). http://www.pitt.edu/~jdnorton/papers/003004.pdf. Reprinted in H. Price and R. Corry (eds.) (2007), Causation and the Constitution of Reality. Oxford University Press.

Russell, Bertrand. (1917). "On the Notion of Cause." Ch. IX in Mysticism and Logic and Other Essays. London: Unwin, 1917; reprinted 1963.