
02 May 2025

Ψ-ontology and the Nature of Probability

“The wave function is real—not just a theoretical thing in abstract mathematical space.”
—Sean Carroll. Something Deeply Hidden.

Harrigan & Spekkens (2010) introduced the distinction between those theories that take the quantum wave function to be real (Ψ‑ontic) and those which take it only to provide us with knowledge (Ψ‑epistemic). One needs to know that the quantum wavefunction is notated as Ψ (Greek capital Psi), which is pronounced like "sigh". So Sean Carroll's oft-stated view—"the wave function is real"—is a Ψ‑ontic approach.

Harrigan & Spekkens seem not to have foreseen the consequences of this designation, since a Ψ-ontic theory is now necessarily a Ψ-ontology, and one who proposes such a theory is a Ψ-ontologist. Sean Carroll is a great example of a Ψ-ontologist. These terms are now scattered through the philosophy of science literature.

Still, Carroll's insistence that fundamentally "there are only waves" is part of what sparked the questions I've been exploring lately. The problem, as I see it, is that the output of the wave function is a "probability amplitude"; or, over all possible outcomes, a probability distribution. What I would have expected in any Ψ-ontology is that the Ψ-ontologist would explain, as a matter of urgency, how a probability distribution, which is fundamentally abstract and epistemic, can be reified at all. In a previous essay, I noted that this didn't seem possible to me. In this essay, I pursue this line of reasoning.


Science and Metaphysics

I got interested in science roughly 50 years ago. What interested me about science as a boy was the possibility of explaining my world. At that time, my world was frequently violent, often chaotic, and always confusing. I discovered that I could understand maths and science with ease, and they became a refuge. In retrospect, what fascinated me was not the maths, but the experimentation and the philosophy that related mathematical explanations to the world and vice versa. It was the physically based understanding that I craved.

As an adult, I finally came to see that no one has epistemic privilege when it comes to metaphysics. This means that no one has certain knowledge of "reality" or the "nature of reality". Not religieux and not scientists. Anyone claiming to have such knowledge should be subjected to the most intense scrutiny and highest levels of scepticism.

While many physicists believe that we cannot understand the nanoscale world, those few physicists and philosophers who still try to explain the reality underlying quantum physics have made numerous attempts to reify the wavefunction. Such attempts are referred to as "interpretations of quantum mechanics". And the result is a series of speculative metaphysics. If the concept of reality means anything, we ought to see valid theories converging on the same answer, with what separates them being the extra assumptions that each theory makes. After a century of being examined by elite geniuses, we not only don't have a consensus about quantum reality but each new theory takes us in completely unexpected directions.

At the heart of the difficulties, in my view, is the problem of reifying probabilities. The scientific literature on this topic is strangely sparse, given that all the metaphysics of quantum physics relies on reifying the wave function, and several other branches of physics rely on statistics (statistical mechanics, thermodynamics, etc.).

So let us now turn to the concept of probability and try to say something concrete about the nature of it.


Probability

Consider a fair six-sided die. If I roll the die it will land with a number facing up. We can call that number the outcome of the roll. The die is designed so that the outcome of a roll ought to be a random selection from the set of all possible outcomes, i.e. {1, 2, 3, 4, 5, 6}. By design the outcomes are all equally likely (this is what "fair" means in this context). So the probability of getting any single outcome is ⅙ or 0.16666...

By convention we write probabilities such that the sum of all probabilities adds up to one. The figure ⅙ means ⅙th of the total probability. This also means that a probability of 1 or 0 reflects two types of certainty:

  1. A probability of 1 tells us that an outcome is inevitable (even if it has not happened yet). The fact that if I roll a die it must land with one face pointing upwards is reflected in the fact that the probabilities of the six possible outcomes add to 1.
  2. A probability of 0 tells us that an outcome cannot happen. The probability of rolling a 7 is 0. 

We can test this theory by rolling a die many times and recording the outcomes. Most of us did precisely this in high school at some point. As the number of rolls grows, the real distribution of outcomes will tend towards the ideal distribution.

In the case of a six-sided fair die, we can work out the probabilities in advance based on the configuration of the system because the system is idealised. Similarly, if I have a fair 4 sided die, then I can infer that the probabilities for each possible outcome {1, 2, 3, 4} is ¼. And I can use this idealisation as leverage on the real world.

For example, one can test a die to determine if it is indeed fair, by rolling it many times and comparing the actual distribution with the expected distribution. Let us say that we roll a six-sided die 100 times and for the possible states {1, 2, 3, 4, 5, 6} we count 10, 10, 10, 10, 10, and 50 occurrences.

We can use statistical analysis to determine the probability of getting such an aberration by chance. In this case, we would expect this result once in ~134 quadrillion trials of 100 throws. From this we may infer that the die is unfair. However, we are still talking probabilities. It's still possible that we did get that 1 in 134 quadrillion fluke. As Littlewood's law says:

A person can expect to experience events with odds of one in a million at the rate of about one per month.

In the end, the only completely reliable way to tell if a die is fair is by physical examination. Probabilities don't give us the kind of leverage we'd like over such problems. Statistical flukes happen all the time.
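As an aside, the ~134 quadrillion figure above is easy to check. Here is a minimal sketch, using only Python's standard library, that computes the exact multinomial probability of that particular set of counts under a fair die (the variable names are mine, purely for illustration):

```python
from math import factorial

# Probability of seeing exactly the counts (10, 10, 10, 10, 10, 50)
# in 100 throws of a fair six-sided die.
counts = [10, 10, 10, 10, 10, 50]
n = sum(counts)  # 100 throws in total

ways = factorial(n)
for c in counts:
    ways //= factorial(c)        # multinomial coefficient: 100! / (10!^5 · 50!)

p = ways * (1 / 6) ** n          # each specific sequence has probability (1/6)^100
print(f"probability of exactly these counts: {p:.2e}")  # ≈ 7.4e-18
print(f"i.e. roughly 1 in {1 / p:.2e} trials")          # ≈ 1.3e17, ~134 quadrillion
```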

These idealised situations are all very well. And they help us to understand how probability works. However, in practice we get anomalies. So, for example, I recorded the results of 20 throws of a die. I expected to get about 3.33 of each outcome and got:

  1. 2
  2. 3
  3. 5
  4. 1
  5. 6
  6. 2

Is my die fair? Actually, 20 throws is far too small a sample to tell; the result is not statistically significant. So, I got ChatGPT to simulate 1 million throws and it came back with this distribution. I expected to see about 166,666 of each outcome.

  1. 166741
  2. 167104
  3. 166479
  4. 166335
  5. 166524
  6. 166817

At a million throws we see the numbers converge on the expectation value (166,666). However, the outcomes of this trial still vary from the ideal, in this case by up to roughly 0.3%. And we cannot know in advance how much a given trial will differ from the ideal. My next trial could be wildly different.
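Anyone can run a similar trial for themselves. Here is a minimal simulation sketch (not the code ChatGPT ran, just an equivalent using Python's standard library); the exact counts will differ on every run:

```python
import random
from collections import Counter

# Simulate one million throws of a fair six-sided die and tally the outcomes.
trials = 1_000_000
counts = Counter(random.randint(1, 6) for _ in range(trials))

expected = trials / 6  # ≈ 166,666.7 per face
for face in range(1, 7):
    deviation = 100 * (counts[face] - expected) / expected
    print(f"{face}: {counts[face]:>7}  ({deviation:+.2f}% from expectation)")
```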

Also, it is seldom the case in real-world applications that we know all the possible outcomes of an event. Unintended or unexpected consequences are always possible. There is always some uncertainty in just how uncertain we are about any given fact. And this means that if the probabilities we know add to 1, then we have almost certainly missed something out.

Moreover, in non-idealised situations, the probabilities of events change over time. Of course, probability theory has ways of dealing with this, but they are much more complex than a simple idealised model.

A very important feature of probabilities is that they all have a "measurement problem". That is to say, before I roll my fair six-sided die the probabilities all co-exist simultaneously:

  • P(1) = ⅙ ≈ 0.167
  • P(2) = ⅙ ≈ 0.167
  • P(3) = ⅙ ≈ 0.167
  • P(4) = ⅙ ≈ 0.167
  • P(5) = ⅙ ≈ 0.167
  • P(6) = ⅙ ≈ 0.167
Now I roll the die and the outcome is 4. The probabilities "collapse" so that:

  • P(1) = 0.00
  • P(2) = 0.00
  • P(3) = 0.00
  • P(4) = 1.00
  • P(5) = 0.00
  • P(6) = 0.00

This is true for any system to which probabilities can be assigned to the outcomes of an event. Before an event there are usually several possible outcomes, each with a probability. These probabilities always coexist simultaneously. But the actual event can only have one outcome. So it is always the case that as the event occurs, the pre-event probabilities collapse so that the probability of the actual outcome is 1, while the probability of the other possibilities falls instantaneously to zero.
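The "collapse" of a mundane probability distribution can be demonstrated in a few lines. This toy sketch (entirely my own illustration) just makes the before/after explicit:

```python
import random

# A fair die "before the event": every outcome has probability 1/6.
prior = {face: 1 / 6 for face in range(1, 7)}

# The event happens: exactly one outcome is realised.
outcome = random.choice(list(prior))

# The "collapse": the realised outcome now has probability 1, all others 0.
posterior = {face: 1.0 if face == outcome else 0.0 for face in prior}

print("before the roll:", prior)
print("outcome:        ", outcome)
print("after the roll: ", posterior)
```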

This is precisely analogous to descriptions of the so-called Measurement Problem. The output of the Schrödinger equation is a set of probabilities, which behave in exactly the way I have outlined above. The position of the electron has a probability at every point in space, but the event localises it. Note that the event itself collapses the probabilities, not the observation of the event. The collapse of probabilities is real, but it is entirely independent of "observation".

Even if we were watching the whole time, the light from the event only reaches us after the event occurs and it takes an appreciable amount of time for the brain to register and process the information to turn it into an experience of knowing. The fact is that we experience everything in hindsight. The picture our brain presents to our first person perspective is time-compensated so that it feels as if we are experiencing things in real time. (I have an essay expanding on this theme in the pipeline)

So there is no way, even in theory, that an "observation" could possibly influence the outcome of an event. Observation is not causal with respect to outcomes because "observation" can only occur after the event. This is a good time to review the idea of causality.


Causation and Probability

Arguing to or from causation is tricky since causation is an a priori assumption about sequences of events. However, one of the general rules of relativity is that causation is preserved. If I perceive event A as causing event B, there is no frame of reference in which B would appear to cause A. This is to do with the speed of light being a limit on how fast information can travel. For this reason, some people like to refer to the speed of light as the "speed of causality".

Here I want to explore the causal potential of a probability. An entity might be said to have causal potential if its presence in the sequence of events (reliably) changes the sequence compared to its absence. We would interpret this as the entity causing a specific outcome. Any observer that the light from this event could reach, would interpret the causation in the same way.

So we might ask, for example, "Does the existence of a probability distribution for all possible outcomes alter the outcome we observe?"

Let us go back to the example of the loaded die mentioned above. In the loaded die, the probability of getting a 6 is 0.5, while the probability of all the other numbers is 0.1 each (and 0.5 in total). And the total probability is still 1.0. In real terms this tells us that there will be an outcome, and it will be one of six possibilities, but half the time, the outcome will be 6.

Let's say, in addition, that you and I are betting on the outcome. I know that the die is loaded and you don't. We roll the die and I always bet on six, while you bet on a variety of numbers. And at the end of the trial, I have won the vast majority of the wagers (and you are deeply suspicious).
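To make the scenario concrete, here is a minimal simulation sketch of the wager, assuming the loaded-die probabilities above and assuming (purely for illustration) that you bet on a uniformly random number each roll:

```python
import random

# Loaded die from the example: P(6) = 0.5, P(1..5) = 0.1 each.
faces = [1, 2, 3, 4, 5, 6]
weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

rolls = 1000
my_wins = 0    # I always bet on 6
your_wins = 0  # you bet on a different (random) number each roll

for _ in range(rolls):
    outcome = random.choices(faces, weights=weights, k=1)[0]
    your_bet = random.choice(faces)
    my_wins += (outcome == 6)
    your_wins += (outcome == your_bet)

print(f"my wins:   {my_wins} of {rolls}")    # ≈ 500 on average
print(f"your wins: {your_wins} of {rolls}")  # ≈ 167 on average
```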

Now we can ask, "Did the existence of probabilities per se influence the outcome?" Or perhaps better, "Does the probability alone cause a change in the outcome?"

Clearly if you were expecting a fair game of chance, then the sequence of events (you lost most of the wagers) is unexpected and we intuit that something caused that unexpected sequence.

If a third person were analysing this game as a disinterested observer, where would they assign the causality? To the skewed probabilities? I suppose this is a possible answer, but it doesn't strike me as very plausible that anyone would come up with such an answer (except to be contrarian). My sense is that the disinterested observer would be more inclined to say that the loaded die itself—and in particular the uneven distribution of mass—was what caused the outcome to vary so much from the expected value.

Probability allows us to calculate what is likely to happen. It doesn't tell us what is happening, or what has happened, or what will happen. Moreover, knowing or not knowing the probabilities makes no difference to the outcome.

So we can conclude that the probabilities themselves are not causal. If probabilities diverge from expected values, we don't blame the probabilities, rather we suspect some physical cause (a loaded die). And, I would say, that if the probabilities of known possibilities are changing, then we would also expect that to be the result of some physical process, such as unevenly distributed weight in a die.

My conclusion is this generalisation: Probabilities do not and cannot play a role in causation.

Now, there may be flaws and loopholes in the argument that I cannot see. But I think I have made a good enough case so far to seriously doubt any attempt to reify probability which does not first make a strong case for treating probabilities as real (Ψ‑ontic). I've read many accounts of quantum physics over 40 years of studying science, and I don't recall seeing even a weak argument for this.

At this point, we may also point out that probabilities are abstractions, expressed in abstract numbers. And so we next need to consider the ontology of abstractions.


Abstractions

Without abstractions I'd not be able to articulate this argument. So I'm not a nominalist in the sense that I claim that abstractions don't exist in any way. Rather, I am a nominalist in the sense that I don't think abstractions exist in an objective sense. To paraphrase Descartes, if I am thinking about an idea, then that idea exists for me, while I think about it. The ideas in my mind are not observable from the outside, except by indirect means such as how they affect my posture or tone of voice. And these are measures of how I feel about the idea, rather than the content of the idea.

I sum up my view in an aphorism:

Abstractions are not things. Abstractions are ideas about things.

An important form of abstraction is the category, which is a generalisation about a collection of things. So for example, "blue" is a category into which we can fit such colours as: navy, azure, cobalt, cerulean, indigo, sapphire, turquoise, teal, cyan, ultramarine, and periwinkle (each of which designates a distinct and recognisable colour within the category). Colour categories are quite arbitrary. Pāli and Ancient Greek each have only four colour categories (aka "basic colour terms"). Blue and green are both lumped together in the category "dark". The word in Pāli that is now taken to mean "blue" (nīla) originally meant "dark". English has eleven colour categories: red, orange, yellow, green, blue, purple, pink, brown, black, white, and grey. To be clear, ancient Indians and Greeks had the same sensory apparatus as we do. And with it, the ability to see millions of colours. It's not that they couldn't see blue or even that they had no words that denoted blue. The point is about how they categorised colours. See also my essay Seeing Blue.

In this view, probability is an abstraction because it is an idea about outcomes that haven't yet occurred. Probability can also reflect our ideas about qualities like expectation, propensity, and/or uncertainty.

When we use an abstraction in conversation, we generally agree to act as if it behaves like a real thing. For example, probability may be "high" or "low", reflecting a schema for the way that objects can be arranged vertically in space. The more of something we have, the higher we can pile it up. Thus, metaphorically, HIGH also means "more" and LOW means "less". A "high" probability is more likely than a "low" probability, even though probability is not a thing with a vertical dimension.

This reflects a deeper truth. Language cannot conform to reality, because we have no epistemic privilege with respect to reality. Reality can be inferred to exist; it cannot be directly known. In fact, "reality" is another abstraction: it is an idea about things that are real. Language need only conform to experience, and in particular to the shared aspects of experience. In this (nominalist) view, "reality" and "truth" are useful ideas, for sure, as long as we don't lose sight of the fact that they are ideas rather than things.

The use of abstractions based on schemas that arise from experience allows for sophisticated discussions, but introduces the danger of category errors, specifically:

  • hypostatisation: incorrectly treating abstract ideas as independent of subjectivity; and
  • reification: incorrectly treating abstract ideas as having physical form.

Treating abstract ideas as if they are concrete things is the basis of all abstract thought and metaphor. Treating abstract ideas as concrete things (without the "if" qualification) is simply a mistake.

Abstractions are not causal in the way that concrete objects are. They can influence my behaviour, for example, at least in the sense that belief is a feeling about an idea and thus a motivation for actions. But abstractions cannot change the outcome of rolling a die.

Since probability is expressed in numbers, I just want to touch on the ontology of numbers before concluding.


Numbers

The ontology of numbers is yet another ongoing source of argument amongst academic philosophers. But they are known to avoid consensus on principle, so we have to take everything they say with a grain of salt. Is there a real disagreement, or are they jockeying for position, trolling, or being professionally contrarian?

The question is, do numbers exist in the sense that say, my teacup exists? My answer is similar to what I've stated above, but it's tricky because numbers are clearly not entirely subjective. If I hold up two fingers, external observers see me holding up two fingers. We all agree on the facts of the matter. Thus numbers appear to be somewhat objective.

We may ask, what about a culture with no numbers? We don't find any humans with no counting numbers at all, but some languages do have very few number terms. In my favourite anthropology book, Don't Sleep, There Are Snakes, Daniel Everett notes that the Pirahã people of Brazil count "one, two, many", and prefer to use comparative terms like "more" and "less". So if I hold up three fingers or four fingers they would count both as "many".

However, just because a culture doesn't have a single word for 3 or 4 doesn't mean they don't recognise that 4 is more than 3. As far as I can tell, even the Pirahã would still be capable of recognising that 4 fingers is more than 3 fingers, even though they might not be able to easily make precise distinctions. So they could put 1, 2, 3, 4 of some object in order of "more" or "less" of the object. In other words, it's not that they cannot count higher quantities, it's only that they do not (for reasons unknown).

There is also some evidence that non-human animals can count. Chimps, for example, can assess that 3 bananas is more than 2 bananas. And they can do this with numbers up to 9. So they might struggle to distinguish 14 bananas from 15, but if I offered 9 bananas to one chimp and 7 to the next in line, the chimp that got fewer bananas would know this (and it would probably respond with zero grace since they expect food-sharing to be fair).

We can use numbers in a purely abstract sense, just as we can use language in a purely abstract sense. However, we define numbers in relation to experience. So two is the experience of there being one thing and another thing of the same kind. 1 + 1 = 2. Two apples means an apple and another apple. There is no example of "two" that is not (ultimately) connected to the idea of two of something.

In the final analysis, if we cannot compare apples with oranges, and yet I still recognise that two apples and two oranges are both examples of "two", then the notion of "two" can only be an abstraction.

Like colours, numbers function as categories. A quantity is a member of the category "two", if there is one and another one, but no others. And this can be applied to any kind of experience. I can have two feelings, for example, or two ideas.

A feature of categories that George Lakoff brings out in Women, Fire, and Dangerous Things is that membership of a category is based on resemblance to a prototype. This builds on Wittgenstein's idea of categories as defined by "family resemblance". And prototypes can vary from person to person. Let's say I invoke the category "dog". And the image that pops into my head is a Golden Retriever. I take this as my prototype and define "dog" with reference to this image. And I consider some other animal to also be a "dog" to the extent that it resembles a Golden Retriever. Your prototype might be a schnauzer or a poodle or any other kind of dog, and is based on your experience of dogs. If you watch dogs closely, they also have a category "dog" and they are excellent at identifying other dogs, despite the wild differences in physiognomy caused by "breeding".

Edge cases are interesting. For example, the giant panda looked similar enough to a bear to be called a "panda bear" in the 19th century, but taxonomists long disputed whether it really belonged with the bears (modern genetics says it does, while the red panda does not). Edge cases may also be exploited for rhetorical or comic effect: "That's no moon", "Call that a dog?" or "Pigeons are rats with wings".

That "two" is a category becomes clearer when we consider edge cases such as fractional quantities. In terms of whole numbers, what is 2.01? 2.01 ≈ 2.0 and in terms of whole numbers 2.0 = 2. For some purposes, "approximately two" can be treated as a peripheral member of the category defined by precisely two. So 2.01 is not strictly speaking a member of the category "two", but it is close enough for some purposes (it's an edge case). And 2.99 is perhaps a member of the category "two", but perhaps also a member of the category "three". Certainly when it comes to the price of some commodity, many people put 2.99 in the category two rather than three, which is why prices are so often expressed as "X.99".

Consider also the idea that the average family has 2.4 children. Since "0.4 of a child" is not a possible outcome in the real world, we can only treat this as an abstraction. And consider that a number like i = √−1 cannot physically exist, but is incredibly useful for discussing oscillating systems, since e^iθ = cos θ + i sin θ describes a circle.
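Euler's formula is easy to play with numerically. Here is a tiny sketch (my own illustration, standard library only) showing that e^iθ always lands on the unit circle:

```python
import cmath
import math

# Euler's formula: e^{iθ} = cos θ + i sin θ. Every value has magnitude 1,
# i.e. the points trace out the unit circle as θ varies.
for k in range(4):
    theta = k * math.pi / 2
    z = cmath.exp(1j * theta)
    print(f"θ = {theta:5.3f}: e^iθ = {z.real:+.3f}{z.imag:+.3f}i, |e^iθ| = {abs(z):.3f}")
```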

Numbers are fundamentally not things, they are ideas about things. In this case, an idea about the quantity of things. And probabilities are ideas about expectation, propensity, and/or uncertainty with respect to the results of processes.


Conclusion

It is curious that physicists, as a group, are quick to insist that metaphysical ideas like "reality" and "free will" are not real, while at the same time insisting that their abstract mathematical equations are real. As I've tried to show above, this is not a tenable position.

A characteristic feature of probabilities is that they all coexist prior to an event and then collapse to zero except for the actual outcome of the event, which has a probability of 1.

Probability represents our expectations of outcomes of events, where the possibilities are known but the outcome is uncertain. Probability is an idea, not an object. Moreover, probability is not causal, it cannot affect the outcome of an event. The least likely outcome can always be the one we happen to observe.

We never observe an event as it happens, because the information about the event can only reach us at the speed of causality. And that information has to be converted into nerve impulses that the brain then interprets. All of this takes time. This means that observations, all observations, are after the fact. Physically, observation cannot be a causal factor in any event.

We can imagine a Schrödinger's demon, modelled on Maxwell's demon, equipped with perfect knowledge of the possible outcomes and the precise probability of each, with no unknown unknowns. What could such a demon tell us about the actual state of a system or how it will evolve over time? A Schrödinger's demon could not tell us anything, except the most likely outcome.

Attempts by Ψ-ontologists to assert that the quantum wavefunction Ψ is real, lead to a diverse range of mutually exclusive speculative metaphysics. If Ψ were real, we would expect observations of reality to drive us towards a consensus. But there is a profound dissensus about Ψ. In fact, Ψ cannot be observed directly or indirectly, any more than the probability of rolling a fair six-sided die can be observed. 

What we can observe, tells us that quantum physics is incomplete and that none of the current attempts to reify the wavefunction—the so-called "interpretations"—succeeds. The association of Ψ-ontology with "Scientology" is not simply an amusing pun. It also suggests that Ψ-ontology is something like a religious cult, and as Sheldon Cooper would say, "It's funny because it's true." 

Sean Carroll has no better reason to believe "the wavefunction is real" than a Christian has to believe that Jehovah is real (or than a Buddhist has to believe that karma makes life fair). Belief is a feeling about an idea.

Probability reflects our uncertain expectations with respect to the outcome of some process. But probability per se cannot be considered real, since it cannot be involved in causality and has no independence or physical form.

The wave function of quantum physics is not real because it is an abstract mathematical equation whose outputs are probabilities rather than actualities. Probabilities are abstractions. Abstractions are not things, they are ideas about things. The question is: "Now what?" 

As far as I know, Heisenberg and Schrödinger set out to describe a real phenomenon, not a probability distribution. It is well known that Schrödinger was appalled by Born's probability approach and never accepted it. Einstein also remained sceptical, considering that quantum physics was incomplete. So maybe we need to comb through the original ideas to identify where it went off the rails. My bet is that the problem concerns wave-particle duality, which we can now resolve in favour of waves.

~~Φ~~


Bibliography

Everett, Daniel L. (2009) Don’t Sleep, There Are Snakes: Life and Language in the Amazon Jungle. Pantheon Books (USA) | Profile Books (UK).

Harrigan, Nicholas & Spekkens, Robert W. (2010). "Einstein, Incompleteness, and the Epistemic View of Quantum States." Foundations of Physics 40: 125–157.

Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.

21 February 2025

Classical is Cooler

Many extravagant claims are made for quantum physics, and in comparison classical physics often seems to be dismissed, almost as though it is of little consequence.

Amongst other things, it has long bugged me that Buddhists hijack quantum mechanics and combine it with the worst of Buddhist philosophy—i.e. Madhyamaka—to create a monstrous form of bullshit. I've previously written three essays that try to address the perennial quantum bullshit that thrives amongst Buddhists.

So far, though, I don't seem to have had any appreciable effect on the levels of bullshit.

In this essay, I'm going to make an argument that classical physics is, in fact, much cooler than quantum physics, especially the bullshit quantum physics that doesn't use any mathematics.


Life, the Universe, and Everything

One way of describing the observable universe is to state the scales of mass, length, and energy it covers.

  • The total mass of the observable universe is thought to be of the order of 10⁵³ kg. From the smallest objects (electrons) to the whole universe is about 84 orders of magnitude (powers of ten).
  • The observable universe is about 4 × 10²⁶ metres in diameter; and from the smallest possible length (the Planck length) to the whole is about 61 orders of magnitude.
  • E = mc² gives the total energy of the universe as about 10⁷⁰ joules, which covers about 61 orders of magnitude.
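These spans are easy to sanity-check. A minimal sketch, using the commonly cited approximations above (so the answers are only good to within an order of magnitude or so):

```python
import math

# Rough order-of-magnitude checks on the scales quoted above.
m_universe = 1e53       # kg, estimated mass of the observable universe
m_electron = 9.11e-31   # kg, electron mass
d_universe = 4e26       # m, diameter of the observable universe (figure used above)
l_planck = 1.6e-35      # m, Planck length

# Results depend on which estimates you plug in; expect roughly 83-84 and ~61.
print(f"mass span:   ~{math.log10(m_universe / m_electron):.0f} orders of magnitude")
print(f"length span: ~{math.log10(d_universe / l_planck):.0f} orders of magnitude")
```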

Human beings can perceive roughly 18 orders of magnitude of mass, 12 of length, and 11 of energy, roughly in the middle of each scale. Much of the universe is imperceptible to our naked senses. Human beings evolved and thrived for hundreds of thousands of years without knowing anything beyond what we could see, hear, smell, taste, or touch with our naked senses.

It was the invention of the ground glass lens that alerted us to the existence of both larger scales (telescope) and smaller scales (microscope). And for this reason I count the lens as the most significant invention in the history of science. I know people count Copernicus as the first European scientist, but to my mind he was merely a precursor. Galileo was the first to make systematic observations and thereby discover new things about the universe: e.g. that acceleration due to gravity is constant, that the moon's surface is not smooth but cratered, and that Jupiter has satellites. Note that Galileo did not have evidence or a good case for a "heliocentric universe" (and his ideas about this were wrong in several ways, but that's another story).

400 years later, we have a number of hugely successful theories of how the universe works. We've identified four fundamental forces and two kinds of particle: fermions and bosons. However, no single approach to physics can cover all the many orders of magnitude. All of our explanations are limited in their scope. Newtonian mechanics fails with large masses or high relative velocities. Relativity fails on the nanoscale and especially at the time of the big bang. Quantum physics fails on the macro-scale.

Physicists still hope to find a way of reconciling relativity and quantum physics, which they predict will produce a single mathematical formalism that can describe our universe at any scale. After more than a century of trying, we don't seem to be any closer to this. To be fair, a lot of time, effort, and resources went into pursuing so-called "string theory", which has proven to be a dead end, at least as far as reconciling nano and macro physics.

What I want to do in the rest of this essay is contrast classical physics and quantum physics.


Classical Physics

Classical physics is primarily a description of the world that we perceive. As such, classical physics will always be salient and applicable to our lives. When we need a lever to move something, we use classical physics. When we want to describe the universe on the largest scale, we use classical physics. This means that classical physics is largely intuitive (even if the maths is not).

Classical physics is testable and has been extensively tested. While it was never my favourite subject, I studied physics as a distinct subject for four years up to undergraduate level and in that time I did many experiments. I was able, for example, to observe the applicability of ideas like Newton's laws of motion. 

I have personally observed that m₁v₁ = m₂v₂ (i.e. momentum is conserved). And you can too, if you put your mind to it. Classical physics is highly democratic in the sense that anyone can test its predictions relatively easily.
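A simulation is no substitute for the bench experiment, but even the arithmetic is easy to check. Here is a sketch of a one-dimensional perfectly elastic collision using the standard textbook formulas (the masses and velocities are made-up illustrative values):

```python
# 1-D perfectly elastic collision: momentum before equals momentum after.
m1, m2 = 2.0, 3.0    # kg
u1, u2 = 4.0, -1.0   # m/s, velocities before the collision

# Post-collision velocities for a perfectly elastic 1-D collision.
v1 = ((m1 - m2) * u1 + 2 * m2 * u2) / (m1 + m2)
v2 = ((m2 - m1) * u2 + 2 * m1 * u1) / (m1 + m2)

p_before = m1 * u1 + m2 * u2
p_after = m1 * v1 + m2 * v2
print(f"momentum before: {p_before:.3f} kg·m/s")
print(f"momentum after:  {p_after:.3f} kg·m/s")  # the same, to floating-point precision
```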

Classical physics shows that the universe (on this scale) follows relatively simple patterns of evolution over time that can be written down as mathematical statements. In the 19th century, such expressions were called "laws". By the mid 20th century we called them "theories". Simple examples include:

  • the relationship between pressure (P), volume (V), and temperature (T) of any gas is PV/T = constant.
  • the relationship between voltage (V), current (I), and resistance (R) in a circuit is V=IR.
  • the relationship between force and acceleration of an object with mass is F=ma.

The mathematics of relativity is considerably more complex than these examples, but one gains several digits of accuracy (more numbers after the decimal point) as compensation.

An interesting feature of our experience of the world is that time goes in one direction. This is a consequence of entropy. We can always tell when a film is playing backwards, for example, because the causality is all wrong. Broken cups never spontaneously reform and leap up from the floor to appear unbroken in our hands. Whole cups commonly fall down to the floor and smash. Once again, classical physics is intuitive.

Classical physics has never been made into an analogy by New Age gurus. No one ever compared the Heart Sutra to classical physics. No one ever says classical physics is "weird" or counter-intuitive. The fixed speed of light is a little counter-intuitive but it doesn't lend itself to the kind of Romantic flights of fancy that make religion seem interesting. If anything, religieux are apt to dismiss the whole topic of classical physics as irrelevant to "spirituality". Classical physics seems to resist being co-opted by woo-mungers.

And then there is quantum physics...


Quantum

Mathematically, quantum physics is a profoundly accurate and precise method of predicting probabilities. However, unlike classical physics, no one knows why it works. Literally, no one knows how the mathematics relates to reality. There are lots of ideas, each more counter-intuitive than the last, and each relying on a series of assumptions that are beyond the scope of the mathematical formalism. But each set of assumptions leads to radically different metaphysics! And there is no agreement on which assumptions are valid. And at present there is no way to test these theories. I've seen Sean Carroll argue that Many Worlds does make testable predictions, but as far as I know, they have not been tested.

Einstein was of the opinion that quantum physics was incomplete. Sadly his proposed solution to this seems to have been ruled out. But still, I think the only viable stance is to consider quantum theory as incomplete until such time as we know how it relates to reality.

Which brings us to the first false claim that is commonly asserted by scientists: "the universe is deterministic." This assumes that quantum theory explains how matter behaves. But it doesn't. We don't know how the mathematics relates to reality. So we don't know if the universe is deterministic or not. The claim that the universe is deterministic goes far beyond our present state of knowledge. Most interpretations of quantum physics treat it as probabilistic rather than deterministic. And this undermines all claims that the universe is deterministic.

Another common falsehood is "quantum mechanics is a description of reality". But it should already be apparent that this is simply not true. Physicists do not know how the mathematics of quantum physics relates to reality. All they know is that the mathematics accurately assesses the probabilities of the various states that the system can be in over time. It doesn't tell us what will happen, at best it tells us what can happen.

At the popular level, quantum physics is plagued by vagueness and misleading statements. Scientists talk about "the wavefunction" as an independent thing (hypostatisation) and even as a physical thing (reification), when it is in fact an abstract mathematical function. They talk about "wave-like" behaviour without ever distinguishing this from actual wave behaviour. "Observation", so crucial to some approaches, is vague and more or less impossible to define.

We see statements like "energy is quantised" as though all energy is quantised. But this is not true. If you measure radiation from the sun, for example, it smoothly spans the entire electromagnetic spectrum (the sun glows because it's hot, and that glow is blackbody radiation, which is smooth rather than discrete). Energy is only quantised in atoms. And the solar spectrum is itself proof of this, because the atoms in the sun absorb energy at precise wavelengths, causing the spectrum of sunlight to have darker bands when viewed at a fine enough grain.

The quantisation in atoms is explained by conceiving of an electron in an atom as a standing wave, which means it can only vibrate at frequencies that allow a whole number of wavelengths to fit. The harmonic series on a guitar string is "quantised" in the same sense: the string can vibrate at its fundamental frequency, or at two, three, four, five, six, or seven times that frequency (and so on to infinity), with each higher mode fitting correspondingly more wavelengths into the string length.

The energy levels of electrons in atoms show a similar pattern. But remember that an electron in an atom is three-dimensional, so the modes of vibration are spherical harmonics, which is similar to how we think electron orbitals look in hydrogen.
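As a concrete illustration of what "quantised energy levels" means, here is a minimal sketch of hydrogen's allowed energies from the textbook formula Eₙ = −13.6 eV / n² (my own example, not derived from anything above):

```python
# Quantised energy levels of the hydrogen atom: E_n = -13.6 eV / n^2.
# Only these discrete values are allowed, which is what produces spectral lines.
RYDBERG_EV = 13.6057  # approximate ionisation energy of hydrogen, in eV

for n in range(1, 6):
    energy = -RYDBERG_EV / n ** 2
    print(f"n = {n}: E = {energy:7.3f} eV")
```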

Some of these results are confirmed by the shapes of molecules, which can be determined independently, for example by X-ray crystallography.

People talk about "measuring where the electron is in the atom". But this is almost pure bullshit. No one has ever measured the position of an electron in an atom. It's not possible. Within an atom, an electron is distorted into a spherical standing wave. "Position" is meaningless in this context. As are most other particle-related ideas. And remember, we cannot solve the equations when there are two or more electrons, we can only estimate (though current estimates are still very accurate).

We also see statements like "a system can exist in multiple states simultaneously", usually referred to as superposition (the "position" part is entirely misleading). This phrase is often used in popular explanations of quantum mechanics, but it’s misleading. The wavefunction describes a superposition of probability amplitudes, it does not describe a coexistence of multiple physical states. In fact, the term "state"—as it is usually used—is not applicable here at all, precisely because in normal usage it implies existence. In this context "state" confusingly means every single possible state, each with its own probability.

For example, if an electron has the wavefunction ψ = ψ₁ + ψ₂, it doesn't mean the electron is "in both states ψ₁ and ψ₂ at once." This is because neither ψ₁ nor ψ₂ is a physical state. Each is a probability amplitude. So what superposition means is that, at some time, the electron's state has a probability distribution that reflects the combined amplitudes of ψ₁ and ψ₂. There is and can be no superposition of physical states, nor is there any theoretical possibility of observing such a thing.
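A toy numerical sketch (entirely my own illustration) makes the point: adding two amplitudes and applying the Born rule yields one probability distribution over outcomes, not two coexisting physical states.

```python
from math import sqrt

# Toy two-outcome system. psi1 and psi2 are complex probability amplitudes,
# each concentrated entirely on one outcome (A or B).
psi1 = [1 + 0j, 0 + 0j]  # all amplitude on outcome A
psi2 = [0 + 0j, 1 + 0j]  # all amplitude on outcome B

# The superposition (psi1 + psi2)/sqrt(2) is a single normalised amplitude vector...
psi = [(a + b) / sqrt(2) for a, b in zip(psi1, psi2)]

# ...and squaring its magnitudes (the Born rule) gives one probability distribution.
probabilities = [abs(a) ** 2 for a in psi]
print(probabilities)  # ≈ [0.5, 0.5]
```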

All of those "interpretations" that treat the wavefunction as real simply assert its existence as axiomatic and introduce further a priori assumptions in order to try to make sense of this mess. If we make no assumptions, then there is nothing about the mathematical formalism of quantum mechanics that forces us to think of the wavefunction as a real thing rather than an abstraction. It's a probability distribution. Which is an abstraction.

Which means that the idea that the wave-function can "collapse" is nonsensical. All probability distributions without exception "collapse" at the point of measurement.

If I roll a die, I get one number facing up. It can be any one of the six numbers. And each number is equally likely to be face up after a roll. Before I roll the die, the "wavefunction" of the die describes 6 possible "states" each of which is equally likely. When I roll the die I get one answer. Has anything mysterious happened? I think not. Let's say I roll a 2. I don't have to explain what happened to 1,3,4,5 and 6. Nothing happened to them, because they are not things. They are just unrealised possibilities. I get one result because only one result is physically possible. But before I know which result I have, all the possibilities have a finite probability. There's nothing "weird" or "mysterious" about this unless one first reifies the wavefunction.

Indeed, the whole idea of the "measurement problem" appears to be based on a serious misconception (as far as I can see). The measurement problem is based on the idea that the Schrödinger equation describes a system as existing in multiple physical states. But it doesn't. It describes a probability distribution over possible physical states. A potentiality is not an existing state.

The only time measuring becomes problematic is when we assume that the wavefunction is a thing (reification) or that it reflects existent states rather than potential states. And these moves are simply mistakes.

Ironically, the one thing that Schrödinger's equation is not, as Nick Lucid explains, is a wave equation. The generalised wave equation contains a second-order partial derivative with respect to time (a distorting force is countered by a restoring force, causing acceleration). This is a fascinating observation. I gather that using the imaginary unit i (√−1) in the Schrödinger equation allows for some "wave-like" behaviour, but no one really talks about this in lectures on quantum physics. Nor do they distinguish "wave" from "wave-like". And we still have to insist that the "wave-like" behaviour in question is a wave of probability, not a physical wave.
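For readers who want to see the difference being pointed at, here are the three equations side by side (my own gloss; one spatial dimension, written schematically):

  • wave equation: ∂²ψ/∂t² = c² ∂²ψ/∂x² — second order in time;
  • heat/diffusion equation: ∂u/∂t = α ∂²u/∂x² — first order in time;
  • Schrödinger equation: iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + Vψ — first order in time, like the heat equation, but with the imaginary unit i attached.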

But then Nick Lucid, who typically is quite lucid (despite his "crazy" schtick), also introduces his video by saying "Schrödinger's equation governs the behavior of tiny quantum particles by treating them as wave functions." No equation anywhere "governs" anything. The equation describes the probability of a range of possible states. It's a descriptive law, not a prescriptive law. And as Lucid goes on to say, the equation in question is not a wave equation, it's a heat equation. The one thing that Schrödinger's equation doesn't do is "govern the behavior of tiny quantum particles".

This generalises: physics is a description, not a prescription. Abstract mathematical expressions cannot "govern" concrete entities. And in the case of quantum physics, it doesn't seem to relate to the "behaviour" either, since it only predicts the probability of any given state following from the present state. So it's not even a description of actual behaviour, just a description of potential behaviour at any point in time. With the most precise prediction as to probability, we still don't know what's going to happen next, and the actual outcome could always be the least likely outcome. That's why quantum tunneling is a thing, for example.

Unlike classical physics, which every undergraduate student proves to their own satisfaction, nano-scale physics is impossible to observe directly. It takes massive, complicated, and expensive equipment to get information from that scale. Information goes through many stages of amplification and transformation (from one kind of energy to another) before anything perceptible emerges. And that has to be processed by powerful computers before it makes any sense. And then interpreted by human beings.

That blip on the graph at 125 GeV that the LHC produced as evidence of the Higgs Boson is abstracted to the nth degree from the thing itself.

At no time was a Higgs Boson ever observed, and at no time in the future will one ever be observed. What was observed was a particular kind of decay product, which the logic of the standard model says can only be produced if a Higgs Boson decays in the way that Peter Higgs predicted. Assuming that the standard model is right. Keep in mind that the model didn't predict the energy of the Higgs particle exactly. There was actually a lot of uncertainty. And the two different detectors actually measured slightly different numbers. Moreover, do you see how wide that peak was? That width is experimental error. Maybe the energy of the Higgs is 125 GeV, or maybe it's a little more or a little less.

We cannot ever see the nano-scale. And because of this, we simply cannot imagine the nano-scale.

A 1 gram diamond, for example, contains of the order of 5 × 10²² atoms. How big would that diamond be if each atom of carbon was 1 mm³, roughly the size of a grain of salt? It would be 5 × 10¹³ cubic metres. This is roughly the volume of Mount Everest. So an atom is to a grain of salt, as a grain of salt is to Mt Everest.
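The arithmetic behind that comparison takes only a few lines; here is a sketch (the molar-mass figure is the standard value for carbon):

```python
# Arithmetic behind the diamond/Everest comparison.
AVOGADRO = 6.022e23
MOLAR_MASS_CARBON = 12.01  # g/mol

atoms_in_1g = AVOGADRO / MOLAR_MASS_CARBON  # ≈ 5.0e22 atoms in 1 g of diamond
grain_volume_m3 = (1e-3) ** 3               # 1 mm³ expressed in cubic metres
scaled_volume_m3 = atoms_in_1g * grain_volume_m3

print(f"atoms in 1 g of carbon:   ~{atoms_in_1g:.1e}")
print(f"volume at 1 mm³ per atom: ~{scaled_volume_m3:.1e} m³")  # ≈ 5e13 m³, Everest-scale
```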

Imagination simply fails.


Conclusion

In short, at least at the popular level, quantum physics is a constant source of vague or misleading information. It is plagued by careless use of language and outright false claims by scientists themselves. The philosophy of quantum physics is difficult, but on the whole it fails to adequately distinguish epistemology and metaphysics. This is made worse by kooks and charlatans leveraging the confusion to pull the wool over our eyes. Sometimes, the kooks and the scientists are in a superposition: notably Eugene Wigner's theory about "consciousness" (another abstraction) collapsing the wavefunction. Wigner won a Nobel, but he was also a serious kook. And he has been responsible for a mountain of bullshit as a result.

Most of what is said about quantum physics outside of university lecture halls is bullshit, and quite a bit that is said in them is also bullshit or at least partially digested hay. Everything that is said about Buddhism and quantum physics is mendacious bullshit.

There is no doubt that insights gained from quantum physics are important and valuable, but the whole thing is over-hyped and plagued by nonsense. The actual work is largely about approximating solutions to the insoluble mathematical equations, which at best give us probabilities. It works remarkably well, but no one knows why.

The idea that quantum physics is any kind of "description of reality" is pure bullshit. It's a probability distribution, for a reality that no one understands any better now than when physics genius Richard Feynman said: "No one understands quantum mechanics".

Classical physics on the other hand is seldom vague or misleading. It resists being leveraged by kooks by being precisely and accurately defined. It can readily be tested by more or less anyone. Classical physics is much less prone to bullshit. No one ever bothers to compare Buddhism to classical physics. Which is a good sign.

Classical physics is not only cooler than quantum physics. It is way cooler. 


Coda

If anyone is still unconvinced that quantum theory has no conceivable relationship with Buddhism, then I invite you to watch this video introduction to quantum mechanics from an Oxford University undergraduate physics course. This is a no bullshit course. 



I defy anyone to connect anything said in this video to any aspect of Buddhist doctrine. 
