
16 January 2026

How can a particle be in two places at once? (Superposition, Again)

[Image: an individual atom]

A common question for lay people confronted with counterintuitive popular accounts of quantum physics is:
 
How can a particle be in two places at once?

The idea that a particle can be in two places at once is a common enough interpretation of the idea of quantum superposition, but this is not the only possible interpretation. Some physicists suggest that superposition means that we simply don't know the position, and some say that it means that the "position" is in fact smeared out into a kind of "cloud" (not an objective cloud). However, being in two places at once is an interpretation that lay people routinely encounter, and it has become firmly established in the popular imagination.

Note that while the idea is profoundly counterintuitive, physicists often scoff at intuition. As Neil deGrasse Tyson put it, "The universe is under no obligation to make sense to you." I suppose this is true enough, but it lets scientists off the hook too easily. The universe might be obligation-free, but science is not. I would argue precisely that science is obligated to make sense. For the first 350 years or so, science was all about making sense of empirical data. It was this approach that Werner Heisenberg, Max Born, and Niels Bohr consciously rejected on the way to their anti-realist conclusions.

But here's the thing. Atoms are unambiguously and unequivocally objective (their existence and properties are independent of the observer). We even have images of individual atoms now (above right). Electrons, protons, neutrons, and neutrinos are all objective entities. They exist, they persist, they take part in causal relations, and we can measure their physical properties such as mass, spin, and charge. The spectral absorption/emission lines associated with each atom are also objective.

It was the existence of emission lines, along with the photoelectric effect, that led Planck and Einstein to propose that energy is quantised, and led Bohr to propose the first quantum theory of the atom. And if these lines are objective, then we expect them to have an objective cause. And since they obviously form a harmonic series, we ought to associate the lines with objective standing waves. The mathematics used to describe and predict the lines does describe a standing wave, but for reasons that are still not clear to me, physicists deny that an objective standing wave is involved. The standing wave is merely a mathematical calculation tool. Quantum mechanics is an antirealist scientific theory, which is an oxymoron.

However, we may say that if an entity like the atom in the image above has mass, then that mass has to be somewhere at all times. It may be relatively concentrated or distributed with respect to the centre of mass, but it is always somewhere. Mass is not abstract. Mass is physical and objective. Mass can definitely not be in two places at once. Similarly, electrical charge is a fundamental physical property. It also has to be somewhere. If we deny these objective facts, then all of physics goes down the toilet.

Moreover, if that entity with mass and charge is not at absolute zero, then it has kinetic energy: it is moving. If it is moving, that movement has a speed and a direction (i.e. velocity). At the nanoscale, there is built-in uncertainty regarding knowing both position and velocity at the same time, but we can, for example, know precisely where an electron is when it hits a detector (at the cost of not knowing its speed and direction at that moment).

Quantum theory treats such objective physical entities as abstractions. Bohr convinced his colleagues that we cannot have a realist theory of the subatomic. It's not something anyone can describe because it's beyond our ability to sense. This was long before images of atoms were available. 

The story of how we came to have an anti-realist theory of these objective entities and their objective behaviour would take me too far from my purpose in this essay, but it's something to contemplate. Mara Beller's book Quantum Dialogue goes into this issue in detail. Specifically, she points to the covert influence of logical positivism on the entire Copenhagen group.

The proposition that a particle can be in two places at once is not only wildly counterintuitive, but it also breaks one of Aristotle's principles of reasoning: the principle of noncontradiction. This leaves logic in tatters and reduces knowledge to trivia. Lay people can only be confused by this, but I think that, secretly, many physicists are also confused.

To be clear:

  • No particle has ever been observed to be in different locations at the same time. When we observe particles, they are always in one place and (for example, in a cloud chamber) appear to follow a trajectory. Neither the location nor the trajectory is described by quantum physics.
  • No particle has ever been predicted to be in different locations at the same time. The Schrödinger equation simply cannot give us information about where a particle is.

So the question is, why do scientists like to say that quantum physics means that a particle can be in two places, or in two "states"*, at one time? To answer this, we need to look at the procedures that are employed in quantum mechanics and note a rather strange conclusion.

* One has to be cautious of the word "state" in this context, since it refers only to the mathematical description, not to the physical state of a system. And the distinction is seldom, if ever, noted in popular accounts.

What follows will involve some high school-level maths and physics.


The Schrödinger Equation

Heisenberg and Schrödinger developed their mathematical models to try to explain why the photons emitted by atoms have a specific quantum of energy (the spectral emission lines) rather than an arbitrary energy. Heisenberg used matrices and Schrödinger used differential equations, but the two approaches amount to the same thing. Even when discussing Schrödinger's differential equation, physicists still use matrix jargon like "eigenfunctions" indiscriminately.

The Schrödinger equation can take many forms, which does not help the layperson. However, the exact form doesn't matter for my purposes. What does matter is that they all include a Greek letter psi 𝜓. Here, 𝜓 is not a variable of the type we encounter in classical physics; it is a mathematical function. Physicists call 𝜓 the wavefunction. Let's dig into what this means.


Functions

A function, often denoted by f, is a mathematical rule. In high school mathematics, we all learn about simple algebraic functions of the type:

f(x) = x + 1

This rule says: whatever the current value of x is, take that value and add 1 to it.

So if x = 1 and we apply the rule, then f(x) = 2. If x = 2.5, then f(x) = 3.5. And so on.
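
The rule can be written out directly in, say, Python (a purely illustrative sketch, nothing to do with the quantum formalism yet):

```python
# The rule f(x) = x + 1: take the current value of x and add 1 to it.
def f(x):
    return x + 1

print(f(1))    # → 2
print(f(2.5))  # → 3.5
```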

A function can involve any valid mathematical operation or combinations of them. And there is no theoretical limit on how complex a function can be. I've seen functions that take up whole pages of books.

We often meet this formalism in the context of a Cartesian graph. For example, if the height of a line on a graph is proportional to its length along the x-axis, then we can express this mathematically by saying that y is a function of x. In maths notation:

y = f(x); where f(x) = x + 1.

Or simply: y = x + 1.

This particular function describes a line at +45° that crosses the y-axis at y = 1. Note also that if the height (y) and length (x) are treated as the two orthogonal sides of a right-triangle, then we can begin to use trigonometry to describe how they change in relation to each other. Additionally, we can treat (x,y) as a matrix or as the description of a vector.

In physics, we would physically interpret an expression like y = x + 1 as showing how the value of y depends on the value of x. We also use calculus to show how one variable changes over time with respect to another, but I needn't go into this here.


Wavefunctions and Hilbert Spaces

The wavefunction 𝜓 is a mathematical rule (where 𝜓 is the Greek letter psi, pronounced like "sigh"). If we specify it in terms of location on the x-axis, 𝜓(x) gives us one complex number (a + bi, where i = √-1) for every possible value of x. And unless otherwise specified, x can be any real number, which we write as x ∈ ℝ (read as "x is a member of the set of real numbers"). In practice, we usually specify a limited range of values for x.

All the values of 𝜓(x), taken together, can be considered to define a vector in an abstract notional "space" we call a Hilbert space, after the mathematician David Hilbert. The quantum Hilbert space has as many dimensions as there are values of x, and since x ∈ ℝ, this means it has infinitely many dimensions. While this seems insane at first glance, since a "space" with infinitely many dimensions would be totally unwieldy, in fact, it allows physicists to treat 𝜓(x) as a single mathematical object and do maths with it. It is this property that allows us to talk about operations like adding two wavefunctions (which becomes important below).
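
To make this less abstract: in practical calculations the x-axis is discretised, so 𝜓 becomes literally a finite vector of complex numbers, and "adding two wavefunctions" is just component-wise addition. A sketch in Python with made-up functions and phases (all illustrative assumptions):

```python
import numpy as np

# Discretise the x-axis: in practice we work with a finite grid,
# not with every real number.
x = np.linspace(0.0, np.pi, 100)

# Two illustrative (unnormalised) wavefunctions: each assigns one complex
# number to every value of x. The shapes and phases are arbitrary choices.
psi1 = np.sin(x) * np.exp(1j * 0.0)
psi2 = np.sin(2.0 * x) * np.exp(1j * np.pi / 4)

# Treated as vectors in a (here 100-dimensional) space, wavefunctions can
# be added component by component, which is all "adding wavefunctions" means.
psi = psi1 + psi2

print(psi.shape)              # → (100,)
print(np.iscomplexobj(psi))   # → True
```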

We have to be careful here. In quantum mechanics, 𝜓 does not describe an objective, physical wave in space. Hilbert space is not an objective space. This is all just abstract mathematics. Moreover, there isn't an a priori universal Hilbert space containing every possible 𝜓. Every system produces a distinct abstract space.

That said, Sean Carroll and other proponents of the so-called "Many Worlds" interpretation first take the step of defining the system of interest as "the entire universe" and notionally assign this system a wavefunction 𝜓_universe. However, there is no way to write down an actual mathematical function for such an entity since it would have infinitely many variables. Even if we could write it down, there is no way to compute any results from such a function: it has no practical value. In gaining a realist ontology, we lose all ability to get information without introducing massive simplifications. Formally, you can define a universal 𝜓. But in practice, to get predictions, you always reduce to a local system, which is nothing other than ordinary quantum mechanics without the Many Worlds metaphysical overlay. So in practice, Many Worlds offers no advantage over "shut up and calculate". And since the Many Worlds ontology is extremely bizarre, I fail to see the attraction.

It is axiomatic for the standard textbook approach to quantum mechanics—deriving from the so-called "Copenhagen interpretation"—that there is no objective interpretation of 𝜓. Neutrally, we may say that the maths needn't correspond to anything in the world, it just happens to give the right answers. The maths itself is agnostic; it doesn't require any physical interpretation. Bohr and co positivistically insisted that it's not possible to have a physical interpretation because we cannot know the world on that scale.

As readers likely know, the physics community is deeply divided over (a) the possibility of realist interpretations, i.e. the issue of 𝜓-ontology and (b) which, if any, realist interpretation of 𝜓 is the right one. There is a vast amount of confusion and disagreement amongst physicists themselves over what the maths represents, which does not help the layperson at all. But again, we can skip over this and stay focussed on the goal.


The Schrödinger Equation in Practice

To make use of the Schrödinger equation, a physicist must carefully consider what kind of system they are interested in and define 𝜓 so that it describes that system. Obviously, this selection is crucial for getting accurate results. And this is a point we have to come back to.

When we set out to model an electron in a hydrogen atom, for example, we have to choose an expression for 𝜓 whose outputs correspond to the abstract mathematical "state" of that electron. There's no point in choosing some other expression, because it won't give accurate results. Ideally, there is one and only one expression that perfectly describes the system, but in practice, there may be many others that approximate it.

For the sake of this essay, I will discuss the case in which 𝜓 is a function of location. In one dimension, we can state this as 𝜓(x). When working in three spatial dimensions and one time dimension, for technical reasons, we use spherical spatial coordinates, which are a length and two angles, as well as time: 𝜓(r,θ,φ,t). The three-dimensional maths is challenging, and physicists are not generally required to be able to derive it from first principles. They only need to know how to apply the end results.

Schrödinger himself began by describing an electron trapped in a one-dimensional box, as perhaps the simplest example of a quantum system (this is an example of a spherical cow approximation). This is very often the first actual calculation that students of quantum mechanics perform. How do we choose the correct expression for this system? In practice, this (somewhat ironically) can involve using approximations derived from classical physics, as well as some trial and error.

We know that the electron is a wave and so we expect it to oscillate with something like harmonic motion. In simple harmonic motion, the height of the wave on the y-axis changes as the sine of the position of the particle on the x-axis.

One of the simplest equations that satisfies our requirements, therefore, would be 𝜓(x) = sin x, though we must specify lower and upper limits for x reflecting the scale of the box.
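
For the record, the standard textbook solutions for the particle in a box are sine waves pinned to zero at both walls, with the total probability normalised to 1. A quick numerical sketch (the box width and units here are arbitrary assumptions):

```python
import numpy as np

L = 1.0                          # width of the box (arbitrary units)
n_points = 10001
x = np.linspace(0.0, L, n_points)
dx = x[1] - x[0]

def psi(n, x):
    # Standard particle-in-a-box solution: a sine wave that fits
    # n half-wavelengths between the walls and vanishes at both.
    return np.sqrt(2.0 / L) * np.sin(n * np.pi * x / L)

# The wavefunction is (numerically) zero at both walls...
print(abs(psi(1, 0.0)) < 1e-12, abs(psi(1, L)) < 1e-12)   # → True True

# ...and the total probability |psi|^2 sums to 1 over the box.
total_prob = np.sum(psi(1, x) ** 2) * dx
print(round(total_prob, 4))      # → 1.0
```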

However, it is not enough to specify the wavefunction and solve it as we might do in wave mechanics. Rather, we first need to do another procedure. We apply an operator to the wavefunction.

Just as a function is a rule applied to a number to produce another number, an operator is a rule applied to a function that produces another function. In this method, we identify operators by giving them a "hat".

So, if p is momentum (for historical reasons), then the operator that we apply to the wavefunction so that it gives us information about momentum is p̂, and we can express this application as p̂𝜓. For my purposes, further details on operators (including Dirac notation) don't matter. However, we may say that this is a powerful mathematical approach that allows us to extract information about any measurable property for which an operator can be defined, from just one underlying function. It's actually pretty cool.

There is one more step, which is applying the Born rule. Again, for the purposes of this essay, we don't need to say more about this, except that when we solve p̂𝜓, the result is a vector (a quantity plus a direction). The length of this vector is related to the probability that, when we make a measurement at x, we will find momentum p. Applying the Born rule (taking the square of the amplitude's magnitude) gives us the actual probability.

So the procedure for using the Schrödinger equation has several steps. Using the example of 𝜓(x), and finding the momentum p at some location x, we get something like this:

  • Identify an appropriate mathematical expression for the wavefunction 𝜓(x).
  • Apply the momentum operator p̂ to 𝜓(x).
  • Solve the resulting function (which gives us a vector).
  • Apply the Born Rule to obtain a probability.
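
The shape of these steps can be sketched numerically. Everything below is an illustrative assumption: natural units, a plain sine wave for 𝜓, and a crude finite-difference stand-in for p̂. It is meant only to show the structure of the procedure, not to be a real calculation:

```python
import numpy as np

hbar = 1.0                              # natural units (an assumption)
x = np.linspace(0.0, np.pi, 1000)
dx = x[1] - x[0]

# Step 1: choose an expression for the wavefunction (a plain sine wave here).
psi = np.sin(x).astype(complex)

# Step 2: apply the momentum operator p-hat = -i*hbar*d/dx
# (np.gradient is a crude finite-difference stand-in for the derivative).
p_psi = -1j * hbar * np.gradient(psi, dx)

# Steps 3-4: the Born rule: |psi(x)|^2 gives the relative probability of
# finding the particle near x; normalising makes the probabilities sum to 1.
prob = np.abs(psi) ** 2
prob = prob / prob.sum()

print(np.iscomplexobj(p_psi))   # → True
print(round(prob.sum(), 6))     # → 1.0
```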

So far so good (I hope).

To address the question—How can a particle be in two places at once?—we need to go back to step one.


Superposition is Neither Super nor Related to Position.

It is de rigueur to portray superposition as a description of a physical situation, but this is not what was intended. For example, Dirac's famous quantum mechanics textbook presents superposition as an a priori requirement of the theory, not a consequence of it. Any wavefunction 𝜓 must, by definition, be capable of being written as a combination of two or more other wavefunctions: 𝜓 = 𝜓₁ + 𝜓₂. Dirac simply stated this as an axiom. He offered no proof, no evidence, no argument, and no rationale.

We might do this with a problem where using a single 𝜓 results in overly complicated maths. It is common, for instance, to treat the double-slit experiment as two distinct systems involving slit 1 and slit 2: 𝜓₁ describes a particle going only through slit 1, and 𝜓₂ describes a particle going only through slit 2. The standard defence in this context looks like this:

  • The interference pattern is real.
  • The calculation that predicts it more or less requires 𝜓 = 𝜓₁ + 𝜓₂.
  • Therefore, the physical state of the system before measurement must somehow correspond to 𝜓₁ + 𝜓₂.

But the last step is exactly the kind of logic that quantum mechanics itself has forbidden. We cannot say what the state of the system is prior to measuring it. Ergo, we cannot say where the particle is before we measure it, and we definitely cannot say it's in two places at once.

To be clear, 𝜓 = 𝜓₁ + 𝜓₂ is a purely mathematical exercise that has no physical objective counterpart. According to the formalism, 𝜓 is not an objective wave. So how can 𝜓₁ + 𝜓₂ have any objective meaning? It cannot. Anything said about a particle "being in multiple states at once", or "taking both/many paths", or "being in two places at once" is all just interpretive speculation. We don't know. And the historically dominant paradigm tells us that we cannot know and we should not even ask.

To be clear, the Schrödinger equation does not and cannot tell us what happens during the double-slit experiment. It can only tell us the probable outcome. The fact that the objective effect appears to be caused by interference and the mathematical formalism involves 𝜓₁ + 𝜓₂ is entirely coincidental (according to the dominant paradigm).

Dirac fully embraced the idea that quantum mechanics is purely about calculating probabilities and that it is not any kind of physical description. In this view, a physical description of matter on the sub-atomic scale is not possible, and Dirac's goal did not involve providing any such thing. His goal was only to perfect and canonise the mathematics that Heisenberg and Born had presented as a fait accompli in 1927:

“We regard quantum mechanics as a complete theory for which the fundamental physical and mathematical hypotheses are no longer susceptible of modification.”—Report delivered at the 1927 Solvay Conference.

I noted above that we have to specify some expression for 𝜓 that makes sense for the system of interest. If the expression is for some kind of harmonic motion, then we must specify things like the amplitude, frequency, direction of travel, and phase. Our choices here are not, and cannot be, derived from first principles. Rather, they must be arbitrarily specified by the physicist.

Now, there are infinitely many expressions of the type 𝜓(x) = sin (x). We can specify amplitude, etc., to any arbitrary level of detail.

  • The function 𝜓(x) = 2 sin (x) will have twice the amplitude.
  • The function 𝜓(x) = sin (2x) will have twice the frequency.
  • The function 𝜓(x) = sin (-x) will travel in the opposite direction.

And so on.

A physicist may use general knowledge and a variety of rules of thumb to decide which exact function suits their purposes. As noted, this may involve using approximations derived from classical physics. We need to be clear that nothing in the quantum mechanical formalism can tell us where a particle is at a given time or when it will arrive at a given location. Whoever is doing the calculation has to supply this information.

Obviously, there are very many expressions that could be used. But in the final analysis, we need to decide which expression is ideal, or most nearly so. 

For a function like 𝜓(x) = sin (x), for example, we can add some variables: 𝜓(x) = A sin (kx), where A can be understood as a scaling factor for amplitude, and k as a scaling factor for frequency. Both A and k can be any real number (A ∈ ℝ and k ∈ ℝ).

Even this very simple example clearly has an infinite number of possible variations, since ℝ is an infinite set. There are infinitely many possible functions 𝜓₁, 𝜓₂, 𝜓₃, and so on. Moreover, because of the nature of the mathematics involved, if 𝜓₁ and 𝜓₂ are both valid functions, then 𝜓₁ + 𝜓₂ is also a valid function. It was this property of linear differential equations that Dirac sought to canonise as superposition.
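
The linearity in question is easy to check numerically: a finite-difference second-derivative operator (the mathematical core of a wave equation) treats a sum of functions exactly as the sum of its parts. A sketch, with an arbitrary grid and arbitrary choices of A and k:

```python
import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 2001)
dx = x[1] - x[0]

def second_derivative(f):
    # Finite-difference second derivative: a linear operator, like the
    # derivatives appearing in the Schrödinger equation.
    return np.gradient(np.gradient(f, dx), dx)

psi1 = 1.0 * np.sin(1.0 * x)   # A = 1, k = 1
psi2 = 2.0 * np.sin(3.0 * x)   # A = 2, k = 3

# Linearity: the operator applied to a sum equals the sum of the operator
# applied to each part. This is the property behind psi1 + psi2 being valid.
lhs = second_derivative(psi1 + psi2)
rhs = second_derivative(psi1) + second_derivative(psi2)

print(np.allclose(lhs, rhs))   # → True
```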

To my mind, there is an epistemic problem in that we have to identify the ideal expression from amongst the infinite possibilities. And having chosen one expression, we then perform a calculation, and it outputs probabilities for measurable quantities.

The 𝜓-ontologists try to turn this into a metaphysical problem. Sean Carroll likes to say "the wavefunction is real". 𝜓-ontologists then make the move that causes all the problems, i.e. they speculatively assert that the system is in all of these states until we specify (or measure) one. And thus "superposition" goes from being a mathematical abstraction to being an objective phenomenon, and it's only one more step to saying things like "a particle can be in two places at once".

I hope I've shown that such statements are incoherent at face value. But I hope I've also made clear that such claims are incoherent in terms of quantum theory itself, since the Schrödinger equation can never under any circumstances tell us where a particle is, only the probability of finding it in some volume of space that we have to specify in advance. 


Conclusion

The idea that a particle can be in two places at once is clearly nonsense even by the criteria of the quantum mechanics formalism itself. The whole point of denying the relevance of realism was to avoid making definite statements about what is physically happening on a scale that we can neither see nor imagine (according to the logical positivists).

So coming up with a definite, objective interpretation—like particles that are in two places at once—flies in the face of the whole enterprise of quantum mechanics. The fact that the conclusion is bizarre is incidental since it is incoherent to begin with.

The problem is that while particles are objective, our theory is entirely abstract. Particles have mass. Mass is not an abstraction; mass has to be somewhere. So we need an objective theory to describe this. Quantum mechanics is simply not that theory. And nor is quantum field theory.

I'm told that mathematically, Dirac's canonisation of superposition was a necessary move. And to be fair, the calculations do work as advertised. One can accurately and precisely calculate probabilities with this method. But no one has any idea what this means in physical terms; no one knows why it works or what causes the phenomena it is supposed to describe. When Richard Feynman said "No one understands quantum mechanics", this is what he meant. And nothing has changed since he said it.

It would help if scientists themselves could stop saying stupid things like "particles can be in two places at once". No, particles cannot be in two places at once, and nothing about quantum mechanics makes this true. There is simply no way for quantum mathematics, as we currently understand it, to tell us anything at all about where a particle is. The location of interest is something that the physicist doing the calculation has to supply for the Schrödinger equation, not something the equation can tell us (unlike in classical mechanics).

And if the equation cannot tell us the location of the particle, under any circumstances, then it certainly cannot tell us that it is in two places or many places. Simple logic alone tells us this much.

The Schrödinger equation can only provide us with probabilities. While there are a number of possible mathematical "states" the particle can be in, we do not know which one it is in until we measure it.

If we take Dirac and co at face value, then stating any pre-measurement physical fact is simply a contradiction in terms. Pretending that this is not problematic is itself a major problem. Had we been making steady progress towards some kind of resolution, it might be less ridiculous. But the fact is that a century has passed since quantum mechanics was proposed and physicists still have no idea how or why it works but still accept that "the fundamental physical and mathematical hypotheses are no longer susceptible of modification."

The universe might indeed be under no obligation to make sense. But the fact is that science is obligated to make sense. That used to be the whole point of science, and it still is in every branch of science other than quantum mechanics. No one says of evolutionary theory, for example, that it is all a mysterious black box that we cannot possibly understand. And no one would accept this as an answer. Indeed, a famous cartoon by Sydney Harris gently mocks this attitude...


The many metaphysical speculations that are termed "interpretations of quantum mechanics" all take the mathematical formalism that explicitly divorces quantum mechanics from realism as canonical and inviolable. And then they all fail miserably to say anything at all about reality. And this is where we are.

It is disappointing, to say the least.

~~Φ~~

02 May 2025

Ψ-ontology and the Nature of Probability

“The wave function is real—not just a theoretical thing in abstract mathematical space.”
—Sean Carroll, Something Deeply Hidden.

Harrigan & Spekkens (2010) introduced the distinction between theories that take the quantum wave function to be real (Ψ‑ontic) and those that take it only to provide us with knowledge (Ψ‑epistemic). One needs to know that the quantum wavefunction is notated as Ψ (Greek capital Psi), which is pronounced like "sigh". So Sean Carroll's oft-stated view—"the wave function is real"—is a Ψ‑ontic approach.

Harrigan & Spekkens seem not to have foreseen the consequences of this designation, since a Ψ-ontic theory is now necessarily a Ψ-ontology, and one who proposes such a theory is a Ψ-ontologist. Sean Carroll is a great example of a Ψ-ontologist. These terms are now scattered through the philosophy of science literature.

Still, Carroll's insistence that fundamentally "there are only waves" is part of what sparked the questions I've been exploring lately. The problem, as I see it, is that the output of the wave function is a "probability amplitude", or, over all possible solutions, a probability distribution. What I would have expected in any Ψ-ontology is that the Ψ-ontologist would explain, as a matter of urgency, how a probability distribution, which is fundamentally abstract and epistemic, can be reified at all. In a previous essay, I noted that this didn't seem possible to me. In this essay, I pursue this line of reasoning.


Science and Metaphysics

I got interested in science roughly 50 years ago. What interested me about science as a boy was the possibility of explaining my world. At that time, my world was frequently violent, often chaotic, and always confusing. I discovered that I could understand maths and science with ease, and they became a refuge. In retrospect, what fascinated me was not the maths, but the experimentation and the philosophy that related mathematical explanations to the world and vice versa. It was the physically based understanding that I craved.

As an adult, I finally came to see that no one has epistemic privilege when it comes to metaphysics. This means that no one has certain knowledge of "reality" or the "nature of reality". Not religieux and not scientists. Anyone claiming to have such knowledge should be subjected to the most intense scrutiny and highest levels of scepticism.

While many physicists believe that we cannot understand the nanoscale world, those few physicists and philosophers who still try to explain the reality underlying quantum physics have made numerous attempts to reify the wavefunction. Such attempts are referred to as "interpretations of quantum mechanics". And the result is a series of speculative metaphysics. If the concept of reality means anything, we ought to see valid theories converging on the same answer, with what separates them being the extra assumptions that each theory makes. After a century of being examined by elite geniuses, we not only don't have a consensus about quantum reality but each new theory takes us in completely unexpected directions.

At the heart of the difficulties, in my view, is the problem of reifying probabilities. The scientific literature on this topic is strangely sparse, given that all the metaphysics of quantum physics relies on reifying the wave function, and several other branches of physics rely on statistics (statistical mechanics, thermodynamics, etc.).

So let us now turn to the concept of probability and try to say something concrete about the nature of it.


Probability

Consider a fair six-sided die. If I roll the die, it will land with a number facing up. We can call that number the outcome of the roll. The die is designed so that the outcome of a roll ought to be a random selection from the set of all possible outcomes, i.e. {1, 2, 3, 4, 5, 6}. By design, the outcomes are all equally likely (this is what "fair" means in this context). So the probability of getting any single outcome is ⅙, or 0.16666...

By convention we write probabilities such that the sum of all probabilities adds up to one. The figure ⅙ means ⅙th of the total probability. This also means that a probability of 1 or 0 reflects two types of certainty:

  1. A probability of 1 tells us that an outcome is inevitable (even if it has not happened yet). The fact that a rolled die must land with one face pointing upwards is reflected in the fact that the probabilities of the six possible outcomes add to 1.
  2. A probability of 0 tells us that an outcome cannot happen. The probability of rolling a 7 is 0. 

We can test this theory by rolling a die many times and recording the outcomes. Most of us did precisely this in high school at some point. As the number of rolls grows, any real distribution of outcomes will tend towards the ideal distribution.

In the case of a six-sided fair die, we can work out the probabilities in advance based on the configuration of the system, because the system is idealised. Similarly, if I have a fair four-sided die, then I can infer that the probability of each possible outcome {1, 2, 3, 4} is ¼. And I can use this idealisation as leverage on the real world.

For example, one can test a die to determine if it is indeed fair, by rolling it many times and comparing the actual distribution with the expected distribution. Let us say that we roll a six-sided die 100 times and for the possible states {1, 2, 3, 4, 5, 6} we count 10, 10, 10, 10, 10, and 50 occurrences.

We can use statistical analysis to determine the probability of getting such an aberration by chance. In this case, we would expect this result once in ~134 quadrillion trials of 100 throws. From this we may infer that the die is unfair. However, we are still talking probabilities. It's still possible that we did get that 1 in 134 quadrillion fluke. As Littlewood's law says:

A person can expect to experience events with odds of one in a million at the rate of about one per month.

In the end, the only completely reliable way to tell if a die is fair is by physical examination. Probabilities don't give us the kind of leverage we'd like over such problems. Statistical flukes happen all the time.
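
For what it's worth, the standard way to quantify an aberration like the 10, 10, 10, 10, 10, 50 counts above is Pearson's chi-squared statistic; the "one in N trials" figure then depends on which test one applies to that statistic. A minimal sketch:

```python
# Observed counts from the 100 hypothetical throws, versus the fair expectation.
observed = [10, 10, 10, 10, 10, 50]
expected = 100 / 6   # a fair die should give each face about 16.67 times

# Pearson's chi-squared statistic: sum of (observed - expected)^2 / expected.
chi2 = sum((o - expected) ** 2 / expected for o in observed)
print(round(chi2, 1))   # → 80.0
```

A statistic of 80 with five degrees of freedom corresponds to a vanishingly small p-value, i.e. an extremely strong hint (but never a proof) that the die is loaded.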

These idealised situations are all very well, and they help us to understand how probability works. However, in practice, we get anomalies. For example, I recorded the results of 20 throws of a die. I expected to get about 3.33 of each outcome and got:

  1. 2
  2. 3
  3. 5
  4. 1
  5. 6
  6. 2

Is my die fair? Actually, 20 throws is not enough to be able to tell; it's not a statistically significant number of throws. So I got ChatGPT to simulate 1 million throws, and it came back with the distribution below. I expected to see about 166,667 of each outcome.

  1. 166741
  2. 167104
  3. 166479
  4. 166335
  5. 166524
  6. 166817

At a million throws we see the numbers converge on the expectation value (166,666). However, the outcomes of this trial vary from the ideal by ± ~1.3%. And we cannot know in advance how much a given trial will differ from the ideal. My next trial could be wildly different.
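Anyone can reproduce this kind of trial. Here is a minimal sketch using Python's random module, with a fixed seed so the run is repeatable (the exact counts will differ from the ChatGPT trial quoted above):

```python
import random

# Simulate one million throws of a fair six-sided die.
# The seed is fixed so the run is repeatable.
random.seed(42)
counts = {face: 0 for face in range(1, 7)}
for _ in range(1_000_000):
    counts[random.randint(1, 6)] += 1

expected = 1_000_000 / 6  # expectation value: ~166,666.7 per face
for face, count in sorted(counts.items()):
    deviation = 100 * (count - expected) / expected
    print(f"{face}: {count:,} ({deviation:+.2f}%)")
```

Each run converges on the expectation value but deviates from it by a small, unpredictable amount, which is the point made above.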

Also it is seldom the case in real-world applications that we know all the possible outcomes of an event. Unintended or unexpected consequences are always possible. There is always some uncertainty in just how uncertain we are about any given fact. And this means that if the probabilities we know add to 1, then we have almost certainly missed something out.

Moreover, in non-idealised situations, the probabilities of events change over time. Of course, probability theory has ways of dealing with this, but they are much more complex than a simple idealised model.

A very important feature of probabilities is that they all have a "measurement problem". That is to say, before I roll my fair six-sided die, the probabilities all co-exist simultaneously:

  • P(1) = 0.167
  • P(2) = 0.167
  • P(3) = 0.167
  • P(4) = 0.167
  • P(5) = 0.167
  • P(6) = 0.167
Now I roll the die and the outcome is 4. Now the probabilities "collapse" so that:

  • P(1) = 0.00
  • P(2) = 0.00
  • P(3) = 0.00
  • P(4) = 1.00
  • P(5) = 0.00
  • P(6) = 0.00

This is true for any system to which probabilities can be assigned to the outcomes of an event. Before an event there are usually several possible outcomes, each with a probability. These probabilities always coexist simultaneously. But the actual event can only have one outcome. So it is always the case that as the event occurs, the pre-event probabilities collapse so that the probability of the actual outcome is 1, while the probability of the other possibilities falls instantaneously to zero.
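The pre-event/post-event picture described above can be written out as a trivial sketch:

```python
import random

# Before the roll: six coexisting probabilities, each 1/6.
probabilities = {face: 1 / 6 for face in range(1, 7)}
assert abs(sum(probabilities.values()) - 1.0) < 1e-12

# The event occurs and has exactly one outcome...
outcome = random.randint(1, 6)

# ...so the distribution "collapses": the actual outcome now has
# probability 1, and every other possibility drops to 0.
probabilities = {face: 1.0 if face == outcome else 0.0
                 for face in probabilities}
```

Note that the "collapse" here is just bookkeeping about possibilities; nothing physical happens to the five unrealised outcomes.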

This is precisely analogous to descriptions of the so-called Measurement Problem. The output of the Schrödinger equation is a set of probabilities, which behave in exactly the way I have outlined above. The position of the electron has a probability at every point in space, but the event localises it. Note that the event itself collapses the probabilities, not the observation of the event. The collapse of probabilities is real, but it is entirely independent of "observation".

Even if we were watching the whole time, the light from the event only reaches us after the event occurs and it takes an appreciable amount of time for the brain to register and process the information to turn it into an experience of knowing. The fact is that we experience everything in hindsight. The picture our brain presents to our first person perspective is time-compensated so that it feels as if we are experiencing things in real time. (I have an essay expanding on this theme in the pipeline)

So there is no way, even in theory, that an "observation" could possibly influence the outcome of an event. Observation is not causal with respect to outcomes because "observation" can only occur after the event. This is a good time to review the idea of causality.


Causation and Probability

Arguing to or from causation is tricky since causation is an a priori assumption about sequences of events. However, one of the general rules of relativity is that causation is preserved. If I perceive event A as causing event B, there is no frame of reference in which B would appear to cause A. This is to do with the speed of light being a limit on how fast information can travel. For this reason, some people like to refer to the speed of light as the "speed of causality".

Here I want to explore the causal potential of a probability. An entity might be said to have causal potential if its presence in the sequence of events (reliably) changes the sequence compared to its absence. We would interpret this as the entity causing a specific outcome. Any observer that the light from this event could reach, would interpret the causation in the same way.

So we might ask, for example, "Does the existence of a probability distribution for all possible outcomes alter the outcome we observe?"

Let us go back to the example of the loaded die mentioned above. In the loaded die, the probability of getting a 6 is 0.5, while the probability of all the other numbers is 0.1 each (and 0.5 in total). And the total probability is still 1.0. In real terms this tells us that there will be an outcome, and it will be one of six possibilities, but half the time, the outcome will be 6.

Let's say, in addition, that you and I are betting on the outcome. I know that the die is loaded and you don't. We roll the die and I always bet on six, while you bet on a variety of numbers. And at the end of the trial, I have won the vast majority of the wagers (and you are deeply suspicious).
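A sketch of this wager, assuming the loaded-die probabilities given above (P(6) = 0.5, each other face 0.1) and assuming, for illustration, that you bet on a random face each time:

```python
import random

# Sketch of the loaded-die wager: P(6) = 0.5, all other faces 0.1.
random.seed(1)  # fixed seed for repeatability
faces = [1, 2, 3, 4, 5, 6]
weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]

n_games = 10_000
rolls = random.choices(faces, weights=weights, k=n_games)

# I always bet on six (knowing the die is loaded);
# you (not knowing) are assumed to bet on a random face each time.
my_wins = sum(1 for r in rolls if r == 6)
your_bets = [random.choice(faces) for _ in range(n_games)]
your_wins = sum(1 for r, b in zip(rolls, your_bets) if r == b)

print(my_wins, your_wins)  # my_wins ~ 5,000; your_wins ~ 1,700
```

Knowing the probabilities changes my betting behaviour, but it does not change what the die does, which is the point at issue.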

Now we can ask, "Did the existence of probabilities per se influence the outcome?" Or perhaps better, "Does the probability alone cause a change in the outcome?"

Clearly if you were expecting a fair game of chance, then the sequence of events (you lost most of the wagers) is unexpected and we intuit that something caused that unexpected sequence.

If a third person was analysing this game as a disinterested observer, where would they assign the causality? To the skewed probabilities? I suppose this is a possible answer, but it doesn't strike me as very plausible that anyone would come up with such an answer (except to be contrarian). My sense is that the disinterested observer would be more inclined to say that the loaded die itself—and in particular the uneven distribution of mass—was what caused the outcome to vary so much from the expected value.

Probability allows us to calculate what is likely to happen. It doesn't tell us what is happening, or what has happened, or what will happen. Moreover, knowing or not knowing the probabilities makes no difference to the outcome.

So we can conclude that the probabilities themselves are not causal. If probabilities diverge from expected values, we don't blame the probabilities, rather we suspect some physical cause (a loaded die). And, I would say, that if the probabilities of known possibilities are changing, then we would also expect that to be the result of some physical process, such as unevenly distributed weight in a die.

My conclusion is this generalisation: Probabilities do not and cannot play a role in causation.

Now, there may be flaws and loopholes in the argument that I cannot see. But I think I have made a good enough case so far to seriously doubt any attempt to reify probability which does not first make a strong case for treating probabilities as real (Ψ‑ontic). I've read many accounts of quantum physics over 40 years of studying science, and I don't recall seeing even a weak argument for this.

At this point, we may also point out that probabilities are abstractions, expressed in abstract numbers. And so we next need to consider the ontology of abstractions.


Abstractions

Without abstractions I'd not be able to articulate this argument. So I'm not a nominalist in the sense that I claim that abstractions don't exist in any way. Rather, I am a nominalist in the sense that I don't think abstractions exist in an objective sense. To paraphrase Descartes, if I am thinking about an idea, then that idea exists for me, while I think about it. The ideas in my mind are not observable from the outside, except by indirect means such as how they affect my posture or tone of voice. And these are measures of how I feel about the idea, rather than the content of the idea.

I sum up my view in an aphorism:

Abstractions are not things. Abstractions are ideas about things.

An important form of abstraction is the category, which is a generalisation about a collection of things. So for example, "blue" is a category into which we can fit such colours as: navy, azure, cobalt, cerulean, indigo, sapphire, turquoise, teal, cyan, ultramarine, and periwinkle (each of which designates a distinct and recognisable colour within the category). Colour categories are quite arbitrary. Pāli and Ancient Greek each have only four colour categories (aka "basic colour terms"). Blue and green are lumped together in the category "dark". The word in Pāli that is now taken to mean "blue" (nīla) originally meant "dark". English has eleven colour categories: red, orange, yellow, green, blue, purple, pink, brown, black, white, and grey. To be clear, ancient Indians and Greeks had the same sensory apparatus as we do, and with it, the ability to see millions of colours. It's not that they couldn't see blue or even that they had no words that denoted blue. The point is about how they categorised colours. See also my essay Seeing Blue.

In this view, probability is an abstraction because it is an idea about outcomes that haven't yet occurred. Probability can also reflect our ideas about qualities like expectation, propensity, and/or uncertainty.

When we use an abstraction in conversation, we generally agree to act as if it behaves like a real thing. For example, probability may be "high" or "low", reflecting a schema for the way that objects can be arranged vertically in space. The more of something we have, the higher we can pile it up. Thus, metaphorically, HIGH also means "more" and LOW means "less". A "high" probability is more likely than a "low" probability, even though probability is not a thing with a vertical dimension.

This reflects a deeper truth. Language cannot conform to reality, because we have no epistemic privilege with respect to reality. Reality can be inferred to exist; it cannot be directly known. In fact, "reality" is another abstraction: an idea about things that are real. Language need only conform to experience, and in particular to the shared aspects of experience. In this (nominalist) view, "reality" and "truth" are useful ideas, for sure, as long as we don't lose sight of the fact that they are ideas rather than things.

The use of abstractions based on schemas that arise from experience allows for sophisticated discussions, but introduces the danger of category errors, specifically:

  • hypostatisation: incorrectly treating abstract ideas as independent of subjectivity; and
  • reification: incorrectly treating abstract ideas as having physical form.

Treating abstract ideas as if they are concrete things is the basis of all abstract thought and metaphor. Treating abstract ideas as concrete things (without the "if" qualification) is simply a mistake.

Abstractions are not causal in the way that concrete objects are. They can influence my behaviour, for example, at least in the sense that belief is a feeling about an idea and thus a motivation for actions. But abstractions cannot change the outcome of rolling a die.

Since probability is expressed in numbers, I just want to touch on the ontology of numbers before concluding.


Numbers

The ontology of numbers is yet another ongoing source of argument amongst academic philosophers. But they are known to avoid consensus on principle, so we have to take everything they say with a grain of salt. Is there a real disagreement, or are they jockeying for position, trolling, or being professionally contrarian?

The question is, do numbers exist in the sense that say, my teacup exists? My answer is similar to what I've stated above, but it's tricky because numbers are clearly not entirely subjective. If I hold up two fingers, external observers see me holding up two fingers. We all agree on the facts of the matter. Thus numbers appear to be somewhat objective.

We may ask, what about a culture with no numbers? We don't find any humans with no counting numbers at all, but some people do have very few terms. In my favourite anthropology book, Don't Sleep There are Snakes, Daniel Everett notes that the Pirahã people of Brazil count: "one, two, many"; and prefer to use comparative terms like "more" and "less". So if I hold up three fingers or four fingers they would count both as "many".

However, just because a culture doesn't have a single word for 3 or 4 doesn't mean they don't recognise that 4 is more than 3. As far as I can tell, even the Pirahã would still be capable of recognising that 4 fingers is more than 3 fingers, even though they might not be able to easily make precise distinctions. So they could put 1, 2, 3, 4 of some object in order of "more" or "less" of the object. In other words, it's not that they cannot count higher quantities, it's only that they do not (for reasons unknown).

There is also some evidence that non-human animals can count. Chimps, for example, can assess that 3 bananas is more than 2 bananas. And they can do this with numbers up to 9. So they might struggle to distinguish 14 bananas from 15, but if I offered 9 bananas to one chimp and 7 to the next in line, the chimp that got fewer bananas would know this (and it would probably respond with zero grace since they expect food-sharing to be fair).

We can use numbers in a purely abstract sense, just as we can use language in a purely abstract sense. However, we define numbers in relation to experience. So two is the experience of there being one thing and another thing (the same). 1 + 1 = 2. Two apples means an apple and another apple. There is no example of "two" that is not (ultimately) connected to the idea of two of something.

In the final analysis, if we cannot compare apples with oranges, and yet I still recognise that two apples and two oranges are both examples of "two", then the notion of "two" can only be an abstraction.

Like colours, numbers function as categories. A quantity is a member of the category "two", if there is one and another one, but no others. And this can be applied to any kind of experience. I can have two feelings, for example, or two ideas.

A feature of categories that George Lakoff brings out in Women, Fire, and Dangerous Things is that membership of a category is based on resemblance to a prototype. This builds on Wittgenstein's idea of categories as defined by "family resemblance". And prototypes can vary from person to person. Let's say I invoke the category "dog". And the image that pops into my head is a Golden Retriever. I take this as my prototype and define "dog" with reference to this image. And I consider some other animal to also be a "dog" to the extent that it resembles a Golden Retriever. Your prototype might be a schnauzer or a poodle or any other kind of dog, and is based on your experience of dogs. If you watch dogs closely, they also have a category "dog" and they are excellent at identifying other dogs, despite the wild differences in physiognomy caused by "breeding".

Edge cases are interesting. For example, in modern taxonomies the red panda is clearly not a bear, but it is bear-like enough that its classification was disputed for most of two centuries. Edge cases may also be exploited for rhetorical or comic effect: "That's no moon", "Call that a dog?" or "Pigeons are rats with wings".

That "two" is a category becomes clearer when we consider edge cases such as fractional quantities. In terms of whole numbers, what is 2.01? 2.01 ≈ 2.0 and in terms of whole numbers 2.0 = 2. For some purposes, "approximately two" can be treated as a peripheral member of the category defined by precisely two. So 2.01 is not strictly speaking a member of the category "two", but it is close enough for some purposes (it's an edge case). And 2.99 is perhaps a member of the category "two", but perhaps also a member of the category "three". Certainly when it comes to the price of some commodity, many people put 2.99 in the category two rather than three, which is why prices are so often expressed as "X.99".

Consider also the idea that the average family has 2.4 children. Since "0.4 of a child" is not a possible outcome in the real world, we can only treat this as an abstraction. And consider that a number like i = √−1 cannot physically exist, but is incredibly useful for discussing oscillating systems, since e^(iθ) = cos θ + i sin θ describes a circle.
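The circle described by e^(iθ) is easy to verify numerically; a minimal sketch:

```python
import cmath
import math

# Euler's formula: e^(i*theta) = cos(theta) + i*sin(theta).
# Every point e^(i*theta) has magnitude 1, so as theta varies the
# values trace out the unit circle, which is why i is so useful
# for describing oscillating systems.
for theta in [0.0, math.pi / 4, math.pi / 2, math.pi, 2.5]:
    z = cmath.exp(1j * theta)
    assert abs(z.real - math.cos(theta)) < 1e-12  # real part is cos
    assert abs(z.imag - math.sin(theta)) < 1e-12  # imaginary part is sin
    assert abs(abs(z) - 1.0) < 1e-12              # always on the unit circle
```

The number i never shows up in a measurement; it is a bookkeeping device for rotation and oscillation, which fits the argument that such numbers are abstractions.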

Numbers are fundamentally not things, they are ideas about things. In this case, an idea about the quantity of things. And probabilities are ideas about expectation, propensity, and/or uncertainty with respect to the results of processes.


Conclusion

It is curious that physicists, as a group, are quick to insist that metaphysical ideas like "reality" and "free will" are not real, while at the same time insisting that their abstract mathematical equations are real. As I've tried to show above, this is not a tenable position.

A characteristic feature of probabilities is that they all coexist prior to an event and then collapse to zero except for the actual outcome of the event, which has a probability of 1.

Probability represents our expectations of outcomes of events, where the possibilities are known but the outcome is uncertain. Probability is an idea, not an object. Moreover, probability is not causal; it cannot affect the outcome of an event. The least likely outcome can always turn out to be the one we observe.

We never observe an event as it happens, because the information about the event can only reach us at the speed of causality. And that information has to be converted into nerve impulses that the brain then interprets. All of this takes time. This means that observations, all observations, are after the fact. Physically, observation cannot be a causal factor in any event.

We can imagine a Schrödinger's demon, modelled on Maxwell's demon, equipped with perfect knowledge of the possible outcomes and the precise probability of each, with no unknown unknowns. What could such a demon tell us about the actual state of a system or how it will evolve over time? A Schrödinger's demon could not tell us anything, except the most likely outcome.

Attempts by Ψ-ontologists to assert that the quantum wavefunction Ψ is real, lead to a diverse range of mutually exclusive speculative metaphysics. If Ψ were real, we would expect observations of reality to drive us towards a consensus. But there is a profound dissensus about Ψ. In fact, Ψ cannot be observed directly or indirectly, any more than the probability of rolling a fair six-sided die can be observed. 

What we can observe, tells us that quantum physics is incomplete and that none of the current attempts to reify the wavefunction—the so-called "interpretations"—succeeds. The association of Ψ-ontology with "Scientology" is not simply an amusing pun. It also suggests that Ψ-ontology is something like a religious cult, and as Sheldon Cooper would say, "It's funny because it's true." 

Sean Carroll has no better reason to believe "the wavefunction is real" than a Christian has to believe that Jehovah is real (or than a Buddhist has to believe that karma makes life fair). Belief is the feeling about an idea.

Probability reflects our uncertain expectations with respect to outcome of some process. But probability per se cannot be considered real, since it cannot be involved in causality and has no independence or physical form.

The wave function of quantum physics is not real because it is an abstract mathematical equation whose outputs are probabilities rather than actualities. Probabilities are abstractions. Abstractions are not things, they are ideas about things. The question is: "Now what?" 

As far as I know, Heisenberg and Schrödinger set out to describe a real phenomenon, not a probability distribution. It is well known that Schrödinger was appalled by Born's probability approach and never accepted it. Einstein also remained sceptical, considering that quantum physics was incomplete. So maybe we need to comb through the original ideas to identify where it went off the rails. My bet is that the problem concerns wave-particle duality, which we can now resolve in favour of waves.

~~Φ~~


Bibliography

Everett, Daniel L. (2009) Don’t Sleep, There Are Snakes: Life and Language in the Amazon Jungle. Pantheon Books (USA) | Profile Books (UK).

Harrigan, Nicholas & Spekkens, Robert W. (2010). "Einstein, Incompleteness, and the Epistemic View of Quantum States." Foundations of Physics 40: 125–157.

Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.

21 February 2025

Classical is Cooler

Many extravagant claims are made for quantum physics, and in comparison classical physics often seems to be dismissed, almost as though it is of little consequence.

Amongst other things, it has long bugged me that Buddhists hijack quantum mechanics and combine it with the worst of Buddhist philosophy—i.e. Madhyamaka—to create a monstrous form of bullshit. I've previously written three essays that try to address the perennial quantum bullshit that thrives amongst Buddhists.

However, I don't seem to have had any appreciable effect on the levels of bullshit.

In this essay, I'm going to make an argument that classical physics is, in fact, much cooler than quantum physics, especially the bullshit quantum physics that doesn't use any mathematics.


Life, the Universe, and Everything.

One way of describing the observable universe is to state the scales of mass, length, and energy it covers.

  • The total mass of the observable universe is thought to be on the order of 10⁵³ kg. From the smallest objects (electrons) to the whole universe is about 84 orders of magnitude (powers of ten).
  • The observable universe is about 4 × 10²⁶ metres in diameter; and from the smallest possible length (the Planck length) to the whole is about 61 orders of magnitude.
  • E = mc² gives the total energy of the universe as about 10⁷⁰ joules, covering about 61 orders of magnitude.
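These order-of-magnitude figures are easy to sanity-check; a sketch using round-number estimates (the mass values are assumptions for illustration, not measurements):

```python
import math

# Sanity check on the mass scale quoted above, using round-number
# estimates (assumptions, not measurements).
mass_universe = 1e53      # kg, observable universe, order of magnitude
mass_electron = 9.11e-31  # kg

orders = math.log10(mass_universe / mass_electron)
print(f"~{orders:.0f} orders of magnitude")  # ~83, the ballpark quoted above
```

The answer depends on which round numbers you start from; the point is only that the span is vast.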

Human beings can perceive roughly 18 orders of magnitude of mass, 12 of length, and 11 of energy, roughly in the middle of each scale. Much of the universe is imperceptible to our naked senses. Human beings evolved and thrived for hundreds of thousands of years without knowing anything beyond what we could see, hear, smell, taste, or touch with our naked senses.

It was the invention of the ground glass lens that alerted us to the existence of both larger scales (telescope) and smaller scales (microscope). And for this reason I count the lens the most significant invention in the history of science. I know people count Copernicus as the first European scientist, but to my mind he was merely a precursor. Galileo was the first to make systematic observations and thereby discover new things about the universe, e.g. acceleration due to gravity is a constant, the moon's surface is not smooth but cratered, and that Jupiter has satellites. Note that Galileo did not have evidence or a good case for a "heliocentric universe" (and his ideas about this were wrong in several ways, but that's another story).

400 years later, we have a number of hugely successful theories of how the universe works. We've identified four fundamental forces and two kinds of particle: fermions and bosons. However, no single approach to physics can cover all the many orders of magnitude. All of our explanations are limited in their scope. Newtonian mechanics fails with large masses or high relative velocities. Relativity fails on the nanoscale and especially at the time of the big bang. Quantum physics fails on the macro-scale.

Physicists still hope to find a way of reconciling relativity and quantum physics, which they predict will produce a single mathematical formalism that can describe our universe at any scale. After more than a century of trying, we don't seem to be any closer to this. To be fair, a lot of time, effort, and resources went into pursuing so-called "string theory", which has proven to be a dead end, at least as far as reconciling nano and macro physics.

What I want to do in the rest of this essay is contrast classical physics and quantum physics.


Classical Physics

Classical physics is primarily a description of the world that we perceive. As such, classical physics will always be salient and applicable to our lives. When we need a lever to move something, we use classical physics. When we want to describe the universe on the largest scale, we use classical physics. This means that classical physics is largely intuitive (even if the maths is not).

Classical physics is testable and has been extensively tested. While it was never my favourite subject, I studied physics as a distinct subject for four years up to undergraduate level and in that time I did many experiments. I was able, for example, to observe the applicability of ideas like Newton's laws of motion. 

I have personally observed that momentum is conserved: the total of m·v before a collision equals the total of m·v after it. And you can too, if you put your mind to it. Classical physics is highly democratic in the sense that anyone can test its predictions relatively easily.

Classical physics shows that the universe (on this scale) follows relatively simple patterns of evolution over time that can be written down as mathematical statements. In the 19th century, such expressions were called "laws". By the mid 20th century we called them "theories". Simple examples include:

  • the relationship between pressure (P), volume (V), and temperature (T) of any gas is PV/T = constant.
  • the relationship between voltage (V), current (I), and resistance (R) in a circuit is V=IR.
  • the relationship between force and acceleration of an object with mass is F=ma.
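These relationships are simple enough to express directly in code; a sketch with illustrative values:

```python
# The classical relationships listed above as simple functions.

def gas_constant(P, V, T):
    """Combined gas law: PV/T is constant for a fixed amount of gas."""
    return P * V / T

def voltage(I, R):
    """Ohm's law: V = IR."""
    return I * R

def force(m, a):
    """Newton's second law: F = ma."""
    return m * a

# A 2 A current through a 6 ohm resistor drops 12 V:
assert voltage(2, 6) == 12
# A 3 kg mass accelerating at 2 m/s^2 needs a force of 6 N:
assert force(3, 2) == 6
# Halving the volume of a gas at constant temperature doubles its pressure:
assert gas_constant(200, 1, 300) == gas_constant(100, 2, 300)
```

Anyone with a multimeter, a spring balance, or a bicycle pump can check these against the world, which is the democratic quality claimed above.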

The mathematics of relativity is considerably more complex than these examples, but one gains several degrees of accuracy (≈ numbers after the decimal point) as compensation.

An interesting feature of our experience of the world is that time goes in one direction. This is a consequence of entropy. We can always tell when a film is playing backwards, for example, because the causality is all wrong. Broken cups never spontaneously reform and leap up from the floor to appear unbroken in our hands. Whole cups commonly fall to the floor and smash. Once again, classical physics is intuitive.

Classical physics has never been made into an analogy by New Age gurus. No one ever compared the Heart Sutra to classical physics. No one ever says classical physics is "weird" or counter-intuitive. The fixed speed of light is a little counter-intuitive but it doesn't lend itself to the kind of Romantic flights of fancy that make religion seem interesting. If anything, religieux are apt to dismiss the whole topic of classical physics as irrelevant to "spirituality". Classical physics seems to resist being co-opted by woo-mungers.

And then there is quantum physics...


Quantum

Mathematically, quantum physics is a profoundly accurate and precise method of predicting probabilities. However, unlike classical physics, no one knows why it works. Literally, no one knows how the mathematics relates to reality. There are lots of ideas, each more counter-intuitive than the last, and each relies on a series of assumptions that are beyond the scope of the mathematical formalism. But each set of assumptions leads to radically different metaphysics! And there is no agreement on which assumptions are valid. And at present there is no way to test these theories. I've seen Sean Carroll argue that Many Worlds does make testable predictions, but as far as I know, they have not been tested.

Einstein was of the opinion that quantum physics was incomplete. Sadly his proposed solution to this seems to have been ruled out. But still, I think the only viable stance is to consider quantum theory as incomplete until such time as we know how it relates to reality.

Which brings us to the first false claim that is commonly asserted by scientists: "the universe is deterministic." This assumes that quantum theory explains how matter behaves. But it doesn't. We don't know how the mathematics relates to reality. So we don't know if the universe is deterministic or not. The claim that the universe is deterministic goes far beyond our present state of knowledge. Most interpretations of quantum physics treat it as probabilistic rather than deterministic. And this undermines all claims that the universe is deterministic.

Another common falsehood is "quantum mechanics is a description of reality". But it should already be apparent that this is simply not true. Physicists do not know how the mathematics of quantum physics relates to reality. All they know is that the mathematics accurately assesses the probabilities of the various states that the system can be in over time. It doesn't tell us what will happen, at best it tells us what can happen.

At the popular level, quantum physics is plagued by vagueness and misleading statements. Scientists talk about "the wavefunction" as an independent thing (hypostatisation) and even as a physical thing (reification), when it is in fact an abstract mathematical function. They talk about "wave-like" behaviour without ever distinguishing this from actual wave behaviour. "Observation", so crucial to some approaches, is vague and more or less impossible to define.

We see statements like "energy is quantised" as though all energy is quantised. But this is not true. If you measure radiation from the sun, for example, it smoothly spans the entire electromagnetic spectrum (the sun glows because it's hot, and that glow is blackbody radiation, which is smooth rather than discrete). Energy is only quantised in atoms. And the solar spectrum is itself proof of this, because the atoms in the sun absorb energy at precise wavelengths, causing the spectrum of sunlight to have darker bands when viewed at a fine enough grain.
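The smooth blackbody curve has a well-defined peak given by Wien's displacement law; a sketch using the standard figure for the temperature of the sun's photosphere:

```python
# Wien's displacement law: a blackbody's smooth spectrum peaks at
# lambda_max = b / T. The sun's photosphere is about 5778 K.
b = 2.898e-3   # Wien's displacement constant, metre-kelvins
T_sun = 5778   # kelvins

lambda_max = b / T_sun
print(f"peak wavelength = {lambda_max * 1e9:.0f} nm")  # 502 nm
```

The curve peaks in the visible range but falls away smoothly on either side; nothing about the continuum is quantised.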

The quantisation in atoms is explained in terms of an electron in an atom being conceived of as a standing wave, which means it can only vibrate at frequencies that allow for a whole number of wavelengths. For example, the harmonic series on a guitar string is also "quantised". In the fundamental mode, one wavelength fits the string length, but the string can also vibrate at twice the fundamental frequency so that 2 wavelengths fit the string length, then 3, 4, 5, 6, and 7 wavelengths (out to infinity).
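The "quantisation" of a string is easy to write down; a sketch assuming a hypothetical string with a 110 Hz fundamental:

```python
# Allowed modes of a vibrating string are integer multiples of the
# fundamental frequency. Assume a hypothetical string tuned to 110 Hz.
fundamental = 110  # Hz

harmonics = [n * fundamental for n in range(1, 8)]
print(harmonics)  # [110, 220, 330, 440, 550, 660, 770]
```

Intermediate frequencies are simply not allowed modes, which is all "quantised" means here; nothing mysterious is implied.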

The energy levels for electrons in atoms show a similar pattern. But remember that an electron in an atom is three-dimensional, so the relevant standing waves are spherical harmonics, which resemble the calculated shapes of the electron orbitals in hydrogen.

Some of these results are confirmed by the shapes of molecules, which can be determined independently, for example by X-ray crystallography.

People talk about "measuring where the electron is in the atom". But this is almost pure bullshit. No one has ever measured the position of an electron in an atom. It's not possible. Within an atom, an electron is distorted into a spherical standing wave. "Position" is meaningless in this context. As are most other particle-related ideas. And remember, we cannot solve the equations exactly when there are two or more electrons; we can only approximate (though current approximations are still very accurate).

We also see statements like "a system can exist in multiple states simultaneously", usually referred to as superposition (the "position" part is entirely misleading). This phrase is often used in popular explanations of quantum mechanics, but it’s misleading. The wavefunction describes a superposition of probability amplitudes, it does not describe a coexistence of multiple physical states. In fact, the term "state"—as it is usually used—is not applicable here at all, precisely because in normal usage it implies existence. In this context "state" confusingly means every single possible state, each with its own probability.

For example, if an electron has the wavefunction ψ = ψ1 + ψ2, it doesn't mean the electron is "in both states ψ1 and ψ2 at once." This is because neither ψ1 nor ψ2 is a physical state. Each is a probability amplitude. So what superposition means is that, at some time, the electron's state has a probability distribution that reflects the combined amplitudes of ψ1 and ψ2. There is and can be no superposition of physical states, nor is there any theoretical possibility of observing such a thing.
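This can be sketched with a toy two-state example. The amplitudes are arbitrary, chosen only for illustration; the probabilities come from the Born rule (the squared magnitude of each amplitude):

```python
import math

# Toy two-state superposition. The amplitudes are arbitrary complex
# numbers chosen for illustration (an assumption, not physics data).
a1 = complex(1, 1) / 2   # amplitude associated with psi_1
a2 = complex(1, -1) / 2  # amplitude associated with psi_2

# Born rule: the probability of each outcome is the squared
# magnitude of its amplitude.
p1 = abs(a1) ** 2  # ~0.5
p2 = abs(a2) ** 2  # ~0.5

# The probabilities form a distribution over possible outcomes;
# nothing here is a coexistence of two physical states.
assert math.isclose(p1 + p2, 1.0)
```

Adding amplitudes and then squaring is just how the probability distribution is computed; it does not conjure up two simultaneously existing electrons.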

All of those "interpretations" that treat the wavefunction as real simply assert its existence as axiomatic and introduce further a priori assumptions in order to try to make sense of this mess. If we make no assumptions, then there is nothing about the mathematical formalism of quantum mechanics that forces us to think of the wavefunction as a real thing rather than an abstraction. It's a probability distribution. Which is an abstraction.

Which means that the idea that the wavefunction can "collapse" is nonsensical. All probability distributions, without exception, "collapse" at the point of measurement.

If I roll a die, I get one number facing up. It can be any one of the six numbers. And each number is equally likely to be face up after a roll. Before I roll the die, the "wavefunction" of the die describes 6 possible "states", each of which is equally likely. When I roll the die I get one answer. Has anything mysterious happened? I think not. Let's say I roll a 2. I don't have to explain what happened to 1, 3, 4, 5, and 6. Nothing happened to them, because they are not things. They are just unrealised possibilities. I get one result because only one result is physically possible. But before I know which result I have, all the possibilities have a nonzero probability. There's nothing "weird" or "mysterious" about this unless one first reifies the wavefunction.
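The die analogy can be written out directly. This is a trivial sketch, not a claim about quantum formalism: the pre-roll "wavefunction" is just a uniform distribution, and the roll picks one outcome:

```python
import random

# Pre-roll "wavefunction": a probability distribution over six outcomes.
distribution = {face: 1 / 6 for face in range(1, 7)}

# Sanity check: the probabilities exhaust the possibilities.
assert abs(sum(distribution.values()) - 1.0) < 1e-12

# The "measurement": one actual outcome is realised.
result = random.randint(1, 6)

# The other five faces require no explanation -- nothing happened to them.
# They were never things, only unrealised possibilities.
print(f"rolled {result}")
```

The distribution "collapses" in exactly the sense any probability distribution does once the outcome is known, which is the author's point: no physical mechanism is needed.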

Indeed, the whole idea of the "measurement problem" appears to be based on a serious misconception (as far as I can see). The measurement problem is based on the idea that the Schrödinger equation describes a system as existing in multiple physical states. But it doesn't. It describes a probability distribution of possible physical states. A potentiality is not an existing state.

The only time measuring becomes problematic is when we assume that the wavefunction is a thing (reification) or that it reflects existent states rather than potential states. And these moves are simply mistakes.

Ironically, the one thing that Schrödinger's equation is not, as Nick Lucid explains, is a wave equation. The generalised wave equation contains a second-order partial derivative with respect to time (a distorting force is countered by a restoring force, causing acceleration). This is a fascinating observation. I gather that the imaginary unit i (√−1) in the Schrödinger equation allows for some "wave-like" behaviour, but no one really talks about this in lectures on quantum physics. Nor do they distinguish "wave" from "wave-like". And we still have to insist that the "wave-like" behaviour in question is a wave of probability, not a physical wave.
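The contrast can be made explicit by putting the standard textbook forms side by side (in one spatial dimension, for simplicity):

```latex
% Wave equation: SECOND order in time (a restoring force causes acceleration)
\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}

% Heat (diffusion) equation: FIRST order in time
\frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}

% Time-dependent Schrödinger equation: first order in time, like the heat
% equation, but with an imaginary coefficient
i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi}{\partial x^2} + V\psi
```

Structurally, Schrödinger's equation sits with the heat equation, not the wave equation, and it is the factor of i that turns exponential decay into oscillation, hence "wave-like" rather than "wave".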

But then Nick Lucid, who typically is quite lucid (despite his "crazy" schtick), also introduces his video by saying "Schrödinger's equation governs the behavior of tiny quantum particles by treating them as wave functions." No equation anywhere "governs" anything. The equation describes the probability of a range of possible states. It's a descriptive law, not a prescriptive law. And as Lucid goes on to say, the equation in question is not a wave equation; it's a heat equation. The one thing that Schrödinger's equation doesn't do is "govern the behavior of tiny quantum particles".

This generalises: physics is a description, not a prescription. Abstract mathematical expressions cannot "govern" concrete entities. And in the case of quantum physics, it doesn't seem to relate to the "behaviour" either, since it only predicts the probability of any given state following from the present state. So it's not even a description of actual behaviour, just a description of potential behaviour at any point in time. With the most precise prediction as to probability, we still don't know what's going to happen next, and the actual outcome could always be the least likely outcome. That's why quantum tunneling is a thing, for example.

Unlike classical physics, which every undergraduate student proves to their own satisfaction, nano-scale physics is impossible to observe directly. It takes massive, complicated, and expensive equipment to get information from that scale. Information goes through many stages of amplification and transformation (from one kind of energy to another) before anything perceptible emerges. And that has to be processed by powerful computers before it makes any sense. And then interpreted by human beings.

That blip on the graph at 125 GeV that the LHC produced as evidence of the Higgs Boson is abstracted to the nth degree from the thing itself.

At no time was a Higgs Boson ever observed, and at no time in the future will one ever be observed. What was observed was a particular kind of decay product, which the logic of the standard model says can only be produced if a Higgs Boson decays in the way that Peter Higgs predicted. Assuming that the standard model is right. Keep in mind that the model didn't predict the energy of the Higgs particle exactly. There was actually a lot of uncertainty. And the two different detectors actually measured slightly different numbers. Moreover, do you see how wide that peak was? That width is experimental error. Maybe the energy of the Higgs is 125 GeV, or maybe it's a little more or a little less.

We cannot ever see the nano-scale. And because of this, we simply cannot imagine the nano-scale.

A 1 gram diamond, for example, contains on the order of 5 × 10²² atoms. How big would that diamond be if each atom of carbon were 1 mm³, roughly the size of a grain of salt? It would be 5 × 10¹³ cubic metres. This is roughly the volume of Mount Everest. So an atom is to a grain of salt as a grain of salt is to Mt Everest.
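The arithmetic behind the analogy checks out on the back of an envelope. A quick sketch, assuming pure carbon-12 and treating "grain of salt" as exactly 1 mm³:

```python
# Atoms in 1 g of carbon: (1 g / 12 g per mol) * Avogadro's number.
AVOGADRO = 6.022e23
atoms = AVOGADRO * (1.0 / 12.0)       # ~5e22 atoms

# Scale each atom up to 1 mm^3 and convert to cubic metres.
volume_mm3 = atoms * 1.0              # one cubic millimetre per atom
volume_m3 = volume_mm3 * 1e-9         # 1 mm^3 = 1e-9 m^3 -> ~5e13 m^3

print(f"atoms in 1 g of carbon: {atoms:.2e}")
print(f"scaled-up volume: {volume_m3:.2e} m^3")
```

The result, about 5 × 10¹³ m³, is indeed the order of magnitude usually quoted for the volume of Mount Everest.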

Imagination simply fails.


Conclusion

In short, at least at the popular level, quantum physics is a constant source of vague or misleading information. It is plagued by careless use of language and outright false claims by scientists themselves. The philosophy of quantum physics is difficult, but on the whole it fails to adequately distinguish epistemology and metaphysics. This is made worse by kooks and charlatans leveraging the confusion to pull the wool over our eyes. Sometimes, the kooks and the scientists are in a superposition: notably Eugene Wigner's theory about "consciousness" (another abstraction) collapsing the wavefunction. Wigner won a Nobel, but he was also a serious kook. And he has been responsible for a mountain of bullshit as a result.

Most of what is said about quantum physics outside of university lecture halls is bullshit, and quite a bit that is said in them is also bullshit or at least partially digested hay. Everything that is said about Buddhism and quantum physics is mendacious bullshit.

There is no doubt that insights gained from quantum physics are important and valuable, but the whole thing is over-hyped and plagued by nonsense. The actual work is largely about approximating solutions to the insoluble mathematical equations, which at best give us probabilities. It works remarkably well, but no one knows why.

The idea that quantum physics is any kind of "description of reality" is pure bullshit. It's a probability distribution, for a reality that no one understands any better now than when physics genius Richard Feynman said: "No one understands quantum mechanics".

Classical physics on the other hand is seldom vague or misleading. It resists being leveraged by kooks by being precisely and accurately defined. It can readily be tested by more or less anyone. Classical physics is much less prone to bullshit. No one ever bothers to compare Buddhism to classical physics. Which is a good sign.

Classical physics is not only cooler than quantum physics. It is way cooler. 


Coda

If anyone is still unconvinced that quantum theory has no conceivable relationship with Buddhism, then I invite you to watch this video introduction to quantum mechanics from an Oxford University undergraduate physics course. This is a no bullshit course. 



I defy anyone to connect anything said in this video to any aspect of Buddhist doctrine. 
