
30 May 2025

Theory is Approximation

A farmer wants to increase milk production. They ask a physicist for advice. The physicist visits the farm, takes a lot of notes, draws some diagrams, then says, "OK, I need to do some calculations."

A week later, the physicist comes back and says, "I've solved the problem and I can tell you how to increase milk production".

"Great", says the farmer, "How?".

"First", says the physicist, "assume a spherical cow in a vacuum..."

What is Science?

Science is many things to many people. At times, scientists (or, at least, science enthusiasts) seem to claim that they alone know the truth of reality. Some seem to assume that "laws of science" are equivalent to laws of nature. Some go as far as stating that nature is governed by such "laws". 

Some believe that only scientific facts are true and that no metaphysics are possible. While this view is less common now, it was of major importance in the formulation of quantum theory, which still has problems admitting that reality exists. As Mara Beller (1996) notes:

Strong realistic and positivistic strands are present in the writings of the founders of the quantum revolution – Bohr, Heisenberg, Pauli and Born. Militant positivistic declarations are frequently followed by fervent denial of adherence to positivism (183).

On the other hand, some see science as theory-laden and sociologically determined. Science is just one knowledge system amongst many of equal value. 

However, most of us understand that scientific theories are descriptive and idealised. And this is the starting point for me. 

In practising science, I had ample opportunity to witness hundreds or even thousands of objective (or observer-independent) facts about the world. The great virtue of the scientific experiment is that you get the same result, within an inherent margin of error associated with measurement, no matter who does the experiment or how many times they do it. The simplest explanation of this phenomenon is that the objective world exists and that such facts are consistent with reality. Thus, I take knowledge of such facts to constitute knowledge about reality. The usual label for this view is metaphysical realism.

However, I don't take this to be the end of the story. Realism has a major problem, identified by David Hume in the 1700s. The problem is that we cannot know reality directly; we can only know it through experience. Immanuel Kant's solution to this has been enormously influential. He argues that while reality exists, we cannot know it. In Kant's view, those qualities and quantities we take to be metaphysical—e.g. space, time, causality, etc.—actually come from our own minds. They are ideas that we impose on experience to make sense of it. This view is known as transcendental idealism. One can see how denying the possibility of metaphysics (positivism) might be seen as (one possible) extension of this view. 

It's important not to confuse this view with the idea that only mind is real. This is the basic idea of metaphysical idealism. Kant believed that there is a real world, but we can never know it. In my terms, there is no epistemic privilege.

Where Kant falls down is that he lacks any obvious mechanism to account for shared experiences and intersubjectivity (the common understanding that emerges from shared experiences). We do have shared experiences. Any scenario in which large numbers of people do coordinated movements can illustrate what I mean. For example, 10,000 spectators at a tennis match turning their heads in unison to watch a ball be batted back and forth. If the ball is not objective, or observer-independent, how do the observers manage to coordinate their movements? While Kant himself argues against solipsism, his philosophy doesn't seem to consider the possibility of comparing notes on experience, which places severe limits on his idea. I've written about this in Buddhism & The Limits of Transcendental Idealism (1 April 2016).

In a pragmatic view, then, science is not about finding absolute truths or transcendental laws. Science is about idealising problems in such a way as to make a useful approximation of reality. And constantly improving such approximations. Scientists use these approximations to suggest causal explanations for phenomena. And finally, we apply the understanding gained to our lives in the form of beliefs, practices, and technologies. 


What is an explanation?

In the 18th and 19th centuries, scientists confidently referred to their approximations as "laws". At the time, a mechanistic universe and transcendental laws seemed plausible. They were also gathering the low-hanging fruit, those processes which are most obviously consistent and amenable to mathematical treatment. By the 20th century, as mechanistic thinking waned, new approximations were referred to as "theories" (though legacy use of "law" continued). And more recently, under the influence of computers, the term "model" has become more prevalent.

A scientific theory provides an explanation for some aspect of reality, which allows us to understand (and thus predict) how what we observe will change over time. However, even the notion of explanation requires some unpacking.

In my essay, Does Buddhism Provide Good Explanations? (3 February 2023), I noted Faye's (2007) typology of explanation:

  • Formal-Logical Mode of Explanation: A explains B if B can be inferred from A using deduction.
  • Ontological Mode of Explanation: A explains B if A is the cause of B.
  • Pragmatic Mode of Explanation: A good explanation is an utterance that addresses a particular question, asked by a particular person whose rational needs (especially for understanding) must be satisfied by the answer.
In this essay, I'm striving towards the pragmatic mode and trying to answer my own questions. 

Much earlier (18 Feb 2011), I outlined an argument by Thomas Lawson and Robert McCauley (1990) which distinguished explanation from interpretation.

  • Explanationist: Knowledge is the discovery of causal laws, and interpretive efforts simply get in the way.
  • Interpretationist: Inquiry about human life and thought occurs in irreducible frameworks of values and subjectivity. 
"When people seek better interpretations they attempt to employ the categories they have in better ways. By contrast, when people seek better explanations they go beyond the rearrangement of categories; they generate new theories which will, if successful, replace or even eliminate the conceptual scheme with which they presently operate." (Lawson & McCauley 1990: 29)

The two camps are often hostile to each other, though some intermediate positions exist between them. As I noted, Lawson and McCauley see this as somewhat performative:

Interpretation presupposes a body of explanation (of facts and laws), and seeks to (re)organise empirical knowledge. Explanation always contains an element of interpretation, but successful explanations winnow and increase knowledge. The two processes are not mutually exclusive, but interrelated, and both are necessary.

This is especially true for physics where explanations often take the form of mathematical equations that don't make sense without commentary/interpretation.  


Scientific explanation

Science mainly operates, or aims to operate, in the ontological/causal mode of explanation: A explains B if (and only if) A is the cause of B. However, it still has to satisfy the conditions for being a good pragmatic explanation:  "a good explanation is an utterance that addresses a particular question, asked by a particular person whose rational needs (especially for understanding) must be satisfied by the answer."

As noted in my opening anecdote, scientific models are based on idealisation, in which an intractably complex problem is idealised until it becomes tractable. For example, in kinematic problems, we often assume that the centre of mass of an object is where all the mass is. It turns out that when we treat objects as point masses in kinematics problems, the computations are much simpler and the results are sufficiently accurate and precise for most purposes. 
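As a sketch of how much work the point-mass idealisation does, here is the whole of a projectile calculation in Python once we treat a thrown ball as a point mass. The launch numbers are illustrative, and air resistance, spin, and the ball's extent are all idealised away.

import math

v0 = 20.0        # launch speed (m/s), illustrative
angle = 40.0     # launch angle (degrees), illustrative
g = 9.81         # gravitational acceleration (m/s^2)

vx = v0 * math.cos(math.radians(angle))
vy = v0 * math.sin(math.radians(angle))

flight_time = 2 * vy / g              # time to return to launch height
horizontal_range = vx * flight_time   # how far the "point mass" travels

print(f"flight time ≈ {flight_time:.2f} s, range ≈ {horizontal_range:.1f} m")

Two lines of kinematics replace an intractable model of a real, spinning, air-resisted ball, and the answer is accurate enough for most purposes.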

Another commonly used idealisation is the assumption that the universe is homogeneous and isotropic at large scales. In other words, as we peer out into the farthest depths of space, we assume that matter and energy are evenly distributed. As I will show in a forthcoming essay, this assumption seems to be both theoretically and empirically false. And it seems that so-called "dark energy" is merely an artefact of this simplifying assumption.

Many theories have fallen because they employed a simplifying assumption that distorted their answers in unsatisfying ways.

A "spherical cow in a vacuum" sounds funny, but a good approximation can simplify a problem just enough to make it tractable and still provide sufficient accuracy and precision for our purposes. It's not that we should never idealise a scenario or make simplifying assumptions. The fact is that we always do this. All physical theories involve starting assumptions. Rather, the argument is pragmatic. The extent to which we idealise problems is determined by the ability of the model to explain phenomena to the level of accuracy and precision that our questions require. 

For example, if our question is, "How do we get a satellite into orbit around the moon?" we have a classic "three-body" problem (with four bodies: Earth, moon, sun, and satellite). Such problems are mathematically very difficult to solve. So we have to idealise and simplify the problem. For example, we can decide to ignore the gravitational attraction caused by the satellite, which is real but tiny. We can assume that space is relatively flat throughout. We can note that relativistic effects are also real but tiny. We don't have to slavishly use the most complex explanation for everything. Given our starting assumptions, we can just use Newton's law of gravitation to calculate orbits. 
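A minimal sketch of the kind of calculation this licenses, in Python, assuming a circular orbit 100 km above the lunar surface and ignoring everything except the Moon's gravity:

import math

G = 6.674e-11        # gravitational constant (m^3 kg^-1 s^-2)
M_moon = 7.342e22    # mass of the Moon (kg)
r = 1.737e6 + 100e3  # Moon's radius plus 100 km orbital altitude (m)

v = math.sqrt(G * M_moon / r)   # circular orbital speed from Newton's law
T = 2 * math.pi * r / v         # orbital period

print(f"orbital speed ≈ {v:.0f} m/s, period ≈ {T / 60:.0f} minutes")

With the satellite's mass, the Earth, the Sun, and relativity all idealised away, the intractable four-body problem reduces to one square root, and the answer (roughly a two-hour orbit) is good enough to plan a mission around.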

We got to relativity precisely because someone asked a question that Newtonian approaches could not explain, i.e. why does the orbit of Mercury precess at the rate it does? In the Newtonian approximation, perturbations from the other planets account for most of the observed precession, but a residue of about 43 arcseconds per century remains unexplained. In Einstein's reformulation of gravity as the geometry of spacetime, that extra precession is expected and can be calculated.


Models

I was in a physical chemistry class in 1986 when I realised that what I had been learning through school and university was a series of increasingly sophisticated models, and the latest model (quantum physics) was still a model. At no point did we get to reality. There did seem to me to be a reality beyond the models, but it seemed to be forever out of reach. I had next to no knowledge of philosophy at that point, so I struggled to articulate this thought, and I found it dispiriting. In writing this essay, I am completing a circle that I began as a naive 20-year-old student.

This intuition about science crystallised into the idea that no one has epistemic privilege. By this I mean that no one—gurus and scientists included—has privileged access to reality. Reality is inaccessible to everyone. No one knows the nature of reality or the extent of it. 

We all accumulate data via the same array of physical senses. That data feeds virtual models of world and self created by the brain. Those models then feed information to our first-person perspective, with the brain presenting images to our mind's eye. This means that what we "see" is at least two steps removed from reality. This limit applies to everyone, all the time.

However, when we compare notes on our experience, it's clear that some aspects of experience are independent of any individual observer (objective) and some of them are particular to individual observers (subjective). By focusing on and comparing notes about the objective aspects of experience, we can make reliable inferences about how the world works. This is what rescues metaphysics from positivism on one hand and superstition on the other. 

We can all make inferences from sense data. And we are able to make inferences that prove to be reliable guides to navigating the world and allow us to make satisfying causal explanations of phenomena. Science is an extension of this capacity, with added concern for accuracy, precision, and measurement error. 

Since reality is the same for everyone, valid models of reality should point in the same direction. Perhaps different approaches will highlight different aspects of reality, but we will be able to see how those aspects are related. This is generally the case for science. A theory about one aspect of reality has to be consistent, even compliant, with all the other aspects. Or if one theory is stubbornly out of sync, then that theory has to change, or all of science has to change. Famously, Einstein discovered several ways in which science had to change. For example, he showed that time is particular rather than universal: every point in space has its own time. And this led to a general reconsideration of the role of time in our models and explanations.


Sources of Error

A scientific measurement is always accompanied by an estimate of the error inherent in the measurement apparatus and procedure. Which gives us a nice heuristic: If a measurement you are looking at is not accompanied by an indication of the errors, then the measurement is either not scientific, or it has been decontextualised and, with the loss of this information, has been rendered effectively unscientific.

Part of every good scientific experiment is identifying sources of error and trying to eliminate or minimise them. For example, if I measure my height with three different rulers, will they all give the same answer? Perhaps I slumped a little on the second measurement? Perhaps the factory glitched, and one of the rulers is faulty? 

In practice, a measurement is accurate to some degree, precise to some degree, and contains inherent measurement error to some degree. And each degree should be specified to the extent that it is known.

Accuracy is itself a measurement; as a quantity, it reflects how close to reality the measurement is.

Precision represents how finely we are making distinctions in quantity.

Measurement error reflects uncertainty introduced into the measurement process by the apparatus and the procedure.

Now, precision is relatively easy to know and control. We often use the heuristic that a ruler is precise to half its smallest division. So a ruler marked in millimetres is considered precise to 0.5 mm.

Let's say I want to measure the width of my teacup. I have three different rulers. But I also note that the cup has rounded edges, so knowing where to measure from is a judgment call. I estimate that this will add a further 1 mm of error. Here are my results:

  • 83.5 ± 1.5 mm.
  • 86.0 ± 1.5 mm.
  • 84.5 ± 1.5 mm.

The average is 84.7 ± 1.5 mm. So we would say that we think the true answer lies between 83.2 and 86.2 mm. And note that even though I have an outlier (86.0 mm), it is in fact within the margin of error.

As I was measuring, I noted another potential source of error. I was guesstimating where the widest point was. And I think this probably adds another 1-2 mm of measurement error. When considering sources of error in a measurement, one's measurement procedure is often a source. In science, clearly stating one's procedure allows others to notice problems the scientists might have overlooked. Here, I might have decided to mark the cup so that I measured at the same point each time. 

Now the trick is that there is no way to get behind the measurement and check with reality. So, accuracy has to be defined pragmatically as well. One way is to rely on statistics. For example, one makes many measurements and presents the mean value and the standard deviation (which requires more than three measurements). 
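A sketch of this statistical approach in Python, using the three cup readings from above (noting that three readings is far too few for a robust standard deviation):

import statistics

readings = [83.5, 86.0, 84.5]   # mm, the three ruler readings above

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)   # sample standard deviation

print(f"mean = {mean:.1f} mm, standard deviation = {stdev:.1f} mm")
# mean = 84.7 mm, standard deviation = 1.3 mm

Note that the empirical scatter (±1.3 mm) comes out comparable to the ±1.5 mm estimated in advance from the ruler and the rounded edges, which is reassuring.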

The point is that error is always possible. It always has to be accounted for, preferably in advance. We can take steps to eliminate error. An approximation always relies on starting assumptions, and these are also a source of error. Keep in mind that this critique comes from scientists themselves. They haven't been blindly ignoring error all these years. 


Mathematical Models

I'm not going to dwell on this too much. But in science, our explanations and models usually take the form of an abstract symbolic mathematical equation. A simple, one-dimensional wave equation takes the general form:

y = f(x,t)

That is to say, the displacement of the wave (y) is a function of position (x) and time (t): as position and time change, the displacement changes accordingly. This describes a wave that, over time, moves in the x direction (left-right) and displaces in the y direction (up-down).

More specifically, we model simple harmonic oscillations using the sine function. In this case, we know that spatial changes are a function of position and temporal changes are a function of time. 

y(x) = sin(x)
y(t) = sin(t)

It turns out that the relationship between the two functions can be expressed as 

y(x,t) = sin(x ± t).

If the wave is moving right, we subtract time, and if the wave is moving to the left, we add it. 

The sine function smoothly changes between +1 and -1, but a real wave has an amplitude, and we can scale the function by multiplying it by the amplitude.

y(x,t) = A sin(x ± t).

And so on. We keep refining the model until we get to the general formula:

y(x,t) = A sin(kx ± ωt ± ϕ).

Where A is the maximum amplitude, k is the wavenumber, ω is the angular frequency, and ϕ is the phase.

The displacement is periodic in both space and time. Since k = 2π/λ (where λ is the wavelength), the function returns to the same spatial configuration whenever x increases by a whole number of wavelengths (x → x + nλ). Similarly, since ω = 2π/T (where T is the period), the function returns to the same temporal configuration whenever t increases by a whole number of periods (t → t + nT).
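These periodicity claims are easy to verify numerically. A sketch in Python, with illustrative values for the constants:

import math

A, wavelength, T, phi = 2.0, 0.5, 0.2, 0.3   # illustrative values
k = 2 * math.pi / wavelength                  # wavenumber
w = 2 * math.pi / T                           # angular frequency

def y(x, t):
    """Displacement of a rightward-travelling harmonic wave."""
    return A * math.sin(k * x - w * t + phi)

x0, t0 = 1.234, 5.678
assert math.isclose(y(x0, t0), y(x0 + wavelength, t0), abs_tol=1e-9)
assert math.isclose(y(x0, t0), y(x0, t0 + T), abs_tol=1e-9)
print("y(x, t) repeats after one wavelength in x and one period in t")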

What distinguishes physics from pure maths is that, in physics, each term in an equation has a physical significance or interpretation. The maths aims to represent changes in our system over time and space. 

Of course, this is idealised. It's one-dimensional. Each oscillation is identical to the last. The model has no friction. If I add a term for friction, it will only be an approximation of what friction does. But no matter how many terms I add, the model is still a model. It's still an idealisation of the problem. And the answers it gives are still approximations.


Conclusion

No one has epistemic privilege. This means that all metaphysical views are speculative. However, we need not capitulate to solipsism (we can only rely on our own judgements), relativism (all knowledge has equal value) or positivism (no metaphysics is possible). 

Because, in some cases, we are speculating based on comparing notes about empirical data. This allows us to pragmatically define metaphysical terms like reality, space, time, and causality in such a way that our explanations provide us with reliable knowledge. That is to say, knowledge we can apply and get expected results. Every day I wake up and the physical parameters of the universe are the same, even if everything I see is different. 

Reality is the world of observer-independent phenomena. No matter who is looking, when we compare notes, we broadly agree on what we saw. There is no reason to infer that reality is perfect, absolute, or magical. It's not the case that somewhere out in the unknown, all of our problems will be solved. As a historian of religion, I recognise the urge to utopian thinking and I reject it. 

Rather, reality is seen to be consistent across observations and over time. Note that I say "consistent", not "the same". Reality is clearly changing all the time. But the changes we perceive follow patterns. And the patterns are consistent enough to be comprehensible. 

The motions of stars and planets are comprehensible: we can form explanations for these that satisfactorily answer the questions people ask. The patterns of weather are comprehensible even when unpredictable. People, on the other hand, remain incomprehensible to me.

That said, all answers to scientific questions are approximations, based on idealisations and assumptions. Which is fine if we make clear how we have idealised a situation and what assumptions we have made. This allows other people to critique our ideas and practices. As Mercier and Sperber point out, it's only in critique that humans actually use reasoning (An Argumentative Theory of Reason, 10 May 2013).

We can approximate reality, but we should not attempt to appropriate it by insisting that our approximations are reality. Our theories and mathematics are always the map, never the territory. The phenomenon may be real, but the maths never is.  

This means that if our theory doesn't fit reality (or the data), we should not change reality (or the data); we should change the theory. No mathematical approximation is so good that it demands that we redefine reality. Hence, all of the quantum Ψ-ontologies are bogus. The quantum wavefunction is a highly abstract concept; it is not real. For a deeper dive into this topic, see Chang (1997), which requires a working knowledge of how the quantum formalism works, but makes some extremely cogent points about idealised measurements.

In agreeing that the scientific method and scientific explanations have limits, I do not mean to dismiss them. Science is by far the most successful knowledge-seeking enterprise in history. Science provides satisfactory answers to many questions. For better or worse, science has transformed our lives (and the lives of every living thing on the planet).

No, we don't get the kinds of answers that religion has long promised humanity. There is no certainty, we will never know the nature of reality, we still die, and so on. But then religion never had any good answers to these questions either. 

~~Φ~~


Beller, Mara. (1996). "The Rhetoric of Antirealism and the Copenhagen Spirit". Philosophy of Science 63(2): 183-204.

Chang, Hasok. (1997). "On the Applicability of the Quantum Measurement Formalism." Erkenntnis 46(2): 143-163. https://www.jstor.org/stable/20012757

Faye, Jan. (2007). "The Pragmatic-Rhetorical Theory of Explanation." In Rethinking Explanation. Boston Studies in the Philosophy of Science, 43-68. Edited by J. Persson and P. Ylikoski. Dordrecht: Springer.

Lawson, E. T. and McCauley, R. N. (1990). Rethinking Religion: Connecting Cognition and Culture. Cambridge: Cambridge University Press.


Note: 14/6/25. The maths is deterministic, but does this mean that reality is deterministic? 

16 May 2025

Observations and Superpositions

The role of observation in events has been a staple of quantum physics for decades and is closely associated with "the Copenhagen interpretation". On closer inspection, it turns out that everyone connected with Bohr's lab in Copenhagen had a slightly different view on how to interpret the Schrödinger equation. Worse, those who go back and look at Bohr's publications nowadays tend to confess that they cannot tell what Bohr's view was. For example, Adam Becker speaking to Sean Carroll (time index 21:21; emphasis added):

I don't think that there is any single Copenhagen interpretation. And while Niels Bohr and Max Born and Pauli, and Heisenberg and the others may have each had their own individual positions. I don't think that you can combine all of those to make something coherent...

...Speaking of people being mad at me, this is something that some people are mad at me for, they say, "But you said the Niels Bohr had this position?" I'm like, "No, I didn't, I didn't say that Niels Bohr had any position. I don't know what position he had and neither does anybody else."

So we should be cautious about claims made for "the Copenhagen interpretation", which seem to imply a consensus that never existed at Bohr's lab in Copenhagen.

That said, the idea that observation causes the wavefunction to collapse is still a staple of quantum physics. Despite playing a central role in quantum physics, "observation" is seldom precisely defined in scientific terms, or when it is defined, it doesn't involve any actual observation (I'll come back to this). The situation was made considerably worse when (Nobel laureate) Eugene Wigner speculated that it is "consciousness" that collapses the wave function. "Consciousness" is even less well-defined than "observation". While most academic physicists instantly rejected the role of consciousness in events, outside of physics it became a popular element of science folklore and New Ageism.

The idea that "observation" or "consciousness" are involved in "collapsing the wave function" is also an attachment point for Buddhists who wish to bolster their shaky faith by aligning it with science. The result of such legitimisation strategies is rather pathetic hand waving. Many Buddhists want reality to be reductive and idealist: they want "mind" to be the fundamental substance of the universe. This would align with some modern interpretations of traditional Buddhist beliefs about mind. But the idea is also to find some rational justification for Buddhist superstitions like karma and rebirth. As I showed at length in my book Karma and Rebirth Reconsidered, it simply does not work.

In this essay, I will show that it is trivially impossible for observation to play any role in causation at any level. I'm going to start by defining observation with respect to a person and exploring the implications of this, particularly with respect to Schrödinger's cat. I will also consider the post hoc rationalisation of observation qua "interaction" (sans any actual observation).


What is "An Observation"?

We may say that an observer, Alice, observes a process P giving rise to an event E, with an outcome O, when she becomes aware of P, E, and O. It is possible to be aware of each part individually, but in order to understand and explain what has happened, we really need some idea of what processes were involved, what kinds of events they engendered, and the specific outcomes of those events.

It's instructive to ask, "How does Alice become aware of external events?" Information from the process, event, and/or outcome of interest first has to reach her in some form. The fastest way that this can happen is for light from the process, event, and/or outcome to reach Alice's eyes. It always takes a finite amount of time for the light to reach her eye.

But light reaching Alice's eye alone does not create awareness. Rather, cells in the eye convert the energy of light into electrochemical energy (a nerve impulse). That pulse of energy travels along the optic nerve to the brain and is incorporated into our virtual world model and then, finally, presented to the first-person perspective. Only then do we become aware of it. And this part also takes a finite amount of time. Indeed, this part takes a lot more time than the light travelling.
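The disparity in timescales is easy to quantify. A rough sketch in Python, using a commonly cited ballpark of ~100 ms for neural processing and an assumed viewing distance of 10 metres:

c = 299_792_458    # speed of light (m/s)
distance = 10.0    # metres from event to eye, assumed

light_time = distance / c    # time for light to cross the room
neural_time = 0.1            # seconds; a commonly cited rough figure

print(f"light travel: {light_time * 1e9:.0f} ns")
print(f"neural processing: {neural_time * 1e3:.0f} ms "
      f"(~{neural_time / light_time:,.0f}× longer)")

On these assumptions, the brain's processing takes around three million times longer than the light's journey across the room.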

Therefore, the time at which Alice becomes aware of P, E, and O is some appreciable amount of time after E has happened and O is already fixed. There is no alternative definition of "observation" that avoids this limitation, since information cannot travel faster than the speed of light and the brain is always involved. The only other possibilities are, if anything, slower. Therefore:

Alice can only observe processes, events, and outcomes after the fact.

If observation is always after the fact, then observation can never play any causal role in the sequence of events because causes must precede effects, in all frames of reference. Therefore:

Observation can play no causal role
in processes, events, or outcomes.

This means that there is no way that "observation" (or "consciousness") can cause the collapse of the wavefunction. Rather, the collapse of the wavefunction has to occur first, then the light from that event has to travel to Alice's eye. There is no way around this physical limitation in our universe. And given the nature of wavefunctions—the outputs of which are vectors in a complex plane—this can hardly be surprising.

Observation is never instantaneous let alone precognitive. And this means that all talk of observation causing "wavefunctions to collapse" is trivially false.

We could simply leave it at that, but it will be instructive to re-examine the best known application of "observation".


Schrödinger's cat

Schrödinger's cat is only ever alive or dead. It is never both alive and dead. This was the point that Schrödinger attempted to make. Aristotle's law of noncontradiction applies: an object cannot both exist and not exist at the same time. We cannot prove this axiom from first principles, but if we don't accept it as an axiom, it renders all communication pointless. No matter what true statement I may state, anyone can assert that the opposite is also true.

Schrödinger proposed his thought experiment as a reductio ad absurdum argument against Bohr and the others in Copenhagen. He was trying to show that belief in quantum superpositions leads to absurd, illogical consequences. He was right, in my opinion, but he did not win the argument (and nor will I).

This argument is broadly misunderstood outside of academic physics. This is because Schrödinger's criticism was taken up by physicists as an exemplification of the very effect it was intended to debunk. "Yes," cried the fans of Copenhagen type explanations, "this idea of both-alive-and-dead at the same time is exactly what we mean. Thanks." And so we got stuck with the idea that the cat is both alive and dead at the same time (which is nonsense). Poor old Schrödinger, he hated this idea (and didn't like cats) and now it is indelibly associated with him.

The general set up of the Schrödinger's cat thought experiment is that a cat is placed in a box. Inside the box, a random event may occur. If it occurs, the event triggers the death of the cat via a nefarious contraption. Once the cat is in the box, Alice doesn't know whether the cat is alive or dead. The cat is a metaphor for subatomic particles. We are supposed to believe that they adopt a physical superposition of states: say, "spin up" and "spin down", or "position x" and "position y" at the same time before we measure them, then at the point of measurement, they randomly adopt one or the other of the superposed states.

Here's the thing. The cat goes into the box alive. If the event happens, the cat dies. If it doesn't happen, the cat lives. And Alice doesn't know which until she opens the box. The uncertainty here is not metaphysical, it's epistemic. It's not that the cat can be in a state of both-alive-and-dead (it cannot); it's only that we don't know whether it is alive or dead. So this is a bad analogy.

Moreover, even when Alice opens the box, the light from the cat still takes some time to reach her eyes. Observation always trails behind events, it cannot anticipate or participate in events. Apart from reflected light, nothing is coming out from Alice that could participate in the sequence of events happening outside her body, let alone change the outcome.

Also, the cat has eyes and a brain. It is itself an "observer". 

Epistemic uncertainty cannot be mapped back to metaphysical uncertainty without doing violence to reason. A statement, "I don't know whether the cat is alive or dead," cannot be taken to imply that the cat is both alive and dead. This is definitely a category error for cats. Schrödinger's view was that it is also a category error for electrons and photons. And again, I agree with Schrödinger (and Einstein).

In that case, why do physics textbooks still insist on the nonsensical both-alive-and-dead scenario? It seems to be related to a built-in feature of the mathematics of spherical standing waves, which are at the heart of Schrödinger's equation (and many other features of modern science). The mathematics of standing waves was developed in the 18th century (i.e. it is thoroughly classical). Below, I quote from the MathWorld article on Laplace's equation (for a spherical standing wave) by Eric Weisstein (2025; emphasis added):

A function ψ which satisfies Laplace's equation is said to be harmonic. A solution to Laplace's equation has the property that the average value over a spherical surface is equal to the value at the center of the sphere (Gauss's harmonic function theorem). Solutions have no local maxima or minima. Because Laplace's equation is linear, the superposition of any two solutions is also a solution.

The last sentence of this passage is similar to a frequently encountered claim in quantum physics: that solutions for individual quantum states can be added together to produce another valid solution of the wave equation. This is made out to be a special feature of quantum mechanics that defines the superposition of "particles".

Superposition of waves is nothing remarkable or "weird". Any time two water waves meet, for example, they superpose.


In this image, two wave fronts travel towards the viewer obliquely from the left and right at the same time (they appear to meet almost at right angles). The two waves create an interference pattern (the cross in the foreground) where the two waves are superposed. Waves routinely superpose. And this is known as the superposition principle.

"The superposition principle, also known as superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually."
The Penguin Dictionary of Physics.

For this type of linear function, we can define superposition precisely: f(x) + f(y) = f(x+y)

In mathematical terms, each actual wave can be thought of as a solution to a wave equation. The sum of the waves must also be a solution because of the situation we see in the image, i.e. two waves physically adding together where they overlap, while at the same time retaining their identity.
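A numerical check of this claim, sketched in Python with illustrative values: two travelling-wave solutions of the one-dimensional wave equation, and their sum, each satisfy y_tt = c² y_xx, verified here with finite differences.

import math

c = 2.0                     # wave speed (illustrative)
k1, k2 = 3.0, 5.0           # wavenumbers of the two waves
w1, w2 = c * k1, c * k2     # matching angular frequencies

def y1(x, t):
    return math.sin(k1 * x - w1 * t)           # travelling right

def y2(x, t):
    return 0.7 * math.sin(k2 * x + w2 * t)     # travelling left

def ysum(x, t):
    return y1(x, t) + y2(x, t)                 # the superposition

def residual(y, x, t, h=1e-4):
    """y_tt - c^2 * y_xx via central differences; ~0 for a solution."""
    y_tt = (y(x, t + h) - 2 * y(x, t) + y(x, t - h)) / h**2
    y_xx = (y(x + h, t) - 2 * y(x, t) + y(x - h, t)) / h**2
    return y_tt - c**2 * y_xx

for y, name in [(y1, "y1"), (y2, "y2"), (ysum, "y1 + y2")]:
    print(f"{name}: residual ≈ {residual(y, 0.3, 0.7):.1e}")

All three residuals come out vanishingly small: the sum is just as much a solution as either wave alone, exactly as the superposition principle says.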

I've now identified three universal properties of spherical standing waves that are frequently presented as special features of quantum physics:

  • quantisation of energy
  • harmonics = higher energy states (aka orbitals)
  • superposition (of waves)

These structural properties of standing waves are not "secret", but they are almost always left out of narrative accounts of quantum physics. And yet, these are important intuitions to bring to bear when applying wave mechanics to describing real systems.

Something else to keep in mind is that "quantisation" is an ad hoc assumption in quantum physics. It's postulated to be a fundamental feature of all quantum fields. The only problem is that all of the physical fields we know of—which is to say the fields we can actually measure—are smooth and continuous across spacetime: including gravitational fields and electromagnetic fields. Scientists have imagined discontinuous or quantised fields, but they have never actually seen one.

Moreover, as far as I know, the only physical mechanism in our universe that is known to quantise energy, create harmonics, and allow for superposition is the standing wave. The logical deduction from these facts is that it is the standing wave structure of the atom that quantises the energy of electrons and photons and creates electron orbitals.

Quantisation is a structural property of atoms, not a substantial property of fields. (Or, more conventionally and less precisely, quantisation is an emergent property, not a fundamental property.)
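To make the structural point concrete, here is a sketch of the simplest standing-wave system in Python: a string of length L fixed at both ends, where only whole numbers of half-wavelengths fit, so the allowed frequencies form a discrete set f_n = n·v/(2L). The numbers are illustrative, and the one-dimensional string is only an analogue of the spherical case.

L = 0.65     # string length (m), illustrative
v = 143.0    # wave speed on the string (m/s), illustrative

for n in range(1, 6):
    wavelength = 2 * L / n    # only these wavelengths "fit" on the string
    f = v / wavelength        # so only these frequencies are allowed
    print(f"mode n={n}: wavelength = {wavelength:.3f} m, f = {f:.1f} Hz")

Nothing quantum is assumed here: the discreteness falls straight out of the boundary conditions on a classical standing wave.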

Also, as I have already explained, the coexistence of probabilities always occurs before any event, and those probabilities always collapse at the point when an event has a definite outcome. There is nothing "weird" about this; it's not a "problem". What is weird is the idea that hypostatising and reifying probabilities leads to some meaningful metaphysics. It has not, and it will not.

While the superposition of waves or probabilities is an everyday occurrence, the superposition of physical objects is another story. Physical objects occupy space in an exclusive way: if one object is in that location, no other physical object can also be in that location. Physical objects cannot superpose and they are never observed to be superposed. And yet, the superposition of point particles is how physicists continue to explain the electron in an atom.

The electric field has been measured and found to be smooth and continuous in spacetime, just as Maxwell predicted. Given this, simple logic and basic geometry dictate that if—

  1. the electrostatic field of the proton has spherical symmetry, and
  2. a hydrogen atom is electrostatically neutral, and
  3. the neutrality is assumed to be the result of the electron's electrostatic field,

—then the electron can only be in one configuration: it must be a sphere (or a close approximation of a sphere) completely surrounding the proton. This is the only way to ensure that all the field lines emerging from the proton terminate at the electron. Otherwise there are unbalanced forces: a net charge rather than neutrality. And a changing electric field dissipates energy, which electrons in atoms do not.

Unbalanced forces

Now, if the electron is both a wave and a sphere, then the electron can only be a spherical standing wave. The Bohr model of the atom was incorrect and it surprises me greatly that this problem was not identified at the time. 

And if the electron is a spherical standing wave then, because these are universal features of standing waves, we expect:

  1. The energy of the electron in the H atom will be quantised.
  2. The electron will form harmonics corresponding to higher energy states and it will jump between them when it absorbs or emits photons.
  3. When two electron waves intersect, the sum of their amplitudes is also a solution to the wave equation.

Moreover, we can now take pictures of atoms using electron microscopes. Atoms are physical objects. In every single picture, atoms appear to be approximately spherical.


And yet mainstream quantum models do not quite treat atoms as real. Quantum physics is nowadays all about probabilities. The problem is that, as I established in an earlier essay, a probability cannot possibly balance an electrostatic field to create a neutral atom. Only a real electric field can do this. Schrödinger was right to be unconvinced by the probability interpretation, even if it works. But he was wrong about modelling a particle as a wave. 

Waves are observed to superpose all the time. Solid objects are never observed to do so. The only reason we even consider superposition for "particles" is the wave-particle duality postulate, which we now know to be inaccurate. "Particles" are waves.

As I understand it, the idea that our universe consists of 17 fields in which particles are "excitations" is a widely accepted postulate. And as such, one might have expected scientists to go back over the physics predicated on wave-particle duality and recast it in terms of only waves. Having the wave equation describe a wave would be a start.

I digress. Clearly the idea that observers influence outcomes is trivially false. So now we must turn to the common fudge of removing the observer from the observation.


Interaction as Observation

One way around the problems with observation is to redefine "observation" so that it excludes actual observations and observers. The move is to redefine "observation" to mean "some physical interaction". I'm sure I've mentioned this before because I used to think this was a good idea.

While we teach quantum physics in terms of isolated "particles" in empty, flat space, the fact is that the universe is crammed with matter and energy, especially in our part of the universe. Everything is interacting with everything that it can interact with, simultaneously in all the ways that it can interact, at every moment that it is possible to interact. Nothing in reality is ever simple.

In classical physics, we are used to being able to isolate experiments and exclude variables. This cannot ever happen at the nanoscale and below. An electron, for example, is surrounded by an electrostatic field which interacts with the fields around all other wavicles, near and far.

Electrons, for example, are all constantly pushing against each other via the electromagnetic force. If your apparatus contains electrons, their fields invariably interact with the electron you wish to study. This includes mirrors, beam-splitters, prisms, diffraction gratings, and double slits. The apparatus is not "classical"; it's part of the quantum system you study. At the nanoscale and below, there is no neutral apparatus.

Therefore, the idea that interaction causes the wavefunction to "collapse" is also untenable because in the real world wavicles are always interacting. In an H atom, for example, the electron and the proton are constantly and intensely interacting via the electromagnetic force. So the electron in an H atom could never be in a superposition.


Conclusions

Observation can only occur after the fact and is limited by the speed of light (or speed of causality).

Neither "observation" nor "consciousness" can play any role in the sequence of events, let alone a causal role.

Schrödinger's cat is never both alive and dead. And observation makes no difference to this (because observation can only ever be post hoc and acausal).

It is always the case, no matter what kind of system we are talking about, that probabilities for all possibilities coexist prior to an event and collapse as the event produces a specific outcome. But this is in no way analogous to waves superposing and should not be called "superposition".

All (linear) waves can superpose. All standing waves are quantised. All standing waves have harmonics.

Defining observation so as to eliminate the observer doesn't help as much as physicists might wish.

"Observation" is irrelevant to how we formulate physics.

The wave-particle duality postulate is still built into quantum mechanics, despite being known to be false.

For the last century, quantum physicists have been trying to change reality to fit their theory. Many different kinds of reality have been proposed to account for quantum theory: Copenhagen, Many Worlds, QBism, etc. I submit that proposing a wholly different reality to account for your theory is tantamount to insanity. The success in predicting probabilities seems to have caused physicists to abandon science. I don't get it, and I don't like it.

~~Φ~~


Bibliography

Weisstein, Eric W. (2025) "Laplace's Equation." MathWorld. https://mathworld.wolfram.com/LaplacesEquation.html

02 May 2025

Ψ-ontology and the Nature of Probability

“The wave function is real—not just a theoretical thing in abstract mathematical space.”
—Sean Carroll. Something Deeply Hidden.

Harrigan & Spekkens (2010) introduced the distinction between those theories that take the quantum wave function to be real (Ψ‑ontic) and those which take it only to provide us with knowledge (Ψ‑epistemic). One needs to know that the quantum wavefunction is notated as Ψ (Greek capital Psi), which is pronounced like "sigh". So Sean Carroll's oft-stated view—"the wave function is real"—is a Ψ‑ontic approach.

Harrigan & Spekkens seem not to have foreseen the consequences of this designation, since a Ψ-ontic theory is now necessarily a Ψ-ontology, and one who proposes such a theory is a Ψ-ontologist. Sean Carroll is a great example of a Ψ-ontologist. These terms are now scattered through the philosophy of science literature.

Still, Carroll's insistence that fundamentally "there are only waves" is part of what sparked the questions I've been exploring lately. The problem, as I see it, is that the output of the wave function is a "probability amplitude"; or, over all possible solutions, a probability distribution. What I would have expected in any Ψ-ontology is that the Ψ-ontologist would explain, as a matter of urgency, how a probability distribution, which is fundamentally abstract and epistemic, can be reified at all. In a previous essay, I noted that this didn't seem possible to me. In this essay, I pursue this line of reasoning.


Science and Metaphysics

I got interested in science roughly 50 years ago. What interested me about science as a boy was the possibility of explaining my world. At that time, my world was frequently violent, often chaotic, and always confusing. I discovered that I could understand maths and science with ease, and they became a refuge. In retrospect, what fascinated me was not the maths, but the experimentation and the philosophy that related mathematical explanations to the world and vice versa. It was the physically based understanding that I craved.

As an adult, I finally came to see that no one has epistemic privilege when it comes to metaphysics. This means that no one has certain knowledge of "reality" or the "nature of reality". Not religieux and not scientists. Anyone claiming to have such knowledge should be subjected to the most intense scrutiny and highest levels of scepticism.

While many physicists believe that we cannot understand the nanoscale world, those few physicists and philosophers who still try to explain the reality underlying quantum physics have made numerous attempts to reify the wavefunction. Such attempts are referred to as "interpretations of quantum mechanics". And the result is a series of speculative metaphysics. If the concept of reality means anything, we ought to see valid theories converging on the same answer, with what separates them being the extra assumptions that each theory makes. After a century of being examined by elite geniuses, we not only don't have a consensus about quantum reality but each new theory takes us in completely unexpected directions.

At the heart of the difficulties, in my view, is the problem of reifying probabilities. The scientific literature on this topic is strangely sparse, given that all the metaphysics of quantum physics relies on reifying the wave function, and several other branches of physics rely on statistics (statistical mechanics, thermodynamics, etc.).

So let us now turn to the concept of probability and try to say something concrete about the nature of it.


Probability

Consider a fair six-sided die. If I roll the die, it will land with a number facing up. We can call that number the outcome of the roll. The die is designed so that the outcome of a roll ought to be a random selection from the set of all possible outcomes, i.e. {1, 2, 3, 4, 5, 6}. By design, the outcomes are all equally likely (this is what "fair" means in this context). So the probability of getting any single outcome is ⅙ or 0.16666...

By convention we write probabilities such that the sum of all probabilities adds up to one. The figure ⅙ means ⅙th of the total probability. This also means that a probability of 1 or 0 reflects two types of certainty:

  1. A probability of 1 tells us that an outcome is inevitable (even if it has not happened yet). The fact that if I roll a die it must land with one face pointing upwards is reflected in the fact that the probabilities of the six possible outcomes add to 1.
  2. A probability of 0 tells us that an outcome cannot happen. The probability of rolling a 7 is 0. 

We can test this theory by rolling a die many times and recording the outcomes. Most of us did precisely this in high school at some point. Any real distribution of outcomes will tend towards the ideal distribution as the number of rolls grows.

In the case of a six-sided fair die, we can work out the probabilities in advance based on the configuration of the system because the system is idealised. Similarly, if I have a fair four-sided die, then I can infer that the probability of each possible outcome {1, 2, 3, 4} is ¼. And I can use this idealisation as leverage on the real world.

For example, one can test a die to determine if it is indeed fair, by rolling it many times and comparing the actual distribution with the expected distribution. Let us say that we roll a six-sided die 100 times and for the possible states {1, 2, 3, 4, 5, 6} we count 10, 10, 10, 10, 10, and 50 occurrences.

We can use statistical analysis to determine the probability of getting such an aberration by chance. In this case, we would expect this result once in ~134 quadrillion trials of 100 throws. From this we may infer that the die is unfair. However, we are still talking probabilities. It's still possible that we did get that 1 in 134 quadrillion fluke. As Littlewood's law says:

A person can expect to experience events with odds of one in a million at the rate of about one per month.

In the end, the only completely reliable way to tell if a die is fair is by physical examination. Probabilities don't give us the kind of leverage we'd like over such problems. Statistical flukes happen all the time.
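For the curious, here is a sketch of the fairness test described above, in Python using a chi-square statistic (via scipy). The "~134 quadrillion" figure may come from a different, exact calculation; the point is only that the p-value is astronomically small.

from scipy.stats import chisquare

observed = [10, 10, 10, 10, 10, 50]   # counts from 100 throws, as above
expected = [100 / 6] * 6              # fair-die expectation

stat, p_value = chisquare(observed, expected)
print(f"chi-square = {stat:.1f}, p = {p_value:.1e}")
# A p-value this small makes "fair die" an untenable hypothesis.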

These idealised situations are all very well. And they help us to understand how probability works. However, in practice we get anomalies. So, for example, I recorded the results of 20 throws of a die. I expected to get 3.33 of each outcome and got:

  1. 2
  2. 3
  3. 5
  4. 1
  5. 6
  6. 2

Is my die fair? Actually, 20 throws is not enough to be able to tell. It's not a statistically significant number of throws. So, I got ChatGPT to simulate 1 million throws, and it came back with the distribution below. I expected to see 166,666 of each outcome.

  1. 166741
  2. 167104
  3. 166479
  4. 166335
  5. 166524
  6. 166817

At a million throws, we see the numbers converge on the expectation value (166,666). However, the outcomes of this trial still vary from the ideal by up to ~0.3%. And we cannot know in advance how much a given trial will differ from the ideal. My next trial could be wildly different.
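Such a trial is easy to reproduce locally instead of via ChatGPT. A minimal sketch in Python (the counts will differ slightly on every run):

import random
from collections import Counter

N = 1_000_000
counts = Counter(random.randint(1, 6) for _ in range(N))

expected = N / 6
for face in range(1, 7):
    dev = 100 * (counts[face] - expected) / expected
    print(f"face {face}: {counts[face]:>7}  ({dev:+.2f}% from expected)")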

Also, it is seldom the case in real-world applications that we know all the possible outcomes of an event. Unintended or unexpected consequences are always possible. There is always some uncertainty in just how uncertain we are about any given fact. And this means that if the probabilities we know add to 1, then we have almost certainly missed something out.

Moreover, in non-idealised situations, the probabilities of events change over time. Of course, probability theory has ways of dealing with this, but they are much more complex than a simple idealised model.

A very important feature of probabilities is that they all have a "measurement problem". That is to say, before a roll of my fair six-sided die, the probabilities all co-exist simultaneously:

  • P(1) = 0.167
  • P(2) = 0.167
  • P(3) = 0.167
  • P(4) = 0.167
  • P(5) = 0.167
  • P(6) = 0.167
Now I roll the die and the outcome is 4. The probabilities then "collapse" so that:

  • P(1) = 0.00
  • P(2) = 0.00
  • P(3) = 0.00
  • P(4) = 1.00
  • P(5) = 0.00
  • P(6) = 0.00

This is true for any system to which probabilities can be assigned to the outcomes of an event. Before an event there are usually several possible outcomes, each with a probability. These probabilities always coexist simultaneously. But the actual event can only have one outcome. So it is always the case that as the event occurs, the pre-event probabilities collapse so that the probability of the actual outcome is 1, while the probability of the other possibilities falls instantaneously to zero.
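The bookkeeping here is simple enough to state in a few lines of Python; a sketch, with a hypothetical collapse helper:

before = {face: 1 / 6 for face in range(1, 7)}   # probabilities coexist

def collapse(distribution, outcome):
    """After the event, the actual outcome has probability 1, the rest 0."""
    return {face: (1.0 if face == outcome else 0.0) for face in distribution}

after = collapse(before, outcome=4)
print("before:", {f: round(p, 3) for f, p in before.items()})
print("after: ", after)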

This is precisely analogous to descriptions of the so-called Measurement Problem. The output of the Schrödinger equation is a set of probabilities, which behave in exactly the way I have outlined above. The position of the electron has a probability at every point in space, but the event localises it. Note that the event itself collapses the probabilities, not the observation of the event. The collapse of probabilities is real, but it is entirely independent of "observation".

Even if we were watching the whole time, the light from the event only reaches us after the event occurs and it takes an appreciable amount of time for the brain to register and process the information to turn it into an experience of knowing. The fact is that we experience everything in hindsight. The picture our brain presents to our first person perspective is time-compensated so that it feels as if we are experiencing things in real time. (I have an essay expanding on this theme in the pipeline)

So there is no way, even in theory, that an "observation" could possibly influence the outcome of an event. Observation is not causal with respect to outcomes because "observation" can only occur after the event. This is a good time to review the idea of causality.


Causation and Probability

Arguing to or from causation is tricky since causation is an a priori assumption about sequences of events. However, one of the general rules of relativity is that causation is preserved. If I perceive event A as causing event B, there is no frame of reference in which B would appear to cause A. This is to do with the speed of light being a limit on how fast information can travel. For this reason, some people like to refer to the speed of light as the "speed of causality".

Here I want to explore the causal potential of a probability. An entity might be said to have causal potential if its presence in the sequence of events (reliably) changes the sequence compared to its absence. We would interpret this as the entity causing a specific outcome. Any observer that the light from this event could reach, would interpret the causation in the same way.

So we might ask, for example, "Does the existence of a probability distribution for all possible outcomes alter the outcome we observe?"

Let us go back to the example of the loaded die mentioned above. With the loaded die, the probability of getting a 6 is 0.5, while the probability of each of the other numbers is 0.1 (0.5 in total). And the total probability is still 1.0. In real terms, this tells us that there will be an outcome, and it will be one of six possibilities, but half the time, the outcome will be 6.

Let's say, in addition, that you and I are betting on the outcome. I know that the die is loaded and you don't. We roll the die, and I always bet on six, while you bet on a variety of numbers. At the end of the trial, I have won the vast majority of the wagers (and you are deeply suspicious).
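A sketch of this wager in Python, with the loaded probabilities from above (the counts will vary run to run):

import random

weights = [0.1, 0.1, 0.1, 0.1, 0.1, 0.5]   # loaded: P(6) = 0.5, others 0.1
n = 1000

my_wins = 0
your_wins = 0
for _ in range(n):
    roll = random.choices(range(1, 7), weights=weights)[0]
    my_wins += (roll == 6)                        # I always bet on six
    your_wins += (roll == random.randint(1, 6))   # you bet "a variety"

print(f"my wins: {my_wins}/{n}, your wins: {your_wins}/{n}")
# Typically ~500 versus ~170: knowing the physical cause pays off.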

Now we can ask, "Did the existence of probabilities per se influence the outcome?" Or perhaps better, "Does the probability alone cause a change in the outcome?"

Clearly if you were expecting a fair game of chance, then the sequence of events (you lost most of the wagers) is unexpected and we intuit that something caused that unexpected sequence.

If a third person were analysing this game as a disinterested observer, where would they assign the causality? To the skewed probabilities? I suppose this is a possible answer, but it doesn't strike me as very plausible that anyone would come up with such an answer (except to be contrarian). My sense is that the disinterested observer would be more inclined to say that the loaded die itself—and in particular the uneven distribution of mass—was what caused the outcome to vary so much from the expected value.

Probability allows us to calculate what is likely to happen. It doesn't tell us what is happening, or what has happened, or what will happen. Moreover, knowing or not knowing the probabilities makes no difference to the outcome.

So we can conclude that the probabilities themselves are not causal. If probabilities diverge from expected values, we don't blame the probabilities, rather we suspect some physical cause (a loaded die). And, I would say, that if the probabilities of known possibilities are changing, then we would also expect that to be the result of some physical process, such as unevenly distributed weight in a die.

My conclusion is this generalisation: Probabilities do not and cannot play a role in causation.

Now, there may be flaws and loopholes in the argument that I cannot see. But I think I have made a good enough case so far to seriously doubt any attempt to reify probability which does not first make a strong case for treating probabilities as real (Ψ‑ontic). I've read many accounts of quantum physics over 40 years of studying science, and I don't recall seeing even a weak argument for this.

At this point, we may also point out that probabilities are abstractions, expressed in abstract numbers. And so we next need to consider the ontology of abstractions.


Abstractions

Without abstractions I'd not be able to articulate this argument. So I'm not a nominalist in the sense that I claim that abstractions don't exist in any way. Rather, I am a nominalist in the sense that I don't think abstractions exist in an objective sense. To paraphrase Descartes, if I am thinking about an idea, then that idea exists for me, while I think about it. The ideas in my mind are not observable from the outside, except by indirect means such as how they affect my posture or tone of voice. And these are measures of how I feel about the idea, rather than the content of the idea.

I sum up my view in an aphorism:

Abstractions are not things. Abstractions are ideas about things.

An important form of abstraction is the category, which is a generalisation about a collection of things. So, for example, "blue" is a category into which we can fit such colours as navy, azure, cobalt, cerulean, indigo, sapphire, turquoise, teal, cyan, ultramarine, and periwinkle (each of which designates a distinct and recognisable colour within the category). Colour categories are quite arbitrary. Pāli and Ancient Greek each have only four colour categories (aka "basic colour terms"). Blue and green are lumped together in the category "dark". The word in Pāli that is now taken to mean "blue" (nīla) originally meant "dark". English has eleven colour categories: red, orange, yellow, green, blue, purple, pink, brown, black, white, and grey. To be clear, ancient Indians and Greeks had the same sensory apparatus as we do, and with it the ability to see millions of colours. It's not that they couldn't see blue, or even that they had no words that denoted blue. The point is about how they categorised colours. See also my essay Seeing Blue.

In this view, probability is an abstraction because it is an idea about outcomes that haven't yet occurred. Probability can also reflect our ideas about qualities like expectation, propensity, and/or uncertainty.

When we use an abstraction in conversation, we generally agree to act as if it behaves like a real thing. For example, probability may be "high" or "low", reflecting a schema for the way that objects can be arranged vertically in space. The more of something we have, the higher we can pile it up. Thus, metaphorically, HIGH also means "more" and LOW means "less". A "high" probability is more likely than a "low" probability, even though probability is not a thing with a vertical dimension.

This reflects a deeper truth. Language cannot conform to reality, because we have no epistemic privilege with respect to reality. Reality can be inferred to exist; it cannot be directly known. In fact, "reality" is another abstraction: an idea about things that are real. Language need only conform to experience, and in particular to the shared aspects of experience. In this (nominalist) view, "reality" and "truth" are useful ideas, for sure, as long as we don't lose sight of the fact that they are ideas rather than things.

The use of abstractions based on schemas that arise from experience allows for sophisticated discussions, but it introduces the danger of category errors, specifically:

  • hypostatisation: incorrectly treating abstract ideas as independent of subjectivity; and
  • reification: incorrectly treating abstract ideas as having physical form.

Treating abstract ideas as if they are concrete things is the basis of all abstract thought and metaphor. Treating abstract ideas as concrete things (without the "if" qualification) is simply a mistake.

Abstractions are not causal in the way that concrete objects are. They can influence my behaviour, for example, at least in the sense that belief is a feeling about an idea and thus a motivation for actions. But abstractions cannot change the outcome of rolling a die.

Since probability is expressed in numbers, I just want to touch on the ontology of numbers before concluding.


Numbers

The ontology of numbers is yet another ongoing source of argument amongst academic philosophers. But they are known to avoid consensus on principle, so we have to take everything they say with a grain of salt. Is there a real disagreement, or are they jockeying for position, trolling, or being professionally contrarian?

The question is, do numbers exist in the sense that say, my teacup exists? My answer is similar to what I've stated above, but it's tricky because numbers are clearly not entirely subjective. If I hold up two fingers, external observers see me holding up two fingers. We all agree on the facts of the matter. Thus numbers appear to be somewhat objective.

We may ask: what about a culture with no numbers? We don't find any humans with no counting numbers at all, but some peoples do have very few number terms. In my favourite anthropology book, Don't Sleep, There Are Snakes, Daniel Everett notes that the Pirahã people of Brazil count "one, two, many", and prefer to use comparative terms like "more" and "less". So if I held up three fingers or four fingers, they would count both as "many".

However, just because a culture doesn't have a single word for 3 or 4 doesn't mean they don't recognise that 4 is more than 3. As far as I can tell, even the Pirahã would still be capable of recognising that 4 fingers is more than 3 fingers, even though they might not be able to easily make precise distinctions. So they could put 1, 2, 3, or 4 of some object in order of "more" or "less" of the object. In other words, it's not that they cannot count higher quantities, it's only that they do not (for reasons unknown).

There is also some evidence that non-human animals can count. Chimps, for example, can assess that 3 bananas is more than 2 bananas. And they can do this with numbers up to 9. So they might struggle to distinguish 14 bananas from 15, but if I offered 9 bananas to one chimp and 7 to the next in line, the chimp that got fewer bananas would know this (and it would probably respond with zero grace since they expect food-sharing to be fair).

We can use numbers in a purely abstract sense, just as we can use language in a purely abstract sense. However, we define numbers in relation to experience. So two is the experience of there being one thing and another thing of the same kind. 1 + 1 = 2. Two apples means an apple and another apple. There is no example of "two" that is not (ultimately) connected to the idea of two of something.

In the final analysis, if we cannot compare apples with oranges, and yet I still recognise that two apples and two oranges are both examples of "two", then the notion of "two" can only be an abstraction.

Like colours, numbers function as categories. A quantity is a member of the category "two", if there is one and another one, but no others. And this can be applied to any kind of experience. I can have two feelings, for example, or two ideas.

A feature of categories that George Lakoff brings out in Women, Fire, and Dangerous Things is that membership of a category is based on resemblance to a prototype. This builds on Wittgenstein's idea of categories as defined by "family resemblance". And prototypes can vary from person to person. Let's say I invoke the category "dog". And the image that pops into my head is a Golden Retriever. I take this as my prototype and define "dog" with reference to this image. And I consider some other animal to also be a "dog" to the extent that it resembles a Golden Retriever. Your prototype might be a schnauzer or a poodle or any other kind of dog, and is based on your experience of dogs. If you watch dogs closely, they also have a category "dog" and they are excellent at identifying other dogs, despite the wild differences in physiognomy caused by "breeding".

Edge cases are interesting. For example, in modern taxonomies, the red panda is clearly not a bear, but in the 19th century it seemed similar enough to be classified with the bears. Edge cases may also be exploited for rhetorical or comic effect: "That's no moon", "Call that a dog?", or "Pigeons are rats with wings".

That "two" is a category becomes clearer when we consider edge cases such as fractional quantities. In terms of whole numbers, what is 2.01? 2.01 ≈ 2.0 and in terms of whole numbers 2.0 = 2. For some purposes, "approximately two" can be treated as a peripheral member of the category defined by precisely two. So 2.01 is not strictly speaking a member of the category "two", but it is close enough for some purposes (it's an edge case). And 2.99 is perhaps a member of the category "two", but perhaps also a member of the category "three". Certainly when it comes to the price of some commodity, many people put 2.99 in the category two rather than three, which is why prices are so often expressed as "X.99".

Consider also the idea that the average family has 2.4 children. Since "0.4 of a child" is not a possible outcome in the real world, we can only treat this as an abstraction. And consider that a number like i = √-1 cannot physically exist, but is incredibly useful for discussing oscillating systems, since e^(iθ) = cos θ + i sin θ describes a circle.
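
We can see the circle directly with a few lines of Python (a minimal check, nothing more): every value of e^(iθ) has modulus 1, i.e. it lies on the unit circle.

```python
# Check that e^(i*theta) = cos(theta) + i*sin(theta) traces the unit circle.
import cmath

for theta in [0.0, cmath.pi / 4, cmath.pi / 2, cmath.pi, 3 * cmath.pi / 2]:
    z = cmath.exp(1j * theta)
    print(f"theta = {theta:5.3f}  re = {z.real:+.3f}  im = {z.imag:+.3f}  |z| = {abs(z):.3f}")
```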

Numbers are fundamentally not things, they are ideas about things. In this case, an idea about the quantity of things. And probabilities are ideas about expectation, propensity, and/or uncertainty with respect to the results of processes.


Conclusion

It is curious that physicists, as a group, are quick to insist that metaphysical ideas like "reality" and "free will" are not real, while at the same time insisting that their abstract mathematical equations are real. As I've tried to show above, this is not a tenable position.

A characteristic feature of probabilities is that they all coexist prior to an event; afterwards, all but the actual outcome collapse to zero, and the actual outcome has a probability of 1.

Probability represents our expectations of outcomes of events, where the possibilities are known but the outcome is uncertain. Probability is an idea, not an object. Moreover, probability is not causal; it cannot affect the outcome of an event. The least likely outcome can still turn out to be the one we observe.

We never observe an event as it happens, because the information about the event can only reach us at the speed of causality. And that information has to be converted into nerve impulses that the brain then interprets. All of this takes time. This means that observations, all observations, are after the fact. Physically, observation cannot be a causal factor in any event.

We can imagine a Schrödinger's demon, modelled on Maxwell's demon, equipped with perfect knowledge of the possible outcomes and the precise probability of each, with no unknown unknowns. What could such a demon tell us about the actual state of a system or how it will evolve over time? A Schrödinger's demon could not tell us anything, except the most likely outcome.

Attempts by Ψ-ontologists to assert that the quantum wavefunction Ψ is real lead to a diverse range of mutually exclusive speculative metaphysics. If Ψ were real, we would expect observations of reality to drive us towards a consensus. But there is profound dissensus about Ψ. In fact, Ψ cannot be observed, directly or indirectly, any more than the probability of rolling a fair six-sided die can be observed.

What we can observe tells us that quantum physics is incomplete and that none of the current attempts to reify the wavefunction—the so-called "interpretations"—succeeds. The association of Ψ-ontology with "Scientology" is not simply an amusing pun. It also suggests that Ψ-ontology is something like a religious cult, and as Sheldon Cooper would say, "It's funny because it's true."

Sean Carroll has no better reason to believe "the wavefunction is real" than a Christian has to believe that Jehovah is real (or than a Buddhist has to believe that karma makes life fair). Belief is the feeling about an idea.

Probability reflects our uncertain expectations with respect to the outcome of some process. But probability per se cannot be considered real, since it cannot be involved in causality and has no independence or physical form.

The wave function of quantum physics is not real because it is an abstract mathematical equation whose outputs are probabilities rather than actualities. Probabilities are abstractions. Abstractions are not things, they are ideas about things. The question is: "Now what?" 

As far as I know, Heisenberg and Schrödinger set out to describe a real phenomenon, not a probability distribution. It is well known that Schrödinger was appalled by Born's probability approach and never accepted it. Einstein also remained sceptical, considering quantum physics incomplete. So maybe we need to comb through the original ideas to identify where the theory went off the rails. My bet is that the problem concerns wave-particle duality, which we can now resolve in favour of waves.

~~Φ~~


Bibliography

Everett, Daniel L. (2009) Don’t Sleep, There Are Snakes: Life and Language in the Amazon Jungle. Pantheon Books (USA) | Profile Books (UK).

Harrigan, Nicholas & Spekkens, Robert W. (2010). "Einstein, Incompleteness, and the Epistemic View of Quantum States." Foundations of Physics 40: 125–157.

Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.

11 April 2025

Why Quantum Mechanics is Currently Wrong and How to Fix It.

It is now almost a century since "quantum mechanics" became established as the dominant paradigm for thinking about the structure and motion of matter on the nanoscale. And yet the one thing quantum mechanics cannot do is explain what it purports to describe. Sure, quantum mechanics can predict the probability of measurements. However, no one knows how it does this. 

Presently, no one understands the foundations of quantum mechanics.

Feynman's quote to this effect is still accurate. It has recently been restated by David Deutsch, for example:

"So, I think that quantum theory is definitely false. I think that general relativity is definitely false." (t = 1:16:13)
"Certainly, both relativity and quantum theory are extremely good approximations in the situations where we want to apply them... So, yes, certainly, good approximations for practical purposes, but so is Newton's theory. That's also false." (t = 1:28:35)
—David Deutsch on Sean Carroll's podcast.

I listened to these striking comments again recently. This time around, I realised that my conception of quantum field theory (QFT) was entirely wrong. I have a realistic picture in my head, i.e. when I talk about "waves", something is waving. This is not what QFT says at all. The "fields" in question are entirely abstract. What is waving in quantum mechanics is the notion of the probability of a particle appearing at a certain location within the atom. Below I will show that this thinking is incoherent.

There have been numerous attempts to reify the quantum wavefunction. And they all lead to ridiculous metaphysics. Some of the most hilarious metaphysics that quantum mechanics has produced are:

  1. The universe behaves one way when we look at it, and a completely different way when we don't.
  2. The entire universe is constantly, and instantaneously, splitting into multiple copies of itself, each located in exactly the same physical space, but with no connections between the copies.
  3. Electrons are made of waves of probability that randomly collapse to make electrons into real particles for a moment.

None of these ideas is remotely compatible with any of the others. And far from there being a consensus, the gaps between "interpretations" are still widening. Anyone familiar with my work on the Heart Sutra will recognise this statement. It's exactly what I said about interpretations of the Heart Sutra.

Physics has lost its grip on reality. It has a schizoid ("splitting") disorder. I believe I know why.


What Went Wrong?

The standard quantum model embraces wave-particle duality as a fundamental postulate. In the 1920s, experiments seemed to confirm this. This is where the problems start.

Schiff's (1968) graduate-level textbook, Quantum Mechanics, discusses the idea that particles might be considered "wave packets":

The relation (1.2) between momentum and wavelength, which is known experimentally to be valid for both photons and particles, suggests that it might be possible to use concentrated bunches of waves to describe localized particles of matter and quanta of radiation. To fix our ideas, we shall consider a wave amplitude or wave function that depends on the space coordinates x, y, z and the time t. This quantity is assumed to have three basic properties. First, it can interfere with itself, so that it can account for the results of diffraction experiments. Second, it is large in magnitude where the particle or photon is likely to be and small elsewhere. And third, it will be regarded as describing the behavior of a single particle or photon, not the statistical distribution of a number of such quanta. (Schiff 1968: 14-15. Emphasis added)

I think this statement exemplifies the schizoid nature of quantum mechanics. The Schrödinger model begins with a particle, described as a "wave packet", using the mathematics of waves. The problem is that physicists still want to use the wave equation to recover the "position" or "momentum" of the electron in the atom, as though it were a particle. I have seen people dispute that this was Schrödinger's intention, but it's certainly how Schiff saw it, and his text was widely respected in its day.

The obvious problem is that, having modelled the electron as a wave, how do we then extract from it information about particles, such as position and momentum? Mathematically, the two ideas are not compatible. Wave-talk and particle-talk cannot really co-exist. 

In fact, Schrödinger was at a loss to explain this. It was Max Born who pointed out that if you take the modulus squared of the (complex-valued) wavefunction, you get a probability distribution that allows you to predict measurements. As I understand it, Schrödinger did not like this at all. In an attempt to discredit this approach, he formulated his classic thought experiment of the cat in the box, a polemic that failed so badly that the Copenhagen crowd adopted Schrödinger's cat as their mascot. I'll come back to this.

However, there is a caveat here. No one has ever measured the position of an electron in an atom, and no one ever will. It's not possible. We have probes that can map out forces around atoms, but we don't have a probe that we can, say, stick into an atom and wait for the electron to run into it. This is not how things work on this scale.


Can We Do Better? (Yes We Can!)

Electric charge is thought to be a fundamental property of matter. We visualise the electric charge of a proton as a field of electric potentials with a value at every point in space, whose amplitude drops off as the square of the distance. The electric field around a proton is observed to be symmetrical in three dimensions. In two dimensions, we can picture a proton as a point with radiating, evenly spaced field lines.

An electron looks the same, but the arrows point inwards (the directionality of charge is purely conventional). So if the electron were a point charge, an atom would be an electric dipole: two opposite point charges separated in space.

If the electron were a point mass/charge, then, the hydrogen atom would be subject to unbalanced forces, and such an atom would be unstable. Moreover, a moving electric dipole causes fluctuations in the magnetic field that would rapidly bleed energy away from the atom, so if it didn't collapse instantaneously, it would collapse rapidly.

Observation shows atoms to be quite stable. So, at least in an atom, an electron cannot be a point mass/charge. And therefore, in an atom, an electron is not a point mass/charge.

Observation also shows that hydrogen atoms are electrically neutral. Given that the electric field of the proton is symmetrical in three dimensions, there is only one shape the electron could take that would balance the electric charge: a sphere with the charge distributed evenly over it.


The average radius of the sphere would be the estimated value of the atomic radius: around 53 picometres (0.053 nanometres) for hydrogen. The radius of a proton is estimated to be on the order of 1 femtometre.

Niels Bohr had a similar idea. He proposed that the electron formed a "cloud" around the nucleus. And this cloud was later identified as "a cloud of probability". Which is completely meaningless. The emperor is not wearing any clothes. As David Albert says on Sean Carroll's podcast:

“… there was just this long string of brilliant people who would spend an hour with Bohr, their entire lives would be changed. And one of the ways in which their lives were changed is that they were spouting gibberish that was completely beneath them about the foundations of quantum mechanics for the rest of their lives…” (emphasis added)

We can do better, with some simple logic. We begin by postulating, along with QFT, that the electron is some kind of wave.

If the electron is a wave, AND the electron is a sphere, AND the atom is stable, AND the atom is electrically neutral, then the electron can only be a spherical standing wave.

Now, some people may say, "But this is exactly what Schrödinger said". Almost. There is a crucial difference. In this model, the spherical standing wave is the electron. Or, looked at from the other direction, an electron (in a hydrogen atom) is a physical sphere with an average radius of ~53 pm. There is no particle, we've logically ruled out particles.

What does observation tell us about the shape of atoms? We have some quite recent data on this. For example, Lisa Grossman (2013), reporting for New Scientist, presents pictures of a hydrogen atom recently created by experimenters.

The original paper was published in Physical Review Letters.

Sadly, the commentary provided by Grossman is the usual nonsense, but the pictures speak for themselves. The atom is clearly a sphere in reality, just as I predicted using simple logic. Many crafty experiments have reported the same result. It's not just that the probability function is spherical. Atoms are spheres. Not solid spheres, by any means, but spheres nonetheless.

We begin to part ways with the old boys. And we are instantly in almost virgin territory. To the best of my knowledge, no one has ever considered this scenario before (I've been searching the literature).

The standard line is that the last input classical physics had was Rutherford's planetary model, proposed in 1911 after he successfully identified that atoms have a nucleus containing most of the mass of the atom. This model was debunked by Bohr in 1913, and classical physics has had nothing more to say. As far as anyone seems to know, "classical physics says the electron is a point mass". No one has ever modelled the electron in an atom as a real wave. At least no one I can find.

This means that there are no existing mathematical models I can adapt to my purpose. I have to start with the general wave equation and customise it to fit. The generalised wave equation in spherical coordinates is:

(1/c²) ∂²Ψ/∂t² = (1/r²) ∂/∂r (r² ∂Ψ/∂r) + (1/(r² sin θ)) ∂/∂θ (sin θ ∂Ψ/∂θ) + (1/(r² sin²θ)) ∂²Ψ/∂φ²

where r is the radial distance from the centre of the sphere, θ and φ are the polar and azimuthal angles, c is the wave speed, and t is time. Notice that it is a second-order partial differential equation, and that the rates of change in each quantity are interdependent. It can be solved, but it is not easy.
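
For orientation, I'll note (this is my gloss, a standard textbook step rather than anything original) that the usual route to solving such an equation is separation of variables:

```latex
% Separation of variables for the wave equation in spherical coordinates
% (a standard textbook step, added for orientation):
\Psi(r, \theta, \phi, t) = R(r)\, Y_{\ell m}(\theta, \phi)\, e^{-i\omega t}
```

The angular factors Y_lm are the spherical harmonics, and boundary conditions on the radial factor R(r) quantise the allowed frequencies. These harmonics are exactly the shapes that reappear below.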

The fact is that, while this approach is not identical to the existing quantum formalism, it is isomorphic with it (i.e. it has the same form). Once we clarify the concept and what we are trying to do with it, it ought to be possible to adapt the existing formalism. So we don't have to abandon quantum mechanics; we just have to alter our starting assumptions and let the change work through everything we have to date.

An important question arises: What about the whole idea of wave-particle duality?

In my view, any particle-like behaviour is a consequence of experimental design. Sticking with electrons, we may say that every electron detector relies on atoms in the detector absorbing electrons. And there are no fractional electrons. Each electron is absorbed by one and only one atom. It is this phenomenon that causes the appearance of discrete "particle-like" behaviour. At the nano-scale, any scientific apparatus is inevitably an active part of the system.

The electron is a wave. It is not a particle. 

Given the wild success of quantum mechanics (electronics, lasers, and so on), why would anyone want to debunk it? For me, it is because it doesn't explain anything. I didn't get into science so I could predict measurements by solving abstract maths problems. I got into it so I could understand the world. In physics, maths is supposed to represent the world and to have a physical interpretation. I'm not ready to give up on that.


The Advantages of Modelling the Electron as a (Real) Wave.

While they are sometimes reported as special features of quantum systems, the features in question are in fact characteristic of all standing waves.

In all standing waves, energy is quantised. This is because a standing wave only allows whole numbers of half-wavelengths to fit between its boundaries. We may use the example of a guitar string that vibrates in one dimension*.

*Note that if you look at a real guitar string, you will see that it vibrates in two dimensions: perpendicular to the face of the guitar and parallel to it.

The ends of the string are anchored, so the amplitude of any wave is always zero at the ends; they cannot move at all. The lowest possible frequency is when half a wavelength spans the string, i.e. when the wavelength is twice the string length.

The next lowest possible frequency is when a whole wavelength spans the string, i.e. the wavelength equals the string length. And so on.
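
We can tabulate the allowed modes in a few lines of Python (a sketch; the string length and wave speed are assumed values, not measurements):

```python
# Allowed modes of a string fixed at both ends: lambda_n = 2L/n, so only
# whole numbers of half-wavelengths fit on the string.
L = 0.65   # string length in metres (assumed)
v = 220.0  # wave speed on the string in m/s (assumed)

for n in range(1, 6):
    wavelength = 2 * L / n      # n half-wavelengths span the string
    frequency = v / wavelength  # f = v / lambda
    print(f"mode {n}: lambda = {wavelength:.3f} m, f = {frequency:.1f} Hz")
```

The allowed frequencies form the discrete series f_n = n·v/2L; nothing in between is possible. That is quantisation, with no quantum mechanics in sight.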


This generalises. All standing waves are quantised in this way. This is "the music of the spheres". 

Now, spherical standing waves with a central attractive force exist, and were described ca 1782 by Pierre-Simon Laplace. These entities are mathematically much more complicated than a string vibrating in one dimension. Modelling them is a huge challenge.

For the purposes of this essay, we can skip to the end and consider what the general case of harmonics of a spherical standing wave looks like when the equations are solved and plotted.


Anyone familiar with physical chemistry will find these generalised shapes familiar. These are the theoretical shapes of electron orbitals for hydrogen. And this is without any attempt to account for the particular situation of an electron in an atom (the Coulomb potential, the electric field interfering with itself, etc.).
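
The angular shapes in question are the spherical harmonics Y_lm that fall out of the separation of variables above. A minimal sketch for evaluating a few of them, using SciPy's long-standing sph_harm function (note its (m, l) argument order and its convention that theta is the azimuthal angle; recent SciPy releases are migrating to a renamed replacement):

```python
# Evaluate a few spherical-harmonic modes Y_l^m; their lobed shapes are
# the "orbital" pictures familiar from physical chemistry.
import numpy as np
from scipy.special import sph_harm  # arguments: (m, l, theta, phi)

theta = np.linspace(0, 2 * np.pi, 5)  # azimuthal angle
phi = np.linspace(0, np.pi, 5)        # polar angle

for l, m in [(0, 0), (1, 0), (2, 1)]:  # s-, p-, d-like modes
    Y = sph_harm(m, l, theta, phi)
    print(f"l={l} m={m}: |Y| = {np.abs(Y).round(3)}")
```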

So not only is the sphere representing the electron naturally quantised, but the harmonics give us electron "orbitals". And, if we drop the idea of the electron as a particle, this all comes from within a classical framework (though not Rutherford's classical framework). 


Why Does Attempting to Reify Probability Lead to Chaos?

As already noted, Schrödinger tried and failed to relate his equation back to reality. Max Born discovered that the modulus squared of the wavefunction at a given point could be interpreted as the probability of finding "the electron" (qua particle) at that point. This accurately predicts the probable behaviour of an electron, though not its actual behaviour. But all this requires electrons to be both waves and point-mass particles.

Since the real oscillations I'm describing are isomorphic with the notional oscillations predicted by Schrödinger, we can intuit that if we were to quantify the probability associated with the amplitude of the (real) spherical standing wave at a given point on the sphere, then any probability distribution we created from this would also be isomorphic with the application of the Born rule to Schrödinger's equation.

What I've just done, in case it wasn't obvious, is explain the fundamentals of quantum mechanics (in philosophical terms at least) in one sentence. The predicted probabilities take the form that they do because of a physical mechanism: a spherical standing wave. And I have not done any violence to the notion of "reality" in the process. To my knowledge, this has not been done before, although I'm certainly eager to learn if it has.
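
To make the forward direction concrete, here is a one-dimensional analogue (my own illustration, not the spherical case): square the amplitude of a real standing wave and normalise it, and you get a Born-style probability density over position.

```python
# A real standing wave yields a Born-style probability density.
import numpy as np

L, n = 1.0, 2                    # string length and mode number (assumed)
x = np.linspace(0.0, L, 1001)
psi = np.sin(n * np.pi * x / L)  # real standing-wave amplitude

dx = x[1] - x[0]
p = psi**2                       # |amplitude| squared
p /= p.sum() * dx                # normalise so the density integrates to 1

print(round(p.sum() * dx, 6))    # 1.0: a valid probability density
print(x[np.argmax(p)])           # an antinode is the most probable position
```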

However, the isomorphism only runs in one direction: you can get from the physical description to the probabilities, but you can never get from a probability distribution back to a physical description. Let me explain why by using a simple analogy that can be generalised.

Let's take the very familiar and simple case of a system in which I toss a coin in the air and, when it lands, I note which face is up. The two possible outcomes are heads H and tails T. The probabilities are well-known:

P(H) = 0.5 and P(T) = 0.5.

And as always, the sum of the probabilities of all the outcomes is 1.0. So:

P(H) + P(T) = 1.0

No matter what values we assign to P(H) and P(T), they have to add up to 1.

In physical terms, this means that if we toss 100 coins, we expect to observe heads 50 times and tails 50 times. In practice, we will most likely not get exactly 50 of each because probabilities do not determine outcomes. Still, the more times we toss the coins, the closer our actual distribution will come to the expected value.
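
A quick simulation makes the point (a minimal sketch assuming a fair coin):

```python
# The observed frequency of heads approaches the expected value 0.5 as
# the number of tosses grows, but no individual toss is determined by
# the probabilities.
import random

for n in [10, 100, 1_000, 10_000, 100_000]:
    heads = sum(random.random() < 0.5 for _ in range(n))
    print(f"{n:>7} tosses: observed P(H) = {heads / n:.4f}")
```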

Now imagine that I have tossed a coin, it has landed, but I have not yet observed it (call this the one-dimensional Schrödinger's cat, if you like). The standard rhetoric is to say that the coin is in a superposition of two "states". One has to be very wary of the term "state" in this context. Quantum physicists do not use it in the normal way, and it can be very confusing. But I am going to use "state" in a completely naturalistic way. The "state" of the tossed coin refers to which face is up. And it has to be in one of two possible states: H or T.  

Now let's ask what I know, and what I can know, about the coin at this moment, before I observe its state.

I know that the outcome must be H or T. And I know that the odds are 50:50 either way. What else can I know? Nothing. Despite knowing to 100 decimal places what the probability is, I cannot use that information to know what state the coin is in before I observe it. If I start with probabilities, I can say nothing about the fact of the matter (a phrase David Albert uses a lot). If I reify this concept, I might be tempted to say that there is no fact of the matter.

Note also that it doesn't matter if P(H) and P(T) are changing. Let us say that the probabilities change over time and that the change can be precisely described by a function of the coin: Ψ(coin). Are we any better off? Clearly not.

This analogy generalises. No matter how complex my statistical model, no matter how accurately and precisely I know the probability distribution, I still cannot tell you which side up the coin is without looking. There is undoubtedly a physical fact of the matter, but as the old joke goes, you cannot get there from here.

There are an infinite number of reasons why a coin toss will have P(H) = P(T) = 0.5. We can speculate endlessly. This is why the "interpretations" of quantum mechanics are so wildly variable and the resulting metaphysics so counter-intuitive. Such speculations are not bound by the laws of nature. In fact, all such speculations propose radical new laws of nature, like splitting the entire universe in two every time a quantum event happens. 

So the whole project of trying to extract meaningful metaphysics from a probability distribution was wrong-headed from the start. It cannot work, and it does not work. A century of effort by very smart people has not produced any workable ideas. Or any consensus on how to find a workable idea. 


Superposition and the Measurement Problem

The infamous cat experiment, in all its varieties, involves a logical error. As much as Schrödinger resisted the idea, because of his assumption about wave-particle duality, his equation only tells us about the probabilities of states; it does not and cannot tell us which state happens to be the fact of the matter. The information we get from the current formalism is a probability distribution. So the superposition in question is only a superposition of probabilities; it's emphatically not a superposition of states (in my sense). A coin cannot ever be both H and T. That state is not a possible state. 

Is the superposition of probabilities in any way weird? Nope.

That P(H) = 0.5 (or P(H) = Ψ(coin)) and that P(T) = 0.5 (or P(T) = Ψ(coin)) are not weird facts. Nor is the fact that P(H) + P(T) = 1. These are common or garden facts, with no mystical implications.

If we grant that the propositions P(H) = 0.5 and P(T) = 0.5 are logically true, then it must also be logically (and mathematically) true to say that P(H) + P(T) = 1. Prior to observation, all probabilities coexist at the same time.

For all systems we might meet, all the probabilities for all the outcomes always coexist prior to observing the state of the system. And the probabilities for all but one outcome collapse to zero at the moment we observe the actual state. This is true for any system: coins, cats, electrons, and everything. 
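
In code terms, this "collapse" is nothing more than updating a table of expectations (a sketch of the bookkeeping, not of any physics):

```python
# Before we look, we hold a probability for every possible state;
# observing the actual state sends every other probability to zero.
import random

probs = {"H": 0.5, "T": 0.5}          # prior to observation, all coexist
observed = random.choice(["H", "T"])  # the fact of the matter
probs = {s: 1.0 if s == observed else 0.0 for s in probs}

print(observed, probs)                # e.g. T {'H': 0.0, 'T': 1.0}
```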

Note also that this is not a collapse of anything physical. No attempt to reify this "collapse" should be made. Probability is an idea we can quantify, but it's not an entity. No existing thing collapses when we observe an event. 

Moreover, Buddhists and hippies take note, our observing an event cannot influence the outcome. Light from the event can only enter our eye after the event has occurred, i.e. only after the probabilities have collapsed. And it takes the brain an appreciable amount of time to register the incoming nerve signal, make sense of it, and present it to the first-person perspective. Observation is always retrospective. So no, observation cannot possibly play any role in determining outcomes. 

One has to remember that probability is abstract. It's an idea about how to quantify uncertainty. Probability is not inherent in nature; it comes from our side of the subject-object divide. Unlike, say, mass or charge, probability is not what a reductionist would call "fundamental". We discover probabilities through observation of long-term trends. At the risk of flogging a dead horse, you cannot start with an abstraction and extract from it a credible metaphysics. Not in the world that we live in. And after a century of trying, the best minds in physics have signally failed in this quixotic endeavour. There is not even a working theory of how to make metaphysics from probabilities. 

The superposition or collapse of probabilities is in no way weird. And this is the only superposition predicted by quantum mechanics. 

In my model, the electron is a wave, and the wave equation that describes it applies at all times. Before, during, and after observation. 

In my model, probabilities superpose when we don't know the facts of the matter, in a completely normal way. It's just that I admit the abstract nature of probability distributions. And I don't try to break reality so that I can reify an abstraction.

On the other hand, my approach is technically classical: an approach that ought to predict all the important observations of quantum mechanics, but which can also explain them in physical terms. As such, there is no separation between classical and quantum in my model. It's all classical. And I believe that the implications of this will turn out to be far-reaching and will allow many other inexplicable phenomena to be easily explained.

The so-called measurement problem can be seen as a product of misguided attempts to hypostatise and reify the quantum wavefunction, which only predicts probabilities. It was only ever a problem caused by a faulty conceptualisation of the problem in terms of wave-particle duality. If we drop this obviously false axiom, things will go a lot more smoothly (though the maths is still quite fiendish).

No one ever has or ever will observe a physical superposition. I'm saying that this is because no such thing exists or could exist. It's just nonsense, and we should be brave enough to stand up and say so.

There is no "measurement problem". There's measurement and there is ill-advised metaphysical speculation based on reified abstractions.


What about other quantum weirdness?

I want to keep this essay to a manageable length, so my answer to this question must wait. But I believe that Peter Jackson's (2009) free electron model, as a vortex rotating on three axes, is perfectly consistent with what I have outlined here. And it explains spin very elegantly. If the electron is a sphere in an atom, why not allow it to always be a sphere?

Jackson also elegantly explains why the polarised-filter set-up used to test Bell's inequalities is not quantum weirdness, but a result of the photon interacting with, and thus being changed by, the filter. At the nano-scale and below, there is no neutral experimental apparatus.

What about interference and the double-slit experiment? Yep, I have some ideas on this as well.

Tunnelling? I confess that I have not tried to account for tunnelling just yet. At face value, I think it is likely to turn out to be a case of absorption and re-emission (like Newton's cradle) rather than Star Trek-style teleporting. Again, there is no such thing as a neutral apparatus at the nano-scale or below. If your scientific apparatus is made of matter, it is an active participant in the experiment, and at the nano-scale it changes the outcomes.

It's time to call bullshit on quantum mechanics and rescue physicists from themselves. After a century of bad metaphysics, let's put the phys back into physics!

~~Φ~~


P.S. My book on the Heart Sutra is coming along. I have a (non-committal) expression of interest from my publisher of choice. I hope to have news to share before the end of 2025.
PPS. I'd quite like to meet a maths genius with some time on their hands...

PPPS (16 Apr). I now have an answer to the question "What is waving?". An essay on this is in progress but may take a while. 


Bibliography

Grossman, Lisa. (2013). "Smile, hydrogen atom, you're on quantum camera." New Scientist. https://www.newscientist.com/article/mg21829194-900-smile-hydrogen-atom-youre-on-quantum-camera/

Jackson, Peter. (2009). "Ridiculous Simplicity". FQXi. What is Fundamental? https://forums.fqxi.org/d/495-perfect-symmetry-by-peter-a-jackson

Schiff, Leonard I. (1968). Quantum Mechanics. 3rd Ed. McGraw-Hill.
