
06 February 2026

Philosophical Detritus V: Determinism and Free Will.

I'm about to write an essay about determinism and free will. No one is compelling me to do this; I just noticed that a lot of people were confused, and I hope to arrive at some clarity. I do not know in advance what each sentence is going to say or how many sentences there will be. I don't even know, as I start writing, all the ideas that I'm going to explore. I do research and learn things as I go. But I sit and write, usually in several sessions, until I think I've covered the topic adequately, and voilà, another essay emerges.

English has a large vocabulary, with many nuances and synonyms. It also has a very flexible grammar, allowing ideas to be stated in many different ways with slightly different emphases. Moreover, the issues I want to write about are complex. 

There are a million essays I might have written. How did I come to write this particular one? It certainly feels like I choose the words and sentences as I sit and deliberate on what to say and how to say it, especially when I write a sentence one way and then subsequently change the wording or phrasing. But what is really going on?

Do I choose words on a coolly rational basis, with no input from any other faculty, including my own emotions? Or were the words that I apparently chose to write actually predetermined by the laws of physics at the time of the Big Bang? Is either of these two widely believed possibilities plausible? Should I appeal to some middle ground, or should I find some completely different way to frame the discussion? How would I even know?

Of the legacy philosophical concepts I've commented on in this series of essays, determinism and free will are probably the least coherent. And this essay has been the most difficult to write. There are so many different approaches that even a basic overview of the main currents in this topic would be longer than I intend this essay to be. For any given statement one can make, the contrary is likely to be vigorously asserted by someone else. As before, my aim is to try to cut through the bullshit with some pragmatism. There's just so much of it in this case.

The plethora of approaches to determinism and free will (viewed as standalone concepts) is only multiplied when the two are combined into one argument, where they are sometimes treated as mutually exclusive and sometimes as compatible. There is no consensus on either term on its own, and no consensus on how the two relate. It's not just that we disagree on details. There is no consensus on how to conceptually frame this discussion. In the case of free will, those who take a determinist stance argue that it simply doesn't exist, so there is nothing to frame. The situation is not helped when commentators tacitly assume a worldview and proceed as if that view is normative, which is all too common.

In matters related to determinism and free will, there is a profound dissensus and continuing divergence of views amongst intellectuals. The issue only becomes more complex over time. This abject failure to agree is sometimes presented positively as pluralism; however, in genuine pluralism, we expect a range of coherent positions that compete to explain some phenomenon. Here, we cannot even agree that there is a phenomenon to be explained.

Discussions of this type have been documented for thousands of years. Nowhere are the failures of academic philosophy and science more starkly revealed than in such long-term unresolved issues. I agree with Einstein that concepts should be as simple as possible, but no simpler. I'm not arguing for an enforced unity or some naive oversimplification. I'm genuinely perturbed by wanting to understand such issues and finding them so hopelessly lost in the weeds. At this point, it would take considerable effort to do worse than professional philosophers.

Academic philosophy seems to have devolved into competitive sophistry, completely unrelated to the lives that most of us live. Of course, people who like arguments find competitive sophistry endlessly entertaining. While arguing can be a diverting hobby for some, the rest of us find it annoying and counterproductive: it doesn't really change anything. 

One of the main themes of these essays has been the lack of epistemic privilege. No person has privileged access to reality. Ergo, no one is in a position of authority vis-à-vis reality. And this was strongly pointed out by both David Hume (1711–1776) and Immanuel Kant (1724–1804). Rather than admit this, priests, scientists, and philosophers all seem to charge ahead regardless. And so confusion reigns. And I find this intensely irritating. Unlike some of my other suggestions about legacy concepts, I don't see anything here worth rescuing.

I think the whole, millennia-long exercise of arguing about determinism and free will has been a gigantic waste of everyone's time. If you are confused about this topic and go looking for clarity amongst philosophers, scientists, or historians, all you will find is a great deal more confusion. The topic is a tangle of shifting definitions, hidden assumptions, and conflicting ideological commitments. No layperson has any hope of finding genuine clarity, but all kinds of pseudo-clarity are on sale.

Pragmatically, we all experience making decisions and choices; we experience the impact of the choices we make and the impact of the choices that others make. This has to be our starting point. But we also have to acknowledge that we are often baffled by our own decisions. Decisions involve conscious and unconscious mechanisms. Any philosophy which does not say something constructive about these is not worth our time and energy. 


Demonic Determinism

The modern idea of determinism is often traced to the great French mathematician Pierre Simon, Marquis de Laplace (1749–1827). In 1814, he wrote:

“We ought then to regard the present state of the universe as the effect of its anterior state and as the cause of the one which is to follow. Given for one instant an intelligence which could comprehend all the forces by which nature is animated and the respective situation of the beings who compose it—an intelligence sufficiently vast to submit these data to analysis—it would embrace in the same formula the movements of the greatest bodies of the universe and those of the lightest atom; for it, nothing would be uncertain and the future, as the past, would be present to its eyes.”
— Essai philosophique sur les probabilités. (tr. by F. W. Truscott and F. L. Emory) Chapman & Hall, 1902. p.4.

The "intelligence" (une intelligence) mentioned by Laplace somehow became known in English as "Laplace's demon". While we credit Laplace, this mechanistic idea about the universe seems to have been quite widely accepted at the time. 

These days, we usually sum up the idea by saying that if we knew the location and momentum of every particle in the universe with perfect accuracy and precision, and if we also knew all the laws of physics that govern particles to the same perfect degree, then we could perfectly predict the future. 

In this hypothetical, the word "if" is doing a lot of work. For example, it is assumed in this view that such knowledge is theoretically possible. Remember that Laplace was saying this a century before quantum physics had been conceived. His view of the universe was purely classical and mechanistic.

Laplace also assumes that we can always recover the past by putting negative values for time into some mathematical description of nature. This is true of classical laws of motion, but it's not possible in statistical mechanics (and thus thermodynamics) or in quantum mechanics. And note that all we get from this exercise is knowledge of the past, not the actual past (I will come back to this point in an essay about time and time travel).
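
To make the Laplacian picture concrete, here is a toy sketch in Python (my illustration, not Laplace's; the "universe" is just a frictionless spring). Given an exact starting state and a deterministic law, the future is fixed; and because classical laws of motion are time-reversible, flipping the velocities and applying the same law recovers the past:

  # Toy Laplacian universe: one particle on a spring (F = -k*x), evolved
  # deterministically with velocity-Verlet integration, then run backwards.

  def step(x, v, dt, k=1.0, m=1.0):
      """Advance the state (x, v) by one deterministic time step."""
      a = -k * x / m
      x_new = x + v * dt + 0.5 * a * dt * dt
      a_new = -k * x_new / m
      v_new = v + 0.5 * (a + a_new) * dt
      return x_new, v_new

  x, v = 1.0, 0.0              # the complete "state of the universe" at t = 0
  dt, n = 0.01, 1000

  for _ in range(n):           # predict the future
      x, v = step(x, v, dt)

  v = -v                       # reverse all the motions...
  for _ in range(n):           # ...and the same law recovers the past
      x, v = step(x, v, dt)

  print(round(x, 6), round(-v, 6))   # ≈ (1.0, 0.0): the initial state

Nothing analogous is available once we move to statistical mechanics or quantum mechanics.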

The idea of "conservation of information" is quite popular, though it's unrelated to physical conservation laws based on physical symmetries in the universe (Noether's theorem). As far as I can see, the idea that "information" is conserved relies on a series of ontological presuppositions that cannot be true, not least of which is the assumption that the universe is absolutely deterministic. Arguments along the lines that, apparently, lost information is only hidden and unrecoverable, rather than truly lost, seem to have a weird definition of "lost".

The basic idea of determinism is that events can only occur in one way. All events are absolutely predetermined in advance by the starting conditions of the universe and the combined laws of nature. This view is similar to the absolute fatalism of Advaita Vedanta theology, which attracted Erwin Schrödinger in his later years. 

Determinists believe that, even though we experience ourselves making choices, there is never any doubt about the outcome. In this view, everything can be reduced to particles following rules. Obviously, if we have no choices and make no decisions, then there can be no such thing as "free will" or any other kind of will. An important corollary of this view is that there can be no coherent morality or ethics. If no one chooses to do actions, then no one is responsible for those actions (Buddhists who deny the existence of agents also have this problem). Indeed, the idea that evil is blameworthy is entirely negated. Determinism is a form of nihilism. Nothing we do, say, or think makes any difference. Concepts like morality, fairness, and justice no longer have any meaning. Nothing matters.

My sense is that while determinists make some powerful arguments, almost no one is willing to simply abandon the concept of morality. Which means that while some people (especially some physicists) argue for an uncompromising version of determinism, most intellectuals understand that morality needs to be retained and preserved. Indeed, the mainstream of academic philosophy has always promoted so-called compatibilism: a range of ideas that embrace determinism but argue that it (somehow) does not rule out free will.

Importantly, the idea of determinism is largely absent from our judicial systems. Notions of agency and responsibility appear to be indispensable to a society. This is a theme I plan to circle back to by way of a conclusion to this series of essays.

As an aside, note that male intellectuals like to call their favoured, often uncompromising, stance on any given topic the "hard" version, and any compromise the "soft" version. So, an uncompromising approach to determinism is often called "hard determinism", and compatibilist approaches are called "soft determinism". And one cannot help but think that, while Freud was wrong in most respects, he was not totally wrong. I try to avoid penis-based terminology in my writing.

In practice, there are dozens of different perspectives on determinism and even taxonomies that are supposed to help us grapple with the definitional promiscuity. If this problem is unfamiliar, I've posted a structured list at the end as an appendix. No doubt some will find the list inadequate, which only reinforces my point about the proliferation of definitions. However, I don't find any of these approaches interesting or meaningful. I don't think the idea of metaphysical determinism is coherent or cogent, at least as far as Laplacian determinism is concerned. There are numerous problems.


Mechanics of Various Kinds

Newtonian, Hamiltonian, and Lagrangian formulations of physics are deterministic as conceived, but also incomplete: they cannot account for events in systems with very large masses, very high relative velocities, and very high energies. Einstein's relativity theories are deterministic and can account for the exceptions. However, relativity is also incomplete, since it cannot be reconciled with our theory for very small masses and it clearly makes a wrong prediction for the Big Bang. We don't know of any classical—i.e., deterministic—theories that are complete. They all break down beyond certain limits. 

Moreover, we cannot even see the entire universe, and we have no idea what lies beyond the limits imposed on us by the speed of light. We can infer that parts of the universe exist from which light will never reach us. We have no way to infer the extent or nature of those parts of the universe. We can infer that physics is the same across the visible universe, but we simply don't know if this holds beyond the limits of our knowledge. Our "universe" could be a tiny bubble in a much larger structure. 

Incidentally, I don't find any multiverse theories cogent. This is simply what happens when you canonise mathematics and adopt the procedure of bending reality to fit your theory (a procedure that has more in common with medieval theology than with empirical science). Which brings us to so-called "quantum mechanics".

As far as I can see, quantum mechanics is not deterministic at all. While some people like to assert that it is, I showed why this is not the case in my previous essay: quantum mechanics can never tell us where a particle is. Precise location information is simply not a possible output of the Schrödinger equation. Indeed, to do a location-based calculation, we have to tell the Schrödinger equation where we expect the particle to be (often based on classical approximations). And all it does is tell us the probability of finding it there. This means that Laplace's demon has no starting information, so even if it knew the laws of physics, it couldn't apply them. 

That is to say, there are no deterministic rules in quantum mechanics that govern where a particle is now or where it will be 1 second from now. But it gets worse.

The uncertainty principle says that the precision with which we can know a particle's momentum (and hence where it will be) is inversely proportional to the precision with which we know where it is now: the product of the two uncertainties can never fall below ħ/2. This means that if we could know exactly where all the particles are at some time, we would necessarily know nothing about where they are going. Even a quantum Laplace demon could not know exactly where a particle is and simultaneously know exactly how it is moving.
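
To give a sense of the numbers, here is a back-of-envelope calculation in Python (my example, using standard constants): confine an electron's position to roughly the size of an atom, and see what the bound Δx·Δp ≥ ħ/2 does to its velocity.

  # Heisenberg bound for an electron confined to an atom-sized region.
  # The numbers are illustrative; only the inequality itself is standard.

  hbar = 1.054571817e-34    # reduced Planck constant, J·s
  m_e = 9.1093837e-31       # electron mass, kg

  dx = 1e-10                # pin the position down to ~0.1 nm
  dp_min = hbar / (2 * dx)  # smallest momentum uncertainty allowed
  dv_min = dp_min / m_e     # corresponding velocity uncertainty

  print(f"Δp ≥ {dp_min:.2e} kg·m/s")   # ≈ 5.3e-25 kg·m/s
  print(f"Δv ≥ {dv_min:.2e} m/s")      # ≈ 5.8e+05 m/s, i.e. ~600 km/s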

Another problem is that quantum mechanics is not a scalable theory. The Schrödinger equation for hydrogen, while being a complex problem in three-dimensional calculus, is nonetheless solvable. The Schrödinger equation for helium has no exact solution, even in principle. Rather, in order to use quantum mechanics in a three-body system, one has to impose a series of simplifying assumptions, not least of which is treating the nucleus as a classical object. Rather than admit the implications of this for determinism, physicists simply ignore the fact and proceed as if quantum mechanics is a complete description and fully deterministic.

It's widely known that physicists themselves are deeply divided over the ontology of quantum mechanics; see:

  • Gibney, Elizabeth. (2025). "Physicists disagree wildly on what quantum mechanics says about reality, Nature survey shows." Nature News, 30 July 2025.

Again, this is not simply a failure to find a consensus on details. With the mathematics treated as canonical and inviolable, physicists are left to propose increasingly bizarre speculative accounts of how "reality" might be bent to fit the canonical maths. In philosophy, we call this a Procrustean bed.

If you accept canonical quantum mechanics, then you must abandon determinism.


Structure Matters

I wrote three long essays exploring the idea that both structure and scale are important factors in any description of nature (NB: I was still using the term "reality" in a reified way a lot back then; I wouldn't phrase it that way now, but the basic intuitions about structure and scale are still relevant).

Here I owe a debt to Richard Jones; see:

  • Jones, Richard H. (2013). Analysis & the Fullness of Reality: An Introduction to Reductionism & Emergence. Jackson Square Books.

Incidentally, Jones is also the most underrated Nāgārjuna scholar on the planet. He has published English translations of all Nāgārjuna's major works and a good chunk of Prajñāpāramitā. His commentary on Prajñāpāramitā was a major influence on me. But like me, Jones is an outsider. 

Structure refers to (relatively) static arrangements of stuff, be it particles, bricks, or people. A structural property is a property that an object obtains by virtue of the arrangement of its parts. A good example is the buoyancy of a ship made of steel. Steel is ~8× more dense than water. A 1000 kg lump of steel would have a volume of about 125 litres, about the same volume as a bathtub. In water, it would sink like a proverbial stone. However, if you take that 1000 kg lump of steel, flatten it to about 5 mm thick, and shape it into a hollow cylinder that encloses a volume greater than 1000 litres (so that it can displace more than its own 1000 kg mass of water), then that steel structure will float on water.
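
The arithmetic behind this example is worth making explicit (a sketch; the densities are rounded):

  # Why a 1000 kg lump of steel sinks but a 1000 kg steel hull floats.

  rho_steel = 8000.0    # kg/m³, roughly 8x the density of water
  rho_water = 1000.0    # kg/m³
  mass = 1000.0         # kg of steel

  solid_volume = mass / rho_steel    # 0.125 m³ = 125 litres: denser than water, sinks
  float_volume = mass / rho_water    # to float, displace 1000 kg of water = 1000 litres

  plate_area = solid_volume / 0.005  # how much 5 mm plate 125 L of steel yields

  print(f"solid lump: {solid_volume * 1000:.0f} L -> sinks")
  print(f"hull must enclose more than {float_volume * 1000:.0f} L to float")
  print(f"available plate: {plate_area:.0f} m² of 5 mm steel")

The same steel, rearranged, displaces more than its own weight of water. The property belongs to the structure, not the substance.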

I use "structure" as the general term, but I mean it to include systems. Structures are relatively static and stable, while systems are relatively dynamic and can be unstable. 

Reductionism focuses on parts, aiming to find something irreducible at the bottom of the well. Metaphysical reductionism says that "reality" resides only in the lowest level of structure that cannot be further reduced to parts; the corollary being that macroscopic objects are not real. Reductionist methods aim to first eliminate structure to expose the underlying parts.

The problem with this becomes apparent in biology. Simply atomising an organism tells us little about it. Even dissecting it only tells us so much. To understand a biological organism, we have to leave it whole and observe how it interacts with surrounding structures and systems (ecology), which themselves are inevitably only parts of much larger systems all the way up to the universe as an all-encompassing structure (cosmology).

Life cannot be understood via reductionism alone. The alternative goes by several names: holism, antireductionism, and emergentism.

It seems to be true that the universe is made of atoms, for example. And that atoms are made of electrons, protons, and neutrons. And that protons and neutrons also have some structure. But just as a pile of bricks is not a house, a universe of unstructured atoms is not what we observe. Atoms form molecules. Molecules form crystals, polymers, cells, and other kinds of structures. Cells form organs. And organs form bodies. And bodies form societies.

Structure exists. It persists over time. And it confers causal properties on complex objects. These properties are sometimes vaguely called "emergent", but "structural" is more accurate and precise, and less open to abuse. Importantly, while lower levels of structure place constraints on higher levels, they do not determine higher levels (I'll come back to this).

In order to understand the universe we actually inhabit, we do need to use reductionist theories and methods to understand the substantial foundations. But on its own, this is not enough. We also have to use holist theories and methods to understand the structures that the foundations support.


Scale Matters

As we move between different scales, our explanations of nature often break down. It was larger scales made visible via telescopes that exposed the incompleteness of Newtonian physics. Structure imparts structural properties to stuff. Microscopic effects are lost at larger scales, and macroscopic effects are greater than the sum of their parts.

For example, quantum mechanics simply ignores gravitation because the impact of it on the scale of electrons and protons is so small that ignoring it has no meaningful impact on precision or accuracy, and the simplification offers a huge advantage in computability. But if your theory ignores gravitation, it has no claim to being "deterministic" in the Laplacian sense.

Scale matters because, as I noted already, substantial properties constrain but don't determine structural properties. We cannot doubt, for example, that the properties of molecules are constrained by the properties of atoms. A molecule cannot have arbitrary properties. However, the properties of water (H₂O) are also strongly related to the asymmetrical arrangement of the three atoms. It is this structure that gives the water molecule its polarity, for example. Organic chemistry is even more fascinating since the possible arrangements of carbon, nitrogen, oxygen, and hydrogen atoms are almost endless.

As we scale up, we lose track of microscopic details. In chemistry, we talk in an idealised way about individual molecules, but, actually, 1 gram of water contains ~3 × 10²² water molecules. This number is unimaginably large, and individual molecules are unimaginably small. The only way to deal with such large numbers of molecules is with abstractions and statistics. Hence, statistical mechanics and thermodynamics.

For example, the temperature of a volume of gas is proportional to the mean kinetic energy (= ½mv²) of the molecules in that volume. The pressure the gas exerts on its container is proportional to the rate and momentum with which molecules collide with its walls. And so on.
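
For the temperature case, the standard kinetic-theory relation for an ideal monatomic gas is ⟨KE⟩ = (3/2)·kB·T. A minimal sketch of the bookkeeping:

  # Temperature as a statistic: mean kinetic energy per molecule is
  # <KE> = (3/2)·kB·T for an ideal monatomic gas (standard kinetic theory).

  k_B = 1.380649e-23    # Boltzmann constant, J/K

  def temperature(mean_ke):
      """Recover the ideal-gas temperature from the mean KE per molecule."""
      return 2 * mean_ke / (3 * k_B)

  mean_ke = 1.5 * k_B * 293.0             # mean KE per molecule at ~room temperature
  print(f"{mean_ke:.2e} J")               # ≈ 6.07e-21 J: an average, not any one molecule
  print(f"{temperature(mean_ke):.0f} K")  # 293 K: the statistic, recovered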

In any case, determinism is a relic of reductive, mechanistic thinking about the universe. Uncompromising determinism is a castle built on sand. Physics is far less complete than it would need to be to support determinism, and quantum physics is not deterministic at all (at least in the Laplacian sense). Moreover, the absolute fatalism of determinism seems to fly in the face of experience, requiring us to abandon the whole concept of morality, which almost no one outside of academic physics is willing to do.

If anything, the situation with free will is even worse.


Free Will

We cannot even agree on how to spell this concept that may or may not exist. Three spellings are in common use: "free will", "free-will", and "freewill". Research suggests that most people opt for two words these days and that the other options are out of fashion. But the concept is singular, and the phrase seems like an obvious compound to me (in Sanskrit we'd call it a karmadhāraya compound). Sigh. 

If you look at general histories of free will, you will see claims that discussions extend back to antiquity, but my sense is that this is not quite true. People in antiquity may have speculated about how we make choices, but the particular idea of free will seems to be somewhat later.

However, we are hampered in such deliberations by the absence of a consensus on what free will means. Again, I have supplied a structured list of major views in the appendix for easy reference.

Apart from ancient discussions, ideas about free will embrace a range of influences. Early modern philosophers such as Hobbes, Descartes, Spinoza, Locke, Hume, and Kant all wrote about free will. Many scientists, such as Laplace, Darwin, and Einstein, have commented on the issue, most often as a consequence of their commitment to determinism. Freud also commented on the issue. It's one of those issues on which the great and good all have (different) opinions.

One of the most striking forms of evidence that physicists cite against free will is the experiments performed by Benjamin Libet in the 1980s. These suggested that we make decisions around half a second before we become aware of having made them. I noted in an essay titled Free Will is Back on the Menu (11 March 2016) that few of Libet's colleagues accepted his interpretation at the time, and it has been thoroughly debunked since. What Libet measured was conscious anticipation, not unconscious decision-making.

And yet, it is still common to see Libet cited in arguments about free will, especially by physicists. Notably, when Libet is cited in this context, no other neurophysiology authors are cited, and none of the neurophysiology literature that discusses Libet's work is ever cited. This flies in the face of scholarly method: the "literature review" remains an essential part of any research project.

Part of the problem with free will is the idea that there is one and only one decision-making faculty. And this faculty is all or nothing; it either makes all the decisions, or we don't make any decisions. Which is not even remotely consistent with my experience of making decisions. For a start, most decisions don't involve any conscious deliberation. And according to Hugo Mercier and Dan Sperber—authors of The Enigma of Reason—the reasons we give for such unconscious decisions are merely post-hoc rationalisations, confabulated on the fly.

One of the main sources of argument about free will is Christian theologians responding to the problem of evil, starting in the fourth century CE. The problem is relatively simple. If Jehovah is both good and omnipotent, why is there evil in the world at all? If Jehovah cannot do anything about evil, then he is not worthy of worship; if he can but does not, then Jehovah is evil. The theologians decided to blame humans, or more precisely, to blame women via their mythical progenitor Eve. God gave Adam and Eve free will, and Eve used it to disobey Jehovah's injunction not to eat the fruit of the knowledge of good and evil, thereby bringing evil into the world. Obviously, the theology of free will requires applying some rather tortuous logic to some rather implausible fairy tales.

And the result of all this attention from intellectuals across centuries, if not millennia, in at least a dozen different cultures? A vague, poorly defined, hotly disputed, abstract concept that may or may not exist.

It is already clear that if one adopts determinism, then one is forced to abandon morality. This result is so appalling that many philosophers and other intellectuals have tried to have their cake and eat it. They embrace determinism, but still claim that morality is meaningful. This kind of view is called compatibilism.


Compatibilism

Here is Albert Einstein in 1929 (by which time he probably knew that quantum mechanics was not deterministic, even if few other people did):

I am a determinist. As such, I do not believe in free will... I believe with Schopenhauer: We can do what we wish, but we can only wish what we must. (from an interview published in the Saturday Evening Post, 26 Oct 1929, p. 114)

However, Einstein immediately contradicts himself:

Practically, I am, nevertheless, compelled to act as if freedom of the will existed. If I wish to live in a civilized community, I must act as if man is a responsible being.

If no actions are the result of decisions, if "we can only wish what we must", then no one is responsible for their actions, and thus they are not culpable for transgressions. The very idea of transgression has to be deprecated. Einstein's position is incoherent. Which just goes to show that physicists, no matter how great they are, often make lousy philosophers.

Compatibilism is not a single unified idea, but generally speaking, compatibilists do what Einstein does. They begin by claiming to accept determinism. For example, they will agree that all events, including human actions, are fixed by prior states and laws. They try to get around the morality-denying fatalism of this statement by redefining morality or some other fudge. For example, one approach is to argue that an action becomes morally significant when it flows from the agent’s internal psychological structures—desires, reasoning, character—without external compulsion.

Unfortunately, under determinism, the notion of an "agent" is incoherent. There are no agents; there are just entities evolving according to laws. Agency implies choice, and choice is eliminated by determinism.

Compatibilism is also simply incoherent.


Deciding to Go Uphill

Every adult human has vast experience of making decisions. This is something we all do all day long. Banal choices like what to wear or eat, and morally significant choices like choosing to be honest or non-violent. Life choices like where to live, who to live with, or what job to do.

As with choosing which words to write in this essay, there are almost always many options for what to do next in any situation.

Anyone who denies that we are making decisions, as Einstein did, is obliged to provide an alternative explanation of what is actually happening. If that alternative explanation is determinism, then agents, free will, and responsibility are automatically eliminated, and we lose morality entirely. So, rather than explaining human behaviour, determinism simply eliminates it from consideration.

We have the category "agent" precisely because agents are not like other objects. Water has no choice but to flow downhill: water is not an agent. A thrown rock follows a parabolic arc. Rocks are not agents. A planet orbits a sun in an elliptical orbit. Planets are not agents.

Agents are not passive in the face of physics. An agent can go uphill or around hills. Some agents can fly over the hill. Humans often simply remove inconvenient hills or tunnel under them. As a being that experiences having agency, I would say that, where agents are concerned, there is something more going on than merely following laws.

Agents use energy to perform actions that are allowed but not favoured by the laws of physics; actions that would never happen spontaneously in nature. Agents can remain in energetically unfavourable states over long periods of time, consuming energy to do so.


The Choice of Illusions

Simply saying "choice is an illusion" is not an explanation. If we go down this road, then reductio ad absurdum, all experience is an illusion. In which case, we have not explained anything. An illusion ought not to be able to participate in causality. However, it's quite clear that my choices translate into actions and events that are causal.

For example, I start writing in the morning with a flask of pǔ'ěrchá 普洱茶 or Pu'er tea (普洱 is a toponym that cannot really be translated). From time to time, I take a sip. When my cup is empty, I refill it. When my flask is empty, I make another pot of cha. Each action has objective consequences in the sense that it results in a repeatable sequence of objective events that would not happen if I chose not to do them. This is how we objectively define causation. This causal sequence of events is not an illusion. My cup being empty is objectively not the same as my cup being full. My desire for more tea causes me to refill my cup. But it doesn't compel me to refill it, nor does it compel me to fill it with tea, let alone Pu'er tea. There's no inevitability in this situation. 

It's one thing to performatively state the belief that "experience is an illusion", but in practice, people who act like experience is an illusion typically have a psychiatric problem such as dissociative disorder, and they find it difficult or impossible to function socially.

It would be weird to believe that our decisions are not influenced by our cultural conditioning, the language we speak, our peers, and environmental exigencies. The idea of a perfectly free will—sometimes called contracausal free will—is clearly nonsensical. Like the fictional "rational faculty" that operates without any input from emotion or external influences, free will in this sense is a unicorn. And yet it is precisely contracausal free will that many people tacitly have in mind if they have not thought much about it.

Framing the issue in black and white terms—either free will exists or it doesn't—virtually guarantees failure to understand decision-making. And yet this is what most commentators seem to insist on, and certainly this framing of the issue is by far the most common one amongst the general public.

A better, more pragmatic approach would be to enquire into what factors influence our decisions. I've already mentioned some of the main influences.

As I write, for example, my word choices are governed by the rules of the English language, by my vocabulary, by the style I adopt, by my knowledge of the subject and its conventions, and so on. Language itself is constrained by human anatomy and physiology. There is no arbitrary or abstract "freedom"; it's not a standalone idea. There are degrees of freedom within an elaborate set of physical and social constraints. That's what we should be talking about. 


Conclusion

Growing up, my moral education often consisted of simplistic aphorisms. This may help explain why I'm still fond of aphorisms (see my collection on the about page). One of the most common aphorisms I heard as a kid was: "Two wrongs don't make a right." In determinism and free will, we have two wrongs. Added together, they do not make a right.

Determinism seems attractive because, superficially, it offers a level of objective certainty that religious fanatics can only dream of. However, beyond the surface, determinism unravels because none of our working theories of nature is truly deterministic or complete enough to support determinism. Moreover, the tendency to combine uncompromising determinism with uncompromising reductionism creates a false picture of the universe. Importantly, such views ignore the influence of either structure or scale.

Our principal microscopic theory of matter, quantum physics (in its various manifestations), doesn't even scale from a hydrogen atom to a helium atom, let alone to the macroscopic world. The calculations are simply too complex to ever be solved without making radical assumptions like treating the nucleus as a classical object. Which, incidentally, proves that nature is not performing calculations when a helium atom comes into existence. 

The addition of layers of structure is significant, because structure makes a qualitative and quantitative contribution. Structure is objective and causal.

Certainly, the macroscopic world is constrained by features of the microscopic, but it is not determined by them. Molecules are more than the sum of their parts. And that "more" is not mystical, magical, or emergent: it is precisely the contribution of structure. This is why reductionism fails as a universal approach.

Compatibilism is not unlike bleeding-heart liberalism, the proponents of which acknowledge the evil done by capitalism and strive to ameliorate or mitigate the damage it does through acts of charity, but who nonetheless wholeheartedly embrace capitalism.

The real problem with determinism, and the reason that even ardent determinists like Einstein adopt compatibilist approaches, is that it denies all forms of morality. The most fundamental assumption of morality is that we make choices that are reflected in our behaviour, especially our behaviour towards others. Without this assumption, all of our ideas about morality, fairness, and justice go out the window.

Religious theories of morality are even worse, since they divorce moral sensibilities from human experience. In theistic religions, morality is perceived to be imposed by some external agent. The Abrahamic religions have a very dim view of humanity. Buddhism, quite frankly, sees most people, and all non-Buddhists, as moral idiots.

I follow the primate ethologist Frans de Waal in seeing morality as a structural feature of living a social lifestyle, one rooted in the capacities for empathy and reciprocity. For a more detailed account, see my series of essays on this.

And the source:
  • Waal, Frans de. (2013). The Bonobo and the Atheist: In Search of Humanism Among the Primates. W. W. Norton & Co.

In this view, we are naturally moral, since we inherit the capacities for empathy and reciprocity. If we are immoral, this is probably the result of deliberately suppressing empathy or subverting reciprocity. Such behaviour is detrimental to the group, and the group is essential to our survival and the passing on of our genes. Ergo, the group acts to curb and prevent actions that undermine it, which helps to keep the group functioning harmoniously. The main job of the "alpha male" chimp is to intervene in conflicts on the side of the weaker party, and to ensure that any members of the group who are in conflict find a way back to harmony. Rather than being the strongest or most violent, the alpha male is generally the most trusted and respected male in the group.

The social primate code is "United we stand, divided we fall. All for one, and one for all." 

Any philosophy of nature that denies the centrality of morality in our (social) lives is practically useless. As I said at the outset, I don't see anything worth rescuing from this mess. Neither determinism nor free will is even a good idea. Whereas morality is a great idea. If the choice is either determinism or morality, then I choose morality without any hesitation. 

~~Φ~~


Appendix

Approaches to Determinism

  • I. Determinism Proper (what is fixed?)
    • Global determinism — the complete state of the world plus laws fixes all future states
    • Local determinism — determinism holds in some domains but not others
    • Nomological determinism — determinism relative to the laws of nature
    • Causal determinism — every event has a sufficient prior cause
    • Logical determinism — truth-values about the future fix what will occur
    • Theological determinism — divine foreknowledge or decree fixes outcomes
  • II. Indeterminism (denial of fixation)
    • Ontological indeterminism — the world itself is not fully fixed
    • Causal indeterminism — causes do not necessitate effects
    • Event-level indeterminism — some events lack sufficient causes
    • System-level indeterminism — higher-level descriptions are indeterminate
  • III. Hybrid Views (mixed structure)
    • Soft determinism — deterministic structure with explanatory slack
    • Probabilistic causation — laws constrain outcomes statistically
    • Emergent indeterminism — indeterminacy arises at higher levels
    • Chaotic determinism — determinism with practical unpredictability
  • IV. Epistemic Positions (about knowledge, not reality)
    • Epistemic determinism — the world may be deterministic even if unknowable
    • Epistemic indeterminism — indeterminacy reflects limits of description
    • Predictive scepticism — determinism undecidable in practice
  • V. Deflationary / Quietist
    • Instrumentalism — determinism as a modelling choice
    • Pragmatic determinism — determinism adopted for explanatory utility
    • Semantic deflationism — disputes about determinism are verbal or framework-relative
  • VI. Metaphysical Rejections
    • Anti-realist determinism — no fact of the matter about determinism
    • Pluralist metaphysics — multiple incompatible but adequate descriptions

Approaches to Free Will

  • I. Denial
    • Eliminativism — no such thing as free will
  • II. Deflationary / Revisionary
    • Pragmatic / practice-based — “free will” fixed by its role in responsibility practices
    • Revisionism — weakened notion retained for moral or social purposes
  • III. Accounts of Agency (what kind of thing acts?)
    • Reductive event-causal agency — actions explained by mental events
    • Non-reductive agency — agency irreducible to subpersonal processes
    • Emergent agency — agency arises at the personal level
    • Agent-causal agency — agents as primitive causes
  • IV. Accounts of Control (what makes action mine?)
    • Reasons-responsive control — sensitivity to reasons
    • Guidance control — ownership of the mechanism producing action
    • Hierarchical control — higher-order endorsement
    • Identification/ownership — identification with motives
  • V. Accounts of Sourcehood (where does action ultimately come from?)
    • Historical sourcehood — dependence on past self-shaping
    • Structural sourcehood — present-time ownership of springs of action
    • Ultimacy-based sourcehood — agent as ultimate origin
  • VI. Phenomenological / Narrative
    • Phenomenological agency — lived experience of choosing
    • Narrative identity — agency embedded in a self-narrative

16 January 2026

How can a particle be in two places at once? (Superposition, Again)

[Image of an atom]

A common question for lay people confronted with counterintuitive popular narrative accounts of quantum physics is:
 
How can a particle be in two places at once?

The idea that a particle can be in two places at once is a common enough interpretation of the idea of quantum superposition, but this is not the only possible interpretation. Some physicists suggest that superposition means that we simply don't know the position, and some say that it means that the "position" is in fact smeared out into a kind of "cloud" (not an objective cloud). However, being in two places at once is an interpretation that lay people routinely encounter, and it has become firmly established in the popular imagination.

Note that while the idea is profoundly counterintuitive, physicists often scoff at intuition. A quip usually attributed to Neil deGrasse Tyson (and sometimes to Feynman) runs: "The universe is under no obligation to make sense to you." I suppose this is true enough, but it lets scientists off the hook too easily. The universe might be obligation-free, but science is not. I would argue precisely that science is obligated to make sense. For the first 350 years or so, science was all about making sense of empirical data. This approach was consciously rejected by people like Werner Heisenberg, Max Born, and Niels Bohr on the way to their anti-realist conclusions.

But here's the thing. Atoms are unambiguously and unequivocally objective (their existence and properties are independent of the observer). We even have images of individual atoms now (above right). Electrons, protons, neutrons, and neutrinos are all objective entities. They exist, they persist, they take part in causal relations, and we can measure their physical properties such as mass, spin, and charge. The spectral absorption/emission lines associated with each atom are also objective.

It was the existence of emission lines, along with blackbody radiation and the photoelectric effect, that led Planck, Einstein, and then Bohr to the first quantum theories of light and of the atom. And if these lines are objective, then we expect them to have an objective cause. And since they obviously form a harmonic series, we ought to associate the lines with objective standing waves. The mathematics used to describe and predict the lines does describe a standing wave, but for reasons that are still not clear to me, physicists deny that an objective standing wave is involved. The standing wave is merely a mathematical calculation tool. Quantum mechanics is an antirealist scientific theory, which is an oxymoron.

However, we may say that if an entity like the atom in the image above has mass, then that mass has to be somewhere at all times. It may be relatively concentrated or distributed with respect to the centre of mass, but it is always somewhere. Mass is not abstract. Mass is physical and objective. Mass definitely cannot be in two places at once. Similarly, electrical charge is a fundamental physical property. It also has to be somewhere. If we deny these objective facts, then all of physics goes down the toilet.

Moreover, if that entity with mass and charge is not at absolute zero, then it has kinetic energy: it is moving. If it is moving, that movement has a speed and a direction (i.e. velocity). At the nanoscale, there is built-in uncertainty regarding knowing both position and velocity at the same time, but we can, for example, know precisely where an electron is when it hits a detector (at the cost of not knowing its speed and direction at that moment).

Quantum theory treats such objective physical entities as abstractions. Bohr convinced his colleagues that we cannot have a realist theory of the subatomic. It's not something anyone can describe because it's beyond our ability to sense. This was long before images of atoms were available. 

The story of how we came to have an anti-realist theory of these objective entities and their objective behaviour would take me too far from my purpose in this essay, but it's something to contemplate. Mara Beller's book Quantum Dialogue goes into this issue in detail. Specifically, she points to the covert influence of logical positivism on the entire Copenhagen group.

The proposition that a particle can be in two places at once is not only wildly counterintuitive, but it breaks one of Aristotle's principles of reasoning: the principle of noncontradiction. Which leaves logic in tatters and reduces knowledge to trivia. Lay people can only be confused by this, but I think that, secretly, many physicists are also confused.

To be clear:

  • No particle has ever been observed to be in different locations at the same time. When we observe particles, they are always in one place and (for example, in a cloud chamber) appear to follow a trajectory. Neither the location nor the trajectory is described by quantum physics.
  • No particle has ever been predicted to be in different locations at the same time. The Schrödinger equation simply cannot give us information about where a particle is.

So the question is, why do scientists like to say that quantum physics means that a particle can be in two places, or in two "states"*, at one time? To answer this, we need to look at the procedures that are employed in quantum mechanics and note a rather strange conclusion.

* One has to be cautious of the word "state" in this context, since it refers only to the mathematical description, not to the physical state of a system. And the distinction is seldom, if ever, noted in popular accounts.

What follows will involve some high school-level maths and physics.


The Schrödinger Equation

Heisenberg and Schrödinger developed their mathematical models to try to explain why the photons emitted by atoms have a specific quantum of energy (the spectral emission lines) rather than an arbitrary energy. Heisenberg used matrices and Schrödinger used differential equations, but the two approaches amount to the same thing. Even when discussing Schrödinger's differential equation, physicists still use linear-algebra jargon like "eigenfunctions" indiscriminately.

The Schrödinger equation can take many forms, which does not help the layperson. However, the exact form doesn't matter for my purposes. What does matter is that they all include a Greek letter psi 𝜓. Here, 𝜓 is not a variable of the type we encounter in classical physics; it is a mathematical function. Physicists call 𝜓 the wavefunction. Let's dig into what this means.


Functions

A function, often denoted by f, is a mathematical rule. In high school mathematics, we all learn about simple algebraic functions of the type:

f(x) = x + 1

This rule says: whatever the current value of x is, take that value and add 1 to it.

So if x = 1 and we apply the rule, then f(x) = 2. If x = 2.5, then f(x) = 3.5. And so on.

A function can involve any valid mathematical operation or combinations of them. And there is no theoretical limit on how complex a function can be. I've seen functions that take up whole pages of books.
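
For programmers, a function in this sense is exactly what it sounds like. The rule f(x) = x + 1, stated in Python rather than algebra:

  def f(x):
      """The rule: take the current value of x and add 1 to it."""
      return x + 1

  print(f(1))    # 2
  print(f(2.5))  # 3.5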

We often meet this formalism in the context of a Cartesian graph. For example, if the height of a line on a graph is proportional to its length along the x-axis, then we can express this mathematically by saying that y is a function of x. In maths notation:

y = f(x), where f(x) = x + 1.

Or simply: y = x + 1.

This particular function describes a line at +45° that crosses the y-axis at y = 1. Note also that if the height (y) and length (x) are treated as the two orthogonal sides of a right triangle, then we can begin to use trigonometry to describe how they change in relation to each other. Additionally, we can treat (x,y) as a matrix or as the description of a vector.

In physics, we would physically interpret an expression like y = x + 1 as showing how the value of y is proportional to the value of x. We also use calculus to show how one variable changes over time with respect to another, but I needn't go into this.


Wavefunctions and Hilbert Spaces

The wavefunction 𝜓 is a mathematical rule (where 𝜓 is the Greek letter psi, pronounced like "sigh"). If we specify it in terms of location on the x-axis, 𝜓(x) gives us one complex number (a + bi, where i = √−1) for every possible value of x. And unless otherwise specified, x can be any real number, which we write as x ∈ ℝ (which we read as "x is a member of the set of real numbers"). In practice, we usually specify a limited range of values for x.

All the values of 𝜓(x), taken together, can be considered to define a vector in an abstract notional "space" we call a Hilbert space, after the mathematician David Hilbert. The quantum Hilbert space has as many dimensions as there are values of x, and since x ∈ ℝ, this means it has infinitely many dimensions. While this seems insane at first glance (a "space" with infinitely many dimensions sounds totally unwieldy), in fact it allows physicists to treat 𝜓(x) as a single mathematical object and do maths with it. It is this property that allows us to talk about operations like adding two wavefunctions (which becomes important below).
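
In numerical work, this is quite literal. A sketch (mine, with arbitrary toy functions): sample 𝜓(x) on a finite grid of x values, and 𝜓 becomes a single complex vector that can be added, scaled, and so on:

  import numpy as np

  # A wavefunction as a vector: sample ψ(x) on a finite grid, so that ψ
  # becomes one complex-valued vector (a finite stand-in for Hilbert space).

  x = np.linspace(0.0, np.pi, 1000)      # a finite grid standing in for x ∈ ℝ
  psi1 = np.sin(x).astype(complex)       # one candidate wavefunction, sampled
  psi2 = np.sin(2 * x).astype(complex)   # another

  psi = psi1 + psi2                      # "adding two wavefunctions" is just
                                         # vector addition on this grid
  print(psi.shape)                       # (1000,): a 1000-dimensional vector
  print(psi.dtype)                       # complex128: one complex number per x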

We have to be careful here. In quantum mechanics, 𝜓 does not describe an objective, physical wave in space. Hilbert space is not an objective space. This is all just abstract mathematics. Moreover, there isn't an a priori universal Hilbert space containing every possible 𝜓. Every system produces a distinct abstract space.

That said, Sean Carroll and other proponents of the so-called "Many Worlds" interpretation first take the step of defining the system of interest as "the entire universe" and notionally assign this system a wavefunction 𝜓_universe. However, there is no way to write down an actual mathematical function for such an entity, since it would have infinitely many variables. Even if we could write it down, there is no way to compute any results from such a function: it has no practical value. In gaining a realist ontology, we lose all ability to get information without introducing massive simplifications. Formally, you can define a universal 𝜓. But in practice, to get predictions, you always reduce to a local system, which is nothing other than ordinary quantum mechanics without the Many Worlds metaphysical overlay. So in practice, Many Worlds offers no advantage over "shut up and calculate". And since the Many Worlds ontology is extremely bizarre, I fail to see the attraction.

It is axiomatic for the standard textbook approach to quantum mechanics—deriving from the so-called "Copenhagen interpretation"—that there is no objective interpretation of 𝜓. Neutrally, we may say that the maths needn't correspond to anything in the world; it just happens to give the right answers. The maths itself is agnostic; it doesn't require any physical interpretation. Bohr and co. positivistically insisted that it's not possible to have a physical interpretation because we cannot know the world on that scale.

As readers likely know, the physics community is deeply divided over (a) the possibility of realist interpretations, i.e. the issue of 𝜓-ontology and (b) which, if any, realist interpretation of 𝜓 is the right one. There is a vast amount of confusion and disagreement amongst physicists themselves over what the maths represents, which does not help the layperson at all. But again, we can skip over this and stay focussed on the goal.


The Schrödinger Equation in Practice

To make use of the Schrödinger equation, a physicist must carefully consider what kind of system they are interested in and define 𝜓 so that it describes that system. Obviously, this selection is crucial for getting accurate results. And this is a point we have to come back to.

When we set out to model an electron in a hydrogen atom, for example, we have to choose an expression for 𝜓 whose outputs correspond to the abstract mathematical "state" of that electron. There's no point in choosing some other expression, because it won't give accurate results. Ideally, there is one and only one expression that perfectly describes the system, but in practice, there may be many others that approximate it.

For the sake of this essay, I will discuss the case in which 𝜓 is a function of location. In one dimension, we can state this as: 𝜓(x). When working in three spatial dimensions plus time, for technical reasons, we use spherical spatial coordinates, which are two angles and a radial length, as well as time: 𝜓(r,θ,φ,t). The three-dimensional maths is challenging, and physicists are not generally required to be able to derive the solutions from scratch. They only need to know how to apply the end results.

Quantum mechanics courses usually begin with an electron trapped in a one-dimensional box, perhaps the simplest example of a quantum system (this is an example of a spherical cow approximation). This is very often the first actual calculation that students of quantum mechanics perform. How do we choose the correct expression for this system? In practice, this (somewhat ironically) can involve using approximations derived from classical physics, as well as some trial and error.

We know that the electron is a wave, and so we expect it to oscillate with something like harmonic motion. In simple harmonic motion, the height of the wave on the y-axis changes as the sine of the position of the particle on the x-axis.

One of the simplest expressions that satisfies our requirements, therefore, would be 𝜓(x) = sin(x), though we must specify lower and upper limits for x reflecting the scale of the box.
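
For reference, here is the well-known textbook result for this system (standard physics, not my invention): the boundary conditions 𝜓(0) = 𝜓(L) = 0 mean that only certain sine waves fit in the box, and that restriction is exactly where the quantisation comes from.

  import numpy as np

  # Particle in a 1-D box of width L: only ψ_n(x) = √(2/L)·sin(nπx/L) fits
  # the boundary conditions, with energies E_n = n²π²ħ²/(2mL²).

  hbar = 1.054571817e-34    # J·s
  m_e = 9.1093837e-31       # electron mass, kg
  L = 1e-9                  # a 1 nm box
  eV = 1.602176634e-19      # joules per electron-volt

  def psi(n, x):
      """The n-th standing wave that fits the box."""
      return np.sqrt(2 / L) * np.sin(n * np.pi * x / L)

  def energy_eV(n):
      """The n-th allowed energy, in electron-volts."""
      return (n * np.pi * hbar) ** 2 / (2 * m_e * L ** 2) / eV

  x = np.linspace(0, L, 2001)
  print(np.trapz(psi(1, x) ** 2, x))            # ≈ 1.0: ψ_1 is normalised
  for n in (1, 2, 3):
      print(f"E_{n} = {energy_eV(n):.3f} eV")   # 0.376, 1.505, 3.386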

However, it is not enough to specify the wavefunction and solve it as we might do in wave mechanics. Rather, we first need to do another procedure. We apply an operator to the wavefunction.

Just as a function is a rule applied to a number to produce another number, an operator is a rule applied to a function that produces another function. In this method, we identify operators by giving them a "hat".

So, if p is momentum (the symbol is historical), then the operator that we apply to the wavefunction so that it gives us information about momentum is p̂. And we can express this application as p̂𝜓. For my purposes, further details on operators (including Dirac notation) don't matter. However, we may say that this is a powerful mathematical approach that allows us to extract information about any measurable property for which an operator can be defined, from just one underlying function. It's actually pretty cool.

There is one more step, which is applying the Born rule. Again, for the purposes of this essay, we don't need to say more about this, except that when we solve p̂𝜓, the result is a vector (a quantity + a direction). The length of this vector is proportional to the probability that, when we make a measurement at x, we will find momentum p. And applying the Born rule gives us the actual probability.

So the procedure for using the Schrödinger equation has several steps. Using the example of 𝜓(x), and finding the momentum p at some location x, we get something like this (a numerical sketch follows the list):

  • Identify an appropriate mathematical expression for the wavefunction 𝜓(x).
  • Apply the momentum operator: p̂𝜓(x).
  • Solve the resulting function (which gives us a vector).
  • Apply the Born Rule to obtain a probability.
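
Here is a numerical sketch of those steps for the box wavefunction above. I have used the standard position-representation form of the momentum operator, p̂ = −iħ d/dx, and computed an expectation value as the illustrative end product (my choices, made for simplicity):

  import numpy as np

  hbar = 1.054571817e-34
  L = 1e-9
  x = np.linspace(0, L, 5001)
  dx = x[1] - x[0]

  # Step 1: choose an expression for ψ(x) -- the ground state of the box.
  psi = (np.sqrt(2 / L) * np.sin(np.pi * x / L)).astype(complex)

  # Step 2: apply the momentum operator p̂ = -iħ d/dx (numerical derivative).
  p_psi = -1j * hbar * np.gradient(psi, dx)

  # Steps 3-4: Born-rule quantities.
  prob_density = np.abs(psi) ** 2              # probability density over x
  print(np.trapz(prob_density, x))             # ≈ 1.0: probabilities sum to 1

  p_expect = np.trapz(np.conj(psi) * p_psi, x).real
  print(p_expect)                              # ≈ 0: a standing wave goes nowhere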

So far so good (I hope).

To address the question—How can a particle be in two places at once?—we need to go back to step one.


Superposition is Neither Super nor Related to Position

It is de rigueur to portray superposition as a description of a physical situation, but this is not what was intended. For example, Dirac's famous quantum mechanics textbook presents superposition as an a priori requirement of the theory, not a consequence of it. Any wavefunction 𝜓 must, by definition, be capable of being written as a combination of two or more other wavefunctions: 𝜓 = 𝜓₁ + 𝜓₂. Dirac simply stated this as an axiom, offering no proof, no evidence, no argument, and no rationale.

We might do this with a problem where using one 𝜓 results in overly complicated maths. It's common, for example, to treat the double-slit experiment as two distinct systems involving slit 1 and slit 2: 𝜓₁ describes a particle going only through slit 1, and 𝜓₂ describes a particle going only through slit 2. The standard defence in this context looks like this (a numerical sketch of the key step follows the list):

  • The interference pattern is real.
  • The calculation that predicts it more or less requires 𝜓 = 𝜓₁ + 𝜓₂.
  • Therefore, the physical state of the system before measurement must somehow correspond to 𝜓₁ + 𝜓₂.
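
The purely mathematical point in the second step is easy to demonstrate (a sketch with toy wavefunctions invented for the purpose): |𝜓₁ + 𝜓₂|² contains a cross term that |𝜓₁|² + |𝜓₂|² lacks, and it is this cross term that produces the fringes in the calculation.

  import numpy as np

  # Toy double-slit arithmetic: |ψ1 + ψ2|² versus |ψ1|² + |ψ2|².

  y = np.linspace(-1e-3, 1e-3, 7)    # sample positions on a screen, metres
  k = 2 * np.pi / 500e-9             # wavenumber of 500 nm light
  d, D = 50e-6, 0.1                  # slit separation and screen distance

  r1 = np.sqrt(D**2 + (y - d/2)**2)  # path length from slit 1
  r2 = np.sqrt(D**2 + (y + d/2)**2)  # path length from slit 2

  psi1 = np.exp(1j * k * r1)         # toy "through slit 1" wavefunction
  psi2 = np.exp(1j * k * r2)         # toy "through slit 2" wavefunction

  separate = np.abs(psi1)**2 + np.abs(psi2)**2   # no cross term
  combined = np.abs(psi1 + psi2)**2              # with cross term

  print(separate.round(2))   # [2. 2. 2. 2. 2. 2. 2.]: flat, no fringes
  print(combined.round(2))   # ~[4. 1. 1. 4. 1. 1. 4.]: fringes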

But the last step is exactly the kind of logic that quantum mechanics itself has forbidden. We cannot say what the state of the system is prior to measuring it. Ergo, we cannot say where the particle is before we measure it, and we definitely cannot say it's in two places at once.

To be clear, 𝜓 = 𝜓₁ + 𝜓₂ is a purely mathematical exercise that has no physical objective counterpart. According to the formalism, 𝜓 is not an objective wave. So how can 𝜓₁ + 𝜓₂ have any objective meaning? It cannot. Anything said about a particle "being in multiple states at once", or "taking both/many paths", or "being in two places at once" is all just interpretive speculation. We don't know. And the historically dominant paradigm tells us that we cannot know and we should not even ask.

To be clear, the Schrödinger equation does not and cannot tell us what happens during the double-slit experiment. It can only tell us the probable outcome. The fact that the objective effect appears to be caused by interference while the mathematical formalism involves 𝜓₁ + 𝜓₂ is entirely coincidental (according to the dominant paradigm).

Dirac fully embraced the idea that quantum mechanics is purely about calculating probabilities and that it is not any kind of physical description. A physical description of matter on the sub-atomic scale is not possible in this view. And his goal did not involve providing any such thing. His goal was only to perfect and canonise the mathematics that Heisenberg and Born had presented as a fait accompli in 1927:

“We regard quantum mechanics as a complete theory for which the fundamental physical and mathematical hypotheses are no longer susceptible of modification.”—Report delivered at the 1927 Solvay Conference.

I noted above that we have to specify some expression for 𝜓 that makes sense for the system of interest. If the expression is for some kind of harmonic motion, then we must specify things like the amplitude, frequency, direction of travel, and phase. Our choices here are not, and cannot be, derived from first principles. Rather, they must be arbitrarily specified by the physicist.

Now, there is effectively an infinite number of expressions of the type 𝜓(x) = sin (x). We can specify the amplitude, etc., to any arbitrary level of detail.

  • The function 𝜓(x) = 2 sin (x) will have twice the amplitude.
  • The function 𝜓(x) = sin (2x) will have twice the frequency.
  • The function 𝜓(x) = sin (-x) is the mirror image of sin (x); for a travelling wave, this corresponds to motion in the opposite direction.

And so on.

A physicist may use general knowledge and a variety of rules of thumb to decide which exact function suits their purposes. As noted, this may involve using approximations derived from classical physics. We need to be clear that nothing in the quantum mechanical formalism can tell us where a particle is at a given time or when it will arrive at a given location. Whoever is doing the calculation has to supply this information.

Obviously, there are very many expressions that could be used. But in the final analysis, we need to decide which expression is ideal, or most nearly so. 

For a function like 𝜓(x) = sin (x), for example, we can add some variables: 𝜓(x) = A sin (kx), where A can be understood as a scaling factor for amplitude, and k as a scaling factor for frequency. Both A and k can be any real number (A ∈ ℝ and k ∈ ℝ).

Even this very simple example clearly has an infinite number of possible variations, since ℝ is an infinite set. There are infinitely many possible functions 𝜓₁, 𝜓₂, 𝜓₃, …, 𝜓ₙ, … Moreover, because of the nature of the mathematics involved, if 𝜓₁ and 𝜓₂ are both valid functions, then 𝜓₁ + 𝜓₂ is also a valid function. It was this property of linear differential equations that Dirac sought to canonise as superposition.
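
The linearity in question can be checked directly. A sketch (my illustration, using the free, time-dependent Schrödinger equation, which is the standard setting for this property): if 𝜓₁ and 𝜓₂ are solutions, then so is 𝜓₁ + 𝜓₂.

  import sympy as sp

  x, t = sp.symbols('x t', real=True)
  hbar, m, k1, k2 = sp.symbols('hbar m k1 k2', positive=True)

  def plane_wave(k):
      w = hbar * k**2 / (2 * m)   # dispersion relation: ħω = ħ²k²/2m
      return sp.exp(sp.I * (k * x - w * t))

  def residual(psi):
      # residual of iħ ∂𝜓/∂t = -(ħ²/2m) ∂²𝜓/∂x²; zero exactly when 𝜓 solves it
      return sp.I * hbar * sp.diff(psi, t) + hbar**2 / (2 * m) * sp.diff(psi, x, 2)

  psi1, psi2 = plane_wave(k1), plane_wave(k2)
  print(sp.simplify(residual(psi1)))          # -> 0
  print(sp.simplify(residual(psi1 + psi2)))   # -> 0: the sum is also a solution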

To my mind, there is an epistemic problem in that we have to identify the ideal expression from amongst the infinite possibilities. And having chosen one expression, we then perform a calculation, and it outputs probabilities for measurable quantities.

The 𝜓-ontologists try to turn this into a metaphysical problem. Sean Carroll likes to say "the wavefunction is real". 𝜓-ontologists then make the move that causes all the problems, i.e. they speculatively assert that the system is in all of these states until we specify (or measure) one. And thus "superposition" goes from being a mathematical abstraction to being an objective phenomenon, and it's only one more step to saying things like "a particle can be in two places at once".

I hope I've shown that such statements are incoherent at face value. But I hope I've also made clear that such claims are incoherent in terms of quantum theory itself, since the Schrödinger equation can never under any circumstances tell us where a particle is, only the probability of finding it in some volume of space that we have to specify in advance. 


Conclusion

The idea that a particle can be in two places at once is clearly nonsense even by the criteria of the quantum mechanics formalism itself. The whole point of denying the relevance of realism was to avoid making definite statements about what is physically happening on a scale that we can neither see nor imagine (according to the logical positivists).

So coming up with a definite, objective interpretation—like particles that are in two places at once—flies in the face of the whole enterprise of quantum mechanics. The fact that the conclusion is bizarre is incidental since it is incoherent to begin with.

The problem is that while particles are objective, our theory is entirely abstract. Particles have mass. Mass is not an abstraction; mass has to be somewhere. So we need an objective theory to describe this. Quantum mechanics is simply not that theory. Nor is quantum field theory.

I'm told that mathematically, Dirac's canonisation of superposition was a necessary move. And to be fair, the calculations do work as advertised. One can accurately and precisely calculate probabilities with this method. But no one has any idea what this means in physical terms; no one knows why it works or what causes the phenomena it is supposed to describe. When Richard Feynman said "No one understands quantum mechanics", this is what he meant. And nothing has changed since he said it.

It would help if scientists themselves could stop saying stupid things like "particles can be in two places at once". No, particles cannot be in two places at once, and nothing about quantum mechanics makes this true. There is simply no way for quantum mathematics, as we currently understand it, to tell us anything at all about where a particle is. The location of interest is something that the physicist doing the calculation has to supply for the Schrödinger equation, not something the equation can tell us (unlike in classical mechanics).

And if the equation cannot tell us the location of the particle, under any circumstances, then it certainly cannot tell us that it is in two places or many places. Simple logic alone tells us this much.

The Schrödinger equation can only provide us with probabilities. While there are a number of possible mathematical "states" the particle can be in, we do not know which one it is in until we measure it.

If we take Dirac and co at face value, then stating any pre-measurement physical fact is simply a contradiction in terms. Pretending that this is not problematic is itself a major problem. Had we been making steady progress towards some kind of resolution, it might be less ridiculous. But the fact is that a century has passed since quantum mechanics was proposed and physicists still have no idea how or why it works but still accept that "the fundamental physical and mathematical hypotheses are no longer susceptible of modification."

Feynman might have been right when he said that the universe is not obligated to make sense. But the fact is that science is obligated to make sense. That used to be the whole point of science, and it still is in every branch of science other than quantum mechanics. No one says of evolutionary theory, for example, that it is all a mysterious black box that we cannot possibly understand. And no one would accept this as an answer. Indeed, a famous cartoon by Sydney Harris gently mocks this attitude...


The many metaphysical speculations that are termed "interpretations of quantum mechanics" all take the mathematical formalism that explicitly divorces quantum mechanics from realism as canonical and inviolable. And then they all fail miserably to say anything at all about reality. And this is where we are.

It is disappointing, to say the least.

~~Φ~~

02 May 2025

Ψ-ontology and the Nature of Probability

“The wave function is real—not just a theoretical thing in abstract mathematical space.”
—Sean Carroll. Something Deeply Hidden.

Harrigan & Spekkens (2010) introduced the distinction between theories that take the quantum wave function to be real (Ψ‑ontic) and those that take it only to provide us with knowledge (Ψ‑epistemic). One needs to know that the quantum wavefunction is notated as Ψ (Greek capital psi), which is pronounced like "sigh". So Sean Carroll's oft-stated view—"the wave function is real"—is a Ψ‑ontic approach.

Harrigan & Spekkens seem not to have foreseen the consequences of this designation, since a Ψ-ontic theory is now necessarily a Ψ-ontology, and one who proposes such a theory is a Ψ-ontologist. Sean Carroll is a great example of a Ψ-ontologist. These terms are now scattered through the philosophy of science literature.

Still, Carroll's insistence that fundamentally "there are only waves" is part of what sparked the questions I've been exploring lately. The problem, as I see it, is that the output of the wave function is a "probability amplitude"; or, taken over all possible solutions, a probability distribution. What I would have expected in any Ψ-ontology is that the Ψ-ontologist would explain, as a matter of urgency, how a probability distribution, which is fundamentally abstract and epistemic, can be reified at all. In a previous essay, I noted that this didn't seem possible to me. In this essay, I pursue this line of reasoning.


Science and Metaphysics

I got interested in science roughly 50 years ago. What interested me about science as a boy was the possibility of explaining my world. At that time, my world was frequently violent, often chaotic, and always confusing. I discovered that I could understand maths and science with ease, and they became a refuge. In retrospect, what fascinated me was not the maths, but the experimentation and the philosophy that related mathematical explanations to the world and vice versa. It was the physically based understanding that I craved.

As an adult, I finally came to see that no one has epistemic privilege when it comes to metaphysics. This means that no one has certain knowledge of "reality" or the "nature of reality". Not religieux and not scientists. Anyone claiming to have such knowledge should be subjected to the most intense scrutiny and highest levels of scepticism.

While many physicists believe that we cannot understand the nanoscale world, those few physicists and philosophers who still try to explain the reality underlying quantum physics have made numerous attempts to reify the wavefunction. Such attempts are referred to as "interpretations of quantum mechanics". And the result is a series of speculative metaphysics. If the concept of reality means anything, we ought to see valid theories converging on the same answer, with what separates them being the extra assumptions that each theory makes. After a century of examination by elite geniuses, not only do we lack a consensus about quantum reality, but each new theory takes us in completely unexpected directions.

At the heart of the difficulties, in my view, is the problem of reifying probabilities. The scientific literature on this topic is strangely sparse, given that all the metaphysics of quantum physics relies on reifying the wave function, and several other branches of physics rely on statistics (statistical mechanics, thermodynamics, etc.).

So let us now turn to the concept of probability and try to say something concrete about the nature of it.


Probability

Consider a fair six-sided die. If I roll the die, it will land with a number facing up. We can call that number the outcome of the roll. The die is designed so that the outcome of a roll ought to be a random selection from the set of all possible outcomes, i.e. {1, 2, 3, 4, 5, 6}. By design, the outcomes are all equally likely (this is what "fair" means in this context). So the probability of getting any single outcome is ⅙ or 0.16666...

By convention we write probabilities such that the sum of all probabilities adds up to one. The figure ⅙ means ⅙th of the total probability. This also means that a probability of 1 or 0 reflects two types of certainty:

  1. A probability of 1 tells us that an outcome is inevitable (even if it has not happened yet). The fact that if I roll a die it must land with one face pointing upwards is reflected in the fact that the probabilities of the six possible outcomes add to 1.
  2. A probability of 0 tells us that an outcome cannot happen. The probability of rolling a 7 is 0. 

We can test this theory by rolling a die many times and recording the outcomes. Most of us did precisely this in high school at some point. Any real distribution of outcomes will tend towards the ideal distribution as the number of rolls grows.

In the case of a six-sided fair die, we can work out the probabilities in advance based on the configuration of the system, because the system is idealised. Similarly, if I have a fair four-sided die, then I can infer that the probability of each possible outcome {1, 2, 3, 4} is ¼. And I can use this idealisation as leverage on the real world.

For example, one can test a die to determine if it is indeed fair, by rolling it many times and comparing the actual distribution with the expected distribution. Let us say that we roll a six-sided die 100 times and for the possible states {1, 2, 3, 4, 5, 6} we count 10, 10, 10, 10, 10, and 50 occurrences.

We can use statistical analysis to determine the probability of getting such an aberration by chance. In this case, we would expect this result once in ~134 quadrillion trials of 100 throws. From this we may infer that the die is unfair. However, we are still talking probabilities. It's still possible that we did get that 1 in 134 quadrillion fluke. As Littlewood's law says:

A person can expect to experience events with odds of one in a million at the rate of about one per month.

In the end, the only completely reliable way to tell if a die is fair is by physical examination. Probabilities don't give us the kind of leverage we'd like over such problems. Statistical flukes happen all the time.
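
For what it's worth, the standard tool for this kind of comparison is a chi-squared goodness-of-fit test. A minimal sketch with scipy (my illustration; I haven't tried to reproduce the ~134 quadrillion figure, which depends on how the calculation is framed):

  from scipy.stats import chisquare

  observed = [10, 10, 10, 10, 10, 50]   # counts from 100 throws of the suspect die
  expected = [100 / 6] * 6              # what a fair die leads us to expect

  stat, p = chisquare(f_obs=observed, f_exp=expected)
  print(stat, p)   # statistic = 80.0; p is vanishingly small, so the die is almost certainly unfair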

These idealised situations are all very well, and they help us to understand how probability works. However, in practice we get anomalies. For example, I recorded the results of 20 throws of a die. I expected to see each outcome 3.33 times, and got:

  1. 2
  2. 3
  3. 5
  4. 1
  5. 6
  6. 2

Is my die fair? Actually, 20 throws is not enough to be able to tell; it's not a statistically significant number of throws. So I got ChatGPT to simulate 1 million throws, and it came back with this distribution. I expected to see 166,666 of each outcome:

  1. 166741
  2. 167104
  3. 166479
  4. 166335
  5. 166524
  6. 166817

At a million throws we see the numbers converge on the expectation value (166,666). However, the outcomes of this trial still vary from the ideal by up to ~0.3%. And we cannot know in advance how much a given trial will differ from the ideal. My next trial could be wildly different.
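
The simulation itself is only a few lines of numpy (my sketch of what ChatGPT presumably did):

  import numpy as np

  rng = np.random.default_rng()
  throws = rng.integers(1, 7, size=1_000_000)     # a million fair-die throws
  faces, counts = np.unique(throws, return_counts=True)
  print(dict(zip(faces, counts)))                 # each count ≈ 166,666, give or take a fraction of a percent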

Also, it is seldom the case in real-world applications that we know all the possible outcomes of an event. Unintended or unexpected consequences are always possible. There is always some uncertainty in just how uncertain we are about any given fact. And this means that if the probabilities we know add to 1, then we have almost certainly missed something out.

Moreover, in non-idealised situations, the probabilities of events change over time. Of course, probability theory has ways of dealing with this, but they are much more complex than a simple idealised model.

A very important feature of probabilities is that they all have a "measurement problem". That is to say, before a roll of my fair six-sided die, the probabilities all co-exist simultaneously:

  • P(1) = ⅙
  • P(2) = ⅙
  • P(3) = ⅙
  • P(4) = ⅙
  • P(5) = ⅙
  • P(6) = ⅙

Now I roll the die and the outcome is 4. The probabilities "collapse" so that:

  • P(1) = 0.00
  • P(2) = 0.00
  • P(3) = 0.00
  • P(4) = 1.00
  • P(5) = 0.00
  • P(6) = 0.00

This is true for any system in which probabilities can be assigned to the outcomes of an event. Before an event, there are usually several possible outcomes, each with a probability. These probabilities always coexist simultaneously. But the actual event can only have one outcome. So it is always the case that as the event occurs, the pre-event probabilities collapse, so that the probability of the actual outcome becomes 1, while the probabilities of the other possibilities fall instantaneously to zero.
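
As a toy illustration that this "collapse" is just bookkeeping (my sketch, not anyone's formalism):

  def collapse(probabilities, outcome):
      # After the event, the actual outcome has probability 1 and the rest have 0.
      return {k: (1.0 if k == outcome else 0.0) for k in probabilities}

  prior = {face: 1/6 for face in range(1, 7)}   # before the roll
  print(collapse(prior, 4))   # {1: 0.0, 2: 0.0, 3: 0.0, 4: 1.0, 5: 0.0, 6: 0.0}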

This is precisely analogous to descriptions of the so-called Measurement Problem. The output of the Schrödinger equation is a set of probabilities, which behave in exactly the way I have outlined above. The position of the electron has a probability at every point in space, but the event localises it. Note that the event itself collapses the probabilities, not the observation of the event. The collapse of probabilities is real, but it is entirely independent of "observation".

Even if we were watching the whole time, the light from the event only reaches us after the event occurs, and it takes an appreciable amount of time for the brain to register and process the information and turn it into an experience of knowing. The fact is that we experience everything in hindsight. The picture our brain presents to our first-person perspective is time-compensated so that it feels as if we are experiencing things in real time. (I have an essay expanding on this theme in the pipeline.)

So there is no way, even in theory, that an "observation" could possibly influence the outcome of an event. Observation is not causal with respect to outcomes because "observation" can only occur after the event. This is a good time to review the idea of causality.


Causation and Probability

Arguing to or from causation is tricky since causation is an a priori assumption about sequences of events. However, one of the general rules of relativity is that causation is preserved. If I perceive event A as causing event B, there is no frame of reference in which B would appear to cause A. This is to do with the speed of light being a limit on how fast information can travel. For this reason, some people like to refer to the speed of light as the "speed of causality".

Here I want to explore the causal potential of a probability. An entity might be said to have causal potential if its presence in the sequence of events (reliably) changes the sequence compared to its absence. We would interpret this as the entity causing a specific outcome. Any observer that the light from this event could reach would interpret the causation in the same way.

So we might ask, for example, "Does the existence of a probability distribution for all possible outcomes alter the outcome we observe?"

Let us go back to the example of the loaded die mentioned above. With the loaded die, the probability of getting a 6 is 0.5, while the probability of each of the other numbers is 0.1 (0.5 in total), so the total probability is still 1.0. In real terms, this tells us that there will be an outcome, that it will be one of six possibilities, and that half the time the outcome will be 6.

Let's say, in addition, that you and I are betting on the outcome. I know that the die is loaded and you don't. We roll the die and I always bet on six, while you bet on a variety of numbers. And at the end of the trial, I have won the vast majority of the wagers (and you are deeply suspicious).
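
A quick simulation of this game (my sketch; the number of rolls is hypothetical):

  import numpy as np

  rng = np.random.default_rng()
  rolls = rng.choice([1, 2, 3, 4, 5, 6], size=1000,
                     p=[0.1, 0.1, 0.1, 0.1, 0.1, 0.5])   # the loaded die

  # Betting on six every time wins about half the wagers; betting on any
  # other single number, as you might in a fair game, wins only about 10%.
  print((rolls == 6).mean())   # ≈ 0.5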

Now we can ask, "Did the existence of probabilities per se influence the outcome?" Or perhaps better, "Does the probability alone cause a change in the outcome?"

Clearly if you were expecting a fair game of chance, then the sequence of events (you lost most of the wagers) is unexpected and we intuit that something caused that unexpected sequence.

If a third person were analysing this game as a disinterested observer, where would they assign the causality? To the skewed probabilities? I suppose this is a possible answer, but it doesn't strike me as very plausible that anyone would come up with such an answer (except to be contrarian). My sense is that the disinterested observer would be more inclined to say that the loaded die itself—and in particular the uneven distribution of mass—was what caused the outcome to vary so much from the expected value.

Probability allows us to calculate what is likely to happen. It doesn't tell us what is happening, or what has happened, or what will happen. Moreover, knowing or not knowing the probabilities makes no difference to the outcome.

So we can conclude that the probabilities themselves are not causal. If probabilities diverge from expected values, we don't blame the probabilities; rather, we suspect some physical cause (a loaded die). And, I would say, if the probabilities of known possibilities are changing, then we would also expect that to be the result of some physical process, such as unevenly distributed weight in a die.

My conclusion is this generalisation: Probabilities do not and cannot play a role in causation.

Now, there may be flaws and loopholes in the argument that I cannot see. But I think I have made a good enough case so far to seriously doubt any attempt to reify probability which does not first make a strong case for treating probabilities as real (Ψ‑ontic). I've read many accounts of quantum physics over 40 years of studying science, and I don't recall seeing even a weak argument for this.

At this point, we may also point out that probabilities are abstractions, expressed in abstract numbers. And so we next need to consider the ontology of abstractions.


Abstractions.

Without abstractions I'd not be able to articulate this argument. So I'm not a nominalist in the sense that I claim that abstractions don't exist in any way. Rather, I am a nominalist in the sense that I don't think abstractions exist in an objective sense. To paraphrase Descartes, if I am thinking about an idea, then that idea exists for me, while I think about it. The ideas in my mind are not observable from the outside, except by indirect means such as how they affect my posture or tone of voice. And these are measures of how I feel about the idea, rather than the content of the idea.

I sum up my view in an aphorism:

Abstractions are not things. Abstractions are ideas about things.

An important form of abstraction is the category, which is a generalisation about a collection of things. So, for example, "blue" is a category into which we can fit such colours as navy, azure, cobalt, cerulean, indigo, sapphire, turquoise, teal, cyan, ultramarine, and periwinkle (each of which designates a distinct and recognisable colour within the category). Colour categories are quite arbitrary. Both Pāli and Ancient Greek have only four colour categories (aka "basic colour terms"); blue and green are lumped together in the category "dark". The word in Pāli that is now taken to mean "blue" (nīla) originally meant "dark". English has eleven colour categories: red, orange, yellow, green, blue, purple, pink, brown, black, white, and grey. To be clear, ancient Indians and Greeks had the same sensory apparatus as we do, and with it the ability to see millions of colours. It's not that they couldn't see blue, or even that they had no words that denoted blue. The point is about how they categorised colours. See also my essay Seeing Blue.

In this view, probability is an abstraction because it is an idea about outcomes that haven't yet occurred. Probability can also reflect our ideas about qualities like expectation, propensity, and/or uncertainty.

When we use an abstraction in conversation, we generally agree to act as if it behaves like a real thing. For example, probability may be "high" or "low", reflecting a schema for the way that objects can be arranged vertically in space. The more of something we have, the higher we can pile it up. Thus, metaphorically, HIGH also means "more" and LOW means "less". A "high" probability is more likely than a "low" probability, even though probability is not a thing with a vertical dimension.

This reflects a deeper truth. Language cannot conform to reality, because we have no epistemic privilege with respect to reality. Reality can be inferred to exist; it cannot be directly known. In fact, "reality" is another abstraction; it is an idea about things that are real. Language need only conform to experience, and in particular to the shared aspects of experience. In this (nominalist) view, "reality" and "truth" are useful ideas, for sure, as long as we don't lose sight of the fact that they are ideas rather than things.

The use of abstractions based on schemas that arise from experience allows for sophisticated discussions, but introduces the danger of category errors, specifically:

  • hypostatisation: incorrectly treating abstract ideas as independent of subjectivity; and
  • reification: incorrectly treating abstract ideas as having physical form.

Treating abstract ideas as if they are concrete things is the basis of all abstract thought and metaphor. Treating abstract ideas as concrete things (without the "if" qualification) is simply a mistake.

Abstractions are not causal in the way that concrete objects are. They can influence my behaviour, for example, at least in the sense that belief is a feeling about an idea and thus a motivation for actions. But abstractions cannot change the outcome of rolling a die.

Since probability is expressed in numbers, I just want to touch on the ontology of numbers before concluding.


Numbers

The ontology of numbers is yet another ongoing source of argument amongst academic philosophers. But they are known to avoid consensus on principle, so we have to take everything they say with a grain of salt. Is there a real disagreement, or are they jockeying for position, trolling, or being professionally contrarian?

The question is, do numbers exist in the sense that say, my teacup exists? My answer is similar to what I've stated above, but it's tricky because numbers are clearly not entirely subjective. If I hold up two fingers, external observers see me holding up two fingers. We all agree on the facts of the matter. Thus numbers appear to be somewhat objective.

We may ask, what about a culture with no numbers? We don't find any humans with no counting numbers at all, but some peoples do have very few number terms. In my favourite anthropology book, Don't Sleep, There Are Snakes, Daniel Everett notes that the Pirahã people of Brazil count "one, two, many", and prefer to use comparative terms like "more" and "less". So if I hold up three fingers or four fingers, they would count both as "many".

However, just because a culture doesn't have a single word for 3 or 4 doesn't mean they don't recognise that 4 is more than 3. As far as I can tell, even the Pirahã would still be capable of recognising that 4 fingers is more than 3 fingers, even though they might not be able to easily make precise distinctions. So they could put 1, 2, 3, 4 of some object in order of "more" or "less" of the object. In other words, it's not that they cannot count higher quantities, it's only that they do not (for reasons unknown).

There is also some evidence that non-human animals can count. Chimps, for example, can assess that 3 bananas is more than 2 bananas. And they can do this with numbers up to 9. So they might struggle to distinguish 14 bananas from 15, but if I offered 9 bananas to one chimp and 7 to the next in line, the chimp that got fewer bananas would know this (and it would probably respond with zero grace since they expect food-sharing to be fair).

We can use numbers in a purely abstract sense, just as we can use language in a purely abstract sense. However, we define numbers in relation to experience. So two is the experience of there being one thing and another thing (the same). 1 + 1 = 2. Two apples means an apple and another apple. There is no example of "two" that is not (ultimately) connected to the idea of two of something.

In the final analysis, if we cannot compare apples with oranges, and yet I still recognise that two apples and two oranges are both examples of "two", then the notion of "two" can only be an abstraction.

Like colours, numbers function as categories. A quantity is a member of the category "two", if there is one and another one, but no others. And this can be applied to any kind of experience. I can have two feelings, for example, or two ideas.

A feature of categories that George Lakoff brings out in Women, Fire, and Dangerous Things is that membership of a category is based on resemblance to a prototype. This builds on Wittgenstein's idea of categories as defined by "family resemblance". And prototypes can vary from person to person. Let's say I invoke the category "dog". And the image that pops into my head is a Golden Retriever. I take this as my prototype and define "dog" with reference to this image. And I consider some other animal to also be a "dog" to the extent that it resembles a Golden Retriever. Your prototype might be a schnauzer or a poodle or any other kind of dog, and is based on your experience of dogs. If you watch dogs closely, they also have a category "dog" and they are excellent at identifying other dogs, despite the wild differences in physiognomy caused by "breeding".

Edge cases are interesting. For example, taxonomists long debated whether the giant panda was a true bear or a relative of the raccoon; it was similar enough to a bear to be called a "panda bear", and only modern molecular taxonomy settled the matter (in favour of the bears). Edge cases may also be exploited for rhetorical or comic effect: "That's no moon", "Call that a dog?", or "Pigeons are rats with wings".

That "two" is a category becomes clearer when we consider edge cases such as fractional quantities. In terms of whole numbers, what is 2.01? 2.01 ≈ 2.0 and in terms of whole numbers 2.0 = 2. For some purposes, "approximately two" can be treated as a peripheral member of the category defined by precisely two. So 2.01 is not strictly speaking a member of the category "two", but it is close enough for some purposes (it's an edge case). And 2.99 is perhaps a member of the category "two", but perhaps also a member of the category "three". Certainly when it comes to the price of some commodity, many people put 2.99 in the category two rather than three, which is why prices are so often expressed as "X.99".

Consider also the idea that the average family has 2.4 children. Since "0.4 of a child" is not a possible outcome in the real world, we can only treat this as an abstraction. And consider that a number like i = √-1 cannot physically exist, but is incredibly useful for discussing oscillating systems, since e^(iθ) = cos θ + i sin θ describes a circle.
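
A quick numerical check of that last claim (my sketch): the points e^(iθ) all sit at distance 1 from the origin, i.e. on the unit circle.

  import numpy as np

  theta = np.linspace(0, 2 * np.pi, 100)
  z = np.exp(1j * theta)               # e^(iθ) for a sweep of angles
  print(np.allclose(np.abs(z), 1.0))   # True: every point lies on the unit circle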

Numbers are fundamentally not things, they are ideas about things. In this case, an idea about the quantity of things. And probabilities are ideas about expectation, propensity, and/or uncertainty with respect to the results of processes.


Conclusion

It is curious that physicists, as a group, are quick to insist that metaphysical ideas like "reality" and "free will" are not real, while at the same time insisting that their abstract mathematical equations are real. As I've tried to show above, this is not a tenable position.

A characteristic feature of probabilities is that they all coexist prior to an event and then collapse to zero except for the actual outcome of the event, which has a probability of 1.

Probability represents our expectations of outcomes of events, where the possibilities are known but the outcome is uncertain. Probability is an idea, not an object. Moreover, probability is not causal; it cannot affect the outcome of an event. The least likely outcome can always be the one we happen to observe.

We never observe an event as it happens, because the information about the event can only reach us at the speed of causality. And that information has to be converted into nerve impulses that the brain then interprets. All of this takes time. This means that observations, all observations, are after the fact. Physically, observation cannot be a causal factor in any event.

We can imagine a Schrödinger's demon, modelled on Maxwell's demon, equipped with perfect knowledge of the possible outcomes and the precise probability of each, with no unknown unknowns. What could such a demon tell us about the actual state of a system or how it will evolve over time? A Schrödinger's demon could not tell us anything, except the most likely outcome.

Attempts by Ψ-ontologists to assert that the quantum wavefunction Ψ is real lead to a diverse range of mutually exclusive speculative metaphysics. If Ψ were real, we would expect observations of reality to drive us towards a consensus. But there is a profound dissensus about Ψ. In fact, Ψ cannot be observed, directly or indirectly, any more than the probability of rolling a fair six-sided die can be observed.

What we can observe tells us that quantum physics is incomplete and that none of the current attempts to reify the wavefunction—the so-called "interpretations"—succeeds. The association of Ψ-ontology with "Scientology" is not simply an amusing pun. It also suggests that Ψ-ontology is something like a religious cult, and as Sheldon Cooper would say, "It's funny because it's true."

Sean Carroll has no better reason to believe "the wavefunction is real" than a Christian has to believe that Jehovah is real (or than a Buddhist has to believe that karma makes life fair). Belief is the feeling about an idea.

Probability reflects our uncertain expectations with respect to the outcome of some process. But probability per se cannot be considered real, since it cannot be involved in causality and has no independence or physical form.

The wave function of quantum physics is not real because it is an abstract mathematical equation whose outputs are probabilities rather than actualities. Probabilities are abstractions. Abstractions are not things, they are ideas about things. The question is: "Now what?" 

As far as I know, Heisenberg and Schrödinger set out to describe a real phenomenon, not a probability distribution. It is well known that Schrödinger was appalled by Born's probability approach and never accepted it. Einstein also remained sceptical, considering quantum physics incomplete. So maybe we need to comb through the original ideas to identify where it went off the rails. My bet is that the problem concerns wave-particle duality, which we can now resolve in favour of waves.

~~Φ~~


Bibliography

Everett, Daniel L. (2009) Don’t Sleep, There Are Snakes: Life and Language in the Amazon Jungle. Pantheon Books (USA) | Profile Books (UK).

Harrigan, Nicholas & Spekkens, Robert W. (2010). "Einstein, Incompleteness, and the Epistemic View of Quantum States." Foundations of Physics 40: 125–157.

Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.
