07 November 2025

Philosophical Detritus III: Reality.

This is the third in a series of essays about abstractions in philosophy. Here, I continue the critique and extend it to another abstract concept that seems to trip many people up: reality.

Reality is one of the most common abstract metaphysical concepts used by both amateur and professional philosophers. We all like to say things like "In reality, ...", citing reality as the ultimate authority; "In reality..." is treated as a killer argument. And we try to ground our ideas of truth in reality.

However, these informal or common-sense uses of the term belie a deep and pervasive malaise in professional philosophy (the world over). After millennia of argument—across human cultures—there is no consensus on what "reality" is. Nor is there any consensus on what "truth" means (I'll come back to this). Metaphysics keeps promising insight into these problems, yet it never produces anything testable or even conceptually stable. New ways to approach reality keep emerging, but none of them ever manages to solve the problems it promises to solve.

And yet, at the same time, we all feel confident we know what reality is, or that we would know it when we see it.

When a problem has been argued over by clever people for a century without any consensus emerging, we may begin to suspect that we have framed the problem poorly. However, when we have argued for millennia and failed to reach any satisfactory conclusion, it calls the whole enterprise of philosophy, or at least metaphysics, into question.

Metaphysics is bunk. But why is it bunk?


Reality and Epistemic Privilege

Questions about reality are the principal topic of metaphysics.

  • What is real?
  • What does it mean for something to be real?
  • What is the nature of reality?

Reality is such a basic concept that you might expect there to be a long-standing consensus about it. After all, given how most of us use the term "reality", it ought to define itself. And as noted, we all seem to have a "common sense" view of reality. However, there is no general consensus on reality amongst philosophers, and there never has been. On the contrary, reality is one of the most disputed concepts in philosophy. As with many problems I've written about in recent years, there is not only an existing discordant dissensus, but it is growing all the time as new propositions are floated that try (and usually fail) to take the discussion in different directions.

We need to be clear about the implications of this dispute over "reality". If philosophers cannot even agree on what reality is, they cannot agree on anything else. There is a structural failure in the field of philosophy, an impasse that has existed for thousands of years. Lacking agreement on “reality,” philosophy fragments into self-contained silos with no common reference point. Nonetheless, this ambiguous and disputed concept continues to play an essential role in philosophy and daily life.

The problem with all these abstract metaphysical concepts is that we only have experience and imagination to go on. No one has privileged access to reality, so no one actually knows anything about reality. There is no epistemic privilege with respect to reality.

Everyone’s access to reality is mediated by factors such as perception, cognition, language, theory, and culture. There is no way around this mediation; no way to get unmediated access to reality, whatever it is. In my "nominalist" view, reality is an abstract concept; an idea. And, thus, the idea that we could have direct access to reality is quite bizarre.

Still, the idea that some amongst us do have epistemic privilege is widespread, especially in relation to religions. People are constantly stepping up to confidently tell us that they alone have privileged access to reality and can tell us what it is. This is very noticeable amongst Buddhists, who like to invoke reality:

“We live in illusion and the appearance of things. There is a reality. We are that reality. When you understand this, you see that you are nothing, and being nothing, you are everything. That is all.” --Kalu Rinpoche

“...the development of insight into the ultimate nature of reality.” --Dalai Lama

“To think in terms of either pessimism or optimism oversimplifies the truth. The problem is to see reality as it is.” --Thich Nhat Hanh

Closer to home, for me, the founder of the Triratna Order, Sangharakshita (1925 – 2018), argued that imagination was our "reality sense". This idea was inspired by English Romantic poets rather than Indian Buddhists. No word in the canonical languages of Buddhism means "imagination". It's not a concept Buddhists made use of prior to contact with Europe. Moreover, the "reality" imagined by Sangharakshita and his followers is distinctly magical, vitalistic, and teleological (all of which seem unreal to me).

Buddhists are not exceptional in seeking to leverage the dissensus on reality to stake a claim to privileged metaphysical knowledge. Nor are Buddhists the only ones who meditate. Hindus also have a long history of meditation, and they have arrived at radically different conclusions about the significance of meditative states. So too with the Sāṃkhya philosophy, which most people now encounter in the context of haṭha yoga.

There is no "reality sensing faculty" and no way to know reality directly. I do not doubt that some people experience altered states in meditation, though I would say these largely arise in the context of sensory deprivation. Whatever those states are, after 13 years of intensively editing, translating, and studying Prajñāpāramitā texts, I no longer find it plausible that altered states in meditation reflect reality (or that "reality", in any European sense, was an important concept in Buddhism prior to European contact).

Obviously, if we cannot get information about reality directly, then we cannot know reality in any conventional sense. In other words, the big problem with metaphysics is epistemic.

To be more precise, there is no epistemic justification for any metaphysics. We don't know reality. And we cannot know reality. All traditional metaphysics arises from speculating about experience.

This is not a new observation. David Hume (1711–1776) came to similar conclusions. He famously noted that we never observe a separate event that we could label "causation". What we call "causation" is merely a regular sequence of events: if event B is always seen to follow event A, then we say that A caused B. And this generalises to metaphysics. Hume argued that all knowledge is either sensory experience or ideas about sensory experience. And experience is not reality.

I have encountered many people, mostly Buddhists, over the years who professed to believe the opposite of this, i.e. that experience is reality. The corollary is that we all have our own reality. This is solipsistic and egocentric. We can show why this is false by coming back to the main argument.

Half a century earlier, Isaac Newton (1643 – 1727) was able to formulate relatively simple expressions using calculus that made accurate predictions about the motions of objects.

F = dp/dt = m(dv/dt) = ma, where p = mv

(where F = force, v = velocity, p = momentum, m = mass, a = acceleration)

And since, for any event we can witness on earth, these expressions predict future motion with considerable accuracy and precision, they surely reflect some kind of knowledge about reality. We can confirm this by using the same equations to retrodict past events that have already been observed, so we know the outcome of the process in advance. Newton's laws of motion are a very robust description of motion as it could be observed in the 18th century. In the course of my formal education, I personally demonstrated the efficacy of all these laws of motion.
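This predict-then-retrodict loop can be sketched in a few lines of code (my own illustration, not part of the essay): step Newton's second law forward numerically to predict how long an object takes to fall, then check the prediction against the closed-form result, assuming constant acceleration and ignoring air resistance.

```python
import math

def predict_fall(height_m: float, dt: float = 1e-4) -> float:
    """Predict the time (s) for an object to fall height_m from rest,
    by stepping velocity and position forward with a = F/m = g."""
    g = 9.81          # m/s^2, acceleration due to gravity at Earth's surface
    t, x, v = 0.0, 0.0, 0.0
    while x < height_m:
        v += g * dt   # dv = a * dt
        x += v * dt   # dx = v * dt
        t += dt
    return t

# "Retrodiction" check: the known closed form x = (1/2) g t^2
# gives t = sqrt(2x/g); the numerical prediction agrees with it.
h = 20.0
numeric = predict_fall(h)
analytic = math.sqrt(2 * h / 9.81)
assert abs(numeric - analytic) < 1e-2
```

The point is not the code itself but that anyone, anywhere, running this gets the same answer to within a small, quantifiable error.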

At the very least, this is objective knowledge.

Thus, there is a tension between Hume's observation that we cannot know reality and Newton's observations that appear to describe reality very well.

Immanuel Kant (1724 – 1804) attempted to resolve this tension by redefining "metaphysics" as a critical inquiry into the conditions that make knowledge and experience possible, rather than as speculative knowledge of things beyond experience. In his view, metaphysical ideas like causation, space, and time are a priori forms unconsciously imposed on experience by our minds. In other words, "space" and "time" are not part of reality and don't exist independently of an experiencing mind. Similarly, causation is how we explain sequences of experiences, not an aspect of reality. This means metaphysics cannot reveal things as they are in themselves, but only the necessary structures of how we must experience them.

The problem with Kant is that his view still requires a metaphysics, since he tacitly assumes that all humans experience the world in exactly the same way. This is a speculative view about human nature and requires something both universal and beyond experience. And I don't think it works, because this is not something Kant could have known (nor can we, even now). Worse, this is not my experience of other people. When we are exposed to a range of people, one of the most striking things we notice is that we do not all experience the world in the same way. Some aspects of experience are unequivocally not shared; they are subjective.

Kant might have gotten around this by emphasising that empiricism is more than just observing nature; it also crucially involves comparing notes about experience with other people. But he did not. It is precisely comparing notes about experience, noting the similarities and differences, that allows us to parse the commonalities in experience.

Kant doesn't really account for Newton's objective knowledge. Rather than positing that every human being sees the world in exactly the same way, it makes more sense to me to say that objective knowledge of this type is independent of the observer. From observations like Newton's laws of motion, we can infer that there is an objective world, which does have its own structures and systems. Still, this world can only be appreciated via mediated experience. At the very least, we may say that our ideas of space and time, for example, must be analogous to something objective, or we could not use them to predict the future in the way that we do.

Similar dissatisfaction with Kant drove the emergence of phenomenology. Husserl, for instance, wanted to suspend all assumptions about “reality itself” and focus instead on experience as it presents itself. Note that, despite the many successes of phenomenology, we all still rely on metaphysical frameworks to structure our understanding of the world. But this retreat from objectivity is also hamstrung by the fact that we can make valid inferences and predict the future.

However, despite the emergence of phenomenology, speculative metaphysics continues to dominate philosophy. Arguments about "reality" are ongoing and diversifying as time goes on.


Rescuing the Concept of Reality through Pragmatism

As far as I can see, some form of realism is inescapable. Realist explanations do the best job of predicting the future. Newton's laws of motion are still in daily use by scientists, engineers, and technologists precisely because they accurately predict the future. All the attempts to deprecate realism have ended in failure: specifically, failure to predict the future as realist explanations do.

Newton's laws of motion are objective facts. They are apparent to any observer. The laws have limits, and there are situations in which they fail to be accurate or precise enough. Still, within the well-known and accepted limits, the laws of motion apply.

And at the same time, we still only have experience to go on. Which means that realism has to have a pragmatic character. We can infer objective facts—such as Newton's laws of motion—and these allow us to predict the future. Being able to predict the future, and thus reduce the burden of uncertainty (without ever eliminating it), is nontrivial. Of course, such a pragmatic approach can never provide the kind of certainty that metaphysics promises, but then metaphysics has never delivered on such promises either.

I come back to a crucial point already made above: comparing notes. When we compare notes, especially in small groups, we immediately see that some aspects of our experience are shared and some are not. By painstaking observation and comparing notes, we can infer which things appear the same (or at least similar) to everyone and which are only apparent to ourselves.

Take the example of the acceleration due to gravity at the surface of the Earth. It doesn't matter who observes this or what method they use; it always turns out to be ~9.81 ± 0.15 m/s². It doesn't even matter which units we use. We could measure it in fathoms per century squared, as long as we know how to convert one unit to another.

So the acceleration due to gravitation is more or less the same for everyone. And we can account for the variations with factors like elevation and the density of the underlying bedrock. We can say, therefore, that gravitation is independent of the observer. Another way of saying this is that gravitation is objective. And having inferred this, we can imagine many ways to test this with a view to falsifying it. An inference that both makes accurate and precise predictions (and retrodictions) and also survives rigorous attempts at falsification can be considered an accurate indicator of reality.
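The unit-independence point can be made concrete with a few lines of arithmetic (my own sketch, taking the essay's joke unit literally; the conversion factors are standard, with a century taken as 100 Julian years):

```python
# The same measured value of g, expressed in fathoms per century squared,
# and converted back to SI to show no information is lost.
G_SI = 9.81                                # m/s^2
M_PER_FATHOM = 1.8288                      # metres in one fathom
S_PER_CENTURY = 100 * 365.25 * 24 * 3600   # seconds in 100 Julian years

g_fathoms = G_SI / M_PER_FATHOM * S_PER_CENTURY ** 2
g_back = g_fathoms * M_PER_FATHOM / S_PER_CENTURY ** 2

# Converting back recovers the SI value: the quantity is unit-independent.
assert abs(g_back - G_SI) < 1e-9
```

The number attached to g changes wildly with the units; the observer-independent quantity it measures does not.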

But we don't just compare notes on individual phenomena. Gravitation is a particular phenomenon, and we can compare this to other forces and how they operate. For example, we might look at gravitation in the light of observations of other kinds of motion. If we had a theory of gravitation that predicted motion that disagreed with our theory of motion, then we would be at an impasse. But this is not the case. When we investigate nature, we find that our inferences support and reinforce each other.

Newton's law of gravitation is consistent with his laws of motion. And they are both consistent with other formal regularities in the universe. And we now have explanations that go beyond the limits of Newton but which show that Newton's laws are special cases under certain limits.

Note that values obtained in this way should always include some measure of uncertainty. We aim to measure things with a high degree of accuracy and precision, but there is always some measurement error. And our inferences often rely on assumptions that may introduce inaccuracy or imprecision.

The physical sciences are inherently pragmatic. We aim to arrive at valid inferences that allow us to predict the future to a desired level of accuracy and precision. Doing this allows us to compile useful and robust inferences into a system of inferred knowledge that is highly reliable. Newton's laws are paradigmatic of such knowledge.

And when this is the case, we don't need to know what cannot be known. What we can and do infer is enough to be getting on with. We can step back from speculating about unobservable metaphysics and focus on what can be observed.

Note that this is not the same as the instrumentalism that afflicts quantum mechanics. Quantum theory was developed in a milieu profoundly influenced by logical positivism. Bohr, Heisenberg, Born, and others collectively resisted any attempt at a realist interpretation of quantum mechanics, arguing that since we could not observe the nano-scale (in the 1920s–30s), we could not know it. They rejected Schrödinger's realist interpretation, along with his and Einstein's critiques of non-realist approaches. Schrödinger's cat was intended as a refutation of the Copenhagen view, but ended with Copenhagen adopting the cat as their mascot. The modern von Neumann–Dirac formalism still resists realist interpretations. The majority of physicists simply accept that no realist interpretation of nanoscale physics is possible, even though we now have images of individual atoms, which allows us to affirm that atoms are objective phenomena.

While principled arguments exist for idealist or other non-realist views, such approaches have never allowed us to predict the future in the way that realism does. There is no idealist equivalent of Newton's laws of motion, let alone the systematic accumulation of useful knowledge that characterises science. To my knowledge, idealists have made no contribution to understanding the world.

Pragmatism allows for pluralism. Of course, most people do not want pluralism. They want answers. They want certainty. I sympathise. I want answers too. But after a lifetime of seeking answers, this is as close as I have come to a satisfactory solution.


Conclusion

"Reality" seems to be paradoxical. It is both intimately familiar and foundational to our worldviews and, at the same time, forever beyond our perception and understanding.

Thousands of years of argumentation over reality and the nature of reality have not resulted in a consensus; rather, it is the source of a growing dissensus.

Very many people reify reality, i.e. treat the abstract concept of reality as a thing in itself.

Since reality is an abstract concept, the nature of reality is abstract.

No one has epistemic privilege with respect to reality. No one knows. And the people who claim to know—including religious gurus—are misleading us or have themselves been misled. Sincerity is no guarantor of accuracy or precision. Belief is a feeling about an idea.

The emergence of phenomenology was not the end of metaphysics.

We can rescue this ambiguous and paradoxical abstract concept via pragmatism.

It has been apparent for some centuries that we don't have to rely on speculative metaphysics. We can and do infer objective facts about the world.

There are patterns and regularities in experience that only make sense in a realist framework. At the very least, experience must be analogous to reality, or we'd get lost and bump into things all the time. The precision and accuracy with which we can describe patterns and regularities in experience, and use these to predict the future, also argue pragmatically for adopting realism.

The most obvious of these descriptions and predictions come from physics, but we get them from all kinds of sciences, including social sciences.

Pragmatic objectivity is something we can aspire to and, at our best, approach. It's not the same as certainty, but it is good enough to be getting on with.

However, my sense is that promises of certain knowledge will always be attractive to some people. And this leaves those people open to manipulation and economic exploitation.

~~Φ~~

31 October 2025

Philosophical Detritus II: Consciousness

In the previous essay in this series, I made some fairly banal comments about abstractions from something like a nominalist point of view. Nominalism is usually couched in metaphysical terms, but my approach is epistemic and heuristic. I don't say "abstractions don't exist", I say "abstractions are ideas". 

Ideas are ontologically subjective. We can have objective knowledge about ideas, it's just a different kind of knowledge than we can have about objects. As John Searle puts it:

The ontological subjectivity of the domain [of consciousness] does not prevent us from having an epistemically objective science of that domain.

For example, ideas do not have properties such as location, extension, or orientation in space or time. Still, ideas are knowable. Counting systems and numbers are abstract and thus ontologically subjective, but 2 + 2 = 4 is an epistemically objective fact about numbers.
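The arithmetic example can even be checked mechanically; in a proof assistant such as Lean, the epistemically objective fact about these abstract objects reduces to pure computation:

```lean
-- 2 + 2 = 4 holds by computation on the (abstract) natural numbers.
example : 2 + 2 = 4 := rfl
```

No claim about the spatial location of the number 2 is needed; the fact is objective without the numbers being ontologically objective.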

At the same time, we can treat ideas as metaphorically located, extended, and orientated. An apt example would be "the ideas in my head". Metaphorically, IDEAS ARE OBJECTS. With this mapping from objects to ideas in place, we can now make statements in which qualities of objects and verbs that apply to manipulating objects are applied to ideas. 

Unfortunately, this opens up the possibility of (1) treating the abstract as real (reification) and (2) treating the abstract as independent of concrete examples (hypostatisation). There is a third problem, which has no widely accepted name, but which we can call animation: treating ideas as if they have their own agency (compare Freud's psychoanalytic theory, which gives emotions their own agency). That is to say, without care we may conclude that ideas are real, independent, and autonomous agents.

The point of taking this approach to abstractions is pragmatic. Over the years, I have participated in and witnessed many philosophical discussions. Not a few of these have concerned the nebulous abstraction consciousness. And on the vast majority of occasions, the discussion is plagued by unacknowledged reification, hypostatisation, and animation. In other words, the abstraction "consciousness" is routinely treated as a real, independent, and autonomous agent. For example, these are some actual questions recently asked on Quora.

  • If we did not have a consciousness, would we have thought of the idea of consciousness?
  • If science was able to clone an exact copy of me, including my consciousness, would it be me?
  • How can we upload human consciousness via AI?
  • Is there any scientific evidence that we are all one consciousness?
  • Is it possible to transfer consciousness to another body like a clone or machine?
  • Is it scientifically possible to transfer self consciousness from one body to another?
  • What is the real nature of consciousness? Can it be engineered or exported by humans, or does it exist beyond us?
  • Is consciousness a fundamental property of the universe, or is it an emergent phenomenon of complex biological systems?
  • How likely is it that humans will eventually be able to fully explain consciousness?

These kinds of questions are asked again and again with minor differences in emphasis. They are also answered over and over again. It appears that having many available answers does not reduce the desire to ask the question. I think part of the problem is that answers are wildly inconsistent. Asked a yes/no question, Quora answers often say "yes", "no", and "maybe" with equal confidence and authority.

In this essay, I will attempt to apply my heuristic to cut through some of the bullshit and bring some order to one of the most confused topics in philosophy: consciousness. This is not as difficult as it sounds.


Meaning is Use

Observing how people use the term "consciousness", my sense is that the vast majority of people use "consciousness" as a synonym for "soul". That is to say, they treat "consciousness" as a nonphysical entity that is both independent of their physical body (including their brain) and, at the same time, integral to their identity and/or personality. For the majority, it seems, consciousness is a kind of secularised soul, stripped of the supernatural significance given to it by Christians, but still a real, independent, autonomous agent.

Being independent of the body means that "consciousness" is able to survive death. It is hypostatised consciousness, for example, that allows for techno-utopian ideas such as "uploading consciousness", "transferring consciousness" from one body to another (including the wildly incoherent concept of the "brain transplant"), as well as various kinds of disembodied consciousness (including post-mortem consciousness).

In these examples, "consciousness" is something that is not causally tied to a body and, as such, it can exist without a body or be moved between bodies, including non-human bodies. Like the soul, the uploaded consciousness is disembodied and effectively immortal (which explains some of the ongoing appeal of this fallacious idea). "Uploading consciousness" is an analogue of the Christian narrative of resurrection or the Buddhist narrative of rebirth. It's an afterlife theory. In my book Karma and Rebirth Reconsidered, I argued that all religious afterlife theories are incoherent because they all contradict each other.

A particular mistake that almost always goes with consciousness qua soul is vitalism. This is the idea that what distinguishes living matter from non-living matter is some kind of animating principle or élan vital. In antiquity, this principle was almost always associated with breath (see, e.g. my essay on spirit as breath). In Judeo-Christian mythology, Yahweh breathes life into Adam, animating him. The word animate derives from a root meaning "breathe". Similarly, psykhē (Greek) and spīritus (Latin) originally meant "breath".

As anyone who has experienced the corpse of a loved one knows, it's intuitive to think that something animated them and that their corpse is the body minus that animating principle. For example, I vividly recall seeing my father's corpse in 1990 and having this reaction.

Actually, what's missing in the experience of seeing the corpse of a loved one is our emotional response to them. There is a neurological condition known as Capgras Syndrome, in which localised brain damage can leave a person able to recognise familiar faces but unable to experience emotional responses to them. Such patients frequently arrive at the bizarre conclusion that the people they know have been replaced by doppelgangers.

My father's corpse was like an exact replica of him that wasn't moving or responding. All the personality was gone. Like many people, my first intuition was that my father's life and personality had gone somewhere; which is to say, that they still existed apart from the body. With a lot more life experience and learning under my belt, I can now see that, while the difference between living and dead is stark, it's our own lack of emotional response to corpses that we are trying to explain.

As a teenager, I remember going to the funeral of my best friend's father, who died quite young. My friend and his nuclear family were all disconcertingly smiling and happy. They were not overtly religious in the conventional sense of being members of a religious community. Nevertheless, for them, the deceased man was still a very strong presence. They felt him still there with them. They were not sad, at the time, because in their minds, the father was not wholly gone or inaccessible.

I get the attraction to and plausibility of vitalism. I just don't believe it. Vitalism was discredited when chemists discovered how to synthesise organic compounds in the 19th century. We don't have to add any "vital principle" or "life force" to account for animate matter.

Despite being secularised and stripped of significance, the idea of a consciousness qua autonomous entity that survives death still has a religious flavour. Witness the people who assume that "consciousness" is an entity and then go around seeking evidence that supports this view.

By contrast, a rational approach would begin with concrete evidence. If we were to start over, and re-examine the evidence, no one would propose the concept of a soul.

The abstract concept “consciousness” has become a dead end.

  • All statements that treat “consciousness” as a concrete or real thing or entity are false.
  • All statements that treat “consciousness” as a separate or disembodied thing are false.
  • All statements that treat “consciousness” as an autonomous agent are false.

And from what I can see, very little of what remains is useful. Some metaphorical uses of "consciousness" are common:

  • A stream of consciousness.
  • The fabric of consciousness.
  • A field of awareness.
  • A thread of awareness.
  • The tapestry of the mind.
  • A vessel of thought.
  • The machinery of the mind.
  • A lens of perception.

However, all of these uses are prone to hypostatisation, reification, and animation. 


Intentionality

One way around the mistakes people make is to acknowledge Dan Dennett's observation that consciousness is (almost) always intentional. We can say that consciousness (almost) always has an object or condition. Heuristically, we can say that consciousness is always consciousness of something. If we always follow "consciousness" with "of _____" and fill in the blank, we are much less likely to go wrong. For example:

  • Concrete: “I am conscious of feeling cold.” ✓
  • Abstract: “There is consciousness of feeling cold.” ✓
  • Reified:
    • “There is a consciousness.” X
    • “My consciousness...” X
    • “Consciousness is…” X
    • “Consciousness does…” X
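The fill-in-the-blank heuristic is mechanical enough to sketch as a crude text check (a toy of my own devising; the patterns and rules are invented for illustration, not a serious analysis of English):

```python
import re

# A statement passes if "conscious(ness)" is followed by "of <something>"
# (i.e. it names an intentional object)...
INTENTIONAL = re.compile(r"\bconscious(?:ness)?\s+of\s+\w+", re.IGNORECASE)

# ...and fails if it uses the obviously reified forms from the list above:
# "a/my consciousness", "consciousness is/does".
REIFIED = re.compile(
    r"\b(?:a|my|your)\s+consciousness\b|\bconsciousness\s+(?:is|does)\b",
    re.IGNORECASE,
)

def check(statement: str) -> bool:
    """True if the statement fills in the 'of ____' blank and avoids
    treating consciousness as a free-standing entity."""
    return bool(INTENTIONAL.search(statement)) and not REIFIED.search(statement)

assert check("I am conscious of feeling cold.")
assert check("There is consciousness of feeling cold.")
assert not check("There is a consciousness.")
assert not check("My consciousness survives death.")
```

A two-pattern regex obviously cannot capture everything, but it makes the shape of the heuristic explicit: the test is simply whether the blank after "of" gets filled.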

Unfortunately, even true abstract statements about experience are likely to be misinterpreted in ways that falsify them.

The exception to conscious states being intentional is the state of "contentless awareness" sometimes experienced in sleep or meditation. See for example the discussion: "Can you be aware of nothing?" in The Conversation.

For Buddhists, note that I now distinguish "contentless awareness" from "cessation". Following cessation, there is no awareness. The state of śūnyatā (also an abstract noun) is not a conscious state. It is an unconscious state, though seemingly distinct from sleep or anaesthesia.

Contentless awareness probably corresponds to the higher āyatana stages, for example "the stage of nothingness" (ākiñcaññāyatana) or "the stage of neither awareness nor non-awareness" (nevasaññānāsaññāyatana). Prajñāpāramitā texts make it clear that having any kind of experience or memories of experience is inconsistent with śūnyatā.


To sum up

"Consciousness" is an abstract concept. An idea. Ideas are not real, independent, and autonomous agents. Ideas are ideas. Ideas are subjective; though we can have objective knowledge about them.

Talking about consciousness as a soul is a dead loss. But, then, there is very little talk about consciousness that is not a dead loss. And this includes most of "philosophy". 

Consciousness as an abstract concept is intentional. This can be reflected in statements that include what we are conscious of.

~~Φ~~

30 May 2025

Theory is Approximation

A farmer wants to increase milk production. They ask a physicist for advice. The physicist visits the farm, takes a lot of notes, draws some diagrams, then says, "OK, I need to do some calculations."

A week later, the physicist comes back and says, "I've solved the problem and I can tell you how to increase milk production".

"Great", says the farmer, "How?".

"First", says the physicist, "assume a spherical cow in a vacuum..."

What is Science?

Science is many things to many people. At times, scientists (or, at least, science enthusiasts) seem to claim that they alone know the truth of reality. Some seem to assume that "laws of science" are equivalent to laws of nature. Some go as far as stating that nature is governed by such "laws". 

Some believe that only scientific facts are true and that no metaphysics are possible. While this view is less common now, it was of major importance in the formulation of quantum theory, which still has problems admitting that reality exists. As Mara Beller (1996) notes:

Strong realistic and positivistic strands are present in the writings of the founders of the quantum revolution—Bohr, Heisenberg, Pauli and Born. Militant positivistic declarations are frequently followed by fervent denial of adherence to positivism (183).

On the other hand, some see science as theory-laden and sociologically determined. Science is just one knowledge system amongst many of equal value. 

However, most of us understand that scientific theories are descriptive and idealised. And this is the starting point for me. 

In practising science, I had ample opportunity to witness hundreds or even thousands of objective (or observer-independent) facts about the world. The great virtue of the scientific experiment is that you get the same result, within an inherent margin of error associated with measurement, no matter who does the experiment or how many times they do it. The simplest explanation of this phenomenon is that the objective world exists and that such facts are consistent with reality. Thus, I take knowledge of such facts to constitute knowledge about reality. The usual label for this view is metaphysical realism.

However, I don't take this to be the end of the story. Realism has a major problem, identified by David Hume in the 1700s. The problem is that we cannot know reality directly; we can only know it through experience. Immanuel Kant's solution to this has been enormously influential. He argues that while reality exists, we cannot know it. In Kant's view, those qualities and quantities we take to be metaphysical—e.g. space, time, causality, etc.—actually come from our own minds. They are ideas that we impose on experience to make sense of it. This view is known as transcendental idealism. One can see how denying the possibility of metaphysics (positivism) might be seen as (one possible) extension of this view. 

It's important not to confuse this view with the idea that only mind is real. This is the basic idea of metaphysical idealism. Kant believed that there is a real world, but we can never know it. In my terms, there is no epistemic privilege.

Where Kant falls down is that he lacks any obvious mechanism to account for shared experiences and intersubjectivity (the common understanding that emerges from shared experiences). We do have shared experiences. Any scenario in which large numbers of people do coordinated movements can illustrate what I mean. For example, 10,000 spectators at a tennis match turning their heads in unison to watch a ball be batted back and forth. If the ball is not objective, or observer-independent, how do the observers manage to coordinate their movements? While Kant himself argues against solipsism, his philosophy doesn't seem to consider the possibility of comparing notes on experience, which places severe limits on his idea. I've written about this in Buddhism & The Limits of Transcendental Idealism (1 April 2016).

In a pragmatic view, then, science is not about finding absolute truths or transcendental laws. Science is about idealising problems in such a way as to make a useful approximation of reality. And constantly improving such approximations. Scientists use these approximations to suggest causal explanations for phenomena. And finally, we apply the understanding gained to our lives in the form of beliefs, practices, and technologies. 


What is an explanation?

In the 18th and 19th centuries, scientists confidently referred to their approximations as "laws". At the time, a mechanistic universe and transcendental laws seemed plausible. They were also gathering the low-hanging fruit: those processes which are most obviously consistent and amenable to mathematical treatment. By the 20th century, as mechanistic thinking waned, new approximations were referred to as "theories" (though legacy use of "law" continued). And more recently, under the influence of computers, the term "model" has become more prevalent. 

A scientific theory provides an explanation for some aspect of reality, which allows us to understand (and thus predict) how what we observe will change over time. However, even the notion of explanation requires some unpacking.

In my essay, Does Buddhism Provide Good Explanations? (3 Feb 23), I noted Faye's (2007) typology of explanation:

  • Formal-Logical Mode of Explanation: A explains B if B can be inferred from A using deduction.
  • Ontological Mode of Explanation: A explains B if A is the cause of B.
  • Pragmatic Mode of Explanation: a good explanation is an utterance that addresses a particular question, asked by a particular person whose rational needs (especially for understanding) must be satisfied by the answer.

In this essay, I'm striving towards the pragmatic mode and trying to answer my own questions. 

Much earlier (18 Feb 2011), I outlined an argument by Thomas Lawson and Robert McCauley (1990) which distinguished explanation from interpretation.

  • Explanationist: Knowledge is the discovery of causal laws, and interpretive efforts simply get in the way.
  • Interpretationist: Inquiry about human life and thought occurs in irreducible frameworks of values and subjectivity. 

"When people seek better interpretations they attempt to employ the categories they have in better ways. By contrast, when people seek better explanations they go beyond the rearrangement of categories; they generate new theories which will, if successful, replace or even eliminate the conceptual scheme with which they presently operate." (Lawson & McCauley 1990: 29)

The two camps are often hostile to each other, though some intermediate positions exist between them. As I noted, Lawson and McCauley see this as somewhat performative:

Interpretation presupposes a body of explanation (of facts and laws), and seeks to (re)organise empirical knowledge. Explanation always contains an element of interpretation, but successful explanations winnow and increase knowledge. The two processes are not mutually exclusive, but interrelated, and both are necessary.

This is especially true for physics where explanations often take the form of mathematical equations that don't make sense without commentary/interpretation.  


Scientific explanation.

Science mainly operates, or aims to operate, in the ontological/causal mode of explanation: A explains B if (and only if) A is the cause of B. However, it still has to satisfy the conditions for being a good pragmatic explanation:  "a good explanation is an utterance that addresses a particular question, asked by a particular person whose rational needs (especially for understanding) must be satisfied by the answer."

As noted in my opening anecdote, scientific models are based on idealisation, in which an intractably complex problem is idealised until it becomes tractable. For example, in kinematic problems, we often assume that the centre of mass of an object is where all the mass is. It turns out that when we treat objects as point masses in kinematics problems, the computations are much simpler and the results are sufficiently accurate and precise for most purposes. 
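To make the point-mass idealisation concrete, here is a minimal sketch (with made-up launch values, not taken from the text) of the standard projectile formulas that fall out of treating an object as a point mass and ignoring air resistance and spin:

```python
import math

# Point-mass idealisation: treat the projectile as if all its mass sat
# at its centre of mass; ignore air resistance, spin, and the Earth's
# curvature. Illustrative numbers only.
g = 9.81                      # gravitational acceleration, m/s^2
v0 = 20.0                     # launch speed, m/s (assumed)
angle = math.radians(45)      # launch angle (assumed)

# Standard kinematic results for a point mass:
time_of_flight = 2 * v0 * math.sin(angle) / g
rng = v0 ** 2 * math.sin(2 * angle) / g

print(f"time of flight: {time_of_flight:.2f} s")
print(f"range: {rng:.2f} m")
```

The answers are approximations, but for a thrown ball they are accurate and precise enough for most questions we might ask.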

Another commonly used idealisation is the assumption that the universe is homogeneous or isotropic at large scales. In other words, as we peer out into the farthest depths of space, we assume that matter and energy are evenly distributed. As I will show in the forthcoming essay, this assumption seems to be both theoretically and empirically false. And it seems that so-called "dark energy" is merely an artefact of this simplifying assumption. 

Many theories have fallen because a simplifying assumption distorted their answers enough to make those answers unsatisfying. 

A "spherical cow in a vacuum" sounds funny, but a good approximation can simplify a problem just enough to make it tractable and still provide sufficient accuracy and precision for our purposes. It's not that we should never idealise a scenario or make simplifying assumptions. The fact is that we always do this. All physical theories involve starting assumptions. Rather, the argument is pragmatic. The extent to which we idealise problems is determined by the ability of the model to explain phenomena to the level of accuracy and precision that our questions require. 

For example, if our question is, "How do we get a satellite into orbit around the moon?" we have a classic "three-body" problem (with four bodies: Earth, moon, sun, and satellite). Such problems are mathematically very difficult to solve. So we have to idealise and simplify the problem. For example, we can decide to ignore the gravitational attraction caused by the satellite, which is real but tiny. We can assume that space is relatively flat throughout. We can note that relativistic effects are also real but tiny. We don't have to slavishly use the most complex explanation for everything. Given our starting assumptions, we can just use Newton's law of gravitation to calculate orbits. 
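The simplified orbit calculation described above can be sketched in a few lines. This is a toy illustration, not mission software: it ignores the satellite's own mass, the Earth, the sun, and relativistic effects, and simply applies Newton's law of gravitation to a circular orbit:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_MOON = 7.342e22    # mass of the Moon, kg
R_MOON = 1.7374e6    # mean radius of the Moon, m

def circular_orbit_speed(mass_central, radius):
    """Speed of a circular orbit from Newton's law of gravitation.
    Idealisations: satellite mass ignored, no other bodies, flat space."""
    return math.sqrt(G * mass_central / radius)

altitude = 100e3  # a 100 km lunar orbit (assumed scenario)
v = circular_orbit_speed(M_MOON, R_MOON + altitude)
print(f"orbital speed at 100 km altitude: {v:.0f} m/s")
```

The result (roughly 1.6 km/s) is plenty accurate for illustrating the point: the idealised model answers the question without the intractable full four-body treatment.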

We got to relativity precisely because someone asked a question that Newtonian approaches could not explain: why does the orbit of Mercury precess, and at what rate? In the Newtonian approximation, the orbit doesn't precess. But in Einstein's reformulation of gravity as the geometry of spacetime, a precession is expected and can be calculated. 


Models

I was in a physical chemistry class in 1986 when I realised that what I had been learning through school and university was a series of increasingly sophisticated models, and the latest model (quantum physics) was still a model. At no point did we get to reality. There did seem to me to be a reality beyond the models, but it seemed to be forever out of reach. I had next to no knowledge of philosophy at that point, so I struggled to articulate this thought, and I found it dispiriting. In writing this essay, I am completing a circle that I began as a naive 20-year-old student.

This intuition about science crystallised into the idea that no one has epistemic privilege. By this I mean that no one—gurus and scientists included—has privileged access to reality. Reality is inaccessible to everyone. No one knows the nature of reality or the extent of it. 

We all accumulate data via the same array of physical senses. That data feeds virtual models of world and self created by the brain. Those models then feed information to our first-person perspective, using the sensory apparatus of the brain to present images to our mind's eye. This means that what we "see" is at least two steps removed from reality. This limit applies to everyone, all the time.

However, when we compare notes on our experience, it's clear that some aspects of experience are independent of any individual observer (objective) and some of them are particular to individual observers (subjective). By focusing on and comparing notes about the objective aspects of experience, we can make reliable inferences about how the world works. This is what rescues metaphysics from positivism on one hand and superstition on the other. 

We can all make inferences from sense data. And we are able to make inferences that prove to be reliable guides to navigating the world and allow us to make satisfying causal explanations of phenomena. Science is an extension of this capacity, with added concern for accuracy, precision, and measurement error. 

Since reality is the same for everyone, valid models of reality should point in the same direction. Perhaps different approaches will highlight different aspects of reality, but we will be able to see how those aspects are related. This is generally the case for science. A theory about one aspect of reality has to be consistent, even compliant, with all the other aspects. Or if one theory is stubbornly out of sync, then that theory has to change, or all of science has to change. Famously, Einstein discovered several ways in which science had to change. For example, Einstein proved that time is particular rather than universal.  Every point in space has its own time. And this led to a general reconsideration of the role of time in our models and explanations. 


Sources of Error

A scientific measurement is always accompanied by an estimate of the error inherent in the measurement apparatus and procedure. Which gives us a nice heuristic: If a measurement you are looking at is not accompanied by an indication of the errors, then the measurement is either not scientific, or it has been decontextualised and, with the loss of this information, has been rendered effectively unscientific.

Part of every good scientific experiment is identifying sources of error and trying to eliminate or minimise them. For example, if I measure my height with three different rulers, will they all give the same answer? Perhaps I slumped a little on the second measurement? Perhaps the factory glitched, and one of the rulers is faulty? 

In practice, a measurement is accurate to some degree, precise to some degree, and contains inherent measurement error to some degree. And each degree should be specified to the extent that it is known.

Accuracy is itself a measurement, and as a quantity reflects how close to reality the measurement is. 

Precision represents how finely we are making distinctions in quantity.

Measurement error reflects uncertainty introduced into the measurement process by the apparatus and the procedure.

Now, precision is relatively easy to know and control. We often use the heuristic that a ruler is precise to half its smallest division. So a ruler marked in millimetres is considered precise to 0.5 mm. 

Let's say I want to measure my tea cup. I have three different rulers. But I also note that the cup has rounded edges, so knowing where to measure from is a judgment call. I estimate that this will add a further 1 mm of error. Here are my results: 

  • 83.5 ± 1.5 mm.
  • 86.0 ± 1.5 mm.
  • 84.5 ± 1.5 mm

The average is 84.7 ± 1.5 mm. So we would say that we think the true answer lies between 83.2 and 86.2 mm. And note that even though I have an outlier (86.0 mm), this is in fact within the margin of error. 

As I was measuring, I noted another potential source of error. I was guesstimating where the widest point was. And I think this probably adds another 1-2 mm of measurement error. When considering sources of error in a measurement, one's measurement procedure is often a source. In science, clearly stating one's procedure allows others to notice problems the scientists might have overlooked. Here, I might have decided to mark the cup so that I measured at the same point each time. 

Now the trick is that there is no way to get behind the measurement and check with reality. So, accuracy has to be defined pragmatically as well. One way is to rely on statistics. For example, one makes many measurements and presents the mean value and the standard deviation (which requires more than three measurements). 
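The cup example can be worked through with Python's statistics module. With only three measurements the standard deviation is not very meaningful (as noted, more measurements are needed), but the mechanics are the same:

```python
import statistics

# The three cup measurements from the text, in mm, each carrying an
# estimated +/- 1.5 mm measurement error.
measurements = [83.5, 86.0, 84.5]

mean = statistics.mean(measurements)    # best estimate of the width
stdev = statistics.stdev(measurements)  # sample standard deviation

print(f"mean: {mean:.1f} mm, sample std dev: {stdev:.1f} mm")
```

The mean comes out at 84.7 mm, and the spread of the three readings (about 1.3 mm) is comparable to the estimated measurement error, which is what we would hope to see.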

The point is that error is always possible. It always has to be accounted for, preferably in advance. We can take steps to eliminate error. An approximation always relies on starting assumptions, and these are also a source of error. Keep in mind that this critique comes from scientists themselves. They haven't been blindly ignoring error all these years. 


Mathematical Models

I'm not going to dwell on this too much. But in science, our explanations and models usually take the form of an abstract symbolic mathematical equation. A simple, one-dimensional wave equation takes the general form:

y = f(x,t)

That is to say, the displacement of the wave (y) is a function of position (x) and time (t): the displacement varies with both position in space and the passage of time. This describes a wave that, over time, moves in the x direction (left-right) and displaces in the y direction (up-down). 

More specifically, we model simple harmonic oscillations using the sine function. In this case, we know that spatial changes are a function of position and temporal changes are a function of time. 

y(x) = sin(x)
y(t) = sin(t)

It turns out that the relationship between the two functions can be expressed as 

y(x,t) = sin(x ± t).

If the wave is moving right, we subtract time, and if the wave is moving to the left, we add it. 

The sine function smoothly changes between +1 and -1, but a real wave has an amplitude, and we can scale the function by multiplying it by the amplitude.

y(x,t) = A sin(x ± t).

And so on. We keep refining the model until we get to the general formula:

y(x,t) = A sin(kx ± ωt ± ϕ).

Where A is the maximum amplitude, k is the wavenumber, ω is the angular frequency, and ϕ is the phase.

The displacement is periodic in both space and time. Since k = 2π/λ (where λ is the wavelength), the function returns to the same spatial configuration whenever x changes by nλ (where n is a whole number). Similarly, since ω = 2π/T (where T is the period), the function returns to the same temporal configuration whenever t changes by nT.
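These periodicity claims can be checked numerically. A minimal sketch, with arbitrary illustrative values for A, λ, T, and ϕ:

```python
import math

# Arbitrary illustrative parameters for y(x, t) = A sin(kx - wt + phi).
A, lam, T, phi = 2.0, 0.5, 0.2, 0.3
k = 2 * math.pi / lam   # wavenumber
w = 2 * math.pi / T     # angular frequency

def y(x, t):
    """Displacement of a rightward-moving one-dimensional wave."""
    return A * math.sin(k * x - w * t + phi)

# Periodic in space: shifting x by one wavelength returns the same value.
assert math.isclose(y(1.0, 0.7), y(1.0 + lam, 0.7), abs_tol=1e-9)
# Periodic in time: shifting t by one period returns the same value.
assert math.isclose(y(1.0, 0.7), y(1.0, 0.7 + T), abs_tol=1e-9)
print("spatial and temporal periodicity confirmed")
```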

What distinguishes physics from pure maths is that, in physics, each term in an equation has a physical significance or interpretation. The maths aims to represent changes in our system over time and space. 

Of course, this is idealised. It's one-dimensional. Each oscillation is identical to the last. The model has no friction. If I add a term for friction, it will only be an approximation of what friction does. But no matter how many terms I add, the model is still a model. It's still an idealisation of the problem. And the answers it gives are still approximations.


Conclusion

No one has epistemic privilege. This means that all metaphysical views are speculative. However, we need not capitulate to solipsism (we can only rely on our own judgements), relativism (all knowledge has equal value) or positivism (no metaphysics is possible). 

Because, in some cases, we are speculating based on comparing notes about empirical data. This allows us to pragmatically define metaphysical terms like reality, space, time, and causality in such a way that our explanations provide us with reliable knowledge. That is to say, knowledge we can apply and get expected results. Every day I wake up and the physical parameters of the universe are the same, even if everything I see is different. 

Reality is the world of observer-independent phenomena. No matter who is looking, when we compare notes, we broadly agree on what we saw. There is no reason to infer that reality is perfect, absolute, or magical. It's not the case that somewhere out in the unknown, all of our problems will be solved. As a historian of religion, I recognise the urge to utopian thinking and I reject it. 

Rather, reality is seen to be consistent across observations and over time. Note that I say "consistent", not "the same". Reality is clearly changing all the time. But the changes we perceive follow patterns. And the patterns are consistent enough to be comprehensible. 

The motions of stars and planets are comprehensible: we can form explanations for these that satisfactorily answer the questions people ask. The patterns of weather are comprehensible even when unpredictable. People, on the other hand, remain incomprehensible to me.

That said, all answers to scientific questions are approximations, based on idealisations and assumptions. Which is fine if we make clear how we have idealised a situation and what assumptions we have made. This allows other people to critique our ideas and practices. As Mercier and Sperber point out, it's only in critique that humans actually use reasoning (An Argumentative Theory of Reason, 10 May 2013). 

We can approximate reality, but we should not attempt to appropriate it by insisting that our approximations are reality. Our theories and mathematics are always the map, never the territory. The phenomenon may be real, but the maths never is.  

This means that if our theory doesn't fit reality (or the data), we should not change reality (or the data); we should change the theory. No mathematical approximation is so good that it demands that we redefine reality. Hence, all of the quantum Ψ-ontologies are bogus. The quantum wavefunction is a highly abstract concept; it is not real. For a deeper dive into this topic, see Chang (1997), which requires a working knowledge of how the quantum formalism works, but makes some extremely cogent points about idealised measurements.

In agreeing that the scientific method and scientific explanations have limits, I do not mean to dismiss them. Science is by far the most successful knowledge-seeking enterprise in history. Science provides satisfactory answers to many questions. For better or worse, science has transformed our lives (and the lives of every living thing on the planet). 

No, we don't get the kinds of answers that religion has long promised humanity. There is no certainty, we will never know the nature of reality, we still die, and so on. But then religion never had any good answers to these questions either. 

~~Φ~~


Beller, Mara. (1996). "The Rhetoric of Antirealism and the Copenhagen Spirit". Philosophy of Science 63(2): 183-204.

Chang, Hasok. (1997). "On the Applicability of the Quantum Measurement Formalism." Erkenntnis 46(2): 143-163. https://www.jstor.org/stable/20012757

Faye, Jan. (2007). "The Pragmatic-Rhetorical Theory of Explanation." In Rethinking Explanation. Boston Studies in the Philosophy of Science, 43-68. Edited by J. Persson and P. Ylikoski. Dordrecht: Springer.

Lawson, E. T. and McCauley, R. N. (1990). Rethinking Religion: Connecting Cognition and Culture. Cambridge: Cambridge University Press.


Note: 14/6/25. The maths is deterministic, but does this mean that reality is deterministic? 

16 May 2025

Observations and Superpositions

The role of observation in events has been a staple of quantum physics for decades and is closely associated with "the Copenhagen interpretation". On closer inspection, it turns out that everyone connected with Bohr's lab in Copenhagen had a slightly different view on how to interpret the Schrödinger equation. Worse, those who go back and look at Bohr's publications nowadays tend to confess that they cannot tell what Bohr's view was. For example, Adam Becker speaking to Sean Carroll (time index 21:21; emphasis added):

I don't think that there is any single Copenhagen interpretation. And while Niels Bohr and Max Born and Pauli, and Heisenberg and the others may have each had their own individual positions, I don't think that you can combine all of those to make something coherent...

...Speaking of people being mad at me, this is something that some people are mad at me for, they say, "But you said the Niels Bohr had this position?" I'm like, "No, I didn't, I didn't say that Niels Bohr had any position. I don't know what position he had and neither does anybody else."

So we should be cautious about claims made for "the Copenhagen interpretation", which seem to imply a consensus that never existed at Bohr's lab in Copenhagen.

That said, the idea that observation causes the wavefunction to collapse is still a staple of quantum physics. Despite playing a central role in quantum physics, "observation" is seldom precisely defined in scientific terms, or when it is defined, it doesn't involve any actual observation (I'll come back to this). The situation was made considerably worse when (Nobel laureate) Eugene Wigner speculated that it is "consciousness" that collapses the wave function. "Consciousness" is even less well-defined than "observation". While most academic physicists instantly rejected the role of consciousness in events, outside of physics it became a popular element of science folklore and New Ageism.

The idea that "observation" or "consciousness" are involved in "collapsing the wave function" is also an attachment point for Buddhists who wish to bolster their shaky faith by aligning it with science. The result of such legitimisation strategies is rather pathetic hand waving. Many Buddhists want reality to be reductive and idealist: they want "mind" to be the fundamental substance of the universe. This would align with some modern interpretations of traditional Buddhist beliefs about mind. But the idea is also to find some rational justification for Buddhist superstitions like karma and rebirth. As I showed at length in my book Karma and Rebirth Reconsidered, it simply does not work.

In this essay, I will show that it is trivially impossible for observation to play any role in causation at any level. I'm going to start by defining observation with respect to a person and exploring the implications of this, particularly with respect to Schrödinger's cat. I will also consider the post hoc rationalisation of observation qua "interaction" (sans any actual observation).


What is "An Observation"?

We may say that an observer, Alice, observes a process P giving rise to an event E, with an outcome O, when she becomes aware of P, E, and O. It is possible to be aware of each part individually, but in order to understand and explain what has happened, we really need to have some idea of what processes were involved, what kinds of events they engendered, and the specific outcomes of those events. 

It's instructive to ask, "How does Alice become aware of external events?" Information from the process, event, and/or outcome of interest first has to reach her in some form. The fastest way that this can happen is for light from the process, event, and/or outcome to reach Alice's eyes. It always takes a finite amount of time for the light to reach her eye.

But light reaching Alice's eye alone does not create awareness. Rather, cells in the eye convert the energy of light into electrochemical energy (a nerve impulse). That pulse of energy travels along the optic nerve to the brain and is incorporated into our virtual world model and then, finally, presented to the first-person perspective. Only then do we become aware of it. And this part also takes a finite amount of time. Indeed, this part takes a lot more time than the light travelling.

Therefore, the time at which Alice becomes aware of P, E, and O, is some appreciable amount of time after E happens and O is already fixed. There is no alternative definition of "observation" that avoids this limitation, since information cannot travel faster than the speed of light and the brain is always involved. The only other possibilities are, if anything, slower. Therefore:

Alice can only observe processes, events, and outcomes after the fact.

If observation is always after the fact, then observation can never play any causal role in the sequence of events because causes must precede effects, in all frames of reference. Therefore:

Observation can play no causal role
in processes, events, or outcomes.

This means that there is no way that "observation" (or "consciousness") can cause the collapse of the wavefunction. Rather, the collapse of the wavefunction has to occur first, then the light from that event has to travel to Alice's eye. There is no way around this physical limitation in our universe. And given the nature of wavefunctions—the outputs of which are vectors in a complex plane—this can hardly be surprising. 

Observation is never instantaneous, let alone precognitive. And this means that all talk of observation causing "wavefunctions to collapse" is trivially false.
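As a rough sketch of the timescales involved (the neural-latency figure below is an assumption; visual processing is commonly cited as taking on the order of 100-200 ms):

```python
C = 299_792_458.0       # speed of light in a vacuum, m/s

# Assumed figure for the eye-to-awareness pipeline; the exact value
# doesn't matter for the argument, only that it is strictly positive.
NEURAL_LATENCY = 0.15   # seconds

def observation_lag(distance_m):
    """Minimum delay between an event and an observer's awareness of it:
    light travel time plus neural processing time."""
    return distance_m / C + NEURAL_LATENCY

# Even across a 5 m room, the lag is dominated by the brain, and it is
# always positive: awareness trails the event.
lag = observation_lag(5.0)
print(f"minimum lag for an event 5 m away: {lag:.9f} s")
```

Whatever numbers one plugs in, the lag never reaches zero, which is the point: observation always comes after the outcome is fixed.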

We could simply leave it at that, but it will be instructive to re-examine the best known application of "observation".


Schrödinger's cat

Schrödinger's cat is only ever alive or dead. It is never both alive and dead. This was the point that Schrödinger attempted to make. Aristotle's law of noncontradiction applies: an object cannot both exist and not exist at the same time. We cannot prove this axiom from first principles, but if we don't accept it as an axiom, it renders all communication pointless. No matter what true statement I may state, anyone can assert that the opposite is also true.

Schrödinger proposed his thought experiment as a reductio ad absurdum argument against Bohr and the others in Copenhagen. He was trying to show that belief in quantum superpositions leads to absurd, illogical consequences. He was right, in my opinion, but he did not win the argument (and nor will I).

This argument is broadly misunderstood outside of academic physics. This is because Schrödinger's criticism was taken up by physicists as an exemplification of the very effect it was intended to debunk. "Yes," cried the fans of Copenhagen type explanations, "this idea of both-alive-and-dead at the same time is exactly what we mean. Thanks." And so we got stuck with the idea that the cat is both alive and dead at the same time (which is nonsense). Poor old Schrödinger, he hated this idea (and didn't like cats) and now it is indelibly associated with him.

The general set up of the Schrödinger's cat thought experiment is that a cat is placed in a box. Inside the box, a random event may occur. If it occurs, the event triggers the death of the cat via a nefarious contraption. Once the cat is in the box, Alice doesn't know whether the cat is alive or dead. The cat is a metaphor for subatomic particles. We are supposed to believe that they adopt a physical superposition of states: say, "spin up" and "spin down", or "position x" and "position y" at the same time before we measure them, then at the point of measurement, they randomly adopt one or the other of the superposed states.

Here's the thing. The cat goes into the box alive. If the event happens, the cat dies. If it doesn't happen the cat lives. And Alice doesn't know which until she opens the box. The uncertainty here is not metaphysical, it's epistemic. It's not that a cat can even be in a state of both-alive-and-dead, it cannot; it's only that we don't know whether it is alive or dead. So this is a bad analogy.

Moreover, even when Alice opens the box, the light from the cat still takes some time to reach her eyes. Observation always trails behind events, it cannot anticipate or participate in events. Apart from reflected light, nothing is coming out from Alice that could participate in the sequence of events happening outside her body, let alone change the outcome.

Also, the cat has eyes and a brain. It is itself an "observer". 

Epistemic uncertainty cannot be mapped back to metaphysical uncertainty without doing violence to reason. A statement, "I don't know whether the cat is alive or dead," cannot be taken to imply that the cat is both alive and dead. This is definitely a category error for cats. Schrödinger's view was that it is also a category error for electrons and photons. And again, I agree with Schrödinger (and Einstein).

In that case, why do physics textbooks still insist on the nonsensical both-alive-and-dead scenario? It seems to be related to a built-in feature of the mathematics of spherical standing waves, which are at the heart of Schrödinger's equation (and many other features of modern science). The mathematics of standing waves was developed in the 18th century (i.e. it is thoroughly classical). Below, I quote from the Mathworld article on Laplace's equation (for a spherical standing wave) by Eric Weisstein (2025; emphasis added):

A function ψ which satisfies Laplace's equation is said to be harmonic. A solution to Laplace's equation has the property that the average value over a spherical surface is equal to the value at the center of the sphere (Gauss's harmonic function theorem). Solutions have no local maxima or minima. Because Laplace's equation is linear, the superposition of any two solutions is also a solution.

The last sentence of this passage is similar to a frequently encountered claim in quantum physics. That is to say, the fact that solutions for individual quantum states can be added together and produce another valid solution for the wave equation. This is made out to be a special feature of quantum mechanics that defines the superposition of "particles".

Superposition of waves is nothing remarkable or "weird". Any time two water waves meet, for example, they superpose.


In this image, two wave fronts travel towards the viewer obliquely from the left and right at the same time (they appear to meet almost at right angles). The two waves create an interference pattern (the cross in the foreground) where the two waves are superposed. Waves routinely superpose. And this is known as the superposition principle.

"The superposition principle, also known as superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually."
The Penguin Dictionary of Physics.

For this type of linear function, we can define superposition precisely: f(x) + f(y) = f(x+y)

In mathematical terms, each actual wave can be thought of as a solution to a wave equation. The sum of the waves must also be a solution because of the situation we see in the image, i.e. two waves physically adding together where they overlap, while at the same time retaining their identity.
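This can be checked numerically. The sketch below uses finite differences to confirm that sin(x − t), sin(x + t), and their sum each satisfy the one-dimensional wave equation ∂²y/∂t² = ∂²y/∂x² (with wave speed c = 1):

```python
import math

h = 1e-4  # finite-difference step size

def d2(f, x, t, wrt):
    """Numerical second derivative of f(x, t) with respect to x or t."""
    if wrt == "x":
        return (f(x + h, t) - 2 * f(x, t) + f(x - h, t)) / h**2
    return (f(x, t + h) - 2 * f(x, t) + f(x, t - h)) / h**2

def satisfies_wave_eq(f, x=0.7, t=0.3):
    """True if f satisfies d2y/dt2 = d2y/dx2 at a sample point (c = 1)."""
    return abs(d2(f, x, t, "t") - d2(f, x, t, "x")) < 1e-5

right_mover = lambda x, t: math.sin(x - t)
left_mover = lambda x, t: math.sin(x + t)
superposed = lambda x, t: right_mover(x, t) + left_mover(x, t)

# Each travelling wave solves the equation, and so does their sum
# (a standing wave): superposition follows from linearity.
assert satisfies_wave_eq(right_mover)
assert satisfies_wave_eq(left_mover)
assert satisfies_wave_eq(superposed)
print("superposition of solutions is a solution")
```

Nothing quantum is invoked here: linearity alone guarantees that superposed solutions are solutions, exactly as with the water waves in the image.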

I've now identified three universal properties of spherical standing waves that are frequently presented as special features of quantum physics:

  • quantisation of energy
  • harmonics = higher energy states (aka orbitals)
  • superposition (of waves)

These structural properties of standing waves are not "secret", but they are almost always left out of narrative accounts of quantum physics. And yet, these are important intuitions to bring to bear when applying wave mechanics to describing real systems.

Something else to keep in mind is that "quantisation" is an ad hoc assumption in quantum physics. It's postulated to be a fundamental feature of all quantum fields. The only problem is that all of the physical fields we know of—which is to say the fields we can actually measure—are smooth and continuous across spacetime: including gravitational fields and electromagnetic fields. Scientists have imagined discontinuous or quantized fields, but they have never actually seen one.

Moreover, as far as I know, the only physical mechanism in our universe that is known to quantize energy, create harmonics, and allow for superposition is the standing wave. The logical deduction from these facts is that it is the standing wave structure of the atom that quantizes the energy of electrons and photons and creates electron orbitals. 

Quantization is a structural property of atoms, not a substantial property of fields. (Or more conventionally and less precisely, quantization is an emergent property, not a fundamental property). 

Also, as I have already explained, the coexistence of probabilities always occurs before any event, and those probabilities always collapse at the point when an event has a definite outcome. There is nothing "weird" about this; it's not a "problem". What is weird is the idea that hypostatizing and reifying probabilities leads to some meaningful metaphysics. It has not, and it will not.

While the superposition of waves or probabilities is an everyday occurrence, the superposition of physical objects is another story. Physical objects occupy space in an exclusive way: if one object is in a location, no other physical object can also be in that location. Physical objects cannot superpose, and they are never observed to be superposed. And yet, the superposition of point particles is how physicists continue to explain the electron in an atom.

The electric field has been measured and found to be smooth and continuous in spacetime, just as Maxwell predicted. Given this, simple logic and basic geometry dictate that if—

  1. the electrostatic field of the proton has spherical symmetry, and
  2. a hydrogen atom is electrostatically neutral, and
  3. the neutrality is assumed to be the result of the electron's electrostatic field,

—then the electron can only be in one configuration: it must be a sphere (or a close approximation of one) completely surrounding the proton. This is the only way to ensure that all the field lines emerging from the proton terminate at the electron. Otherwise, there are unbalanced forces: a net charge rather than neutrality. And a changing electric field dissipates energy, which electrons do not.

Unbalanced forces

Now, if the electron is both a wave and a sphere, then the electron can only be a spherical standing wave. The Bohr model of the atom was incorrect and it surprises me greatly that this problem was not identified at the time. 

And if the electron is a spherical standing wave then, because these are universal features of standing waves, we expect:

  1. The energy of the electron in the H atom will be quantised.
  2. The electron will form harmonics corresponding to higher energy states and it will jump between them when it absorbs or emits photons.
  3. When two electron waves intersect, the sum of their amplitudes is also a solution to the wave equation.

Moreover, we can now take pictures of atoms using electron microscopes. Atoms are physical objects. In every single picture, atoms appear to be approximately spherical.


And yet mainstream quantum models do not quite treat atoms as real. Quantum physics is nowadays all about probabilities. The problem is that, as I established in an earlier essay, a probability cannot possibly balance an electrostatic field to create a neutral atom. Only a real electric field can do this. Schrödinger was right to be unconvinced by the probability interpretation, even if it works. But he was wrong about modelling a particle as a wave. 

Waves are observed to superpose all the time. Solid objects are never observed to do so. The only reason we even consider superposition for "particles" is the wave-particle duality postulate, which we now know to be inaccurate. "Particles" are waves.

As I understand it, the idea that our universe consists of 17 fields in which particles are "excitations" is a widely accepted postulate. And as such, one might have expected scientists to go back over the physics predicated on wave-particle duality and recast it in terms of only waves. Having the wave equation describe a wave would be a start.

I digress. Clearly the idea that observers influence outcomes is trivially false. So now we must turn to the common fudge of removing the observer from the observation.


Interaction as Observation

One way around the problems with observation is to redefine "observation" so that it excludes actual observations and observers. The move is to redefine "observation" to mean "some physical interaction". I'm sure I've mentioned this before, because I used to think this was a good idea.

While we teach quantum physics in terms of isolated "particles" in empty, flat space, the fact is that the universe is crammed with matter and energy, especially in our part of the universe. Everything is interacting with everything that it can interact with, simultaneously in all the ways that it can interact, at every moment that it is possible to interact. Nothing in reality is ever simple.

In classical physics, we are used to being able to isolate experiments and exclude variables. This cannot ever happen at the nanoscale and below. An electron, for example, is surrounded by an electrostatic field which interacts with the fields around all other wavicles, near and far.

Electrons are all constantly pushing against each other via the electromagnetic force. If your apparatus contains electrons, their fields invariably interact with the electron you wish to study. This includes mirrors, beam-splitters, prisms, diffraction gratings, and double slits. The apparatus is not "classical"; it's part of the quantum system you study. At the nanoscale and below, there is no neutral apparatus.

Therefore, the idea that interaction causes the wavefunction to "collapse" is also untenable because in the real world wavicles are always interacting. In an H atom, for example, the electron and the proton are constantly and intensely interacting via the electromagnetic force. So the electron in an H atom could never be in a superposition.


Conclusions

Observation can only occur after the fact and is limited by the speed of light (or speed of causality).

Neither "observation" nor "consciousness" can play any role in the sequence of events, let alone a causal role.

Schrödinger's cat is never both alive and dead. And observation makes no difference to this (because observation can only ever be post hoc and acausal).

It is always the case, no matter what kind of system we are talking about, that probabilities for all possibilities coexist prior to an event and collapse as the event produces a specific outcome. But this is in no way analogous to waves superposing and should not be called "superposition".

All (linear) waves can superpose. All standing waves are quantised. All standing waves have harmonics.

Defining observation so as to eliminate the observer doesn't help as much as physicists might wish.

"Observation" is irrelevant to how we formulate physics.

The wave-particle duality postulate is still built into quantum mechanics, despite being known to be false.

For the last century, quantum physicists have been trying to change reality to fit their theory. Many different kinds of reality have been proposed to account for quantum theory: Copenhagen, Many Worlds, QBism, etc. I submit that proposing a wholly different reality to account for your theory is tantamount to insanity. The success in predicting probabilities seems to have caused physicists to abandon science. I don't get it, and I don't like it.

~~Φ~~


Bibliography

Weisstein, Eric W. (2025) "Laplace's Equation." MathWorld. https://mathworld.wolfram.com/LaplacesEquation.html

02 May 2025

Ψ-ontology and the Nature of Probability

“The wave function is real—not just a theoretical thing in abstract mathematical space.”
—Sean Carroll. Something Deeply Hidden.

Harrigan & Spekkens (2010) introduced the distinction between those theories that take the quantum wave function to be real (Ψ‑ontic) and those which take it only to provide us with knowledge (Ψ‑epistemic). One needs to know that the quantum wavefunction is notated as Ψ (Greek capital Psi), which is pronounced like "sigh". So Sean Carroll's oft-stated view—"the wave function is real"—is a Ψ‑ontic approach.

Harrigan & Spekkens seem not to have foreseen the consequences of this designation, since a Ψ-ontic theory is now necessarily a Ψ-ontology, and one who proposes such a theory is a Ψ-ontologist. Sean Carroll is a great example of a Ψ-ontologist. These terms are now scattered through the philosophy of science literature.

Still, Carroll's insistence that fundamentally "there are only waves" is part of what sparked the questions I've been exploring lately. The problem, as I see it, is that the output of the wave function is a "probability amplitude"; or, over all possible solutions, a probability distribution. What I would have expected in any Ψ-ontology is that the Ψ-ontologist would explain, as a matter of urgency, how a probability distribution, which is fundamentally abstract and epistemic, can be reified at all. In a previous essay, I noted that this didn't seem possible to me. In this essay, I pursue this line of reasoning.


Science and Metaphysics

I got interested in science roughly 50 years ago. What interested me about science as a boy was the possibility of explaining my world. At that time, my world was frequently violent, often chaotic, and always confusing. I discovered that I could understand maths and science with ease, and they became a refuge. In retrospect, what fascinated me was not the maths, but the experimentation and the philosophy that related mathematical explanations to the world and vice versa. It was the physically based understanding that I craved.

As an adult, I finally came to see that no one has epistemic privilege when it comes to metaphysics. This means that no one has certain knowledge of "reality" or the "nature of reality". Not religieux and not scientists. Anyone claiming to have such knowledge should be subjected to the most intense scrutiny and highest levels of scepticism.

While many physicists believe that we cannot understand the nanoscale world, those few physicists and philosophers who still try to explain the reality underlying quantum physics have made numerous attempts to reify the wavefunction. Such attempts are referred to as "interpretations of quantum mechanics". And the result is a series of speculative metaphysics. If the concept of reality means anything, we ought to see valid theories converging on the same answer, with what separates them being the extra assumptions that each theory makes. After a century of being examined by elite geniuses, we not only don't have a consensus about quantum reality but each new theory takes us in completely unexpected directions.

At the heart of the difficulties, in my view, is the problem of reifying probabilities. The scientific literature on this topic is strangely sparse, given that all the metaphysics of quantum physics relies on reifying the wave function, and several other branches of physics rely on statistics (statistical mechanics, thermodynamics, etc.).

So let us now turn to the concept of probability and try to say something concrete about the nature of it.


Probability

Consider a fair six-sided die. If I roll the die it will land with a number facing up. We can call that number the outcome of the roll. The die is designed so that the outcome of a roll ought to be a random selection from the set of all possible outcomes, i.e. {1, 2, 3, 4, 5, 6}. By design, the outcomes are all equally likely (this is what "fair" means in this context). So the probability of getting any single outcome is ⅙ or 0.16666...

By convention we write probabilities such that the sum of all probabilities adds up to one. The figure ⅙ means ⅙th of the total probability. This also means that a probability of 1 or 0 reflects two types of certainty:

  1. A probability of 1 tells us that an outcome is inevitable (even if it has not happened yet). The fact that a rolled die must land with one face pointing upwards is reflected in the fact that the probabilities of the six possible outcomes add to 1.
  2. A probability of 0 tells us that an outcome cannot happen. The probability of rolling a 7 is 0. 
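These two conventions can be stated precisely. A small Python sketch using exact fractions:

```python
from fractions import Fraction

# Each of the six outcomes of a fair die gets exactly 1/6 of the total probability.
p = {face: Fraction(1, 6) for face in range(1, 7)}

assert sum(p.values()) == 1        # some outcome is inevitable: probabilities sum to 1
assert p.get(7, Fraction(0)) == 0  # rolling a 7 cannot happen: probability 0
```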

We can test this theory by rolling a die many times and recording the outcomes. Most of us did precisely this in high school at some point. Any real distribution of outcomes will tend towards the ideal distribution as the number of rolls increases.

In the case of a six-sided fair die, we can work out the probabilities in advance based on the configuration of the system because the system is idealised. Similarly, if I have a fair four-sided die, then I can infer that the probability of each possible outcome {1, 2, 3, 4} is ¼. And I can use this idealisation as leverage on the real world.

For example, one can test a die to determine if it is indeed fair, by rolling it many times and comparing the actual distribution with the expected distribution. Let us say that we roll a six-sided die 100 times and for the possible states {1, 2, 3, 4, 5, 6} we count 10, 10, 10, 10, 10, and 50 occurrences.

We can use statistical analysis to determine the probability of getting such an aberration by chance. In this case, we would expect this result once in ~134 quadrillion trials of 100 throws. From this we may infer that the die is unfair. However, we are still talking probabilities. It's still possible that we did get that 1 in 134 quadrillion fluke. As Littlewood's law says:

A person can expect to experience events with odds of one in a million at the rate of about one per month.

In the end, the only completely reliable way to tell if a die is fair is by physical examination. Probabilities don't give us the kind of leverage we'd like over such problems. Statistical flukes happen all the time.
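For completeness, the comparison of observed and expected counts above is typically done with Pearson's chi-squared statistic. Here is a minimal Python sketch for the 100-throw example (I use the standard 5% critical value of 11.07 for 5 degrees of freedom rather than computing the full tail probability):

```python
# Observed counts from 100 throws of the suspect die, faces 1..6.
observed = [10, 10, 10, 10, 10, 50]
expected = [100 / 6] * 6  # a fair die: ~16.67 occurrences of each face

# Pearson's chi-squared statistic: sum of (O - E)^2 / E over the faces.
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# chi2 works out to exactly 80.0 here; the 5% critical value for
# 5 degrees of freedom is 11.07, so we reject the hypothesis of fairness.
assert abs(chi2 - 80.0) < 1e-9
assert chi2 > 11.07
```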

These idealised situations are all very well, and they help us to understand how probability works. However, in practice we get anomalies. So, for example, I recorded the results of 20 throws of a die. I expected to see each outcome about 3.33 times, and got:

  1. 2
  2. 3
  3. 5
  4. 1
  5. 6
  6. 2

Is my die fair? Actually, 20 throws is not enough to tell; it's not a statistically significant number of throws. So I got ChatGPT to simulate 1 million throws, and it came back with the following distribution. I expected to see 166,666 of each outcome.

  1. 166741
  2. 167104
  3. 166479
  4. 166335
  5. 166524
  6. 166817

At a million throws we see the numbers converge on the expectation value (166,666). However, the outcomes of this trial vary from the ideal by ± ~1.3%. And we cannot know in advance how much a given trial will differ from the ideal. My next trial could be wildly different.
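A trial like this is easy to reproduce with Python's random module (my own sketch, with 600,000 throws and a fixed seed for reproducibility):

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

N = 600_000
counts = {face: 0 for face in range(1, 7)}
for _ in range(N):
    counts[random.randint(1, 6)] += 1

expected = N / 6  # 100,000 of each face, ideally
# Every face lands within a few per cent of the expectation,
# but a real trial essentially never hits the ideal exactly.
assert sum(counts.values()) == N
assert all(abs(c - expected) / expected < 0.05 for c in counts.values())
```

Run it with different seeds and the per-face deviations change every time; only the convergence towards the expectation value is stable.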

Also, it is seldom the case in real-world applications that we know all the possible outcomes of an event. Unintended or unexpected consequences are always possible. There is always some uncertainty in just how uncertain we are about any given fact. And this means that if the probabilities we know add to 1, then we have almost certainly missed something out.

Moreover, in non-idealised situations, the probabilities of events change over time. Of course, probability theory has ways of dealing with this, but they are much more complex than a simple idealised model.

A very important feature of probabilities is that they all have a "measurement problem". That is to say, before a roll of my fair six-sided die, the probabilities all coexist simultaneously:

  • P(1) ≈ 0.167
  • P(2) ≈ 0.167
  • P(3) ≈ 0.167
  • P(4) ≈ 0.167
  • P(5) ≈ 0.167
  • P(6) ≈ 0.167
I roll the die and the outcome is 4. The probabilities now "collapse" so that:

  • P(1) = 0.00
  • P(2) = 0.00
  • P(3) = 0.00
  • P(4) = 1.00
  • P(5) = 0.00
  • P(6) = 0.00

This is true for any system to which probabilities can be assigned to the outcomes of an event. Before an event there are usually several possible outcomes, each with a probability. These probabilities always coexist simultaneously. But the actual event can only have one outcome. So it is always the case that as the event occurs, the pre-event probabilities collapse so that the probability of the actual outcome is 1, while the probability of the other possibilities falls instantaneously to zero.
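This collapse of coexisting probabilities onto a single outcome can be sketched in a few lines of Python:

```python
import random

random.seed(4)

# Before the roll: six coexisting probabilities that sum to 1.
before = {face: 1 / 6 for face in range(1, 7)}
assert abs(sum(before.values()) - 1.0) < 1e-9

# The event: the roll has exactly one outcome.
outcome = random.randint(1, 6)

# After the roll: the probabilities "collapse" onto the actual outcome;
# it has probability 1 and every other possibility falls to 0.
after = {face: (1.0 if face == outcome else 0.0) for face in range(1, 7)}
assert after[outcome] == 1.0 and sum(after.values()) == 1.0
```

Note that nothing in this sketch involves an observer: the collapse happens when the outcome is fixed, whether or not anyone looks.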

This is precisely analogous to descriptions of the so-called Measurement Problem. The output of the Schrödinger equation is a set of probabilities, which behave in exactly the way I have outlined above. The position of the electron has a probability at every point in space, but the event localises it. Note that the event itself collapses the probabilities, not the observation of the event. The collapse of probabilities is real, but it is entirely independent of "observation".

Even if we were watching the whole time, the light from the event only reaches us after the event occurs, and it takes an appreciable amount of time for the brain to register and process the information and turn it into an experience of knowing. The fact is that we experience everything in hindsight. The picture our brain presents to our first-person perspective is time-compensated so that it feels as if we are experiencing things in real time. (I have an essay expanding on this theme in the pipeline.)

So there is no way, even in theory, that an "observation" could possibly influence the outcome of an event. Observation is not causal with respect to outcomes because "observation" can only occur after the event. This is a good time to review the idea of causality.


Causation and Probability

Arguing to or from causation is tricky since causation is an a priori assumption about sequences of events. However, one of the general rules of relativity is that causation is preserved. If I perceive event A as causing event B, there is no frame of reference in which B would appear to cause A. This is to do with the speed of light being a limit on how fast information can travel. For this reason, some people like to refer to the speed of light as the "speed of causality".

Here I want to explore the causal potential of a probability. An entity might be said to have causal potential if its presence in the sequence of events (reliably) changes the sequence compared to its absence. We would interpret this as the entity causing a specific outcome. Any observer that the light from this event could reach, would interpret the causation in the same way.

So we might ask, for example, "Does the existence of a probability distribution for all possible outcomes alter the outcome we observe?"

Let us go back to the example of the loaded die mentioned above. With the loaded die, the probability of getting a 6 is 0.5, while the probability of each of the other numbers is 0.1 (0.5 in total), so the total probability is still 1.0. In real terms, this tells us that there will be an outcome, it will be one of six possibilities, and half the time the outcome will be 6.

Let's say, in addition, that you and I are betting on the outcome. I know that the die is loaded and you don't. We roll the die; I always bet on six, while you bet on a variety of numbers. At the end of the trial, I have won the vast majority of the wagers (and you are deeply suspicious).

Now we can ask, "Did the existence of probabilities per se influence the outcome?" Or perhaps better, "Does the probability alone cause a change in the outcome?"

Clearly if you were expecting a fair game of chance, then the sequence of events (you lost most of the wagers) is unexpected and we intuit that something caused that unexpected sequence.

If a third person were analysing this game as a disinterested observer, where would they assign the causality? To the skewed probabilities? I suppose this is a possible answer, but it doesn't strike me as very plausible that anyone would come up with it (except to be contrarian). My sense is that the disinterested observer would be more inclined to say that the loaded die itself—and in particular the uneven distribution of mass—was what caused the outcome to vary so much from the expected value.

Probability allows us to calculate what is likely to happen. It doesn't tell us what is happening, or what has happened, or what will happen. Moreover, knowing or not knowing the probabilities makes no difference to the outcome.

So we can conclude that the probabilities themselves are not causal. If outcomes diverge from expected values, we don't blame the probabilities; rather, we suspect some physical cause (a loaded die). And if the probabilities of known possibilities are changing over time, we would also expect that to be the result of some physical process, such as unevenly distributed weight in a die.

My conclusion is this generalisation: Probabilities do not and cannot play a role in causation.

Now, there may be flaws and loopholes in the argument that I cannot see. But I think I have made a good enough case so far to seriously doubt any attempt to reify probability which does not first make a strong case for treating probabilities as real (Ψ‑ontic). I've read many accounts of quantum physics over 40 years of studying science, and I don't recall seeing even a weak argument for this.

At this point, we may also point out that probabilities are abstractions, expressed in abstract numbers. And so we next need to consider the ontology of abstractions.


Abstractions

Without abstractions I'd not be able to articulate this argument. So I'm not a nominalist in the sense that I claim that abstractions don't exist in any way. Rather, I am a nominalist in the sense that I don't think abstractions exist in an objective sense. To paraphrase Descartes, if I am thinking about an idea, then that idea exists for me, while I think about it. The ideas in my mind are not observable from the outside, except by indirect means such as how they affect my posture or tone of voice. And these are measures of how I feel about the idea, rather than the content of the idea.

I sum up my view in an aphorism:

Abstractions are not things. Abstractions are ideas about things.

An important form of abstraction is the category, which is a generalisation about a collection of things. So, for example, "blue" is a category into which we can fit such colours as navy, azure, cobalt, cerulean, indigo, sapphire, turquoise, teal, cyan, ultramarine, and periwinkle (each of which designates a distinct and recognisable colour within the category). Colour categories are quite arbitrary. Pāli and Ancient Greek each have only four colour categories (aka "basic colour terms"); blue and green are lumped together in the category "dark". The word in Pāli that is now taken to mean "blue" (nīla) originally meant "dark". English has eleven colour categories: red, orange, yellow, green, blue, purple, pink, brown, black, white, and grey. To be clear, ancient Indians and Greeks had the same sensory apparatus as we do, and with it the ability to see millions of colours. It's not that they couldn't see blue, or even that they had no words that denoted blue; the point is about how they categorised colours. See also my essay Seeing Blue.

In this view, probability is an abstraction because it is an idea about outcomes that haven't yet occurred. Probability can also reflect our ideas about qualities like expectation, propensity, and/or uncertainty.

When we use an abstraction in conversation, we generally agree to act as if it behaves like a real thing. For example, probability may be "high" or "low", reflecting a schema for the way that objects can be arranged vertically in space. The more of something we have, the higher we can pile it up. Thus, metaphorically, HIGH also means "more" and LOW means "less". A "high" probability is more likely than a "low" probability, even though probability is not a thing with a vertical dimension.

This reflects a deeper truth. Language cannot conform to reality, because we have no epistemic privilege with respect to reality. Reality can be inferred to exist; it cannot be directly known. In fact, "reality" is another abstraction: an idea about things that are real. Language need only conform to experience, and in particular to the shared aspects of experience. In this (nominalist) view, "reality" and "truth" are useful ideas, for sure, as long as we don't lose sight of the fact that they are ideas rather than things.

The use of abstractions based on schemas that arise from experience allows for sophisticated discussions, but introduces the danger of category errors, specifically:

  • hypostatisation: incorrectly treating abstract ideas as independent of subjectivity; and
  • reification: incorrectly treating abstract ideas as having physical form.

Treating abstract ideas as if they are concrete things is the basis of all abstract thought and metaphor. Treating abstract ideas as concrete things (without the "if" qualification) is simply a mistake.

Abstractions are not causal in the way that concrete objects are. They can influence my behaviour, for example, at least in the sense that belief is a feeling about an idea and thus a motivation for actions. But abstractions cannot change the outcome of rolling a die.

Since probability is expressed in numbers, I just want to touch on the ontology of numbers before concluding.


Numbers

The ontology of numbers is yet another ongoing source of argument amongst academic philosophers. But they are known to avoid consensus on principle, so we have to take everything they say with a grain of salt. Is there a real disagreement, or are they jockeying for position, trolling, or being professionally contrarian?

The question is, do numbers exist in the sense that say, my teacup exists? My answer is similar to what I've stated above, but it's tricky because numbers are clearly not entirely subjective. If I hold up two fingers, external observers see me holding up two fingers. We all agree on the facts of the matter. Thus numbers appear to be somewhat objective.

We may ask, what about a culture with no numbers? We don't find any humans with no counting numbers at all, but some people do have very few terms. In my favourite anthropology book, Don't Sleep, There Are Snakes, Daniel Everett notes that the Pirahã people of Brazil count: "one, two, many"; and prefer to use comparative terms like "more" and "less". So if I hold up three fingers or four fingers they would count both as "many".

However, just because a culture doesn't have a single word for 3 or 4 doesn't mean they don't recognise that 4 is more than 3. As far as I can tell, even the Pirahã would still be capable of recognising that 4 fingers is more than 3 fingers, even though they might not be able to easily make precise distinctions. So they could put 1, 2, 3, 4 of some object in order of "more" or "less" of the object. In other words, it's not that they cannot count higher quantities, it's only that they do not (for reasons unknown).

There is also some evidence that non-human animals can count. Chimps, for example, can assess that 3 bananas is more than 2 bananas. And they can do this with numbers up to 9. So they might struggle to distinguish 14 bananas from 15, but if I offered 9 bananas to one chimp and 7 to the next in line, the chimp that got fewer bananas would know this (and it would probably respond with zero grace since they expect food-sharing to be fair).

We can use numbers in a purely abstract sense, just as we can use language in a purely abstract sense. However, we define numbers in relation to experience. So two is the experience of there being one thing and another thing (the same). 1 + 1 = 2. Two apples means an apple and another apple. There is no example of "two" that is not (ultimately) connected to the idea of two of something.

In the final analysis, if we cannot compare apples with oranges, and yet I still recognise that two apples and two oranges are both examples of "two", then the notion of "two" can only be an abstraction.

Like colours, numbers function as categories. A quantity is a member of the category "two", if there is one and another one, but no others. And this can be applied to any kind of experience. I can have two feelings, for example, or two ideas.

A feature of categories that George Lakoff brings out in Women, Fire, and Dangerous Things is that membership of a category is based on resemblance to a prototype. This builds on Wittgenstein's idea of categories as defined by "family resemblance". And prototypes can vary from person to person. Let's say I invoke the category "dog". And the image that pops into my head is a Golden Retriever. I take this as my prototype and define "dog" with reference to this image. And I consider some other animal to also be a "dog" to the extent that it resembles a Golden Retriever. Your prototype might be a schnauzer or a poodle or any other kind of dog, and is based on your experience of dogs. If you watch dogs closely, they also have a category "dog" and they are excellent at identifying other dogs, despite the wild differences in physiognomy caused by "breeding".

Edge cases are interesting. For example, in modern taxonomies, the panda is clearly not a bear. But in the 19th century it was similar enough to a bear to be called a "panda bear". Edge cases may also be exploited for rhetorical or comic effect: "That's no moon", "Call that a dog?", or "Pigeons are rats with wings".

That "two" is a category becomes clearer when we consider edge cases such as fractional quantities. In terms of whole numbers, what is 2.01? 2.01 ≈ 2.0 and in terms of whole numbers 2.0 = 2. For some purposes, "approximately two" can be treated as a peripheral member of the category defined by precisely two. So 2.01 is not strictly speaking a member of the category "two", but it is close enough for some purposes (it's an edge case). And 2.99 is perhaps a member of the category "two", but perhaps also a member of the category "three". Certainly when it comes to the price of some commodity, many people put 2.99 in the category two rather than three, which is why prices are so often expressed as "X.99".

Consider also the idea that the average family has 2.4 children. Since "0.4 of a child" is not a possible outcome in the real world, we can only treat this as an abstraction. And consider that a number like i = √-1 cannot physically exist, but is incredibly useful for discussing oscillating systems, since e^iθ = cos θ + i sin θ describes a circle.
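Euler's formula is easy to verify numerically (θ = 0.7 is an arbitrary angle):

```python
import cmath
import math

theta = 0.7  # any angle will do
lhs = cmath.exp(1j * theta)                      # e^(i*theta)
rhs = complex(math.cos(theta), math.sin(theta))  # cos(theta) + i*sin(theta)

assert abs(lhs - rhs) < 1e-12       # e^(i*theta) = cos(theta) + i*sin(theta)
assert abs(abs(lhs) - 1.0) < 1e-12  # all such points lie on the unit circle
```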

Numbers are fundamentally not things, they are ideas about things. In this case, an idea about the quantity of things. And probabilities are ideas about expectation, propensity, and/or uncertainty with respect to the results of processes.


Conclusion

It is curious that physicists, as a group, are quick to insist that metaphysical ideas like "reality" and "free will" are not real, while at the same time insisting that their abstract mathematical equations are real. As I've tried to show above, this is not a tenable position.

A characteristic feature of probabilities is that they all coexist prior to an event and then collapse to zero except for the actual outcome of the event, which has a probability of 1.

Probability represents our expectations of the outcomes of events, where the possibilities are known but the outcome is uncertain. Probability is an idea, not an object. Moreover, probability is not causal: it cannot affect the outcome of an event. The least likely outcome can still be the one we happen to observe.
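One way to see that probability is not causal is a simple simulation (a sketch of my own, with an arbitrarily chosen bias): even an outcome assigned only a 1% probability simply occurs or doesn't; its low probability does not prevent it.

```python
import random

random.seed(4)  # fixed seed so the illustration is reproducible

# A hypothetical biased six-sided die: face 6 has only 1% probability.
faces = [1, 2, 3, 4, 5, 6]
weights = [0.198, 0.198, 0.198, 0.198, 0.198, 0.010]

rolls = random.choices(faces, weights=weights, k=10_000)
count_six = rolls.count(6)
# The least likely face still turns up; its low probability forbade nothing.
print(f"face 6 appeared {count_six} times in 10,000 rolls")
```

The probability assignment describes our expectations about the process; the process itself neither consults nor obeys it.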

We never observe an event as it happens, because the information about the event can only reach us at the speed of causality. And that information has to be converted into nerve impulses that the brain then interprets. All of this takes time. This means that observations, all observations, are after the fact. Physically, observation cannot be a causal factor in any event.

We can imagine a Schrödinger's demon, modelled on Maxwell's demon, equipped with perfect knowledge of the possible outcomes and the precise probability of each, with no unknown unknowns. What could such a demon tell us about the actual state of a system or how it will evolve over time? A Schrödinger's demon could tell us nothing, except the most likely outcome.

Attempts by Ψ-ontologists to assert that the quantum wavefunction Ψ is real lead to a diverse range of mutually exclusive speculative metaphysics. If Ψ were real, we would expect observations of reality to drive us towards a consensus. But there is a profound dissensus about Ψ. In fact, Ψ cannot be observed directly or indirectly, any more than the probability of rolling a fair six-sided die can be observed.
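The die analogy can be sketched in code (my illustration): the data contain only outcomes; the probability 1/6 never appears in them, it is only ever inferred from relative frequency.

```python
import random

random.seed(1)

# Roll a fair die many times. We observe only outcomes (1..6), never the
# probability 1/6 itself; the relative frequency merely estimates it.
n = 100_000
sixes = sum(1 for _ in range(n) if random.randint(1, 6) == 6)
estimate = sixes / n
print(f"relative frequency of six: {estimate:.4f} (vs 1/6 = {1/6:.4f})")
```

However many rolls we record, what we hold at the end is an estimate derived from observations, not an observation of the probability, and the parallel claim about Ψ is that no measurement ever delivers the wavefunction itself.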

What we can observe tells us that quantum physics is incomplete and that none of the current attempts to reify the wavefunction—the so-called "interpretations"—succeeds. The association of Ψ-ontology with "Scientology" is not simply an amusing pun. It also suggests that Ψ-ontology is something like a religious cult, and as Sheldon Cooper would say, "It's funny because it's true."

Sean Carroll has no better reason to believe "the wavefunction is real" than a Christian has to believe that Jehovah is real (or than a Buddhist has to believe that karma makes life fair). Belief is a feeling about an idea.

Probability reflects our uncertain expectations with respect to the outcome of some process. But probability per se cannot be considered real, since it cannot be involved in causality and has no independence or physical form.

The wavefunction of quantum physics is not real because it is an abstract mathematical function whose outputs are probabilities rather than actualities. Probabilities are abstractions. Abstractions are not things, they are ideas about things. The question is: "Now what?"

As far as I know, Heisenberg and Schrödinger set out to describe a real phenomenon, not a probability distribution. It is well known that Schrödinger was appalled by Born's probability approach and never accepted it. Einstein also remained sceptical, considering that quantum physics was incomplete. So maybe we need to comb through the original ideas to identify where it went off the rails. My bet is that the problem concerns wave-particle duality, which we can now resolve in favour of waves.

~~Φ~~


Bibliography

Everett, Daniel L. (2009) Don’t Sleep, There Are Snakes: Life and Language in the Amazon Jungle. Pantheon Books (USA) | Profile Books (UK).

Harrigan, Nicholas & Spekkens, Robert W. (2010). "Einstein, Incompleteness, and the Epistemic View of Quantum States." Foundations of Physics 40: 125–157.

Lakoff, George. (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press.
