02 January 2026

Philosophical Detritus IV: Truth

"I swear by Almighty God to tell the truth,
the whole truth, and nothing but the truth."

—Traditional British courtroom oath

In this series of essays, fuelled by questions on the Quora website, I have been questioning the value of the legacy of certain abstract concepts in philosophy. I've argued for an epistemic-nominalist approach to abstraction, i.e. abstractions are ideas about things; they are not things in their own right. And I've tried to show that this means we have to reconsider the value of traditional metaphysics generally. No one has privileged access to reality; i.e. there is no epistemic privilege. And in view of this, I have explored how a pragmatic approach can at least net us a useful concept.

So far, I have applied this to the major concepts of "consciousness" and "reality". I have tried to show that commonly used definitions, including "common sense" definitions, are hopelessly confused and unhelpful. This confusion is fuelled by the long-standing, active, and growing dissensus on these abstract concepts amongst professional philosophers. Philosophers not only lack agreement on these topics; they actively and vociferously disagree and are constantly coming up with new ways to disagree. Not only is the goal of a universal definition difficult to attain, but the methods adopted virtually guarantee failure. Hence, we often fail to agree on important matters even after thousands of years of argument.

In this essay, I will tackle another legacy metaphysical concept from philosophy: "truth". Yet again, there is a profound and ongoing dissensus about what "truth" means and what value it holds. It seems obvious to us to ask, "What is true?" and "What is the truth?" But it is surprisingly difficult to answer such questions in a satisfying way. Beware, we are in deep, shark-infested waters here. There is a serious risk of drowning or being eaten alive. Let's dive in!


Truth

"True" is used in several senses, but the underlying sense of the word is "firm, reliable, certain, trustworthy." We are particularly concerned with the idea applied to statements and propositions; i.e. with telling the truth, or veracity.

When trying to define "true" and "truth", we immediately run into the problem of epistemic privilege. No one is in a position to state the truth with absolute certainty, because no one can possibly know what it is. And, if we don't know what truth is, then we don't know if any given statement is true or not. And yet we constantly make confident pronouncements on the truth of statements. I went most of my life not realising how utterly weird this situation is. Now I cannot unsee it. But I do think I can unfuck it, to some extent.

There are numerous competing definitions of "truth" that do not converge (this is always a bad sign). For example, we might invoke:

  1. Correspondence Theory: Truth is a statement's accurate representation of objective reality.
  2. Coherence Theory: Truth is the logical consistency of a statement within a larger system of beliefs.
  3. Pragmatist Theory: Truth is what is useful, reliable, or works successfully in practice.
  4. Consensus Theory: Truth is what is agreed upon by a specified group, often through ideal discourse.
  5. Deflationary Theory: "Truth" is a redundant or logical concept that adds no substantial meaning beyond disquotation (e.g., " 'Snow is white' is true" just means snow is white).
  6. Performative Theory: To call a statement true is to perform an act of endorsement or agreement.
  7. Semantic Theory (Tarski): Truth is formally defined for a language by satisfying conditions like " 'Snow is white' is true if and only if snow is white."
  8. Epistemic Theories: Truth is what is knowable or justifiable under ideal epistemic conditions.
  9. Pluralist Theories: Different domains of discourse may require different truth properties (e.g., moral vs. factual truth).

All of these approaches have pros and cons. However, note that all the metaphysical definitions have the problem of epistemic privilege. For example, how can anything be said to represent "objective reality" when no one can possibly know what objective reality is? (If this is unclear, refer back to my essay on reality.) Defining "truth" in terms of "belief" fails because belief is a feeling about an idea, and belief can be false. And yet throwing out the concept of truth entirely seems too drastic.

I think we need to go back to basics. "Truth" is not just an abstract metaphysical concept; it's also a moral concept. Thus, we need to start by thinking about what morality is and why it has a claim on us. However, philosophy's problems also plague this topic. If anything, even after thousands of years of intellectual effort, there is an even greater dissensus around the concept of morality.

I believe we can do better than the present flailing around. To my mind, the place to start is (the late, great) Frans de Waal's work on morality in animals, especially his book:

  • de Waal, Frans. (2013). The Bonobo and the Atheist: In Search of Humanism Among the Primates. W.W. Norton & Co.

De Waal's 2011 TED talk Moral Behavior in Animals is an excellent introduction to the main themes in the book and useful for the short videos of the relevant experiments. No one watching this can come away thinking that capuchin monkeys do not understand fairness, for example.

I've written at length about morality in the light of reading de Waal. What follows is a brief recap.

We begin with a simple fact that I highlighted in my 20th anniversary essay: humans evolved an obligatory social lifestyle. Rare outliers notwithstanding, we are obliged by our nature to live in communities. And we are not alone in this. Chimps, bonobos, gorillas, and many other mammals are obliged to live in social groups.

A social lifestyle offers numerous evolutionary advantages. We are stronger as a collective than we are as individuals. Indeed, large-scale cooperation is our evolutionary superpower. I'm aware that I assert this in a general climate of ideological individualism and a hegemonic political ideology that despises collectivism and asserts slogans such as "there is no such thing as society". Nonetheless, humans are social creatures who live in communities and form societies that have cultures.

In brief, de Waal identified two essential capacities shared by all social mammals (and some social birds, but I'll focus on mammals to keep it simple) that do a lot of work in explaining the evolutionary origins of morality: empathy and reciprocity. These capacities are minimally required for the social lifestyle of mammals. Note that social insects are a totally different story.

Empathy allows us to intuitively know how other individuals are feeling from interpreting (and internally modelling) cues such as posture, facial expressions, tone of voice, direction of gaze, and so on. This allows us to accurately judge the emotional impact of our actions on others, and of their actions on each other. And this is the basis of moral rules about how we treat others. We don't need an external standard or judge to tell us that our actions resulted in happiness or hurt feelings. We simply know from observation. While the psychopath may not care, they still know.

Reciprocity involves responding in kind. If someone shares with us, we share with them. If someone is kind to us, we respond with kindness. Social animals keep track of what kind of relations they have with others, but also of the relations that the rest of the group have with each other. It's vitally important—in evolutionary terms—to know how our community is functioning, what conflicts and alliances exist, and our place in all this.

Incidentally, this means that our sense of identity is not, and cannot be, only based on an autobiographical narrative (a story we tell ourselves about ourselves). Being obligatorily social, we also require a socio-biographical narrative (a story about our community and our place in it). While I arrived at this insight through reflecting on de Waal, ChatGPT tells me that it is similar to ideas found in Canadian philosopher Charles Taylor's Sources of the Self: The Making of the Modern Identity (1989).

Empathy and reciprocity lead humans to live in networks of responsive mutual obligations. And this leads to a deontological view of morality as being based on mutual obligation. This does not preclude anyone from talking in terms of virtue ethics or consequentialism or whatever. Indeed, taking these other perspectives can be advantageous. Rather, it means that we define "virtue" deontologically: A virtuous person is one who meets or exceeds their obligations to the community. Notably, the most virtuous people are seen to help others. Similarly, we judge the consequences of a person's actions in terms of whether or not they support or undermine their obligations.

Since none of us is perfect, it makes sense to have some way to deal with breakdowns in this system.* De Waal notes, for example, that the leading male chimp is constantly called on to mediate between other male chimps. If there is a fight, he always intervenes on the side of the weaker male. He goes out of his way to console the loser of a fight and makes sure that the two get back into harmony.

* There's a potential digression into rules and rule-following here that I will pass up for now, but see also the last of my series of essays on Searle's "social reality": Norms without Conscious Rule Following. (Here, again, there is an unexplored similarity to Taylor's philosophy).

From reciprocity, we get the idea of fairness. Fairness is everyone fulfilling their obligations. Unfairness is a failure of reciprocation. And justice involves restoring fairness.

Of course, how these basic elements are elaborated into systems of morality is wide open and dependent on many factors, including the local environment. Moral rules also get mixed with etiquette to make for complex mores, even without elaborate technology.

This brief outline is probably enough to be getting on with. But check the earlier, more extensive essays if things are unclear.


Truth is Both a Metaphysical Concept and a Moral Concept

We now have two ideas to try to integrate:

  1. My critique of metaphysical concepts applies: truth is a metaphysical concept, and no one has epistemic privilege. "The truth" as a metaphysical absolute is unknowable. And yet most people still see value in truth as a moral concept.
  2. My view of morality as essentially deontological (deriving from mutual obligation).

The first idea means that, if I am ever called to give testimony in court, it will be interesting because I cannot make the traditional oaths (including the modern secular varieties). The lack of epistemic privilege means that I cannot promise to "tell the truth, the whole truth, and nothing but the truth." This would imply that I know "the truth" and that I'm capable of communicating it. While I might have a belief about the truth, no matter how sincere I might be in holding this belief, I can always be wrong. In which case, my belief is not the truth. And after all, belief is a feeling about an idea (and an involuntary feeling at that). Which raises the question: If belief is not a reliable guide to truth, why do we privilege it?

Rather ironically, given their role in justice and history, eyewitnesses are notoriously unreliable. It is common for several people to witness an event and for them all to tell different stories about what happened. What the court really wants is not that witnesses "tell the truth", since this is an unreasonable expectation of anyone who lacks epistemic privilege. The court wants to ensure that we do not set out to deceive the court. That is to say, the court wants us to be honest. And this lesser goal turns out to be a more straightforward proposition.

One day, it might be interesting to look at how we managed to put so much emphasis on knowing the unknowable, but I want to stay on the track of extracting something workable from the existing mess.

A functioning community requires that we trust the other members of the community to fulfil their obligations. If we are standing shoulder to shoulder, driving off a leopard, for example, it only works if enough of us stand our ground. A leopard will easily kill a lone human or chimp. But a group of us is much more intimidating. Five chimps, or humans with sticks, can easily drive a leopard off if they work together. Trust requires that we not deliberately try to deceive others.

No matter how honest I am, my view could be incorrect, inaccurate, or imprecise, and I might not know it. All I can promise is that I'm not deliberately trying to deceive you. And, morally, that is all you can ask of me. So if I appear in court, the only oath I could take would be to promise to be honest. It's up to the jury to decide if what I say is salient to assigning blame for a transgression.

I think this generalises. My moral obligation is not to "tell the truth", but to refrain from deliberate deception. Or, more positively, my obligation is honesty rather than truthfulness. This makes allowance for my "knowledge" to be imperfect or even incorrect; it allows for the vagaries of memory; it allows for unexamined bias; and so on. Being honest does not guarantee accuracy or precision.

Something we need to be wary of is the relativisation of truth, which I see as a function of ideological individualism. We see this in the idea of a "personal truth". This is something that one person believes and asserts to be true. But when contradicted, they simply assert, "that's your truth", and "my truth" is unaffected by your truth.

While the standard metaphysical definitions fail to be meaningful or useful, the idea of a "personal truth" is catastrophic. Equating opinion with truth only creates confusion and uncertainty. At least those people who try to define truth by some external standard have the goal of reducing uncertainty.

Note that, in the ideal, science is not concerned with "truth" as many lay-people imagine. Rather, scientists examine phenomena and compare notes to produce heuristics that make predictions to some arbitrary level of accuracy and precision. It's not that Newton's laws of motion are untrue and Einstein's are true. Rather, the situation is that, under such conditions as we encounter here on Earth, Newton's laws are sufficiently accurate and precise for our purposes. We can predict the future with confidence. But when we start to look at larger scales of mass, length, and energy, the accuracy and precision of Newton's laws decline. And we find that Einstein's laws of motion provide better accuracy and precision.
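To put a rough number on "sufficiently accurate and precise for our purposes", here is a back-of-the-envelope illustration (my own figures, not part of the original argument). The size of the relativistic correction to Newtonian mechanics is governed by the Lorentz factor:

    \[
    \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \tfrac{1}{2}\frac{v^2}{c^2}
    \]

For a car at motorway speed, \(v \approx 30\ \mathrm{m/s}\) and \(c \approx 3 \times 10^{8}\ \mathrm{m/s}\), so \(v^2/c^2 = 10^{-14}\) and the correction is about five parts in \(10^{15}\), far smaller than anything we could measure. For a particle moving at half the speed of light, \(\gamma \approx 1.15\), a 15% discrepancy that Newton's laws simply cannot account for.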

Scientists make and test inferences about phenomena by close observation and comparing notes. While such inferences are incredibly, almost miraculously reliable, we still cannot claim that they are true in any deeper sense.


Conclusion

Thousands of years of documented arguments about "truth"—from a variety of cultures—have left a legacy of dissensus and confusion. Something that seems so straightforward as "telling the truth" turns out to be impossibly complicated. Not only do we not know the truth about anything, but we cannot even agree on how we would know it if we came across it.

Questions such as "What is true?" or "What is the truth?" can never be answered in a way that will satisfy everyone.

"Truth" is another legacy of philosophy that does more harm than good. Since metaphysical knowledge requires epistemic privilege that no one can possibly have, telling "the truth, the whole truth, and nothing but the truth" is an unattainable goal.

Morality does not arise out of metaphysics or commandments from some supernatural being. It emerges pragmatically from evolving to live in social groups that require cohesion to function. Evolution equipped us to live in societies bound by mutual obligations. And the moral obligation that emerges from this is not to "tell the truth", but to be honest. That is to say, we do not deliberately set out to deceive.

The problem of the zeitgeist is less that we live in a "post-truth era" and more that we live in an era characterised by dishonesty.

Pragmatically, honesty is attainable because it only requires that we not set out to deceive. This allows that our beliefs about what is true can be sincere but mistaken.

Honesty is a virtue because it promotes the trust and cooperation necessary for a group to fulfil its evolutionary function. The consequence of dishonesty is a breakdown of trust and cohesion.

However, all of the above notwithstanding, the idea of truth and the many discourses centred on it are deeply ingrained and unlikely to change. So expect confusion to reign.

~~Φ~~

26 November 2025

Togetherness

Twenty years ago, on this day—26 November 2005—I posted the first essay on this blog. Today's post is the 647th essay (and the first one not posted on a Friday). Jayarava's Raves amounts to some millions of words. If you had told me twenty years ago that I would go on to write well over 600 essays, I would not have found that plausible. And yet, here we are.

These essays reflect my self-education not only in Buddhism, but in all the allied disciplines and fields that are required to understand religion, religieux, and religious phenomena, including: history, philosophy (metaphysics, epistemology, axiology), general linguistics, socio-linguistics, translation theory, sociology, and social psychology (none of which were included in my formal education). I've also maintained an interest in science and written about that from time to time. I've been trying to make sense of Buddhism in rational terms. 

Perhaps the most profound thought I have come across in the last 20 years is that we are not only social animals, but each individual is also a community of cells. And our surfaces—inside and out—are coated with numerous symbiotic microorganisms that make a significant contribution to processes such as digestion and immunity to pathogens. Moreover, our eukaryote cells are themselves symbiotic communities of what used to be separate organisms.

Whether we know it or not, every one of us is a community of communities. And if we go up the taxonomic hierarchy, we find humans in dependent relationships within ecosystems at every turn. Ultimately, all ecosystems contribute collectively to Gaia, the Earth's biosphere conceived of as a single (if complex) self-organising and self-regulating system.

Everywhere we look in nature, at whatever scale we choose, we see communities, cooperation, symbiosis, interdependence, and co-evolution. I find this thought both profound and beautiful. Yes, there is some conflict and competition, but Darwinian approaches to evolution massively over-emphasise conflict and almost completely ignore cooperation.

In this essay, I want to dwell on togetherness. It is, ironically, something I have seldom experienced for myself, and less and less as years go by. Nonetheless, I recognise it as the acme of human existence.


Social Animals are Moral Animals

What do you think of this slogan? Does this sound evil to you? Is this a recipe for tyranny?

All for one, and one for all;
United we stand, divided we fall.

What about these?

  • There's no 'I' in team.
  • A problem shared is a problem halved.
  • It takes a village to raise a child.
  • No one gets left behind.—US Military
  • Alone we can do so little; together we can do so much.—Helen Keller
  • Even the weak become strong when they are united.—Friedrich von Schiller
  • We must learn to live together as brothers or perish together as fools.—Martin Luther King, Jr.
  • Coming together is a beginning, staying together is progress, and working together is success.—Henry Ford
  • "Monks," said the Bhagavan, "you have no mother and no father to care for you. If you don't care for each other, then who will care for you? If you would care for me, then tend to the sick."—Vin I 301

In The Road to Serfdom (1944), Friedrich Hayek argued that all forms of collectivism inevitably lead to tyranny. Only robust individualism, especially in commerce, can save us from tyranny and deliver us to an individualist liberal utopia. If Hayek was right, then these collectivist slogans that emphasise cooperation, community, and togetherness ought to be seen as a threat.

To me, this attitude is almost incomprehensible, but Hayek is probably the most influential intellectual of the last century. Along with other prominent neoliberals—like Ludwig von Mises and Milton Friedman—Hayek's views have shaped every capitalist society on the planet. Virtually all modern politicians and businessmen are neoliberals. Revolutions around the world in the late 1970s and early 1980s aimed to implement Hayek's utopian (neo)liberal view of a society of self-sufficient individuals engaged in commerce. While these men were promoting self-interest to intellectuals and economists, mad old Ayn Rand became the patron saint of self-interest amongst technologists (thus validating the neurodivergence that made them somewhat alienated from society). Alan Greenspan, who was a central figure in US economic policy ca. 1974 to 2006, was a personal disciple of Rand, which makes Rand one of the most influential intellectuals of all time.

Of course, the anti-collectivists were helped by the horrific excesses carried out in the name of Karl Marx in the USSR and China. Stalin and Mao were undoubtedly brutal tyrants. But in terms of socialism, Hayek and company seem to ignore all of the democratic socialist nations and the very high standard of living and freedom they attained. Norway, Sweden, New Zealand, Canada, and even post-War Britain all had democratic socialist governments and free people.

The fact is that humans are social; we live in societies. Our sociology determines our psychology, not the other way around (sociology is more fundamental than psychology). And ideological individualism is a pathology for a social animal.

Some birds and most mammals have adopted a social lifestyle. I won't comment here on social insects since they work on different principles. The social lifestyle is one of the most successful evolutionary strategies in the 3.5 billion-year history of life on Earth. Certainly, the success of humans as a species is directly related to our ability to work together in large numbers for a common cause. We actually enjoy working together.

Amish men raising a barn together.

By the way, I don't cite animal examples to drag us down: "we're no better than animals". I cite animal examples to emphasise the universality of these observations about morality and togetherness. I also want to emphasise that no supernatural explanation of morality is needed.

As the late, great, Frans de Waal pointed out, a social lifestyle minimally requires two capacities: empathy and reciprocity.

Empathy is the capacity to use physical cues to internally model how other people are feeling. Which means we don't just know what others feel, we also feel it in our own bodies. This is why emotions are contagious. As social animals, we monitor how the group is disposed, i.e. who is feeling what towards whom. This allows us to accurately judge the potential and actual impacts of our actions on others, and to moderate our behaviour accordingly. This is morality in a nutshell. But we don't just respond in the moment. We also keep track of and respond to how people have acted towards us, which requires the capacity for reciprocity.

Reciprocity is the capacity to form relationships of mutual obligation. It is keeping track of these obligations that creates a limit on the size of groups. The famous "Dunbar Number"—150—was derived by comparing primate group sizes with the volume of their neocortex. Robin Dunbar showed there is a strong correlation between these. Humans can keep track of the history of how members of the group interact in groups up to around 150, though there is considerable individual variation. Beyond 150, we can still form groups, but the sense of mutual obligation is more tenuous as the group size increases. With strangers, we typically do not feel a sense of obligation, except where it is imposed on us by nature: for example, the culture of hospitality common to many desert-dwelling societies.

However, reciprocity only holds a group together if there is some tendency for generosity. Someone has to start sharing, or no one would share. Social animals have to be prosocial, or sociality per se doesn't work. At the very least, mammalian mothers have to be willing to care for newborn infants, or they don't survive.

Anyone who reneges on the obligations of reciprocity has created an unfair situation. De Waal and other animal ethologists showed that social mammals are keenly aware of fairness (see especially his TED Talk). We intuitively understand that unequal rewards are unfair. We know it, and we also feel it deeply. Since the survival of the group relies on maintaining the integrity of the network of mutual obligations, we are highly motivated to be fair and to re-establish fairness when it breaks down. We call the latter "justice".

So our concepts of morality, fairness, and justice all emerge naturally from our having evolved social lifestyles and large brains. The rudiments are all visible, at least to some extent, in all social animals, suggesting universality. What may be unusual in humans is ethics, understood as abstract principles on which more concrete moral rules can be based. It is abstract ethics that allows us to adapt moral rules to new situations, for example (note that Buddhism lacks any ethical discourse, so Buddhists generally take a conservative view—no new rules—or they draw on the ethics of the surrounding culture for making ad hoc rules).

Being a member of a social species is not the only form of biological interconnection that we participate in. Let's now look at some others. 


Evolution and Exosymbiosis

I have long been a fan of Lynn Margulis (1938 – 2011). Margulis got a few things spectacularly wrong, especially later in life (notably her views on HIV were badly wrong). But her overall contribution to biology was pivotal for modern science and for my own views.

Notably, Margulis proposed the modern theory of endosymbiosis in the mid-1960s, which I will deal with in the next section. Margulis also advocated, in scientific and popular publications, for much greater awareness of the role of symbiosis in biology and evolution.

Margulis, Lynn. (1998). The Symbiotic Planet: A New Look at Evolution. Basic Books.

When I first studied biology, over 40 years ago, symbiosis was presented as something rather rare and unusual. Some organisms enter into very close relationships in which two or more species rely on each other to survive. Lichens are the classic example. A lichen is a distinctive form of organism, but it is actually made of at least one fungus and a photosynthetic partner (an alga or a cyanobacterium). Some species of lichens include both a filamentous (or hyphae-forming) fungus and a single-celled fungus (or yeast).

From quite early on, Margulis argued that symbiosis was much more common than allowed by traditional biology. Indeed, Margulis was critical of Darwin's (and the Darwinian) focus on competition and violence amongst animals (a view that Frans de Waal also rebelled against early in his academic career). 

According to Margulis, this jaundiced view was heavily influenced by the preoccupations of Victorian ruling-class men, i.e. patriarchy and imperialism. That is to say, representing nature as "red in tooth and claw" suited the ruling-class men of Europe—of which Darwin was a member—because they were busy trying to conquer, appropriate, and exploit the entire world. Darwin was able to spend 20 years developing his ideas on natural selection because he was never burdened by having to work for a living. Nor did he have to accept patronage. Having inherited enough wealth to live on, he could simply focus on his gentlemanly pursuit of science and volunteer work for learned societies. And this was the norm at the time. Working-class scientists were vanishingly rare.

Nor was this the end of the trend. Richard Dawkins, arguably the most prominent biologist of the twentieth century, applied Hayek's neoliberal worldview to biology to come up with the "selfish gene". Cooperation, communities, symbiosis and all that were simply explained away as being "motivated by self-interest". The conclusion is too obviously ideological rather than objective. In later life, Dawkins has become famous for two things: (1) apologetics for his own unreasonable views and (2) unreasonably picking fights with religious people using arguments that are guaranteed not to change anyone's mind. Dawkins, the biologist, never even tried to understand the phenomenon of "belief".

From the time of Thomas Hobbes (1588 – 1679), liberals have seen humans not as prosocial, empathetic, and reciprocating but as vicious loners, forced by circumstances to live together, creating endless conflict and violence. Note that Hayek is clearly Hobbesian in outlook, and it is no coincidence that both of these ruling-class men lived through periods of all-out war and political chaos in Europe. They both attributed the violence of their own class and gender to the common people and argued that their own class provided stability. In psychological projection, a person projects alienated aspects of their own personality out into the world, in order to try to come into relationship with themselves. 

Liberals see competition as the great winnower of species and individuals (social Darwinism has always been part of the liberal schtick). Competition takes on a moral character in which succeeding in competition equates to moral goodness. Hence, liberals expect "winners" in any competition to be moral role models. 

According to liberalism, the apotheosis of competition means that we naturally adopt a kill-or-be-killed attitude. However, liberals also believe in Hobbes' Leviathan. This is linked to the Christian idea that God placed the ruling class in a superior position to other people, i.e. that of gamekeeper or farmer. The ruling class are the only ones who can impose order on the common people, who are otherwise nasty, brutish, and violent, but also lazy.

These views are all too obviously ideas that the ruling class of imperial Britain used to justify imperialist brutality towards societies, including their own. When a society routinely commits genocide in order to steal resources, it has to have some discourse that legitimises this. And liberalism was one of these. 

In fact, symbiosis turns out to be ubiquitous in nature, with humans themselves providing one of the most striking examples.

The "human gut microbiome" is now a household concept. We all know that many beneficial bacteria, fungi, and protists live in our gut. They very obviously contribute to digestion, for example, by breaking down cellulose, which we cannot do without them.

We now know, for example, that when a baby mammal suckles milk from its mother, it is also swallowing bacteria that will become its gut microflora. And that this is vital for the normal development of the gut and the immune system.

I suspect that part of the reason that so many modern people have "allergies" and "sensitivities" is the trend since the 1960s to bottle-feed newborns. Of course, sometimes there is no choice, so demonising bottle-feeding is counterproductive. But there must be some way to introduce bottle-fed newborns to "good bacteria", rather than leaving it to chance. I suspect that the massive rise in morbid obesity may be related to aberrant gut microflora as well, although eating to stimulate the parasympathetic nervous system (and thus reduce physiological arousal) is a huge factor. That is to say, we eat to calm down because we are hyperstimulated most of the time and have not learned any better ways.

So beneficial are our gut symbionts that one can now receive a "faecal transplant" in which faecal matter from a healthy person—said to contain "good bacteria"—is introduced to the bowel of an unhealthy person, with a view to restoring their health. Apparently, this can work. Various foods with "good bacteria" are also popular, though whether these survive passing through the stomach is moot. Stomach acids kill the vast majority of microorganisms. 

Another very striking example of ubiquitous symbiosis is the mycorrhizal fungi that grow in and around tree roots. The fungal filaments (hyphae) live partly in the tree roots and partly in the soil. They break down the soil and transport nutrients into the roots, thus nourishing the tree. 

There is some suggestion that mycorrhizal fungi form underground networks in forests that link trees together and allow them to share resources. From what I've read, the full-on clickbait version of this story is to be taken with a grain of salt. Still, we can say that symbiotic mycorrhizal fungi are very important to the thriving of many plants.

All animals have extensive symbiotic relations with gut bacteria. But our outer surfaces are also an ecosystem. Not only are we constantly covered in microorganisms, but we also play host to organisms such as eyebrow mites that live in hair follicles. We are an ecosystem for such critters. 

Margulis also notes that bacteria evolve rapidly. They have generation times of about 20 minutes. Every bacterial cell can, at least in principle, share genetic material with any other bacterium, regardless of species. Indeed, Margulis sometimes argued that one can take this to mean that bacteria are all one species. In any case, bacteria are highly promiscuous and routinely swap genes. This is how a trait like antibiotic resistance can spread rapidly in a population of bacteria.

Another feature of evolution that the Darwinists downplay is hybridisation. Again, when I was studying biology, hybridisation was presented as an exception. Fast forward 50 years, and it turns out that all humans are the result of the hybridisation of more than one human species. Most Homo sapiens carry some genes from one or more of Homo neanderthalensis, Homo naledi, Homo longi (aka Denisovans), and/or Homo floresiensis. Possibly others as well.

Margulis pointed out that where organisms fertilise eggs externally, hybridisation is very common. Some 20% of plants and 10% of fish routinely hybridise.

Finally, we can point to many examples of coevolution in which two species evolve a dependence on each other. The most obvious examples are plants and their pollinators. Some of these relationships are so specific that only one species of insect is capable of fertilising a particular flower. The plant puts considerable resources into attracting appropriate pollinators, and pollinators expend considerable resources collecting and distributing pollen. Each benefits more or less equally from the relationship, and they come to rely on each other to survive. This is surely the very opposite of competition. If the dynamic here were competitive, one of the partners would lose out. It would become a form of commensalism or parasitism.

Even parasitism is considerably more complex than it seems. For example, there is a widespread belief, backed up by robust evidence, that eradicating common human parasites in the modern world has led to the immune system being poorly calibrated, which contributes to the rise in autoimmune diseases and "allergies" in modern times. This is sometimes called the "hygiene hypothesis". We evolved to deal with common parasites and, ironically, not having them, which would intuitively be seen as wholly good, is actually a disruption of the normal order of things and leaves us maladapted. Just as faecal transplants are a thing, some doctors have tried infecting patients with relatively harmless roundworm parasites as a way of correcting an immune system imbalance. The jury is still out, but the idea is not completely mad.

While competition is certainly a factor in evolution, it is far from being the only one. Lynn Margulis convinced me that cooperation, communities, symbiosis, hybridisation, and co-evolutionary dependencies are every bit as important to evolution. Species not only diverge, but they also converge, creating evolutionary leaps. Margulis also alerted me to the ideological nature of some scientific conclusions regarding nature and evolution, especially the influence of patriarchy and neoliberalism. The story of how important symbiosis is to evolution is brought into focus by Margulis's 1967 breakthrough article.


Endosymbiosis

In the mid-1960s (around the time I was born), an early career scientist, then known as Lynn Sagan (married to celebrity scientist Carl Sagan), sent a novel paper to a series of science journals. After many rejections, the paper was eventually published as

Sagan, L. (1967). "On the origin of mitosing cells." Journal of Theoretical Biology 14(3): 255-74. Available online in numerous places.

Part of the abstract reads:

By hypothesis, three fundamental organelles: the mitochondria, the photosynthetic plastids and the (9+2) basal bodies of flagella were themselves once free-living (prokaryotic) cells. The evolution of photosynthesis under the anaerobic conditions of the early atmosphere to form anaerobic bacteria, photosynthetic bacteria and eventually blue-green algae (and protoplastids) is described. The subsequent evolution of aerobic metabolism in prokaryotes to form aerobic bacteria (protoflagella and protomitochondria) presumably occurred during the transition to the oxidizing atmosphere.

This hypothesis was subsequently tested and found to be accurate. This process, in which one single-celled organism ends up permanently and dependently living inside another, is now called endosymbiosis. In the meantime, Sagan remarried and changed her name again to Lynn Margulis, which is how I refer to her throughout.

In 1967, endosymbiosis was a radical theory; there were some precedents in Russian microbiology, but these were largely ignored in the West because it was the height of the Cold War. Sixty years later, the idea that organelles within eukaryote cells were once "free-living" is the mainstream view. This radical discovery is now such a commonplace that many modern discussions of endosymbiosis do not even mention Margulis or her role in it. Nick Lane, for example, who is at the forefront of abiogenesis research, has repeatedly downplayed the contributions of Margulis.

It's fair to say that Margulis thought radically differently from most other people and that she was outspoken about her views. For a woman in the 1960s and 1970s, being outspoken (especially towards men) was seen as a serious character flaw. Many men were (and are) intimidated by a strong, intelligent woman. And, unfortunately, Margulis wasn't always right. However, she was right about endosymbiosis, and this is one of the most profound discoveries in the history of science. It is every bit as important as discovering DNA in terms of understanding how life and evolution work.

The prokaryotes are largely represented by bacteria and archaea (previously known as "extremophile bacteria"). Prokaryote cells have no nucleus and little internal structure. Their genetic material is typically a single loop of DNA rather than linear chromosomes.

The eukaryotes are the protists, fungi, plants, and animals. Eukaryote cells have a nucleus, with chromosomes, and many other internal structures, such as mitochondria.

Prokaryote organisms are far more numerous and more varied than eukaryotes. Animals are relatively unimportant to life on Earth; if we all disappeared, the prokaryotes would hardly notice, except those that specialise in living in/on us. Some plants rely on animals for reproduction. But not all, by any means.

We can diagram the process by which combinations of prokaryotes led to the various eukaryote "kingdoms".

In the standard, neoDarwinian account of evolution, separated populations of a species subjected to differing environmental pressures will slowly diverge over time and become two distinct species. This has now been observed both in the lab and in nature. Evolution, per se, is a fact. Evolutionary theory is our explanation of this fact. Evolutionary theory is taught as a monoculture, at least up to undergraduate level. 

Darwin himself diagrammed the process of evolution as a branching tree, i.e. as a series of splits. This is still by far the most common way of representing evolution. I wrote a critique of this view in an essay titled Evolution: Trees and Braids (27 December 2013). My suggestion was that we need to represent evolution as a braided stream, since this allows for convergence and recombination.

I've already commented above on the ubiquity of exosymbiosis and hybridisation in nature. What I want to emphasise here is that endosymbiosis doesn't fit the neoDarwinian view of evolution at all because it is evolution by addition and recombination, rather than an accumulation of mutations. This alone tells us that the Darwinian view is incomplete.

In terms of my view of the world, the fact that our very cells began as small communities of cells within cells is a profound confirmation of the importance of communities and cooperation in nature at every level.

Similarly, our genome can be seen as a community of cooperating genes. The idea of individual genes, let alone "selfish" individual genes, makes little sense. Genes are always part of a genome. Even when bacteria swap genes, they incorporate new genes into their genome. We can talk theoretically and abstractly about individual genes, and we can methodically identify the function associated with a gene. But the individual gene is an abstract concept. In reality, genes only occur in genomes. A gene simply cannot function outside of a genome and the associated infrastructure.

The concept of the "selfish gene" is nonsensical, even as a metaphor.

So far, I've been delving down the taxonomic hierarchy into the microscopic. This is all too familiar in a reductionist environment and might have passed without comment. However, I am very critical of ideological reductionism. I believe that structure is also real and that structural anti-reductionism is a necessary counterpart to substance reductionism.

In the last section of this essay, therefore, I want to look up.


Gaia

I've already noted that social animals almost invariably live in family-oriented communities (with occasional solitary outliers). But we can also observe that each extended family exists in a network of inter-familial relations, often linked by intermarriage. 

Every human community is part of a network of communities embedded in an environment. We are also part of the local ecosystem. And the local ecosystem is part of the global ecosystem, also called the biosphere or more poetically, Gaia.

The Gaia hypothesis was first proposed by chemist James Lovelock (1919 – 2022) in 1975, with help from none other than Lynn Margulis. The classic statement of the idea appeared in book form in 1979.

Lovelock, J. (1979). Gaia: A New Look at Life on Earth. Oxford University Press.

The Gaia hypothesis says that the biosphere as a whole is a complex feedback mechanism that "works" to keep the surface of the Earth suitable for life, i.e. to maintain homeostasis. Lovelock introduced the idea of "Daisyworld" as a simple cybernetic model of how life might achieve homeostasis on a planetary scale.
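To make the cybernetic idea concrete, here is a toy Daisyworld in Python. The growth curve and solar flux loosely follow the classic model, but the ±10 K local-temperature shortcut and the specific numbers are my own simplifications, so treat it as a sketch rather than the real thing.

    # Toy Daisyworld: black and white daisies alter the planet's albedo, and the
    # albedo feeds back on the temperature that determines how well they grow.
    # Parameters loosely follow the classic model; the +/-10 K local-temperature
    # shortcut is a simplification for illustration only.

    def daisyworld(luminosity, steps=3000, dt=0.1):
        a_black, a_white = 0.01, 0.01           # fraction of ground covered by each daisy
        albedo = {"black": 0.25, "white": 0.75, "ground": 0.5}
        S, sigma = 917.0, 5.67e-8               # solar flux (W/m^2), Stefan-Boltzmann constant
        death = 0.3                             # daisy death rate

        def growth(T_local):
            # growth peaks at 295.5 K and falls to zero about 17.5 K either side
            return max(0.0, 1.0 - 0.003265 * (295.5 - T_local) ** 2)

        for _ in range(steps):
            bare = max(0.0, 1.0 - a_black - a_white)
            A = (a_black * albedo["black"] + a_white * albedo["white"]
                 + bare * albedo["ground"])
            T = (luminosity * S * (1.0 - A) / sigma) ** 0.25    # planetary temperature (K)

            # black daisies run warmer than the planetary average, white daisies cooler
            a_black += dt * a_black * (bare * growth(T + 10.0) - death)
            a_white += dt * a_white * (bare * growth(T - 10.0) - death)

        return T, a_black, a_white

    for L in (0.7, 0.9, 1.1, 1.2):
        T, b, w = daisyworld(L)
        print(f"luminosity {L}: T = {T:.1f} K, black cover {b:.2f}, white cover {w:.2f}")

The intended behaviour, which is what the classic model demonstrates, is that as the sun brightens the mix shifts from mostly black daisies to mostly white ones, and the planetary temperature stays near the daisies' comfort zone over a range of luminosities that would leave a dead planet first frozen and then cooked. Homeostasis emerges without any daisy intending it.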

Interestingly, the Gaia hypothesis emerged after Lovelock was commissioned by NASA to help them figure out how to detect extraterrestrial life. Gaia maintains surface conditions that definitely could not occur in the absence of life. For example, high levels of oxygen in the Earth's atmosphere require constant replenishment by living things. So any planet with high oxygen is a candidate for harbouring life.  

Life causes our planet to exist in a state that is very far from the (chemical) equilibrium that we see on planets with no life, like Mars or Venus.

In order to understand life, we have to take a holistic view. Rather than reducing everything to its base substance and calling that "reality", we have to see that reality includes structure. Everything we can see with human eyes is a complex object with numerous layers of structure, lending it many structural properties (sometimes vaguely referred to as "emergent properties"). To say that complex objects are "not real" or "just illusions" is not helpful (or true).

When it comes to life, every structure is embedded in larger structures, up to Gaia, which is the ultimate living structure for life on Earth. Reality is substantial, but it's also structural and systematic.

From the lowest level of description to the highest, life is structures made of structures and systems within systems. Nothing living ever exists as a standalone or independent entity. Everything is dependent on everything else. The Hobbesian, lone-wolf version of humanity really only applies to sociopaths and psychopaths (who seem to be over-represented in the ruling/commercial class). 

Biologists are generally in a better position to see this than physicists. A biologist may well dissect (or even vivisect) an organism to see what it's made of. They may well quantify what elements are found in an organism. We're mainly carbon, nitrogen, oxygen, and hydrogen. But clearly, elements like iron and magnesium play essential roles in our metabolism, as well as being potential toxins. I grew up in a region that was low in cobalt, and this meant that farmed animals would not thrive on our pastures without cobalt supplements. 

However, if a biologist wants to really understand some organism, they have to observe how it interacts with its physical and social environment. That is to say, how an organism reacts to physical stimulus, how it relates to others of its own species, and how it interacts with other species. And since the local environment is a product of the bulk environment, in the long run, we have to see all life on Earth in terms of its contribution to Gaia.

A common misconception about life is that it breaks the second law of thermodynamics. This law states that in an isolated system, entropy never decreases. The misconception stems from ignoring the words "isolated system". A cell is not an isolated system, since energy and molecules are constantly entering and leaving. An organism is not an isolated system. Gaia is not an isolated system: it is continuously bathed in energy from the sun.

However, even if we consider the Earth and its surroundings together as a single system to which the second law applies, life does not violate it. Visible and UV photons from the sun impact the Earth, where they are absorbed by rocks, water, and living things. Eventually, the incoming energy is radiated back out into space as infrared photons. For every visible-UV photon arriving on Earth, roughly twenty infrared photons are radiated back into space, with a net increase in entropy for Earth and its environment. Whatever local order life creates is more than paid for by the entropy exported to space, so the second law is not broken by life.
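To put rough numbers on that photon bookkeeping (my own back-of-the-envelope figures, using round values for the temperatures involved): the average energy of a thermal photon scales with the temperature of its source, so for the same total energy

    \[
    \frac{N_{\mathrm{out}}}{N_{\mathrm{in}}} \approx \frac{T_{\mathrm{sun}}}{T_{\mathrm{earth}}} \approx \frac{5800\ \mathrm{K}}{290\ \mathrm{K}} \approx 20
    \]

Since the entropy carried by thermal radiation is roughly proportional to the number of photons, re-radiating the sun's energy at Earth temperatures multiplies the entropy of that energy by a factor of about twenty. That headroom is what pays for the local order that life creates.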

However, simple cybernetic feedback does not give us a complete explanation of life. For this, we have to change up a gear.


The Free Energy Principle

It's apparent, for example, that if the brain operated purely on homeostatic feedback, it would not be able to respond at the speed that it does. For this, we need to introduce the idea of allostasis. And allostasis leads us into the final big idea that is essential for understanding life: the Free Energy Principle. 

The idea of allostasis is that the brain constantly predicts what will happen next based on the present inputs and past experience. And if the expected input does not match the actual input, then the brain has two options: (1) change the prediction, i.e. update the expectation based on the new input; or (2) change the input, i.e. make some change in the world. And this enables a faster, more adaptable response.
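A minimal toy version of that loop in Python may help make the two options concrete. This is my own illustration, not a model drawn from the allostasis literature, and all the numbers are arbitrary.

    # Toy allostatic loop: each step, the agent compares its prediction with the
    # sensed input and then (1) updates the prediction and/or (2) acts on the world.
    # All values are arbitrary and purely illustrative.

    world = 20.0        # the actual state of the world (say, room temperature)
    prediction = 15.0   # what the agent currently expects to sense
    preferred = 18.0    # the state the agent needs the world to be in

    for step in range(10):
        sensed = world                   # sensing (noise-free for simplicity)
        error = sensed - prediction      # prediction error ("surprise", loosely)

        prediction += 0.5 * error        # option 1: revise the expectation
        if abs(sensed - preferred) > 0.5:
            world -= 0.5 * (sensed - preferred)   # option 2: act to change the input

        print(f"step {step}: world={world:.2f}, prediction={prediction:.2f}, error={error:+.2f}")

Run it, and both the prediction and the world converge: the agent ends up expecting what it senses and sensing what it prefers, by a mixture of changing its mind and changing its circumstances.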

Anyone familiar with the concept of Bayesian statistics should already recognise this paradigm. Bayesian statistics is a mathematical formalism that allows a statistician to quantify how their expectations change as new information comes in, as part of an iterative process. And this, in turn, has strong connections to information theory.
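For readers who want the formalism, the iterative updating is just Bayes' theorem applied over and over. This is the standard textbook statement, not anything specific to Friston:

    \[
    p(h \mid d) = \frac{p(d \mid h)\,p(h)}{p(d)}
    \]

Here \(p(h)\) is the prior (the current expectation), \(p(d \mid h)\) says how likely the new data would be if that expectation were right, and \(p(h \mid d)\) is the updated expectation, which then becomes the prior for the next batch of data.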

Enter Karl Friston, who primarily works on making information gleaned from medical scans into meaningful images. This involves expertise in statistical analysis and information theory.

Making these connections led Friston to propose the free energy principle. There is, as yet, no popular account of the free energy principle and the explanations that are available all rely on background knowledge of statistics and information theory that I don't have. 

See, for example:

Friston, K., et al. (2023). "The free energy principle made simpler but not too simple." Physics Reports 1024: 1-29.

It is not "simple" at all unless you have the appropriate background knowledge.

This is something I'm still trying to understand, and I'm hoping to write an essay on it in the near future. But my intuition tells me that this idea is hugely important. Listening to Friston talk about it, I feel that I glimpse something significant. It's important enough to try to offer some impressionistic notes and encourage readers to follow up.

The free energy principle says that any self-organising system—living or non-living—that has a permeable boundary separating it from the general environment and that persists over time, will appear to take actions that can be mathematically described in terms of Bayesian statistics or in terms of "free energy" (a concept from information theory). Friston has shown these to be mathematically equivalent.

Where a prediction fails to match an input, Friston calls this "surprise". This is mathematically related to the informational property "free energy". Hence, "the free energy principle". It turns out that minimising surprise with respect to predictions is mathematically equivalent to minimising free energy (I suppose we might also relate this to the idea of the "path of least action" from classical physics, but I need to look into this). 
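In symbols, and offered here only as the standard formulation found in the literature rather than anything derived in this essay: for an observation \(o\), surprise is \(-\ln p(o)\), and the variational free energy \(F\) of an internal model \(q(s)\) over hidden states \(s\) is

    \[
    F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
      = -\ln p(o) + D_{\mathrm{KL}}\!\left[\,q(s)\,\|\,p(s \mid o)\,\right] \;\geq\; -\ln p(o)
    \]

Because the KL divergence is never negative, \(F\) is an upper bound on surprise. A system can compute and minimise \(F\) using only quantities available to it, and in doing so it indirectly minimises surprise, which it could never compute directly.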

Rather than describing life as simply reacting to the environment, we can now describe all living things as iteratively predicting the future and testing predictions and optimising their responses to minimise surprise, resulting in changing predictions or changing inputs (external actions). Living systems involve both homeostasis and allostasis. 

In a sense, all the brain does is receive millions of input signals, process them in ways that are not fully understood, and generate millions of output signals, most of which are internal and only affect expectations. In her book How Emotions Are Made, Lisa Feldman Barrett notes that 90% of the incoming connections to the visual cortex are from other parts of the brain, rather than from the eyes.

This principle turns out to be an incredibly useful way of modelling and thus understanding living systems. It can be used to explain how even simple bacterial cells are apparently able to act intelligently (i.e. move towards food, move away from waste, or join up to form a colony). Whether there is some abstract "intelligence" behind this intelligent action is moot, but it's not an obvious conclusion, and it's not required by the free energy principle. 

I have never been a fan of panpsychism, which says that all matter is "conscious" (by degrees). It's such obvious nonsense that I find it hard to imagine why anyone takes it seriously. The free energy principle makes some broad claims, but it doesn't commit to metaphysical nonsense. The fact is that all living organisms do have a range of behaviours that they employ intelligently, without any evidence of being "conscious" or "intelligent". Intelligent behaviour is universal in living things. Being conscious of the world or self (or both) is rare. And, prior to the advent of the free energy principle, we were at a loss to explain this. This left huge gaps for "gods-of-the-gaps" style arguments for the supernatural. The free energy principle appears to plug those holes. 

I believe that, in the long run, the free energy principle will stand alongside the concepts of natural selection, symbiosis, and Gaia in terms of the history of understanding life. It offers a powerful, but also deflationary, account of the mechanisms that underpin life and mind.


Conclusion

The idea that "there is no society, there are only individuals and families" is arse-about-face. Rather, there are no individuals; there are only societies (and a family is a microcosm of a society). The individual is a mythological figure. We can talk about them in theory, but we rarely meet them in person. As Oscar Wilde said,

Most people are other people. Their thoughts are someone else's opinions, their lives a mimicry, their passions a quotation.

Me too, for the most part, but I do at least try to give credit where it is due. 

We are social animals. We evolved to live in social groups. Which means we evolved the capacity for empathy and the capacity for reciprocity. We evolved to be prosocial and moral. We evolved a sense of fairness and justice. Assuming we have not completely suppressed these capacities, we don't need anyone to tell us how to be moral. 

Competition is certainly a feature of life, but we have massively over-emphasised it for ideological reasons (patriarchy and imperialism). Consider the case of collectively making music. Music-making is not a competition, and turning it into one does not enhance it in any way. Making music actually requires selfless cooperation and is at its best when the egos of the players are not evident at all. And playing music, in an appropriate non-competitive context, brings out the best in people. It is no surprise, then, that in capitalist societies, the collective elements of music get reduced to passive consumption. And competition is enforced on musicians in ways that only detract from the music. 

Sociology is more fundamental than psychology, in the sense that we may be born with an individual temperament and/or personality that is relatively unchanging, but we develop in response to the environment we find ourselves in. We learn to be a member of the local social group in more or less the same way that we learn the language of the group we find ourselves in. 

Looking down the taxonomic hierarchy, our cells—our very genomes—are tiny, symbiotic, cooperative communities in which every component member prospers together. Looking up, we always live in families embedded within communities, embedded in societies, embedded in ecosystems, embedded in the biosphere as a whole, or Gaia.

At every level, living things are generally collectivist. And, left to their own devices, humans are naturally collectivist. Nothing could be more normal than socialism. Every group of friends I've ever been part of was leaderless. We just organised ourselves without much effort. 

I do not deny that individuals and species compete with each other, sometimes violently. However, I emphatically believe that the incidence and importance of competition has been grossly overstated by scientists with ideological—reductive, patriarchal, and imperialist—views.

We might even say that togetherness is what gives human lives meaning and purpose. The many modern people who say that they lack meaning and purpose are inevitably disconnected or alienated from society. What we all need (except for psychopaths) is a sense of connection. And it is precisely this connectedness that modern political discourse—neoliberalism and capitalism—seeks to replace with the ideas of ownership, control, and competition. This is aberrant and abhorrent in a social species.

We are social.
We are social.
We are social.

~~Φ~~

14 November 2025

Mars Hype

For a change of pace, I want to address another common misperception on Quora. These days there's a lot of hype about "going to Mars". A large fortune is being spent on this project. A certain prominent technologist-plutocrat is committing a good deal of his extraordinary hoard of wealth to "making humanity multiplanetary". Many people seem to imagine that colonizing Mars will be relatively easy.

We need to be clear that Mars is an extremely hostile environment; far less hospitable to life than Antarctica in winter. I think it's likely that a human will walk on Mars in the medium term (say ~25 years), though I'm no longer sure I'll live to see this. It's very unlikely that humans will successfully colonise Mars, and I'm sure no human will ever be born on Mars.

There are three questions to consider when thinking about the project to "colonise Mars":

  1. Was there ever life on Mars?
  2. Is there life on Mars now?
  3. Is there any prospect of Mars sustaining life in the future?

The answers (with some caveats) are probably no, no, and no. Let's look at why.


Past Life

Mars’ geological (Areological?) history is divided into three main periods:

  • Noachian (~4.1–3.7 billion years ago): Heavy bombardment, abundant surface water, widespread clay-rich sediments.
  • Hesperian (~3.7–3.0 billion years ago): Declining water activity, formation of extensive volcanic plains, some outflow channels.
  • Amazonian (~3.0 billion years ago–present): Cold, dry, low erosion rates, mostly wind-driven surface processes.

While we don't know exactly how life began on Earth, the most likely account is Michael Russell's theory that it got started around warm alkaline hydrothermal vents on the ocean floor (see The Rocky Origins of Life 15 April 2016). This process relies on plate tectonics and oceans.

Mars has no plate tectonics and probably never had any. The crust is not made up of mobile plates that converge and diverge creating volcanic and earthquake zones. The crust of Mars is all one big slab of rock although it still has some local deformations and fractures. 

Furthermore, Mars hasn't had liquid surface water for at least 3 billion years. So the processes that most likely drove the emergence of life on Earth are absent from Mars. While we cannot definitively rule out past life on Mars, we can confidently say that it is extremely unlikely.

This also means that fossils on the surface will be extremely rare. Fossils are formed in layers of sediment. Mars is known to have some very old sedimentary rocks, generally older than 3 billion years (while Mars had water). For reference, the chalk bedrock under Cambridge (where I live) is only around 100 million years old. On Earth, evidence for life in 3-billion-year-old rocks is rare and often ambiguous, partly because at that time life was bacterial and left few physical traces.

There is some limited geological activity on Mars that creates some localised faulting. Craters and erosion can expose lower layers of rock. The Curiosity rover, for example, has been exploring Hesperian-era sedimentary rocks in the Gale Crater (~3.8 to 3.0 billion years old).

However, once exposed, fossils on Mars would be subject to a variety of powerful degrading processes: including solar and cosmic radiation, chemical oxidation, temperature extremes, and erosion (by dust storms). Microbial fossils are extremely delicate. A fossil on the surface might survive for millions of years, but not billions.

The surface of Mars has been dry, frozen, irradiated, and dusty since the end of the Hesperian, around 3 billion years ago. If life ever existed on Mars, it must have died out at that time. And it will have left few if any traces on the surface.


Present Life

The prospects for present life are even dimmer. Life, at least as we know it, requires favourable conditions. And conditions on Mars have been decidedly unfavourable for at least 3 billion years.

The state of the mantle and core means that Mars lost its magnetic field about 4 billion years ago. This means that the surface of Mars has been subject to unfiltered cosmic and solar radiation for 4 billion years. Radiation levels on Mars today are ~200 times greater than those on Earth. 

This might not be immediately fatal to life, but for Earth-based living things it would cause very high levels of mutations, most of which would be deleterious.

Mars has almost no atmosphere and, thus, no greenhouse effect to speak of. The average surface pressure is 0.006 atmospheres (i.e. 0.6% of Earth standard). Mars' atmosphere is composed of 95% CO₂ and contains less than 1% oxygen and less than 0.1% water vapour. All life that we know of requires liquid water to survive. While it is true that some tardigrades have survived brief exposure to vacuum, they cannot remain active in those circumstances: they cannot move around, feed, or reproduce in space or on Mars. Surviving is not thriving. 

The absence of a greenhouse effect and the greater distance from Sol mean that the average surface temperature of Mars is around -65 °C, compared to the Earth average of +15 °C. The average temperature in the interior of Antarctica is -43.5 °C. While some lichens survive in coastal regions of Antarctica, nothing grows in the interior. No life as we know it can remain active at -65 °C or below. 

There is no liquid water anywhere on Mars, and there has not been for billions of years. The minimal amounts of water that exist at the Martian poles are solid and have been for billions of years.

What kind of life can survive 3 billion years of extreme radiation and extreme cold, without air or water, and under harsh chemical conditions? None that we know of. None that I can conceive of. And even NASA no longer expect to find life. They are looking for fossils. 

These are non-trivial barriers to the existence of life on Mars in the present. No one should expect to find life—or fossils of past life—on Mars. Not after 3 billion years of the present conditions. The fact is that the surface of Mars is inimical to life. Which already raises doubts about trying to live on Mars. 


Future Life

Despite the very dim prospects of finding life or even signs of past life, plutocrats and their technologist lackeys seem enamoured of the idea that humans can "colonise" Mars or even "terraform" it. Indeed, this is presented as an imperative (since capitalists are apparently not planning to stop destroying the Earth's biosphere before it ceases to be able to sustain human life).

The European practice of colonising, privatising, and exploiting resources has given rise to a history of brutality and extreme avarice. I grew up in a British colony and my strongest association with "colonisation" is violence. Invoking the trope of colonisation ought to ring alarm bells. No good ever came from the urge to colonise. 

The problem for human colonisation of Mars ought to be obvious by now: the surface of Mars is totally inimical to life as we know it. A human exposed to the surface conditions on Mars would die more or less instantly. So the main task on Mars would be to prevent any such exposure. This would mean permanently living inside, probably underground. 

Being exposed to the high levels of ionising radiation on Mars would cause very high rates of DNA damage, leading to many deleterious mutations. As much as anything it is radiation that would force humans underground on Mars. 

The low gravity would prevent normal development of fetuses and infants. Problems like muscle atrophy, bone demineralisation, reduced blood volume, and impaired balance (low g syndrome) would affect everyone. And if your bones never mineralise properly they cannot support your mass. This raises another problem.

A trip to Mars is going to take 6-9 months, and 99.9% of that time will be spent in free-fall (microgravity). Which means astronauts travelling to Mars will suffer moderate-to-severe low g syndrome. By the time they get to Mars and 0.38 g, they'll be seriously ill. Even with an ISS-style exercise regime (2+ hours of exercise daily), they will be severely impaired and liable to broken bones and other health problems. What's more, it is not clear that 0.38 g would be enough to halt this problem, let alone reverse it. We have zero data on humans living in 0.38 g. And at present, we have no way to get any data short of actually going to Mars. 

Nothing we can actually do by way of "terraforming" will address any of these problems.

Terraforming is a science-fiction idea. For example, we see colonisation enthusiasts saying that all we need to do is "drop some comets on Mars" to replace the air and water lost billions of years ago. The obvious problem with this is that the impact of comet-sized objects would throw vast amounts of Martian dust into the thin atmosphere, and the low gravity would mean that it took a long time to settle. We have no idea what it would do to the Martian crust. 

It's all very well to talk casually about "dropping comets", but it's another thing to be able to accelerate a comet's worth of ice and rock, plausibly 10¹³ kg or more, so that it impacts Mars. We can just about lob 100,000 kg into orbit. How are we going to handle a load that's a hundred million to a billion times more massive?
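
For a rough sense of that scale gap, here is a back-of-envelope comparison. The comet mass and the payload figure are my own illustrative assumptions, not figures for any particular mission.

```python
# Rough scale comparison: comet nucleus vs our largest launchable payload.
# Assumed values: a mid-sized comet nucleus of ~1e13 kg (comet 67P is about
# this; Halley's comet is ~2e14 kg) and a heavy-lift payload of ~1e5 kg to
# low Earth orbit.
comet_mass_kg = 1e13
payload_to_leo_kg = 1e5

ratio = comet_mass_kg / payload_to_leo_kg
print(f"A comet is roughly {ratio:.0e} times more massive than our largest payload.")
# -> roughly 1e+08; for a Halley-sized comet the factor approaches a billion.
```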

The fact is that we have no way to alter the trajectory of a comet such that it would precisely impact on Mars. It's not just that it's too expensive (which it would be). It's not that we fall just short of the technology required. Even if we could spare the trillions of dollars it would cost, there's no way we can change the trajectory of any comet. As a rule of thumb, the rocket fuel required to substantially change the orbit of a comet would have the same order of magnitude of mass as the comet. 
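
As a sanity check on that rule of thumb, here is a minimal sketch using the Tsiolkovsky rocket equation. The 3 km/s delta-v and the 4.4 km/s exhaust velocity are illustrative assumptions on my part; for any delta-v in the km/s range, the propellant mass comes out comparable to the comet itself.

```python
import math

# Tsiolkovsky rocket equation: delta_v = v_e * ln(m0 / m1), which rearranges to
# propellant_mass = dry_mass * (exp(delta_v / v_e) - 1).
# Here the "dry mass" is the comet itself (1e13 kg, assumed), the delta-v is an
# assumed 3 km/s orbit change, and v_e is ~4.4 km/s (hydrogen/oxygen engines).
comet_mass_kg = 1e13
delta_v_m_s = 3_000.0
exhaust_velocity_m_s = 4_400.0

propellant_kg = comet_mass_kg * (math.exp(delta_v_m_s / exhaust_velocity_m_s) - 1)
print(f"Propellant required: {propellant_kg:.1e} kg, "
      f"i.e. {propellant_kg / comet_mass_kg:.2f} times the comet's own mass.")
# -> ~9.8e12 kg: the same order of magnitude as the comet itself.
```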

If we tried to use nukes, the water we delivered would be highly radioactive.

In practice, there's no way to do it; and even if there was, we couldn't afford it.

People talk about seeding Mars with genetically engineered superbugs that will somehow flood the atmosphere with oxygen. But even if Frankenstein's algae could convert all the Martian CO₂, it would take a very long time because of the cold, and it would still only amount to a partial pressure of roughly 0.006 atmospheres of oxygen (one molecule of O₂ per molecule of CO₂ converted). Which is a tiny fraction of the ~0.2 atmospheres of oxygen we breathe, and still instant death for a human trying to breathe on the surface. 
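
Here is the back-of-envelope version of that estimate. The pressure and composition figures are rounded, and the one-for-one molar conversion is the idealised best case for photosynthesis.

```python
# Best-case oxygen from converting all Martian CO2 photosynthetically.
# Assumptions: mean surface pressure ~0.006 atm, ~95% CO2 by volume, and one
# mole of O2 produced per mole of CO2 consumed (so partial pressures swap 1:1).
mars_surface_pressure_atm = 0.006
co2_fraction = 0.95
earth_o2_partial_pressure_atm = 0.21  # sea level on Earth

best_case_o2_atm = mars_surface_pressure_atm * co2_fraction
print(f"Best-case O2 partial pressure: ~{best_case_o2_atm:.4f} atm")
print(f"As a fraction of Earth's sea-level O2: "
      f"{best_case_o2_atm / earth_o2_partial_pressure_atm:.1%}")
# -> ~0.0057 atm, under 3% of what we breathe; humans need roughly 0.1 atm of O2.
```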

Worse, there's nothing on Mars that would make the expenditure of trillions of dollars worthwhile. There is nothing on Mars that we could not obtain on Earth for a fraction of the cost. Or from robotic mining of asteroids. Those trillions of dollars would go a long way to solving the existential threats we face on Earth that plutocrats are currently in denial about.


Conclusion

The conditions on the surface of Mars—low g, high radiation, near-zero air and water—are inimical to life and have been for around 3 billion years, dating back roughly to the era of the last universal common ancestor of life on Earth. 

The geological processes that gave rise to life on Earth are conspicuously absent from Mars. Because of the lack of plate tectonics, the surface we see on Mars is billions of years old. 

There's little or no reason to believe that Mars ever supported life. And even if we stipulate that it might have existed, there's little or no reason to believe we will find fossils of past microbial life since these are either deeply buried or severely degraded.  

The idea of terraforming is quite entertaining in speculative fiction and, at a stretch, we might allow that it is possible in principle. In practice, terraforming is not remotely feasible, either technologically or economically. 

I can imagine visitors to Mars. I'd love to live to see this. But I cannot imagine colonies on Mars ever being viable. The problems involved are fatal and practically insurmountable. 

Earth is unique in being the only planet we know of that supports life. There is no planet B. 

Capitalists appear to think that they can shit where they eat with impunity. They appear to believe that destroying the Earth is a fair price to pay so that a few men can be pampered as though they were gods. They seem to believe that they can simply go and live in space once the Earth is destroyed. But these fantasies are not remotely realistic. Without Earth, we are all dead. 

Here's a radical idea: 

Let's not destroy the only planet that we know can sustain life. 

~~Φ~~

07 November 2025

Philosophical Detritus III: Reality

This is the third in a series of essays about abstractions in philosophy. Here, I continue the critique and extend it to another abstract concept that seems to trip many people up: reality.

Reality is one of the most common abstract metaphysical concepts used by both amateur and professional philosophers. We all like to say things like "In reality,..." We love to cite reality as the ultimate authority. "In reality..." is treated as a killer argument. And we try to ground our ideas of truth in reality.

However, these informal or common-sense uses of the term belie a deep and pervasive malaise in professional philosophy (the world over). After millennia of argument—across human cultures—there is no consensus on what "reality" is. Nor is there any consensus on what "truth" means (I'll come back to this). Metaphysics keeps promising insight into these problems, yet it never produces anything testable or even conceptually stable. New ways to approach reality keep emerging, but none of them ever manages to solve the problems it promises to solve.

And yet, at the same time, we all feel confident we know what reality is, or that we would know it when we see it.

When a problem has been argued over by clever people for a century without any consensus emerging, we may begin to suspect that we have framed the problem poorly. However, when we have argued for millennia and failed to reach any satisfactory conclusion, it calls the whole enterprise of philosophy, or at least metaphysics, into question.

Metaphysics is bunk. But why is it bunk?


Reality and Epistemic Privilege

Questions about reality are the principal topic of metaphysics.

  • What is real?
  • What does it mean for something to be real?
  • What is the nature of reality?

Reality is such a basic concept that you might expect there to be a long-standing consensus about it. After all, given how most of us use the term "reality", it ought to define itself. And as noted, we all seem to have a "common sense" view of reality. However, there is no general consensus on reality amongst philosophers, and there never has been. On the contrary, reality is one of the most disputed concepts in philosophy. As with many problems I've written about in recent years, there is not only an existing discordant dissensus, but it is growing all the time as new propositions are floated that try (and usually fail) to take the discussion in different directions.

We need to be clear about the implications of this dispute over "reality". If philosophers cannot even agree on what reality is, they cannot agree on anything else. There is a structural failure in the field of philosophy, an impasse that has existed for thousands of years. Lacking agreement on “reality,” philosophy fragments into self-contained silos with no common reference point. Nonetheless, this ambiguous and disputed concept continues to play an essential role in philosophy and daily life.

The problem with all these abstract metaphysical concepts is that we only have experience and imagination to go on. No one has privileged access to reality, so no one actually knows anything about reality. There is no epistemic privilege with respect to reality.

Everyone’s access to reality is mediated by factors such as perception, cognition, language, theory, and culture. There is no way around this mediation; no way to get unmediated access to reality, whatever it is. In my "nominalist" view, reality is an abstract concept; an idea. And, thus, the idea that we could have direct access to reality is quite bizarre.

Still, the idea that some amongst us do have epistemic privilege is widespread, especially in relation to religions. People are constantly stepping up to confidently tell us that they alone have privileged access to reality and can tell us what it is. It is very noticeable amongst Buddhists who like to invoke reality.

“We live in illusion and the appearance of things. There is a reality. We are that reality. When you understand this, you see that you are nothing, and being nothing, you are everything. That is all.”--Kalu Rinpoche

"the development of insight into the ultimate nature of reality." --Dalai Lama

“To think in terms of either pessimism or optimism oversimplifies the truth. The problem is to see reality as it is.”--Thich Nhat Hanh

Closer to home, for me, the founder of the Triratna Order, Sangharakshita (1925 – 2018), argued that imagination was our "reality sense". This idea was inspired by English Romantic poets rather than Indian Buddhists. No word in the canonical languages of Buddhism means "imagination". It's not a concept Buddhists made use of prior to contact with Europe. Moreover, the "reality" imagined by Sangharakshita and his followers is distinctly magical, vitalistic, and teleological (all of which seem unreal to me).

Buddhists are not exceptional in seeking to leverage the dissensus on reality to stake a claim to privileged metaphysical knowledge. Nor are Buddhists the only ones who meditate. Hindus also have a long history of meditation, and they have arrived at radically different conclusions about the significance of meditative states. So too with the Sāṃkhya philosophy, which most people now encounter in the context of haṭha yoga.

There is no "reality sensing faculty" and no way to know reality directly. I do not doubt that some people experience altered states in meditation, though I would say these largely arise in the context of sensory deprivation. Whatever those states are, after 13 years of intensively editing, translating, and studying Prajñāpāramitā texts, I no longer find it plausible that altered states in meditation reflect reality (or that "reality", in any European sense, was an important concept in Buddhism prior to European contact).

Obviously, if we cannot get information about reality directly, then we cannot know reality in any conventional sense. In other words, the big problem with metaphysics is epistemic.

To be more precise, there is no epistemic justification for any metaphysics. We don't know reality. And we cannot know reality. All traditional metaphysics arises from speculating about experience.

This is not a new observation. David Hume (1711–1776) came to similar conclusions. He famously noted that we never see a separate event that we could label "causation". What we call "causation" is merely a regular sequence of events. If event B is always seen to be preceded by event A, then we say that A caused B. And this generalises to metaphysics as a whole: Hume argued that all knowledge is either sensory experience or ideas about sensory experience. And experience is not reality.

I have encountered many people, mostly Buddhists, over the years who professed to believe the opposite of this, i.e. that experience is reality. The corollary is that we all have our own reality. This is solipsistic and egocentric. We can show why this is false by coming back to the main argument.

Half a century earlier, Isaac Newton (1643 – 1727) was able to formulate relatively simple expressions using calculus that made accurate predictions about the motions of objects.

F = ma and F = dp/dt, where p = mv (F = force, m = mass, a = acceleration, v = velocity, p = momentum)

And since, for any event we can witness on Earth, these expressions predict future motion with considerable accuracy and precision, they surely reflect some kind of knowledge about reality. We can confirm this by using the same equations to retrodict past events that have already been observed, so that we know the outcome of the process in advance. Newton's laws of motion are a very robust description of motion as it could be observed in the 18th century. In the course of my formal education, I personally demonstrated the efficacy of all these laws of motion.
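
To make concrete what "predicting the future" means here, a trivial worked example (the drop height and the moment of interest are arbitrary values chosen for illustration):

```python
# Predicting the fall of a dropped object from Newton's second law with
# constant gravitational acceleration. The 20 m height and 1.5 s query time
# are arbitrary example values.
g = 9.81         # m/s^2, acceleration due to gravity at Earth's surface
height_m = 20.0  # assumed initial height of the dropped object
t_s = 1.5        # time after release at which we want a prediction

distance_fallen_m = 0.5 * g * t_s**2  # from integrating a = dv/dt twice
speed_m_s = g * t_s                   # from v = g * t
print(f"After {t_s} s: fallen {distance_fallen_m:.2f} m, moving at {speed_m_s:.2f} m/s.")
# Anyone, anywhere, with any apparatus, gets the same numbers (within measurement error).
```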

At the very least, this is objective knowledge.

Thus, there is a tension between Hume's observation that we cannot know reality and Newton's observations that appear to describe reality very well.

Immanuel Kant (1724 – 1804) attempted to resolve this tension by redefining "metaphysics" as a critical inquiry into the conditions that make knowledge and experience possible, rather than as speculative knowledge of things beyond experience. In his view, metaphysical ideas like causation, space, and time are a priori forms unconsciously imposed on experience by our minds. In other words, "space" and "time" are not part of reality and don't exist independently of an experiencing mind. Similarly, causation is how we explain sequences of experiences, not an aspect of reality. This means metaphysics cannot reveal things as they are in themselves, but only the necessary structures of how we must experience them.

The problem with Kant is that his view still requires a metaphysics, since he tacitly assumes that all humans experience the world in exactly the same way. This is a speculative view about human nature and requires something both universal and beyond experience. And I don't think it works, because this is not something that Kant could know (even now). Worse, this is not my experience of other people. When we are exposed to a range of people, one of the most striking things we notice is that we do not all experience the world in the same way. Some aspects of experience are unequivocally not shared; they are subjective.

Kant might have got around this by emphasising that empiricism is more than just observing nature; it also crucially involves comparing notes about experience with other people. But he did not. It is precisely comparing notes about experience, noting the similarities and differences, that allows us to parse the commonalities in experience.

Kant doesn't really account for Newton's objective knowledge. Rather than positing that every human being sees the world in exactly the same way, it makes more sense to me to say that objective knowledge of this type is independent of the observer. From observations like Newton's laws of motion, we can infer that there is an objective world, which does have its own structures and systems. Still, this world can only be appreciated via mediated experience. At the very least, we may say that our ideas of space and time, for example, must be analogous to something objective, or we could not use them to predict the future in the way that we do.

Similar dissatisfaction with Kant drove the emergence of phenomenology. Husserl, for instance, wanted to suspend all assumptions about “reality itself” and focus instead on experience as it presents itself. Note that, despite the many successes of phenomenology, we all still rely on metaphysical frameworks to structure our understanding of the world. But this retreat from objectivity is also hamstrung by the fact that we can make valid inferences and predict the future.

However, despite the emergence of phenomenology, speculative metaphysics continues to dominate philosophy. Arguments about "reality" are ongoing and diversifying as time goes on.


Rescuing the Concept of Reality through Pragmatism

As far as I can see, some form of realism is inescapable. Realist explanations do the best job of predicting the future. Newton's laws of motion are still in daily use by scientists, engineers, and technologists precisely because they accurately predict the future. All the attempts to deprecate realism have ended in failure: specifically, failure to predict the future as realist explanations do.

Newton's laws of motion are objective facts. They are apparent to any observer. The laws have limits, and there are situations in which they fail to be accurate or precise enough. Still, within the well-known and accepted limits, the laws of motion apply.

And at the same time, we still only have experience to go on. Which means that realism has to have a pragmatic character. We can infer objective facts—such as Newton's laws of motion—and these allow us to predict the future. Being able to predict the future, and thus reduce the burden of uncertainty (without ever eliminating it), is nontrivial. Of course, such a pragmatic approach can never provide the kind of certainty that metaphysics promises, but then metaphysics has never delivered on such promises either.

I come back to a crucial point already made above: comparing notes. When we compare notes, especially in small groups, we immediately see that some aspects of our experience are shared and some are not. By painstaking observation and comparing notes, we can infer which things appear the same (or at least similar) to everyone and which are only apparent to ourselves.

Take the example of the acceleration due to gravity at the surface of the Earth. It doesn't matter who observes this or what method they use; it always turns out to be ~9.81 ± 0.15 m/s². It doesn't even matter which units we use. We could measure it in fathoms per century squared, as long as we know how to convert one unit to another.
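
Just to labour the point about units, here is the same quantity expressed in fathoms per century squared. The conversion factors are standard; the choice of units is, obviously, facetious.

```python
# Converting g from metres per second squared to fathoms per century squared.
g_si = 9.81                                     # m/s^2
metres_per_fathom = 1.8288                      # 1 fathom = 6 ft = 1.8288 m
seconds_per_century = 100 * 365.25 * 24 * 3600  # ~3.156e9 s

g_fathoms_per_century_sq = (g_si / metres_per_fathom) * seconds_per_century**2
print(f"g ≈ {g_fathoms_per_century_sq:.2e} fathoms/century^2")
# Same physical quantity, different numerals: ~5.3e19 fathoms per century squared.
```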

So the acceleration due to gravitation is more or less the same for everyone. And we can account for the variations with factors like latitude, elevation, and the density of the underlying bedrock. We can say, therefore, that gravitation is independent of the observer. Another way of saying this is that gravitation is objective. And having inferred this, we can imagine many ways to test it with a view to falsifying it. An inference that both makes accurate and precise predictions (and retrodictions) and also survives rigorous attempts at falsification can be considered an accurate indicator of reality.

But we don't just compare notes on individual phenomena. Gravitation is a particular phenomenon, and we can compare this to other forces and how they operate. For example, we might look at gravitation in the light of observations of other kinds of motion. If we had a theory of gravitation that predicted motion that disagreed with our theory of motion, then we would be at an impasse. But this is not the case. When we investigate nature, we find that our inferences support and reinforce each other.

Newton's law of gravitation is consistent with his laws of motion. And they are both consistent with other formal regularities in the universe. And we now have explanations that go beyond the limits of Newton but which show that Newton's laws are special cases under certain limits.

Note that values obtained in this way should always include some measure of uncertainty. We aim to measure things with a high degree of accuracy and precision, but there is always some measurement error. And our inferences often rely on assumptions that may introduce inaccuracy or imprecision.

The physical sciences are inherently pragmatic. We aim to arrive at valid inferences that allow us to predict the future to a desired level of accuracy and precision. Doing this allows us to compile useful and robust inferences into a system of inferred knowledge that is highly reliable. Newton's laws are paradigmatic of such knowledge.

And when this is the case, we don't need to know what cannot be known. What we can and do infer is enough to be getting on with. We can step back from speculating about unobservable metaphysics and focus on what can be observed.

Note that this is not the same as the instrumentalism that afflicts quantum mechanics. Quantum theory was developed in a milieu profoundly influenced by logical positivism. Bohr, Heisenberg, Born, and others collectively resisted any attempt at a realist interpretation of quantum mechanics, arguing that since we could not observe the nanoscale (in the 1920s and 1930s), we could not know it. They rejected Schrödinger's realist interpretation and dismissed both his and Einstein's critiques of non-realist approaches. Schrödinger's cat was intended as a refutation of the Copenhagen view, but ended with Copenhagen adopting the cat as its mascot. The modern von Neumann–Dirac formalism still resists realist interpretations. The majority of physicists simply accept that no realist interpretation of nanoscale physics is possible, even though we now have images of individual atoms, which allows us to affirm that atoms are objective phenomena.

While principled arguments exist for idealist or other non-realist views, such approaches have never allowed us to predict the future in the way that realism does. There is no idealist equivalent of Newton's laws of motion, let alone the systematic accumulation of useful knowledge that characterises science. To my knowledge, idealists have made no contribution to understanding the world.

Pragmatism allows for pluralism. Of course, most people do not want pluralism. They want answers. They want certainty. I sympathise. I want answers too. But after a lifetime of seeking answers, this is as close as I have come to a satisfactory solution.


Conclusion

"Reality" seems to be paradoxical. It is both intimately familiar and foundational to our worldviews and, at the same time, forever beyond our perception and understanding.

Thousands of years of argumentation over reality and the nature of reality have not resulted in a consensus; rather, they have produced a growing dissensus.

Very many people reify reality, i.e. treat the abstract concept of reality as a thing in itself.

Since reality is an abstract concept, the nature of reality is abstract.

No one has epistemic privilege with respect to reality. No one knows. And the people who claim to know—including religious gurus—are misleading us or have themselves been misled. Sincerity is no guarantor of accuracy or precision. Belief is a feeling about an idea.

The emergence of phenomenology was not the end of metaphysics.

We can rescue this ambiguous and paradoxical abstract concept via pragmatism.

It has been apparent for some centuries that we don't have to rely on speculative metaphysics. We can and do infer objective facts about the world.

There are patterns and regularities in experience that only make sense in a realist framework. At the very least, experience must be analogous to reality, or we'd get lost and bump into things all the time. The precision and accuracy with which we can describe patterns and regularities in experience, and use these to predict the future, also argue pragmatically for adopting realism.

The most obvious of these descriptions and predictions come from physics, but we get them from all kinds of sciences, including social sciences.

Pragmatic objectivity is something we can aspire to and, at our best, approach. It's not the same as certainty, but it is good enough to be getting on with.

However, my sense is that promises of certain knowledge will always be attractive to some people. And this leaves those people open to manipulation and economic exploitation.

~~Φ~~

31 October 2025

Philosophical Detritus II: Consciousness

In the previous essay in this series, I made some fairly banal comments about abstractions from something like a nominalist point of view. Nominalism is usually couched in metaphysical terms, but my approach is epistemic and heuristic. I don't say "abstractions don't exist", I say "abstractions are ideas". 

Ideas are ontologically subjective. We can have objective knowledge about ideas; it's just a different kind of knowledge from the kind we can have about objects. As John Searle puts it:

The ontological subjectivity of the domain [of consciousness] does not prevent us from having an epistemically objective science of that domain.

For example, ideas do not have properties such as location, extension, or orientation in space or time. Still, ideas are knowable. Counting systems and numbers are abstract and thus ontologically subjective, but 2 + 2 = 4 is an epistemically objective fact about numbers.

At the same time, we can treat ideas as metaphorically located, extended, and orientated. An apt example would be "the ideas in my head". Metaphorically, IDEAS ARE OBJECTS. With this mapping from objects to ideas in place, we can now make statements in which qualities of objects and verbs that apply to manipulating objects are applied to ideas. 

Unfortunately, this opens up the possibility of (1) treating the abstract as real (reification) and (2) treating the abstract as independent of concrete examples (hypostatisation). There is a third problem, which has no widely-accepted name, but which we can call animation, which is treating ideas as if they have their own agency (compare Freud's psychoanalytic theory which gives emotions their own agency). That is to say, without care we may conclude that ideas are real, independent, and autonomous agents.

The point of taking this approach to abstractions is pragmatic. Over the years I have participated in and witnessed many philosophical discussions. Not a few of these have concerned the nebulous abstraction consciousness. And on the vast majority of occasions the discussion is plagued by unacknowledged reification, hypostatisation, and animation. In other words, the abstraction "consciousness" is routinely treated as a real, independent, and autonomous agent. For example, these are some actual questions recently asked on Quora.

  • If we did not have a consciousness, would we have thought of the idea of consciousness?
  • If science was able to clone an exact copy of me, including my consciousness, would it be me?
  • How can we upload human consciousness via AI?
  • Is there any scientific evidence that we are all one consciousness?
  • Is it possible to transfer consciousness to another body like a clone or machine?
  • Is it scientifically possible to transfer self consciousness from one body to another?
  • What is the real nature of consciousness? Can it be engineered or exported by humans, or does it exist beyond us?
  • Is consciousness a fundamental property of the universe, or is it an emergent phenomenon of complex biological systems?
  • How likely is it that humans will eventually be able to fully explain consciousness?

These kinds of questions are asked again and again with minor differences in emphasis. They are also answered over and over again. It appears that having many available answers does not reduce the desire to ask the question. I think part of the problem is that answers are wildly inconsistent. Asked a yes/no question, Quora answers often say "yes", "no", and "maybe" with equal confidence and authority.

In this essay, I will attempt to apply my heuristic to cut through some of the bullshit and bring some order to one of the most confused topics in philosophy: consciousness. This is not as difficult as it sounds. 


Meaning is Use

Observing how people use this term "consciousness", my sense is that the vast majority of people use "consciousness" as a synonym for "soul". That is to say, they treat "consciousness" as a nonphysical entity that is both independent of their physical body (including their brain) and, at the same time, integral to their identity and/or personality. For the majority, it seems, consciousness is a kind of secularised soul, stripped of the supernatural significance given to it by Christians, but still a real, independent, autonomous agent. 

Being independent of the body means that "consciousness" is able to survive death. It is hypostatised consciousness, for example, that allows for techno-utopian ideas such as "uploading consciousness", "transferring consciousness" from one body to another (including the wildly incoherent concept of the "brain transplant"), as well as various kinds of disembodied consciousness (including post-mortem consciousness).

In these examples, "consciousness" is something that is not causally tied to a body and, as such, it can exist without a body or be moved between bodies, including non-human bodies. Like the soul, the uploaded consciousness is disembodied and effectively immortal (which explains some of the ongoing appeal of the fallacious idea). "Uploading consciousness" is an analogue of the Christian narrative of resurrection or the Buddhist narrative of rebirth. It's an afterlife theory. In my book Karma and Rebirth Reconsidered, I argued that all religious afterlife theories are incoherent because they all contradict each other. 

A particular mistake that almost always goes with consciousness qua soul is vitalism. This is the idea that what distinguishes living matter from non-living matter is some kind of animating principle or élan vital. In antiquity, this principle was almost always associated with breath (see, e.g., my essay on spirit as breath). In Judeo-Christian mythology, Yahweh breathes life into Adam, animating him. The word animate derives from a root meaning "breathe". Similarly, psykhē (Greek) and spīritus (Latin) originally meant "breath".

As anyone who has experienced the corpse of a loved one knows, it's intuitive to think that something animated them and that their corpse is the body minus that animating principle. For example, I vividly recall seeing my father's corpse in 1990 and having this reaction.

Actually, what's missing in the experience of seeing the corpse of a loved one is our emotional response to them. There is a neurological condition known as Capgras Syndrome, in which localised brain damage can leave a person able to recognise familiar faces, but unable to experience emotional responses to them. Sufferers frequently arrive at the bizarre conclusion that the people they know have been replaced by doppelgangers.

My father's corpse was like an exact replica of him that wasn't moving or responding. All the personality was gone. Like many people, my first intuition was that my father's life and personality had gone somewhere. Which is to say that they still existed apart from the body. With a lot more life experience and learning under my belt, I can now see that, while the difference between living and dead is stark, it's our own lack of emotional response to corpses that we are trying to explain.

As a teenager, I remember going to the funeral of my best friend's father, who died quite young. My friend and his nuclear family were all disconcertingly smiling and happy. They were not overtly religious in the conventional sense of being members of a religious community. Nevertheless, for them, the deceased man was still a very strong presence. They felt him still there with them. They were not sad, at the time, because in their minds the father was not wholly gone or inaccessible.

I get the attraction to and plausibility of vitalism. I just don't believe it. Vitalism was discredited when chemists showed, over the course of the 19th century, that organic compounds could be synthesised from inorganic ingredients. We don't have to add any "vital principle" or "life force" to account for animate matter.

Despite being secularised and stripped of significance, the idea of a consciousness qua autonomous entity that survives death still has a religious flavour. Witness the people who assume that "consciousness" is an entity and then go around seeking evidence that supports this view. 

By contrast, a rational approach would begin with concrete evidence. If we were to start over, and re-examine the evidence, no one would propose the concept of a soul.

The abstract concept “consciousness” has become a dead end.

  • All statements that treat “consciousness” as a concrete or real thing or entity are false.
  • All statements that treat “consciousness” as a separate or disembodied thing are false.
  • All statements that treat “consciousness” as an autonomous agent are false.

And from what I can see, very little of what remains is useful. Some metaphorical uses of "consciousness" are common:

  • A stream of consciousness.
  • The fabric of consciousness.
  • A field of awareness.
  • A thread of awareness.
  • The tapestry of the mind.
  • A vessel of thought.
  • The machinery of the mind.
  • A lens of perception.

However, all of these uses are prone to hypostatisation, reification, and animation. 


Intentionality

One way around the mistakes people make is to acknowledge Dan Dennett's observation that consciousness is (almost) always intentional. We can say that consciousness (almost) always has an object or condition. Heuristically, we can say that consciousness is always consciousness of something. If we always follow "consciousness" with "of _____" and fill in the blank, we are much less likely to go wrong. For example:

  • Concrete: “I am conscious of feeling cold.” ✓
  • Abstract: “There is consciousness of feeling cold.” ✓
  • Reified:
    • “There is a consciousness.” X
    • “My consciousness...” X
    • “Consciousness is…” X
    • “Consciousness does…” X

Unfortunately, even true abstract statements about experience are likely to be misinterpreted in ways that falsify them.

The exception to conscious states being intentional is the state of "contentless awareness" sometimes experienced in sleep or meditation. See for example the discussion: "Can you be aware of nothing?" in The Conversation.

For Buddhists, note that I now distinguish "contentless awareness" from "cessation". Following cessation there is no awareness. The state of śūnyatā (also an abstract noun) is not a conscious state. It is an unconscious state, though seemingly distinct from sleep or anaesthesia.

Contentless awareness probably corresponds to the higher āyatana stages, for example "the stage of nothingness" (ākiñcaññāyatana) or "the stage of neither awareness nor unawareness" (nevasaññānāsaññāyatana). Prajñāpāramitā texts make it clear that having any kind of experience or memories of experience is inconsistent with śūnyatā.


To sum up

"Consciousness" is an abstract concept. An idea. Ideas are not real, independent, and autonomous agents. Ideas are ideas. Ideas are subjective; though we can have objective knowledge about them.

Talking about consciousness as a soul is a dead loss. But, then, there is very little talk about consciousness that is not a dead loss. And this includes most of "philosophy". 

Consciousness as an abstract concept is intentional. This can be reflected in statements that include what we are conscious of.

~~Φ~~
