17 September 2021

Hostility To Change In Buddhist Studies (And Elsewhere).

There is a story in Adam Becker's book What Is Real? part of which he admits might be apocryphal, but which nevertheless accurately conveys the social dynamics of physics in the 1950s. It is true that in 1952, Max Dresden gave a lecture on the work of David Bohm to an audience of physics luminaries at Princeton's Institute for Advanced Study. Dresden himself would have been happy to ignore Bohm, but his students pestered him to read Bohm's paper outlining an alternative approach to quantum mechanics. Bohm's idea is that the quantum world is literally particles and waves combined, with the particle carrying the physical properties and the wave guiding the motion of the particle (the idea is also known as a pilot-wave theory). The interesting thing about this, as Becker relates, is that "Bohm's theory was mathematically equivalent to 'normal' quantum physics" (90).

What Bohm showed was that the Schrödinger equation was consistent with at least two different and mutually exclusive descriptions of physical reality. But there can be only one reality. Other descriptions of physical realities consistent with the Schrödinger equation soon followed, but Bohm's was the first alternative to emerge. The Copenhagen supremacy was dead at that point. But it has not been replaced in university textbooks because, despite many alternative proposals, none of them is known to be the right one. In the absence of a good model, students are taught the bad one that is most familiar.

Bohm had previously done highly regarded work at Princeton. By 1952, however, Bohm was out of the mainstream and living in exile in Brazil because of problems with the US State Department arising from his left-wing politics (it was the McCarthy era). Dresden finished his presentation (including the maths) and the floor was opened to questions. He was expecting some pushback from the audience, but he was unprepared for the wave of vitriol that washed over him. As Becker recounts it:

"One person called Bohm a 'public nuisance'. Another called him a traitor, still another said he was a Trotskyite. As for Bohm's ideas, they were dismissed as mere 'juvenile deviationism', and several people implied that Dresden himself was at fault as a physicist to have taken Bohm seriously. Finally, Robert Oppenheimer, the director of the Institute, spoke up... "if we cannot disprove Bohm, then we must agree to ignore him." (90, my emphasis)

In his 1980 book, Wholeness and the Implicate Order, Bohm suggested that "the scientific way of thinking is stereotypically stubborn" (3). Another physicist, Max Planck, lent credence to this supposition when, frustrated with the lack of progress in quantum theory, he quipped that "science proceeds one funeral at a time". This turns out to be a paraphrase of something more subtle that he wrote in his 1949 Scientific Autobiography:

"A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it. . . . An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth." (1950: 33, 97)

Of course, to be fair to physics, Planck's ideas were widely accepted by the time he wrote this and many of them bear his name, e.g. Planck's constant. Still, Planck and Bohm were not alone in thinking this way. A decade later, in The Structure of Scientific Revolutions (1962), Thomas Kuhn wrote:

"Almost always the men who achieve these fundamental inventions of a new paradigm have been either very young or very new to the field whose paradigm they change… " (90).

I take the message here to be that only people not invested in the status quo are flexible enough to change it. And this seems intuitively true not only in science, but in every aspect of life. The use of "men" to mean "people" is a paradigm that has changed in my lifetime, because a generation of women forced us to rethink gender. And rightly so. Science is no longer dominated by men simply by virtue of their gender roles in society. Women make excellent scientists and scholars.

Speaking of women in science, Professor Katalin Karikó was recently reported in a UK newspaper as saying:

“If so many people who are in a certain field would come together in a room and forget their names, their egos, their titles, and just think, they would come up with so many solutions for so many things, but all these titles and whatever get in the way.” (emphasis added)

Karikó, now a senior vice-president for RNA protein replacement therapies at BioNTech in Germany, "endured decades of scepticism over her work and was demoted and finally kicked out of her lab while developing the technology that made the Pfizer and Moderna vaccines possible" (my emphasis).

Interestingly, Karikó says that the adversarial competitiveness disappeared when she moved from academia to industry, where all that counts is an efficacious product. Still, if academic science proceeds one funeral at a time, industrial science makes progress only on what is profitable for shareholders.

I cite these examples to show that intellectual discourse can be, and frequently is, reluctant to change, and that even at the heart of academic physics, politics plays a role. There is a general resistance to new ideas, whoever proposes them and however they do it, even in the hardest of "hard sciences". However, and this is especially true in Buddhist Studies, this is not healthy scepticism so much as dogmatism and/or egotism. When our title, job, role in society, and our very identity are bound up with a particular story, we don't want to know that the story is inaccurate. This is hardly rocket science. Economists call this the sunk cost fallacy: it is why we stay to the end of a bad movie because the tickets were expensive and we want to "get our money's worth". It is sometimes known as throwing good money after bad.

But it is not just resistance to the innovative. There is another, darker aspect to Buddhist Studies culture. Quite a number of Buddhist Studies academics are mean. I have some public examples to discuss, but I also have many comments sent to me in private and in confidence that confirm this. Many people tell me I'm better off out of it.


Meanness is endemic in Buddhist Studies. And it mainly seems to involve men being egotistical and treating Buddhist Studies as a zero-sum game. Charles Prebish observed that when he was an early career academic:

“I was convinced that Buddhist Studies, as it was developing in North America, was misguided. In the first place, most of the role models for this blooming discipline: Edward Conze, Leon Hurvitz, Alex Wayman, and a few others, were amongst the meanest individuals in academe [sic]... they seemed to take real delight in humiliating students rather than encouraging them.” (Prebish 2019, cited in Attwood 2020).

Despite a few difficult encounters over the years, I took this to be relatively contained in the past. But in 2020, two women in Buddhist Studies posted a video chat to YouTube titled It's Not Rigor, It's Hazing. In the discussion they related how different male colleagues had deliberately humiliated them at separate public events. I found the link via Twitter, and it is interesting to see that several other women had similar experiences. For example, Stephanie Balkwill tweeted: "What got me was that every[one] else saw it that way at the time and did nothing, continuing to work with the person. I have subsequently learned that this behavior is habitual by him and evidently everybody knows it."

Note, this is not online trolling. This is real life, in person, in public, in-your-face trolling. As I say, I have many examples of this that I can't use without breaking confidences. Watching this video made me rethink some other encounters I'd had.

It's notable that neither woman in the video named names. Nor does anyone name names in public. Even though it's an open secret and "everybody knows it", nobody talks about it in the open. I presume this is because the bullies are still in their academic posts, still on hiring and promotion committees, still the editors of journals. If you want a career in academia, you can't join the #metoo movement. Power is the ability to silence your victims. I'm not saying Dan Lusthaus is Harvey Weinstein, but he does bully with impunity.

Anecdotally, I hear that a lot of early career scholars are abandoning traditional Buddhist Studies centred on philology and are being attracted to other disciplines. Women especially seem to be branching out into Women's Studies, Gender Studies, and Queer Studies, applying the ideas and practices of these disciplines to the study of Buddhism. They often study contemporary Buddhism, thereby avoiding any confrontation with the traditional angry male philologist. Being based in another field entirely seemingly provides a more conducive and supportive environment for doing research.

I want to make it clear that within Buddhist Studies my experience has been mixed. I am grateful to a number of generous peers and mentors who have enabled me to publish around 20 articles on various topics in various scholarly journals. There are many good people in this field; people who are happy to hear from a serious outsider asking for advice or for a copy of an article. I try to thank them in notes, but I doubt I've conveyed just how much help and assistance and encouragement I've received over the years.

Nonetheless, in early Sept 2021, Dan Lusthaus was busy trying to publicly humiliate me because we disagreed over an interpretation of some facts regarding mantra and dhāraṇī. Here is his last comment on this issue:

"Yes, we've all come to understand that your supporting evidence is your own theories, not the actual texts and what they say. And when the texts indicate something other than what fits your theory, you misread them."

By the way, if this is true, what does it say about the many Buddhist Studies academics who have read my articles and recommended them for publication and published them? When you pick up shit to fling at someone else, you end up with shit on your hands, Dan.

Although Lusthaus may well sincerely believe his mean-spirited remark, it is clearly false. My friends in academia not only assure me of this, but they also say that this bad behaviour is typical of Lusthaus (sound familiar?). I am playing the game of scholarship to the best of my ability and I have published ten articles on the Heart Sutra in scholarly journals offering expert peer review. Each article has persuaded an editor and at least one reviewer (supposedly an expert in the field) that the article should be read by other academics and considered on its merits. I have no leverage over these people; they have no obligation to publish my work if it is substandard, and they are not shy about saying so, especially in anonymous reviews. And of course, many anonymous reviews are extremely mean.

It's hard to say what Lusthaus gets from being mean to me. Lusthaus has tried to bully me several times in the past. I've encountered him a few times over 25 years, mostly in the annals of the listserv Buddha-L, which he now runs. I've seen him do this to numerous other people. The fact that Lusthaus is a bully is widely known in the field. Because of this, one friend in academia urged me, privately, to "not take him seriously". In my experience, ignoring bullies does not stop the bullying. And having someone go out of their way to try to publicly humiliate you is tiresome and counterproductive, even if everyone knows he's a bully.

I can sort of understand some academics circling the wagons to exclude me—a self-taught amateur—but the same people have been doing this to Jan Nattier—a consummate professional scholar and educator—for thirty years. Nattier's 1992 proposal that the Heart Sutra was composed in Chinese is a new paradigm and casts doubt on much that has been said about this and other Prajñāpāramitā texts. Moreover the close reading of the text that follows in Huifeng (2014) and in my many articles, shows that Nattier was exactly right and that we really do need a new paradigm for understanding the Heart Sutra and for Prajñāpāramitā.

Lusthaus published some comments in 2003 that he asserted undermined Nattier's thesis, but I showed that Lusthaus was merely deducing his axioms. This is the process by which a series of logical deductions eventually reproduces your starting assumptions as valid conclusions. When we assume that the Heart Sutra was composed in Sanskrit, i.e. when this proposition is treated as axiomatic, and then apply deductive reasoning to the early Chinese commentaries, after a few deductive steps we can conclude that the Heart Sutra was composed in Sanskrit, and it looks as though the conclusion is inferred only from reading the commentaries. In fact, the conclusion doesn't come from the commentaries; it comes from the axiom itself. All deductive reasoning is subject to this limitation. I refuted Lusthaus's assertions in print in my Pacific World article, "The History of the Heart Sutra as a Palimpsest", showing that his reading of the text and his logic were flawed. So maybe he's still mad about this. I've known other male Buddhist Studies academics to hold a grudge in this same way.

I certainly have many limitations, as a scholar and as a person. I'm keenly aware of this. But I carefully try to work within my limits and one or two friendly academics read every article before I submit them. Every statement I've made is the result of a careful analysis, checked and rechecked by me and several other knowledgeable people. It's backed by textual evidence and by previous scholarship (where possible). Not only is everything I have said in my articles testable, but it's clear what kind of evidence would refute it. No one has presented that kind of evidence yet. As soon as they do, I will certainly change my tune. Unfortunately, arguing can be trumped by shunning... "if we cannot disprove Jayarava, then we must agree to ignore him."

As a scholar with no formal "training" (see the video mentioned above for comments on this term) there is nothing special or clever about what I do. I see myself as feasting on the ample low-hanging fruit that others have ignored. Mostly, I'm just stating the obvious in ridiculous amounts of detail. One of my best articles (Epithets 2017) was a more organised and complete version of one of Jan Nattier's footnotes, which explores some ideas proposed by Yamabe Nobuyoshi (1992 fn 54a). I checked with Nattier and Yamabe before publishing this refinement of their idea. And I'm happy to be doing this scut work. Honestly, I'm honoured to be tidying up after Jan Nattier; she is an inspiration to me. I never set out to change the world. I only set out to read the Heart Sutra. It's not my fault if the existing scholarship has missed the blindingly obvious. I'm just the messenger. I was as surprised as everyone else that no one had seen what I see. Now I can't unsee it and I have been attempting to communicate it. Ten articles later, there is still low-hanging fruit that no one can see, because they refuse to acknowledge that the fruit even exists. Ironically, the deliberate withholding of attention is central to understanding Prajñāpāramitā (my interpretation of Huifeng 2014).

This meanness and use of public humiliation is not new to me. Indeed this has been a feature of my life. People use coercion and manipulation in attempts to control or negate other people all the time. It's a kind of sickness for a social primate, but in my experience (across cultures) this is the norm in life. Buddhism does not escape it (as we have learned to our great cost in the West) and Buddhist studies is mired in it. Bullying and shunning are commonplace.

Studying the Heart Sutra

I never even wanted to study the Heart Sutra. I'm still not that interested in it. But I had the opportunity to audit Sanskrit classes at Cambridge University with Vincenzo Vergiani and Eivind Kahrs (who was appointed to K. R. Norman's post when he retired). This was before Cambridge University finally killed off Indology and ancient Indian languages. I took up Sanskrit in 2012 because they no longer offered Pāli, and everyone told me (rightly) that knowing Sanskrit would improve my Pāli. As well as many textbook passages, I read stories from the Hitopadeśa, most of the Sāṃkhyakārikā, verses from the Mahābhārata, and passages from the Vākyapadīya. I just wanted to read a Sanskrit Buddhist text, but I fully intended to keep my focus on Pāli.

What drew me into studying the Heart Sutra were the mistakes I found in the first sentence of Conze's Sanskrit text: a transitive verb treated as intransitive, a noun in the wrong case, and a misplaced colon. The addition or omission of an anusvāra (e.g. धा vs धां) is the most common scribal error in these manuscripts. At least two of the extended-text manuscripts have the noun in the correct case (making it the object of the transitive verb). This transforms a difficult nonsense sentence into a relatively straightforward three-clause sentence. Lacking confidence back then, it took me 10,000 words to describe this problem and propose a solution (Attwood 2015). I covered all the bases, with help from Jonathan Silk and Jan Nattier on the Tibetan texts.

This initial insight was not dependent on Chinese origins or Nattier's work in general. It was all about Sanskrit grammar. No one else had seen this error in a text first published by Conze in 1946, revised in 1948 and 1967, and translated numerous times. It's 2021 now, and it is long overdue for academia to wake up and think about this and my other grammatical points (Attwood 2018a, 2020a). Whether they agree with me over Chinese origins or not, these are basic questions of Sanskrit grammar.

I naively thought that if I published this small discovery (which I did in 2015), academics and Buddhists alike would be like, "Oh yeah, now that you point it out...". I thought perhaps some might go as far as citing my discovery. However, in the intervening six years, not one single academic has discussed my article, let alone adopted my suggested correction. The whole article was recently summarily dismissed in a footnote by the senior Japanese scholar Saitō Akira (2021), in favour of the defective reading that makes no sense in Sanskrit.

Buddhist Studies academics have long preferred the defective version of the Heart Sutra and loudly praised Conze for his "meticulous scholarship" in producing a defective edition, a lousy translation, and a harebrained mystical interpretation. This preference for familiar confusion over unfamiliar clarity is inconsistent with objectivity, the primary defining characteristic of scholarship. Objectivity, as Carl R. Trueman has said, is not neutral. Objectivity shows that all answers are not equal and some are wrong. Reality is a particular way, at least on scales relevant to the human sensorium, and not any other way. Objectivity is as much a part of philology, history, and philosophy as it is of science.

How can I make sense of this refusal to even consider the possibility of change?

Belief Is An Emotion About An Idea.

It is well known that people often resist changing their beliefs when directly challenged, especially when these beliefs are central to their identity. In some cases, exposure to counterfactual evidence may even increase a person’s confidence that his or her cherished beliefs are true. Reed Berkowitz, discussing the similarities between QAnon and live action role-playing games, cites an article by Kaplan et al (2016).*

"Strongly held beliefs are literally a part of us. As such, attacks on core beliefs are treated very much as attacks on us, even as strongly as a physical attack." Berkowitz (2020)
* For a popular account of Kaplan et al's research see Resnick (2017) "A new brain study sheds light on why it can be so hard to change someone's political beliefs".

Kaplan et al (2016) note that, presented with "counterevidence" (i.e. counterfactual evidence), "people experience negative emotions borne of conflict between the perceived importance of their existing beliefs and the uncertainty created by the new information." New information can create cognitive dissonance.


This suggests that by presenting an alternative reading of the Heart Sutra, Nattier generated negative emotions amongst those committed to a traditional reading, conservative religieux and scholars alike. The religieux/scholar distinction is thin or absent in Buddhist Studies, and in traditionally Buddhist countries Buddhist Studies is completely dominated by religieux. Apparently no one sees the conflict of interest in this.

And it's not just that a Chinese Heart Sutra asks these men to change their minds. It goes a bit deeper than this. Because in confirming that the Heart Sutra is a Chinese digest text and the Sanskrit text a poor translation passed off as Indian, we are asking them to publicly admit they were wrong all this time. And this is a major challenge to their egos. Some people feel threatened by counterfactuals.

With respect to the Heart Sutra, change is especially hard, heterodoxy is viewed especially negatively, and new information is treated with heightened suspicion amongst the religieux in academia, simply because they are religieux in academia. The two conservatisms multiply. New information, even something as simple as a minor grammar correction, creates strong negative emotions in religieux (including academic religieux), because it conflicts with long-held, cherished beliefs about the Heart Sutra, and also because it conflicts with the very identity of the religieux. Two strong emotional reactions combine into a perfect storm of denial and aggression. And this is expressed as intellectual incredulity and emotional hostility.

Some years ago, a chance meeting led me to look into the work of Hasok Chang, Professor of the History and Philosophy of Science at Cambridge University. I was very struck by his inaugural lecture, for example, and his book Is Water H2O?, which covers many of the same themes in more detail. One of Chang's main themes is that pluralism at certain stages of knowledge-seeking is an advantage. According to Chang's liberal view of science, having competing explanations strengthens science. His striking example is that the much-maligned idea of phlogiston actually had more going for it than Lavoisier's idea based on the transfer of oxygen to and from metals. Thanks to Lavoisier's relentless self-promotion, we have to say that fluorine "oxidises" hydrogen when they react to form hydrogen fluoride, even though the reaction does not involve oxygen at all. A better generalisation is that electrons flow from hydrogen to fluorine. And phlogiston, being a hypothetical fluid, would have provided a much better model for this process. But Lavoisier was more popular and persuasive than Priestley. Phlogiston was the Betamax of chemistry.


In my view Nattier (1992) is the single most important article ever published on the Heart Sutra. I still pore over it all the time. It's a tour de force of modern, secular scholarship. A paradigm-slaying piece of writing. I find it exhilarating. And yet it has largely been ignored or, in Japan, subjected to disingenuous theological refutations and apologetics of the type: "The Heart Sutra cannot be Chinese because we believe it is Indian." Nattier opened the door to a completely new reading of the Heart Sutra as concerned with epistemology rather than metaphysics. Not my suggestion, by the way, but Huifeng's (aka Matthew Orsborn):

“It is our view that this shifts emphasis from an ontological negation of classical lists, i.e. ‘there is no X’, to an epistemological stance. That is, when the bodhisattva is ‘in emptiness’, i.e. the contemplative meditation on the emptiness of phenomena, he is ‘engaged in the non-apprehension’ of these phenomena” (Huifeng 2014: 103).

We expect religieux to be sensitive to heterodoxy and to respond negatively to it, even to react violently. The sunk cost fallacy, following a huge investment of time and resources in promoting orthodoxy, virtually ensures this. Issues of belonging, identity, and status within a community are keenly felt by religieux and academics alike, and for similar reasons. In Buddhist Studies a substantial proportion of the community are both academics and religieux. Even those academics who are not overtly religious tend to be in love with Buddhism (and thus cannot see it objectively). If a scholar's first name is "Bhikkhu", then they are overwhelmingly likely to be a Theravāda apologist, though one of them got quite mad at me for saying so to his face a few years ago. Most academics are too canny to advertise their religious affiliations via the use of a religious name in an academic context. It would be interesting to see some objective measure of how many Buddhist Studies academics think of themselves as "Buddhist". A good research project for someone studying contemporary Buddhist Studies.

Meanness is, to some extent, just something we meet in everyday life and have to deal with. Including in our workplace, though usually work culture norms do put a lid on it: it's pretty unusual to see public humiliation these days as it's considered harassment. People are mean for all kinds of reasons, and these may not be obvious from the outside. Often it's a cry for help. We can offer people who behave meanly compassion on a good day, but being subject to their abuse does make it hard to think clearly or respond creatively in the moment. 

Still, while we can explain the phenomenon by delving into the psychology of meeting counterfactual evidence and the negative emotional responses it generates, the bottom line is that trying to humiliate colleagues is not acceptable behaviour. It has likely aborted many promising careers in academia. My other idol, Sue Hamilton, for example, left academia and never looked back. Anecdote suggests many Buddhist Studies academics are decamping for greener pastures that offer a more collegial working environment and a coherent body of theory to work with.

Unchecked meanness makes for an unproductive environment. I'm sure it has contributed to driving people away from studying Prajñāpāramitā: a sub-field that everyone agrees is of central importance to understanding Buddhism, but in which almost no one works.

The academic field of Buddhist Studies needs to address this issue of senior academics publicly humiliating students and junior colleagues. But the problem that Buddhist Studies has no core set of values or theory remains. It's a field, but without a discipline. An Order without a rule. Senior academics have power but there are not enough checks and balances. And this is why abusive behaviour got established and continues to be a problem. And why the people who want to change it are fighting an uphill battle.

Quite honestly I'm tired of talking about the Heart Sutra. I'm just repeating myself now. I have a few loose ends to tie up and then I'm going to do something else. And chances are that my research will go on being suppressed by academia despite meeting all the criteria for serious consideration. Perhaps it is just too radical. Or perhaps I have to hope I outlive Lusthaus and co? Trouble is I'm fifty-five (old for a heretic) and not in great health, so that strategy lacks appeal. 

I have either made a good argument in my ten peer-reviewed articles on the Heart Sutra or I have not. I don't expect a Nobel Prize or an honorary doctorate (though I'd accept the latter). Rather, if I have then I deserve to be taken seriously, and if I have not then I have earned the right to see a proper refutation in print (not just a short footnote) and to have a right of reply.

However, before this basic level of respect is afforded to me, I'd like to see Jan Nattier get her dues. Nattier deserves the lion's share of the credit. She is my ādiguru and my work is almost entirely derived from hers (one or two minor points about Sanskrit grammar notwithstanding). I also think that Huifeng/Matthew Orsborn's contribution has been massively underappreciated. Give them the credit they are due, and what is due to me as a systematiser of their work will fall into place. I'm relatively unimportant in this story.

If you have not already, then please read Nattier (1992) and Huifeng (2014). Read them properly, slowly; read all of the notes, think about the method, follow the evidence. If you have a better explanation for the discrepancies between the passages copied from Pañcaviṃśatisāhasrikā and the versions found in the Hṛdaya then, by all means, publish it. Prove us wrong, if you can.



Becker, Adam. (2018). What Is Real? John Murray.

Berkowitz, Reed. (2020). "A Game Designer’s Analysis Of QAnon: Playing with reality". Medium.com.

Chang, Hasok. (2010). "The Hidden History of Phlogiston: How Philosophical Failure Can Generate Historiographical Refinement." HYLE – International Journal for Philosophy of Chemistry, 16 (2), 47-79.

——. (2012). Is Water H2O? Evidence, Realism, and Pluralism. Springer.

Kaplan, J., Gimbel, S. & Harris, S. (2016). "Neural correlates of maintaining one’s political beliefs in the face of counterevidence." Nature: Scientific Reports 6, 39589. https://doi.org/10.1038/srep39589

Kuhn, Thomas S. (1962). The Structure of Scientific Revolutions. University of Chicago Press.

Planck, Max. (1949). Scientific Autobiography and Other Papers. Williams & Norgate.

Resnick, Brian. (2017). "A new brain study sheds light on why it can be so hard to change someone's political beliefs: Why we react to inconvenient truths as if they were personal insults." Vox. Updated Jan 23, 2017, 8:37am EST. https://www.vox.com/science-and-health/2016/12/28/14088992/brain-study-change-minds

15 August 2021

The Dogma: On Not Taking Nāgārjuna Seriously (Seriously!)

I wrote this for my Facebook group on Heart Sutra research. As I haven't posted anything here for a while I thought I'd repost it. 

In response to a post about the word tathatā, two people responded by rehearsing aspects of Madhyamaka dogma. I'm just going to call this the Dogma and people who promote the Dogma as Dogmatics. When people cite the Dogma they present it as a transcendent truth that brooks no contradiction, though it is also frequently (and unironically) presented as a series of contradictions.

I want to address anyone who takes the Dogma seriously by explaining why I don't take it or them seriously.

The Dogma is a body of religious rhetoric that emerged at a time when sectarian Buddhism was maturing. Mahāyāna Buddhism was still nascent, existing as an uncoordinated series of reforms centred on the problem of the absent Buddha. Gautama sought his own liberation and left this world, leaving us to find our own way out. Later Buddhists found this narrative intolerable (even selfish), so they changed it in various ways, some of which are (in essence) what we now call Mahāyāna.

The foundation of the Dogma is principally associated with Nāgārjuna, who is believed to have been a real person living near the beginning of the first millennium of the Common Era. But the Dogma has been augmented numerous times by commentators (right up to the present). Most scholars now question Nāgārjuna's sectarian orientation. For example, it is apparent that in composing the Dogma, Nāgārjuna was not re-interpreting Prajñāpāramitā: when he cites scripture, he cites Sanskrit translations of early Buddhist texts. Some have questioned whether he would have identified as Mahāyāna at all. But in proposing the Dogma, Nāgārjuna was making a clear break with early Buddhist rhetoric.

The Dogma makes a number of erroneous assumptions that lead it to dubious conclusions: 1. that dependent arising is a theory of everything; 2. that experience is reality; 3. that existence must be permanent; 4. that the experience of emptiness is reality. Let's take each of these in turn.

1. Dependent Arising

As hinted at above, dependent arising was never intended to be a theory of everything. Early Buddhists set out to explain how experience arises. Simple observation shows us that the dynamics of objects are not the same as the dynamics of experience. This is implicit in early Buddhist texts.

Somewhere along the line Buddhists began to apply dependent arising to everything. When the only tool you have is a hammer, everything starts to look like a nail.

I can easily imagine things that are physically impossible, that defy the laws of physics. I can imagine flying, for example. Not possible in reality, possible in imagination. Because the contents of our minds don't behave like real things. They are like illusions.

If we make dependent arising a theory of everything then contradictions ensue. We end up saying that things don't really exist because they are dependent on other things. But think about it. Why would anyone say something like this? What is it about dependency that makes an object unreal? Is a rock any less solid because it was formed by a process? No.

2. Experience versus Reality

Early Buddhists appear to have understood that sensory experience was different from reality. The Dogma, by contrast, refuses to make this distinction. In the Dogma, experience is a lesser form of reality. But experience is not reality. Experience is experience. Experience is what happens when a sentient subject encounters an object. Experience is subjective, that is to say that its mode of existence is subjective.

A good way of talking about it is Thomas Metzinger's use of the term "virtual". We don't have a self, we have a virtual self model, generated by the brain. As a virtual rather than a real thing, our sense of self has qualities and characteristics associated with subjectivity. For example, how we see ourselves is affected by mood. Our virtual model can be disrupted by drugs which do not change "reality", they change the way the brain generates our virtual self model.

3. Existence

The Dogma has a perverse definition of "real". I understand that some people may want to undermine the Abhidharma approach by criticising the nature of categories of experience. The fact that such categories rely on the concept of svabhāva qua distinctive characteristic smacks of essentialism.

But there, svabhāva is an epistemic term: it describes how experience appears to us, not the thing in itself. Moreover, when we categorise dharmas, we are mainly concerned with thoughts, feelings, and emotions.

It is useful, for example, to distinguish the ethical character of a thought. Was it motivated by greed? Or by generosity? And by "useful" here I mean soteriological. This distinction is important for anyone wanting to live an ethical life, and if you believe in liberation from rebirth in saṃsāra then it is an essential concept to understand.

In arguing against perceived (but in fact nonexistent) essentialism in Abhidharma, the Dogma changes the meaning of svabhāva so that it definitely is essentialist. Now it means the sole condition for the existence of an object. And it is trivial to show that this entity cannot be real, since nothing can be the sole condition for its own existence. Everything is more complex than that.

So how does this trivialism take on such gravitas in the Dogma? It's partly because people who adopt the Dogma attribute their own definition of svabhāva to other people (who almost certainly never held that view and definitely do not now). Having created the strawman, they triumphantly burn it down. But so what? No one believes it anyway.

4. Emptiness is reality.

The final point is that, in the Dogma, it is assumed that the absence of sensory experience is reality. And this is the heart of the matter. It is this assumption that leads to all of the others.

We all know, either first or second hand, that the cessation of sense experience without the loss of awareness is a profound and potentially life-changing experience. And it's fairly obvious that the techniques to bring experience to a halt were in widespread use in the Ganges valley by the time of the second urbanisation, from about the 6th Century BCE onwards. The new cities attracted Brahmin immigration from the West, too, which is another story.

We should not be too harsh on this point. The assertion--that lack of experience is reality--is one that is common in Indian religious thought. The cessation of sense experience was taken to be reality by Brahmins, Jaina, and Sāṃkhyakas as well as Bauddhikas.

But here's the thing. The cessation of experience is simply the cessation of experience, it is not reality. And this can be seen in how different religions interpret it: as Brahman, ātman, puruṣa, jīva, pudgala, advaita, śūnyatā, etc.


Perhaps the problem is the preternatural clarity of mind that accompanies cessation: the purity of a mind without content is hyperreal. The very vividness of the state makes it seem more real than reality. Certainly it can be more attractive than reality, because in that state all one's desires and discontents cease along with other kinds of thought.

Still, the conclusion that reality is the absence of sense experience is fundamental to the Dogma. And it allows Dogmatics a peculiar form of rhetoric which I sum up this way: everything the Dogmatic says is true, while everything the non-Dogmatic says is an illusion, a conceptual proliferation.

I've dealt with this rhetoric for more than 25 years now. At first it worked on me as expected. When I tried to ask certain types of questions that seemed natural to me, a Dogmatic would simply shut down the conversation by pointing out that my questions were based on conventional reality or illusions. To the Dogmatic, the truth is the Dogma, and anything else is simply and self-evidently false.

The choice with Dogmatics is either to accept the Dogma or be dismissed as a deluded pṛthagjana.

However, I reject the framing of the discussion in Dogmatic terms. I see no reason to believe that the cessation of sense experience gives one insights into the nature of reality. One cannot know more by closing off all sources of knowledge about the thing one wishes to know, one can only know less.

I grant that one may discover something about the way that our minds create our virtual models of body, self, and world. And how we use these virtual models to navigate our way through a complex and ever-changing world, especially the social world. The social world deserves a much greater prominence in our thinking about Buddhism. But this is all the province of epistemology. And the result, in Buddhism, is always some kind of knowledge: an epistemic inquiry resulting in epistemic insights.

I'm not arguing within the Dogma framework because it is both false and perverse. Nāgārjuna is not someone I revere at all. I count him the worst philosopher in history, precisely because he does not examine his own assumptions, even when the result is nonsense or contradiction.

The biggest problem with the Dogma is that Dogmatics hold it to be a self-evident truth that not only resists external criticisms, but resists all criticism. It is Holy Writ that can never be challenged. Like Richard Feynman, I'd rather have questions that cannot be answered than answers that cannot be questioned. But the thing is that we can answer many seemingly intractable questions if we only give up Dogma. Dogma is the greatest impediment. It is an extreme view, a wrong view.

The resolution of this issue is simply to make a distinction between metaphysics and epistemology and allow that Buddhism is principally concerned with the latter. Our conclusions about what we know, especially what we know about the cessation of sense experience, can be interpreted as metaphysics, but they need not be.

Religious dogmas now pose the greatest threat to the long-term survival of Buddhism. On the other hand, secular interest in "awareness without content" is now the subject of scientific scrutiny and is already beginning to escape from the religious chains in which it has been bound. Like the preliminary practices we put under the heading of "mindfulness", the practices that culminate in what we call "emptiness" are on the verge of escaping into the secular world. And that is something to celebrate.

Note: I've added a new tab at the top of the page where I'm going to keep a running bibliography of works that I think are relevant to the topic of secular emptiness. 

04 June 2021

Naturalism and Unnaturalism

Something I read recently prompted me to think about whether I would call myself an atheist. I have probably referred to myself as an atheist in the past. Buddhism is widely considered to be an atheistic religion: while many Buddhists treat buddhas as gods, few of us believe any such god to be a creator or controller. Despite growing scepticism about the traditional claims of Buddhism, I still think of myself as "religious" in the sense of living committed to a set of rules. I sometimes say that I am religious but not spiritual. See my series of essays on "spiritual".

Theism is essentially the idea that everything depends on God, however God is conceived. Thinking about this, it seemed strange for me even to have a position on such things, because they are completely irrelevant to my worldview. I see the value of being aware of some of the history of the influence of the various churches in shaping the modern world, modernism being largely an organised rebellion against church claims to authenticity and authority. But theism is not relevant to me in any other way. The scientific study of religion shows that it is not what it claims to be. Which is not to say that religion is bad, just that we have to get below the surface of the claims made by priests and look at the sociology and neuroscience of religion in order to get at the truth about it.

It seems to me now that it would be silly for me to define my worldview in terms of things I don't believe in. Because, of all the possible things that humans believe, the vast majority of them are not things I believe. I don't believe in unicorns, fairies, Santa, utopias, and so on. But I don't claim to be an aunicornist. The label "atheist" does not inform a reader directly as to what my values and beliefs are. If I am going to state my beliefs, why would I do it with respect to a minor religious cult that has never had any appeal for me? So what am I, if not an atheist? I would say that I am a naturalist.


Naturalism comes in many varieties and, indeed, encourages pluralism. Naturalism has its starting point in the natural world, the world that we experience and interact with as humans. The world that we perceive through our senses, but also the world of which we are wholly a part. The physical world, but also the world of human culture. There may be other worlds or other non-experiential aspects of this world, but we cannot know them. And we need say nothing more, except that our explanations of what we can experience have no gaps that suggest the need for other worlds. 

Naturalism as a metaphysics is based on and informed by a particular approach to knowledge. We observe the world, notice regularities and try to infer what such regularities connote. We can use the conclusions of these inferences to make predictions about what we will experience next and then test this. And this works surprisingly well for understanding the physical world. Different approaches must be taken to understand human culture because it is a much higher order of complexity than physical objects. For example, reductionism seldom makes for an interesting approach to human affairs. 

Within the realm of science, predictions have a degree of accuracy and precision that we compare with what we observe. If the accuracy and precision reach a threshold then we say that the prediction was accurate and precise. That threshold may be formal, such as a statistical measure like 5σ or a 99% confidence level, but for lay people it may just be informal and heuristic. Scientists ideally accompany every measurement with an indication of measurement error, and a measure of accuracy and precision. So we might say the Higgs boson has a mass of 125.10 ± 0.14 GeV at a 5σ confidence level. The error is due to our measurements, not to nature.
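The "comparing notes" step can be made concrete with a small sketch. This is my own illustration, not from the text: only the 125.10 ± 0.14 GeV figure appears above, and the second measurement below is hypothetical. The standard move is to combine the two quoted errors in quadrature and ask how many standard deviations apart the values are.

```python
import math

def agreement_sigma(value_a, err_a, value_b, err_b):
    """How many standard deviations apart two independent measurements
    are, combining their quoted errors in quadrature."""
    combined_err = math.sqrt(err_a ** 2 + err_b ** 2)
    return abs(value_a - value_b) / combined_err

# One value from the text (125.10 ± 0.14 GeV); the second measurement
# is hypothetical, for illustration only.
measurement_1 = (125.10, 0.14)
measurement_2 = (125.35, 0.15)

sigma = agreement_sigma(*measurement_1, *measurement_2)
print(f"separation: {sigma:.2f} sigma")             # separation: 1.22 sigma
print("consistent" if sigma < 2 else "in tension")  # consistent
```

A separation under some conventional threshold (2σ, say) is read as two observers confirming each other; a 5σ separation would signal a genuine disagreement to be explained.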

We take results more seriously if someone has measured them by some other means and reached a similar or better level of accuracy and precision. Sometimes the confirmation or "comparing notes" part is left out of the naturalist epistemology, but it is essential. We generally call this approach to knowledge empiricism, although strictly speaking empiricism is the idea that all knowledge comes from sensory experience. Modern empiricism is a collective and collaborative enterprise that influences all other approaches to knowledge. 

We can also study the "humanities", i.e. the forms and products of human cultures, from how human societies function, to behavioural norms, to how we make and appreciate art. All this is still part of the natural world. Where weather is a complex system composed of simple parts, a human society is a complex arrangement of complex parts. Historians, according to Hans-Georg Gadamer, are less interested in universal laws; they focus on a single event and try to understand it in context. Still, as Carl R. Trueman has subsequently observed, "objectivity is not neutral or unbiased" (2010: 27ff). Objectivity by its very nature excludes the majority of explanations.

In recent years the division between science and the humanities has thinned, but there is an incorrigible tendency to see them as incompatible. My layered approach to reality is set out in a three-part essay. I argued that each layer adds structure and organisation to the previous one, creating new complex entities with emergent properties. This is not the same as simply changing scale, since life is an offshoot from the middle of the scale of mass, length, and energy of the universe.

Whether or not life exists elsewhere, it exists here, and any theory of reality that does not include life or human culture in all its complexity is useless. So, for example, the idea that a unification of general relativity and quantum field theory would amount to a theory of everything is simply nonsense. Physics is useless when it comes to describing human behaviour, although I accept that physics certainly provides limits to what is possible. Interestingly, in phrasing it this way I have stumbled on a principle of constructor theory as enunciated by David Deutsch and Chiara Marletto: physics limits what life can be like, but it does not determine what life actually is or what creatures evolve into being.

It does not because emergent properties at higher levels of organisation, piled on top of each other, are not predicted by the lower-level theories. Nothing about either relativity or quantum theory suggests that sapient beings will emerge to discover these explanations. And it's not that they are vague on this subject; rather, there is nothing about those theories that predicts sentience or sapience as a possibility. They can be applied retroactively, but not with any great explanatory power. Determinism does not necessarily survive emergence.

For a naturalist, then, the natural world is what can be inferred to exist and what can be known. Naturalism argues that if something exists and can be known it is part of the natural world. We can also say that if something doesn't exist it cannot be known. If something cannot be known, then we can say nothing definite about it. Our best route to knowledge is allowing observation to guide theory, principally by comparing notes on close observations of the natural world, keeping in mind that all acts of explanation are also acts of interpretation (simply because of our human apparatus).

Accurate and precise knowledge of the natural world has transformed human lives beyond measure, for better or worse. There are, of course, ethical and moral questions raised by naturalism. For example, it has given us tools that can be used for good or ill. A bulldozer can be used to quickly prepare a building site for the building of homes, or it can be used to level areas of essential rainforest (sometimes these are the same action). But the work that one person can do with a bulldozer is thousands of times more than one person could do before the invention of high-carbon steel and internal combustion engines. Technology magnifies human abilities without similarly transforming human aesthetics or ethics. How we interpret events has become even more important because of this magnification. And how we interpret events has also become subject to empiricist scrutiny (much more so than when Gadamer was writing).

For naturalists, then, the focus is the natural world. Anything other than the natural world is unnatural. To believe in some unnatural agent, entity, or realm is a form of unnaturalism, and one who accepts unnaturalism is an unnaturalist. Thus, for me the question is not, "What is an atheist?", rather it is "What is an unnaturalist?"


Unnaturalism is a neologism of mine. It is the flipside of naturalism. As I use it, unnaturalism is a broad term that takes in disbelief in the natural world per se, such as Indian beliefs that the world is māyā "an illusion", as well as a range of beliefs about unnatural agents, entities, or forces that exist beyond the scope of the natural world (and thus beyond the scope of the naturalist epistemology).

Unnaturalists often assert that unnatural agents are able to interact with the natural world, but this is a contradiction in terms. If agents interact with the natural world then, ipso facto, they must be part of the natural world and thus bound by the patterns of behaviour that we see in the natural world. Or else we have to rewrite our explanations to include them and there seems no necessity to do this.

Unnaturalism seems to begin with animism, which appears to be ubiquitous amongst hunter-gatherers (Peoples, Duda, and Marlowe 2016). This is the view that the natural world is full of sentient agents, seen and unseen, who interact with the natural world but exist outside of it. A modern form of animism is panpsychism, in which all matter is, in some inexplicable way, "conscious". Belief in life after death is a common unnaturalist belief and, together with ancestor worship, is found in about 80% of hunter-gatherer societies. At the other end of the spectrum are large organised religions based on sets of unnatural beliefs, notably an omnipotent, omniscient god. I will look more closely at theism and deism in the next section.

Some terminological issues crop up. For example, some unnaturalists refer to their beliefs as "supernatural" suggesting something above the natural world, "metaphysical" suggesting something beyond the natural world, or "paranormal" suggesting a reality alongside the natural world. In my view, all these separate terms can be dismissed as hair-splitting since they all involve rejecting the naturalistic account of the world. They are therefore better categorised simply as unnaturalism, unnatural views asserting unnatural agents, entities, forces, etc. 

By definition, anything unnatural is beyond the scope of naturalism: we cannot interact with or know an unnatural world and it cannot interact with us. This does not exclude the possibility of unnatural phenomena, but it does exclude the possibility of experiencing them or gaining knowledge of them. There are epistemic limits, and the knowable is ipso facto the natural, and vice versa. Normally we need not bother with the unnatural because we cannot know anything about it. However, unnaturalists claim to have unnatural knowledge. If we press the unnaturalist for evidence, they must demur, because evidence implies the natural world.

Part of the problem here is the teleological fallacy, i.e. the fallacy that everything happens for a reason. For a naive person this seems a reasonable heuristic, compatible with commonsense views on causation. Causation has been tricky since Hume pointed out that it's really just a regular sequence of events: where one thing regularly precedes another, we say it "caused" it. In this view, causation is metaphysical, that is to say we don't see a separate event that we can label "causation". Discussions of causation tend to refer to billiard balls colliding and other such mechanistic ideas: we see one ball strike another and both travel off in new directions. Can we say that one ball causes the other to move? Formulations of the laws of motion do not include anything that might indicate causation. We can describe two balls colliding, for example, using conservation of momentum, but nothing in that description corresponds to causation. My view is that our understanding of causation comes from our early experience of gaining control of our bodies. We will things to happen, like willing our hand to grasp an object, and after a while that starts to happen. The model for causation is the connection between the desire for something to happen and that very thing happening. As John Searle is fond of saying, "I will my arm to go up, and the damn thing goes up." (I think he says this in every lecture of his on YouTube.)
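The billiard-ball point can be sketched in code. This is my own illustration (the numbers are invented): the standard formulas for a one-dimensional elastic collision follow from conservation of momentum and kinetic energy alone, and no variable in them stands for "causation".

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Post-collision velocities for a one-dimensional elastic collision.
    Derived purely from conservation of momentum and kinetic energy;
    note that nothing here names either ball as 'the cause'."""
    v1_after = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2_after = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1_after, v2_after

# Equal-mass billiard balls: a moving ball strikes a stationary one.
m = 0.17  # kg, an illustrative billiard-ball mass
v1_after, v2_after = elastic_collision_1d(m, 2.0, m, 0.0)
print(v1_after, v2_after)  # 0.0 2.0 -- the balls exchange velocities

# Momentum is identical before and after; the description is complete
# without ever saying which ball "caused" the other to move.
print(m * 2.0 + m * 0.0 == m * v1_after + m * v2_after)  # True
```

The equations fully determine the outcome, yet "cause" never appears: the causal reading is something we project onto the regularity, which is Hume's point.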

Even if we get a grasp on causation, a cause is not a reason, though the two terms are easy to confuse precisely because our internal model for causation is that of desire making our limbs move. The classical view of reasons is that they are explanations of causes. The teleological fallacy can be restated as: the reason something happens is because something causes it. But this assumes that all sequences of events are regular and that nothing novel ever happens. And of course new and one-off things happen all the time. Even so, in the classical view, reasons are still ideas about why things happen. The classical view sees reasons as prior to actions. 

But then, as naturalists, we have to look at the evidence, and it turns out that reason isn't like this (see Hugo Mercier and Dan Sperber, The Enigma of Reason). Experiments show that reasons are generated post hoc to rationalise decisions made by unconscious inferential processes. So the classical view of reason is another result of unnaturalism, i.e. the result of thinking about reasoning in the abstract instead of observing reasoning in practice. And here we can see the importance of interpretation in explanations of history. If reasons are post hoc, then our accounts of history in terms of the psychological motivations of individuals are likely to be inaccurate. Reasons don't drive behaviour at all. In fact, behaviour drives reasons.

So when someone who is open to unnatural beliefs comes to understand that the universe has a beginning, they may infer that the universe has a cause and frame this in terms of some agent causing the universe to begin for some reason. Even if we eliminate the overtly unnatural elements, we are still left with the possibility that something caused the universe to come into existence. In the present state of our knowledge that cause is unnatural and we cannot know anything about it. This epistemic limit is open to exploitation by unnaturalists: they may claim to know, through unnatural means, about that cause. An unnaturalist may ignore the epistemic limit, adopt the teleological fallacy, and infer that the universe came into being for a reason; and further, that if there is a reason, there must be an agent that is not part of the natural world (since the natural world, for them, is that which was created). Now we have an unnatural agent with superpowers creating the universe for reasons, though these reasons are typically held to be unfathomable, because in practice we cannot discern any unnatural agents. And this brings us to the most visible form of unnaturalism: belief in a creator god.

Theism as a form of Unnaturalism

Unnaturalism has a much longer history than naturalism. For most of human history most people have been unnaturalists. Unnatural ideas like animism, disembodied minds, or post-mortem existence have seemed plausible to most human beings who ever lived. Since the emergence of naturalism these kinds of ideas have been marked out by terms such as metaphysical, supernatural, or paranormal. 

Theism begins to emerge with the Zoroastrian religion, the first of the monotheisms. The dates of Zoroaster are disputed, but are generally in the range 1200-800 BCE. I have argued, for example, that aspects of Zoroastrianism may have influenced the development of Buddhism. So theism is relatively new in human evolution, but relatively old in human history. 

We can usefully contrast theism with deism. In deism, God made the world, set it in motion, but is no longer involved in the world. Some Jews take this approach, concluding that God was more involved in the world during the infancy of humanity, but that now, as a mature species, how we live is up to us. An increasingly common form of deism is the idea that God was responsible for the initial conditions of the universe and the big bang that set things running, but that after that God just let things play out as they will. This kind of God is also called otiose or "uninvolved". And note that this idea of setting the initial conditions and allowing them to evolve according to dynamical laws of motion is the main paradigm of physics. However, this paradigm fails to account for many phenomena, notably living organisms, prompting David Deutsch and Chiara Marletto to pursue constructor theory, which promises to construe physics in exact terms as counterfactuals: what is possible at any given time and what is not.

The retreat to deism allows some Christians to reconcile with science using a God of the gaps argument, since science cannot tell us the reason for the initial conditions of the universe. When we consider the fine tuning problem, why the universe is conducive to life at all, it seems that the epistemic gap in which deists locate God is increasingly small. The physical parameters of our universe are fine-tuned to allow life to exist. Tiny variations of physical constants like the charge of the electron would make life impossible. Even if God was the first cause, he had little or no choice about how to make the universe. In other words, God had no free will when it came to creation of a universe in which sapient creatures would be capable of thinking about God. And what is the point of worshipping a God who was last active 13.8 billion years ago and who had no free will? There is no deist soteriology; the universe is what it is and there is nothing god can do to change it.

Note that I am not a deist either. The neologism adeist has been used informally, e.g. in Daniel Fincke's blog post: I am an Agnostic Adeist and a Gnostic Atheist. Some Buddhists are deists in the sense that they talk about an ultimate reality or a ground of being.

Theists, by contrast, believe in the ongoing active involvement of God. Unlike deism, theism makes testable predictions. Theists' claims about God entail that processes we expect to be random will sometimes not be random, because God intervenes on behalf of his followers. Christians, we might argue, should be luckier than others, suffering less from disease, accidents, and other misfortunes. No such bias in the universe has ever been detected. As a simple matter of fact, Christians don't get a smoother ride, but suffer every bit as much as everyone else. Indeed, lately some Christians have been arguing that they are treated unfairly, which suggests that not only is God not tipping the balance in their favour, He is tipping it against them. So theism looks to be false for this reason and many others.

This is a corollary argument deriving from the problem of evil, i.e. the problem of why a loving creator would make a world plagued by so much misery and suffering. Charles Darwin was dissuaded from Christian theism by the existence of parasitic wasps:

"With respect to the theological view of the question; this is always painful to me.— I am bewildered.— I had no intention to write atheistically. But I own that I cannot see, as plainly as others do, & as I shd wish to do, evidence of design & beneficence on all sides of us. There seems to me too much misery in the world. I cannot persuade myself that a beneficent & omnipotent God would have designedly created the Ichneumonidæ with the express intention of their feeding within the living bodies of caterpillars, or that a cat should play with mice." -- Darwin Correspondence Project.

I have explored different accounts of why unnaturalism was so successful and persistent; see, for example, my two-part essay: Why Are Karma and Rebirth (Still) Plausible (for Many People)? Part I and Part II.

Although human beings are fully encompassed by the set of natural things, our minds are not limited to thinking in terms of the natural world. We can imagine unicorns, for example. Not only this, but we can proliferate stories about unicorns, complete with imagery. Search for unicorn online and you will find millions of references, images, theories, stories, and so on. But none of this makes unicorns a real thing. We will never meet a unicorn in the natural world. For many people this distinction can easily be blurred, especially when it comes to God. But ideas about god such as theists embrace are not universal by any means:

"Ancestor spirits or high gods who are active in human affairs were absent in early humans, suggesting a deep history for the egalitarian nature of hunter-gatherer societies." (Peoples, Duda, and Marlowe 2016) 


To sum up, then, atheism is a reaction to, and thus still defined in terms of, the Christian worldview. The term itself accepts the normative value of Christian ideas. Atheism is not-theism. To me, God is irrelevant, a trivial problem easily dismissed before getting on with the serious business of understanding the world. And it makes no sense to define my worldview with respect to something irrelevant and trivial. "Atheist" is a Christian label for non-Christians. 

Please don't call me an atheist. I'm a naturalist. And in my worldview theists are unnaturalists.

Of course we can discuss unnaturalism, but it is pointless to do so on the terms of unnaturalists, because their views are unnatural. The study of unnatural beliefs is part of anthropology and sociology, and is best undertaken from a naturalist viewpoint. It is important to understand unnaturalism objectively through careful study, because unnaturalism is still widespread and influential, and many people hold unnatural views and act upon them.

And, of course, theism is not the only variety of unnaturalism; I've mentioned also deism and animism, for example. By lumping various forms of unnaturalism together we are better able to generalise the ideas involved in unnaturalism. 



Deutsch, D. (2013). "Constructor theory". Synthese. 190 (18): 4331–4359. https://arxiv.org/ftp/arxiv/papers/1210/1210.7439.pdf

Gadamer, Hans-Georg. (1975) Truth and Method. Bloomsbury Academic.

Marletto, C. (2015). "Life without design: Constructor theory is a new vision of physics, but it helps to answer a very old question: why is life possible at all?" Aeon. https://aeon.co/essays/how-constructor-theory-solves-the-riddle-of-life

Mercier, Hugo & Sperber, Dan. (2017) The Enigma of Reason: A New Theory of Human Understanding. Allen Lane.

Peoples, H.C., Duda, P. & Marlowe, F.W. 2016. "Hunter-Gatherers and the Origins of Religion." Human Nature 27, 261–282. https://doi.org/10.1007/s12110-016-9260-0

Trueman, Carl R. 2010. Histories and Fallacies: Problems Faced in the Writing of History. Wheaton, Ill.: Crossway.

14 May 2021

The Mind-Body Problem and Why It Won't Go Away

One doesn't have to spend long talking to people to discover that most of them subscribe to some form of mind-body dualism. Not in any formal way; no one is declaring "I am an ontological dualist". Rather, they find ideas like life after death and a mind that can be independent of the body intuitively plausible. These views appear to be common to people of all religions and, interestingly, to many people of no religion. It's a gut feeling that death is not the end, and a willingness to believe the dualism that this entails. Moreover, many of the people who are ambivalent seem to think that scientific explanations of the world have left the door open to this: the idea is that the afterlife cannot be proved one way or the other, being beyond the scope of science.

Since virtually all philosophers and scientists now reject such ontological dualism, we have to wonder what's going on here. In this essay I will try to explain why dualism has such enduring appeal, why it continues to confound philosophers and scientists.

Popular culture effortlessly absorbs a philosophical or scientific explanation when it seems intuitive. For example, we use any number of expressions drawn from psychoanalysis—ego, neurosis, narcissistic, subconscious—in daily life without a second thought. Where an explanation is counterintuitive, popular culture simply ignores philosophers and scientists. A striking example of this is that I know plenty of people who still believe that you can catch a chill from being cold and wet, an idea rooted in the four humours theory of the 2nd-century physician Galen, which relates the qualities cold and wet to the phlegm humour.

So there is still a mind-body problem, and it is non-trivial, because the majority still find mind-body dualism intuitively plausible despite several centuries of powerful counter-argument and evidence. Any account of the mind-body problem needs to deal with this or it isn't useful. And yet such aspects of the problem are not even part of the philosophy curriculum. Rather, they are dealt with by a completely different academic department, psychology, as though belief were no concern of philosophers. Moreover, philosophers tend to dismiss those who don't accept their conclusions as cranks, idiots, or dupes.

As a rule of thumb, I contend that when a problem has been discussed without any resolution for many centuries we have to consider that perhaps we have framed it badly.

Alternative Approaches to Standing Problems

When I took up the problem of identity as reflected in the traditional dilemma of the Ship of Theseus, I realised—with help from John Searle—that the traditional framing of the problem effectively made it insoluble. This may have been unconscious when the problem was first posed, but there's no excuse for retaining this unhelpful approach.

John Searle's The Construction of Social Reality proposes a useful matrix for thinking about facts. On one axis is the objective-subjective distinction; on the other, the epistemic-ontological distinction. This gives us a grid of four different kinds of facts.

Ontologically objective facts concern the inherent features of an object that are independent of any observer. An example of this is: a screwdriver is made of metal and plastic or wood.

Epistemically objective facts concern statements that are true because we have prior knowledge. We know that the object is a screwdriver only if we have prior knowledge of modern building technology. But everyone who knows what a screwdriver is knows that this screwdriver is one.

Ontologically subjective facts concern statements that are true because of the observer's relationship with the object. Searle especially links this to functions. The function of a screwdriver is to turn screws. But unless you know what a screw is this doesn't make sense. Moreover the function is not inherent in the materials of the object. A function is something that humans impose on objects. The fact that a screwdriver is for turning screws is a real, but subjective fact.

Epistemically subjective facts exist only in the mind of the observer. For example, "this is my favourite screwdriver" is true for me, but you may have a different favourite screwdriver. And the difference does not invalidate either fact. There is no contradiction because the fact is relative to the individual.

With respect to the ship of Theseus, an ontologically objective fact is that the ship is made of timbers arranged in such a way that it floats and can move easily through the water. An epistemically objective fact is that this arrangement of timbers is called "a ship". An ontologically subjective fact is that the function of this ship is to ferry people across the ocean. And an epistemically subjective fact is that this ship belongs to Theseus, it is Theseus's ship.

Traditionally we are supposed to ask, "Is it the same ship when all the timbers have been replaced?" And this generally ties us in knots. Some wish to say it is the same ship because the whole is unchanged, while some wish to say it is not the same ship because all the parts have changed.

My approach is to look at the different types of facts. For example, the ship is a ship at the start of the process of change and it is a ship at the end of the process. We can identify it throughout as a ship. So it has identity qua ship in the mind of any observer who knows what a ship is. This fact is epistemically objective. The ship can carry out its function throughout, so it has identity qua function, i.e. being an ocean-going passenger boat. This fact is ontologically subjective.

The problem here is that the identity of the ship is subjective: it exists in the mind of the observer, not in the object. If the observer believes it to be Theseus's ship then, to them, it is. If I have a different belief that may also be true and the difference does not necessarily invalidate either belief. The ontological status of the ship doesn't matter. It could be, and probably is, purely hypothetical.

The ship qua ship or qua ferry very obviously has identity over time (though I don't see this approach in the accounts of the problem that I have read). But the kind of identity we are being asked about when the question is framed as "Is it the same ship?" is subjective, i.e. it is not inherent in any ship.

The least interesting and least answerable questions are the ones that philosophers typically ask without delineating what they mean by identity, i.e. is identity vested in the whole or in the parts? The answer is that identity is in the mind of the observer. It is a belief about the ship. And, as we know, belief amounts to having an emotion about an idea. Opinions are post hoc rationalisations of such emotions. And this means that the order of production is

feelings → beliefs → actions → reasons

Not the other way around.

There are two points here. The first is that philosophers can't afford to ignore how people actually think and propose solutions in a social vacuum. They may be technically right, but if everyone ignores them, what is the point?

The other point is that philosophers are often wrong. The further back in history that we go, the greater the likelihood that philosophers are trapped in an unhelpful way of thinking about an issue. We don't have to accept the traditional way that philosophical problems are framed, especially when centuries of argument have not led to any resolution. If we can see a better way to think about the problem then we are free to adopt it and give the finger to philosophers.

Why We Still Have a Mind-Body Problem

Given the overwhelming consensus amongst academics and intellectuals for ontological monism, why do we still routinely encounter the mind-body problem? I've tried to argue that the mind-body problem would be better framed as the matter-spirit dichotomy. I think this is a more general statement of how people actually think about the mind-body problem. People tend to think of matter as cold, dull, hard, dense, lifeless; by contrast, spirit is warm, bright, immaterial, diaphanous, alive. The body is thus a special case of matter, in this view, because it is matter animated by spirit. Life was seen as something added to matter: an élan vital, or spark of life (such a view is termed vitalism).

If you have ever seen a corpse you know that it is very different from a living body. With reference to a living body, the corpse has shifted decisively towards the archetype of matter. The life has gone out of the person. The difference is what we conceptualise as spirit. Across many cultures, the ancients understood spirit as synonymous with breath. Terms such as spirit, animus, prāṇa, qi, and so on all mean "breath". In the Christian tradition this is epitomised by Yahweh breathing (spiritus) life into the clay body he fashioned for Adam. Adam's soul is the breath of God.

For the longest time, death was equated with the cessation of breathing. And before resuscitation methods were invented this was adequate. Once we realised that forcing air into the lungs of the "dead" person could revive them, we needed a new definition of death. Around the same time the function of the heart was discovered, and the cessation of the heartbeat became the new definition. Then we learned how to restart hearts, and later we discovered brain waves, so that the cessation of brainwave activity became the criterion. Popularly, however, the cessation of breathing is still associated with death. Someone who has been resuscitated is said to have died and come back, and the experiences they had while their breath or heart was stopped are erroneously termed a "near death experience" and treated as a source of knowledge about the afterlife. The fact that we continue to have such experiences is seen by some as proof that there is an afterlife.

Other types of experience can also be interpreted as the mind being independent of the body: lucid dreams, out-of-body experiences, and dissociative experiences brought on by trauma, drugs, or physical injury (think of Jill Bolte Taylor's stroke). And we don't need to have one of these ourselves to find accounts of them plausible. Bronkhorst (2020) deals with how accounts of such experiences are transmitted by those who have not experienced them and become part of the public discourse. I keep in mind also this quote from The Ego Tunnel by Thomas Metzinger:

For anyone who actually had [an out-of-body experience] it is almost impossible not to become an ontological dualist afterwards. In all their realism, cognitive clarity and general coherence, these phenomenal experiences almost inevitably lead the experiencing subject to conclude that conscious experience can, as a matter of fact, take place independently of the brain and body. (p.78. Emphasis added)

The urge to dualism is really quite strong. Matter-spirit dualism keeps alive the possibility of an afterlife, and the desire for an afterlife in turn helps keep dualism alive. This is not something humans are likely to give up soon, even though for many intellectuals life after death is simply not possible.

Another problem, which John Searle pointed out, is that materialism is still rooted in ontological dualism. Materialists still divide the world into two substances; the difference is that they assert that matter is real and mind is not. Idealists do the same, but assert that matter is unreal and mind is real. Even though a materialist may argue that mind is not real—that it is a mere epiphenomenon—they still tacitly concede a substantial difference between mind and matter. They still talk about two distinct substances, even if one is unreal. Lay people pick up on this kind of equivocation even if they can't put it into words.

This tells us that materialism is not an answer because it does not go far enough. If the thesis is idealism and the antithesis is materialism, then we need a synthesis of the two. One synthesis is genuine ontological monism, which holds that there is no ontological distinction between mind and matter, and that neither can be reduced to the other. In order to address the persistence of dualism we have to invoke epistemology.

Epistemic Pluralism

We can all observe that we have different inputs into our sensorium. I know the world of objects through sight, hearing, taste, smell, touch, temperature, kinaesthesia, etc. I know the world of mind through conscious mental activity and the appearance of pre-formed results of unconscious mental activity emerging into my awareness (intuitions, etc). In other words, even if we formally accept a monistic world in which mind and body are manifestations of a singular, unified reality, there is still an inescapable epistemic distinction between our knowledge of the world and our knowledge of mind.

It is this epistemic distinction that fuels the plausibility of the ontological distinction, especially in the light of out-of-body experiences and other altered states that give the vivid impression of mind independent of matter.

Most people, most of the time, suspend disbelief and proceed in daily life as naive realists. To do otherwise would be inefficient and potentially dangerous. Anyone can examine their experience and ponder the distinction between perception and reality. We all know that there is a difference because our perceptions lead us astray in minor ways quite often. For example, mistaking an object for a threatening agent (e.g. a predator, or a dangerous defensive agent like a snake or spider), or getting a colour wrong because of the lighting or background. But note that I never make huge mistakes like perceiving my home to be in Cambridge, England, only to discover one day that in fact I am still in Auckland, New Zealand. Glitches on this scale are a sign of pathology. Moreover, minor glitches tend to resolve themselves quite quickly; we may mistake a stick for a snake at a glance, but this does not survive sustained attention. We usually recognise that the "snake" is a stick.

Of course there are abnormal perceptions. Colour-blindness, for example. One can live with colour blindness without too much danger, but one cannot safely pilot an aeroplane. With psychotic delusions the problem becomes more serious. If I perceive my children as demons and follow the urging of internal voices to kill them, the result is catastrophic for everyone involved.

Normal perception is quite reliable, and where it is unreliable it either errs on the side of protecting us from danger or the error is trivial. And so, in daily life, we take perception as reality, and most of the time this is fine. Keep in mind that humanity evolved over millions of years and attained the anatomically modern form about 200,000 to 300,000 years ago. For most of this time we were all naive realists and ontological dualists, and we survived and thrived. There appears to be no evolutionary disadvantage to being an ontological dualist. It is even possible that belief in an afterlife keeps us from despair over the fact that we all die, and that ontological dualism gave believers some advantage.

The problem is that naive realism encourages us to reify experience, i.e. to consider that what we experience is reality without any intervening processes. And this means we have a tendency to reify the epistemic distinction between world and mind. Hence, so many of us find ontological dualism so plausible. However, this is just the default setting for human beings. It's not a conscious ideology. On the contrary it is only with sustained (and educated) effort that some of us are able to break away from the gravity well of naive realism and subsequent dualism and see the world anew.


We know that our senses respond to a range of different stimuli, from visible light, to physical vibrations, to temperature differences, to our own muscle tension. But all of these are turned into identical electrochemical pulses transmitted by nerve cells exchanging sodium and potassium ions across a semipermeable membrane, linked by synapses in which the signal is briefly carried by neurotransmitters. The point is that the signals that arrive in the brain are not distinguished by being of different kinds. They are only distinguished by where in the brain they arrive and by the architecture of the brain. We are still arguing over the extent of the brain's role in creating experience, but Lisa Feldman Barrett has recently noted that the optic nerves account for only about 10% of the inputs to the primary visual cortex. Fully 90% of the inputs are from elsewhere in the brain. Vision must involve a considerable amount of self-stimulation, and presumably the other senses are similar. Moreover, we see similar patterns of brain activity whether the subject is seeing something or imagining it. Vision and visualisation use the same parts of the brain, which explains why hallucinations can be so compelling.

If we step back from this level of detail and simply take perception as we perceive it, then our "world" is made up from a variety of kinds of sensory stimulation: appearances, sounds, smells, tastes, tactile sensations, temperature differences, muscle tension, etc. The characteristic of all of these is that they are objective to some extent. You and I may disagree on the pleasantness of an odour (an epistemically subjective fact), but we agree that there is an odour. And this agreement leads us to conclude that the odour exists independently of either of us. The smell is an ontologically objective fact. If the smell is the reek of methyl or ethyl mercaptan (the sulphur analogues of methanol and ethanol) then we may agree that it serves the function of making natural gas for cooking detectable by its odour (an epistemically objective fact).

The point is that for many of our senses there is some aspect of the information we have access to that is public and accessible to any observer, even if we disagree on some of the subjective facts. No one would ever argue that the pungent smell of ethyl mercaptan is not an odour. Even the synesthete is aware of perceiving one sensory modality in terms of another. Synaesthesia is not a delusion.

Again, our awareness of mental activity is not like our awareness of the other senses. We may be able to use functional MRI to see enhanced blood flow in different parts of the brain correlating with some experience, but the content of our mental activity is not available to anyone else. Our mental sense is ontologically and epistemically subjective. In some respects mental activity is analogous to digestion. We swallow food and it is digested within our body. The nutrients are absorbed by our gut and circulate in our blood. Those nutrients are not publicly available; they are contained within us. We can detect changes in blood flow or blood components, but this information does not permit my nutrients to nourish your body.

In this view, subjectivity is not such a mystery. The brain is an internal organ, housed within the skull, with the body as its interface with the world. Sense data come in; muscles move in response to signals from the brain (and to some extent from the spinal cord). It would make no more sense for mind to be public than it would for nutrition to be public. Inputs from the brain to the brain, i.e. from one part of the brain to another, are going to have a different flavour to those which come from outside the brain.


Despite advances in science and refinements in philosophy, we still routinely encounter the so-called mind-body problem. I've argued that this is so because there is a striking epistemic distinction in the sensory modes through which we experience mind and body, self and world, spirit and matter. We all have a tendency to reify this epistemic difference and treat it as a metaphysical difference. And this lends plausibility to the belief. We feel that self and world are quite different and thus we believe that they are, we take actions based on this belief, and we subsequently float reasons why we believe or why we acted in that way. This is the process:

feelings → beliefs → actions → reasons

Scientists and philosophers have decisively come down on the side of monism in their work, with a few holdouts that are not taken very seriously. The methods employed tell us that what seems intuitive and plausible is not the case. If we are interested in understanding the world as it is, then this is important.

Part of the problem is that many science communicators are still working with the classical theory of rationality: if you just present someone with the facts, they will change their minds. That is to say, we start with reasons and expect people to work backwards, against the flow, and change how they act, believe, and feel. And it doesn't work. Sadly, right-wing politicians have embraced the new model and now spend all their time trying to manipulate how we feel, while left-wing politicians are still trying to make rational arguments.

On the other hand, there is no great disadvantage to being an ontological dualist. There appears to be no evolutionary disadvantage and there is no day to day disadvantage. When we combine the intuitive plausibility with the lack of any disadvantage for being wrong we get a persistent fallacy. Many of the dualists I know are simply not interested in metaphysical monism. To them it seems to lack salience, or if it is salient, then it is counterintuitive.

There is no getting around the fact that the audience for philosophy is human beings. If we ignore this and pursue truths in the abstract, then we can easily become irrelevant to most people. Worse, many intellectuals fail to understand why their ideas don't take off and blame the audience. As communicators, the responsibility for getting our message across lies with us. We are making assertions and thus the burden of proof is on us. If we fail to get our message across, then we have to consider this our failure, not the failure of the audience. It is a poor teacher who blames the student.

As I write this, I am waiting to hear back from a conference organiser about a proposal to give a presentation. What I propose to do is tear down 2000 years of hermeneutics and exegesis and argue for an entirely new way of seeing things. I have outlined the reasons for doing this in ten peer-reviewed articles and dozens of essays here on my blog. While I feel confident in my conclusions, I am acutely aware that none of these articles has been cited. I think some of them have been read by some people, but as yet my work is either unknown or not considered salient. Heart Sutra articles still appear that are completely unaware of my articles. How to go about dismantling a familiar, and to some extent cherished, paradigm? If I had four hours I might present something like a coherent case. But the best case scenario is that I'll have one hour. At best I'll be able to gloss some of the main points. I doubt anyone who has not already read the relevant papers will even follow the argument, let alone be persuaded by it. And yet I have to try.

This is the kind of dilemma that philosophers face all the time in getting new ideas across. New paradigms seldom emerge fully formed and they are almost always resisted by the old guard. Max Planck quipped, perhaps a little unfairly considering history, that his field progressed one funeral at a time. In other words, as the old guard died, they made space for new ideas.


Bronkhorst, Johannes. 2020. "The Religious Predisposition." Method and Theory in the Study of Religion 33(2): 1-41.

Metzinger, Thomas. (2009). The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books.

16 April 2021

If You Meet Conze on the Road, Set Fire To Him

Edward Conze is still considered by many to be the doyen of the field of Prajñāpāramitā Studies. He is still described in superlative terms and draws effusive praise verging on adoration from some scholars and religieux. I argued in my recent article (Attwood 2020) that this might not be wholly deserved and that we need to reconsider Conze's contributions (and his character). In that article, I gave some examples of Conze's character and his work on the Heart Sutra that I hoped would make people rethink their attitude to him.

In this essay, I will consider some aspects of Conze's philosophical work prior to his turn to mysticism (ca 1937); in particular, I will consider Conze's attitude to Aristotle's law of noncontradiction. This was the subject of Conze's postgraduate research after earning a German doctorate in philosophy (at the time equivalent to a British Master of Arts degree). Conze himself said that all his later ideas were contained in the book that would have constituted his PhD thesis or Habilitationsschrift, i.e. Der Satz vom Widerspruch "The Principle of Contradiction" (1932). To be clear, he is talking about the same Aristotelian principle, that most modern sources refer to as the law or principle of noncontradiction. The term "noncontradiction" seems to more clearly convey Aristotle's intent. 

Not long after it was printed, Der Satz vom Widerspruch was burned by the Nazis along with other books by communists. As Holger Heine (who recently translated the work into English) tells the story, "almost all of the five hundred copies of the first edition were destroyed [and] Conze's hopes for an academic career in Germany had come to naught" (xiv). There was an unauthorised reprint of 600 copies in 1976, produced by the German Socialist Students Association; however, a literature review reveals very few citations of Conze's book or of his other work from the period up to 1937, when his midlife crisis began. It is safe to say that even Heine's enthusiastic attempts to resuscitate Conze's corpus have not led to signs of life. For someone who receives the kind of sycophantic praise that Heine and others heap on him, Conze remains a very minor figure in the history of early 20th-century Marxist philosophy, let alone philosophy generally. Still, given his elevated status in Prajñāpāramitā studies, those few of us who work in this field ought at least to make an effort to engage with Conze's earlier philosophy, given the influence it had on his later work.

In brief, the law of noncontradiction says that if logical contradictions were allowed, we could not make sense of the world. If I state that a proposition is true, the contrary of that proposition must be false. For example, if it is true that Conze was born in Germany, the contrary, that Conze was not born in Germany, must be false. At face value this is trivial, but it has profound implications. In this essay I will explore the law of noncontradiction and Conze's attempt to invalidate it.

Law of Noncontradiction

For Aristotle, the law of noncontradiction was the most fundamental axiom on which rational thought was based. It is what must be known if anything is to be known. It was not something that could be derived from first principles, but had to be true a priori for rational thinking to work at all. In his Metaphysics, Aristotle states the law in at least three ways, which Gottlieb describes as ontological, doxastic, and semantic:

“It is impossible for the same attribute at once to belong and not to belong to the same thing and in the same relation” (Metaphysics IV 3 1005b19–20).

“It is impossible for anyone to suppose that the same thing is and is not” (Metaphysics IV 3 1005b24; cf. 1005b29–30. Emphasis added).

“And since the contradiction of a statement cannot be true at the same time of the same thing, it is obvious that contraries cannot apply at the same time to the same thing.” (Metaphysics IV 6 1011b13–20).

Quotes are from the Tredennick translation (1933), as found on the Perseus website.
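The three readings can be stated schematically; the notation below is mine, not Aristotle's or Gottlieb's, and is only meant to make the parallels visible:

```latex
\begin{align*}
&\text{Ontological:} & &\neg(Fx \wedge \neg Fx)
  && \text{(same attribute $F$, same thing $x$, same time, same respect)}\\
&\text{Doxastic:}    & &\neg B_s(P \wedge \neg P)
  && \text{(no subject $s$ can believe a contradiction)}\\
&\text{Semantic:}    & &\neg(P \wedge \neg P)
  && \text{(a statement and its contradiction are not both true)}
\end{align*}
```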

The law of noncontradiction has to apply at the level of ontology. An object that exists and has certain attributes is not non-existent and lacking those attributes. As Conze puts it:

"We cannot judge that the same man is learned and is also not learned at the same time and in relation to the same group of facts, because in fact he is learned and cannot be not learned at the same time and in relation to the same group of facts" (1934: 207. Emphasis in the original).

At the level of belief, if one rejected the law, one's thoughts would be disordered. I cannot logically believe both that God exists and that God does not exist, though of course I can be undecided for various reasons. Aristotle notably makes a distinction between what someone says and what they believe; he is thinking of the latter, since lies are eminently possible.

And it must apply at the level of assertion, because if it did not hold, no communication would be possible. Communication depends on agreement amongst a language-using community about what linguistic signs mean. If the law of noncontradiction does not hold, then no such agreement is possible.

The principle goes deeper than this. Being fundamental, it must apply to all things and the commonality of all things is their existence. This principle can be stated as "Being is not and cannot be non-being" (Conze 1934: 208). Although, as we will see, Conze never accepted the universal validity of this and states the opposite in his Heart Sutra commentary using a reduction of Suzuki's logic of sokuhi, i.e. "A is not-A" (on which see Suzuki, Negation, and Bad Buddhist Philosophy). 

Although we cannot argue for the law of noncontradiction on first principles, there are some approaches to justifying it.

Arguments for the Law of Noncontradiction

As evidence for the applicability of the axiom, we can cite the fact that reality is somewhat comprehensible, our thoughts are somewhat ordered, and communication is somewhat possible. Unlike traditional transcendental arguments, I am hedging here (using "somewhat"); I will return to this shortly. The point is that with effort we can attain a very high degree of comprehensibility, as represented in the vast body of mathematical formulae used in science to describe the patterns of regularity we experience when we examine the universe. The intricate web of computers and communications networks that we call the Internet is one physical manifestation of this. If the principle of noncontradiction did not hold, something like the internet would be impossible. Some level of order is required, and though in practical terms this need not be absolute, it must be substantial.

Now let's address the issue of hedging. Aristotelian logic is the foundation of modern logic, but it is not the whole of modern logic. The problem with reality is that our knowledge of it is necessarily indirect and incomplete. Aristotle sets out the ideal case in which we have perfect knowledge and reason infallibly. This is useful because it is a model for how things work under ideal conditions. If we did solve problems using reasoning, this is what it would look like. Of course, we have known since the mid-1960s that this is not how we actually solve problems, and that 90% of us routinely fall for simple logical fallacies.

We always operate at some remove from the ideal. Our knowledge is inevitably partial, and there is always the possibility of unknown unknowns (aka the black swan effect). Still, the fact that reality is comprehensible at all is a sign that Aristotle's ideal is relevant to our world. The better our knowledge of the world, the closer we can come to this ideal.

Aristotle dismisses the idea that this axiom requires proof. It cannot be proved because it has to be in place in order for the notion of proving something to mean anything. However, what we can do is refute the opposite. Consider the contrary, i.e. the case where logical contradiction is the norm, i.e. A is not-A. In this case, whatever is true is also false. And whatever is false is also true. One could never know anything because whatever one knew would ipso facto also be unknown. One could not get out of bed in the morning because neither "bed" nor "morning" would stand for anything. Bed and not-bed are indistinguishable. If contradiction is the norm, then one is in a realm of utter nihilism. If what I said could mean literally anything at all, then utterances would convey no information. A lie would be true and a truth would be a lie; "turn left in 200m" would be indistinguishable from "eat a peach while the sun is out" or "the yellow flower is wilting".

Refuting the contrary does not prove the proposition. All we can say is that any scenario in which the law does not hold would be incomprehensible. And since our world is comprehensible, we have to assume that the law holds. 

In his rejection of the law of noncontradiction, Conze takes an exclusively logical approach; in particular, his argument against the law of noncontradiction rests on the absolute validity of that very law. Aristotle warned against exactly this: "You cannot engage in argument unless you rely on [the law of noncontradiction]. Anyone who claims to reject [the law of noncontradiction] 'for the sake of argument' is similarly misguided." ‒ Gottlieb (2019).

Another reason that we might hedge on noncontradiction in the modern world is quantum physics. In this branch of physics we describe the state of a subatomic entity using the Schrödinger equation, which gives us the probability of, for example, finding a given particle at any point in space at any given time. Unfortunately, the Schrödinger equation usually has more than one valid solution. Physicists typically take this to have an ontological counterpart in which the particle is in multiple locations (or states) at once, called a superposition. However, once the particle (or the system of interest) interacts with its environment, the possibilities collapse to one state with 100% certainty. Again this is interpreted as an ontology in which the interaction causes the cloud of possibilities to collapse down to one, also known as the collapse of the "wave function". The term "wave function" is confusingly used both for the abstract mathematics that describes the state of the particle and for the corresponding physical reality. In this view, the wave function is the particle or, more importantly, the particle is a wave function. The nature of subatomic entities is wave functions in fields.
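The arithmetic behind this is simple enough to sketch. By the Born rule, the probability of each outcome is the squared magnitude of its amplitude in the superposition. The following is a minimal sketch in standard-library Python, with hypothetical amplitudes chosen only for illustration:

```python
# Born rule: the probability of each outcome is |amplitude|^2.
# Hypothetical two-state superposition, e.g. "particle here" vs "particle there".
amplitudes = [complex(3, 0) / 5, complex(0, 4) / 5]  # normalised: sum of |a|^2 is 1

# round() just tidies floating-point noise for display
probabilities = [round(abs(a) ** 2, 10) for a in amplitudes]
print(probabilities)  # [0.36, 0.64] -- the state before any interaction

assert abs(sum(probabilities) - 1) < 1e-9  # probabilities must sum to 1

# After interaction with the environment ("collapse"), one outcome has
# probability 1 and the rest 0 -- the superposition is gone.
collapsed = [0.0, 1.0]
print(collapsed)
```

Before the interaction, both outcomes have nonzero probability; afterwards only one does. The puzzle the text describes is whether the pre-collapse state is merely a description of our ignorance or a physical reality in its own right.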

In the famous thought experiment, Schrödinger's cat is alive and dead at the same time, breaking the law of noncontradiction. This was intended as a criticism of the idea that "observation" caused the collapse of the wave function. Eugene Wigner went further and suggested that the observation had to be made by a sentient being, that somehow "consciousness" caused the collapse of wave functions. Not only was Wigner's wrinkle nonsense, but it is generally considered that the whole idea of observation is poorly defined, discussed in vague terms, and doesn't qualify as science. Still, we are left with the fact that, in quantum metaphysics, reality behaves in counterintuitive ways that apparently break the law of noncontradiction.

Think about a visual observation. A photon leaves the system of interest, strikes the retina, and causes an electrochemical cascade shaped partly by the frequency of the photon and partly by the kind of cell it hit. This cascade arrives in the visual cortex and is interpreted as a visual stimulus. At no point does the eye or brain physically interact with the system. The eye is a passive receiver. Wigner wanted to say, and many people wanted to believe, that looking at something (or perceiving it) changes it. In reality the causality runs the other way. Changes in the system, resulting in the emission or reflection of photons, cause us to observe the system. Without that change our eye receives no photons. So observation by a human being cannot be the cause of anything.

Conze was certainly ignorant of quantum physics and did not invoke it in his work. On the other hand, Aristotle was concerned only with the visible world and with the functioning of human reasoning as understood in his time. Which brings us to Conze's views on Aristotle.

Conze's Rejection of the Principle of Noncontradiction

Conze has said that all of his later thinking is contained in Der Satz vom Widerspruch. The book purports to be a Marxist critique of Aristotle. Typically, Marxists eschew the mechanical materialism of the nineteenth century and replace it with a materialism inspired by Hegel. In Hegel's dialectic, opposites (thesis and antithesis) clash and this leads to a resolution in which the extremes are unified (synthesis).

Marx and Engels "stressed the dialectical development of human knowledge, socially acquired in the course of practical activity [and] social practice alone provides the test of the correspondence of idea with reality—i.e., of truth" (Britannica). This aspect of their thought is distinct from historical materialism and class struggle.

Conze attempts to explain human knowledge as a clash between magic and logic. However, rather than casting them in a dialectical relationship, which would see the triumph of a synthesis of the two, Conze sees the clash of thesis and antithesis as leading to the hegemony of one or the other. And as a result Conze is caught in a cleft stick. On one side is the hegemony of logic, which Conze loathes but relies on to make his point; on the other is the hegemony of magic, something he believes in but cannot use to make his argument (since magical arguments are not persuasive). Conze makes no bones about his view that magic is superior to logic. He outspokenly asserted this superiority in his Prajñāpāramitā work, despite continuing to tacitly use logic throughout. Using logic to show how logic doesn't work is not a very convincing rhetorical strategy.

Thus, although the idea that truth is socially defined might have some merit, the argument that this necessarily leads us to abandon the law of noncontradiction is still nonsensical. Rather, noncontradiction becomes even more important as a yardstick for truth. In the most extreme version of this approach, truth is that which does not contradict social norms. Even if we accept the full-on relativism of Conze's Marxism, the social nature of logic does not eliminate the need for the law of noncontradiction. The hardcore relativist imagines that they stand outside any system and can see the merits of each. This God's-eye view is nonsense, however. Despite being a refugee, Conze was very much a man of his time, culture, and class, i.e. a minor German aristocrat of the early 20th century. Conze's views on truth are just as socially conditioned as anyone else's.

Unfortunately, Conze's antipathy to science blinded him to the possibility that some observations are not culturally defined. Everyone experiences gravity, for example, and if they took the time to measure it, everyone would find that the acceleration due to gravity is ~9.8 m/s². We may have different accounts of gravity, but some are more accurate than others. As I pointed out in my article on Conze, the historian Carl R. Trueman makes the salient point that objectivity is not neutral or unbiased (2010: 27ff). Objectivity by its very nature excludes the majority of explanations. Objectively, magic is not real; astrology does not describe the influence of the planets on human beings; and "A is not-A" is nonsense.
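Anyone can check this for themselves: for an object dropped from rest, d = ½gt², so g = 2d/t². A minimal sketch of the calculation, using hypothetical measurements (the function name and numbers are mine, for illustration only):

```python
# Free fall from rest: d = (1/2) * g * t^2, hence g = 2 * d / t^2.
def g_from_drop(distance_m: float, time_s: float) -> float:
    """Acceleration implied by falling distance_m metres in time_s seconds."""
    return 2 * distance_m / time_s ** 2

# Hypothetical measurement: a drop of 4.9 m takes about 1.0 s.
print(g_from_drop(4.9, 1.0))  # 9.8
```

Whoever performs the measurement, and whatever their culture, the same value (to within experimental error) comes out.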

Marxist materialism asserts an objective world existing independently of the mind, with the corollary that mind cannot exist independently of matter. The contrary idea, that mind can exist apart from matter, has long been discredited, but in the 1930s there may well have been educated people who sincerely believed in it. Conze rejected the idea of an objective world and instead substituted a magical world. To Conze, such ideas were axiomatic and he made no attempt to defend them. He simply asserted the existence of magic and of a magical reality. Citing my essay on Conze's place in Prajñāpāramitā research:

As he says, his “life-long acceptance of magic... has not been so much due to theoretical considerations as to the early acquired intuitive certainty that beyond, or behind, the veil of the deceptive sensory appearances, there lies a reality of magical, or occult, forces” (Conze 1979: I 32). And in his view science “…has little cognitive value, but is rather a bag of tricks invented by God-defying people to make life increasingly unbearable on Earth and finally to destroy it” (1979: I 32).

These words were written at the end of his life, but they do appear to reflect an attitude that is apparent in his early philosophical work (1934, 1935, 1937).

It is true that magical thinking exists even today. I know many people who sincerely believe that the Heart Sutra is magical and who assume that by studying it some of that magic will rub off on me. And when I shrug and say, "magic isn't real", they are genuinely dismayed. Just because some people are attracted to the idea does not mean that magic is real or that magical thinking is appropriate to decision making. Generally speaking, magical thinking leads to poor decisions.

Seeing Conze's work on Prajñāpāramitā in the light of his earlier work, as he suggested, is instructive. He is already using the terms "God", "the One", and "the Absolute" as synonyms when talking about mysticism in his 1934 presentation to the Aristotelian Society. Clearly he is drawing on several different traditions: Christianity, Neoplatonism, German idealism, and Theosophy. He even cites Mahāyāna Buddhism in passing. It is useful to read this paper since it helps us understand Conze's turn to Buddhism in terms of his earlier embrace of mysticism, which he defines as a state of "complete union with God, with the One". One can see much of his later attitude to Prajñāpāramitā in these earlier works. In another early essay, Conze notes, for example, the tendency of Mahāyāna Buddhism (amongst other ideologies) to what he calls mystical pantheism:

"Mysticism develops into mystical pantheism under two conditions, namely, that the state of ecstasy is considered to give a true, the only true image of reality, and further that the one object of ecstasy is expressly stated to include all reality." (1935: 212)

He also says:

"Generally mystical pantheists do not devote much attention to the consequences of their ideas on logical thinking, its categories and laws." (1935: 212)

This describes the later Conze and the way he writes about the Heart Sutra in 1946. For Conze, the mystical pantheist, "nothing except the One and infinite Absolute" exists and:

"All differences are then absolutely reduced to nought. Since contradictions are not possible without differences, the [law of noncontradiction] is meaningless and inapplicable." (1935: 212)

In effect, Conze shoehorns the epistemic rhetoric of Prajñāpāramitā into his own idiosyncratic mystical metaphysics. In my article I called this Conze's idiodoxy. Behind all of Conze's meandering thought is a conviction that reality is magical.

Now I could work through Conze's argument, taking it point by point. But we can short-circuit this discussion by taking a step back. Conze's argument is, in the immortal words of Michael Palin, "a series of connected statements intended to establish a definite proposition". Conze's aim is to discredit logic itself and to propose magic as the viable alternative, but in doing so he used a logical argument. Moreover, his target is specifically the principle of noncontradiction. "The validity of thought has a social origin and meaning... the delusion that a supersocial validity can be reached [using logic] has its social roots" (1934: 42).

In Conze's view, validity is merely a matter of belief. And belief is a matter of social convention. In his view if we all decided that logic was not valid, then it would not be. And this is how he imagines the world working prior to the systematisation of reasoning in ancient Athens. He argues that before logic, people lived by magical thinking rather than using reason. And moreover logic is inimical to magic:

Magic and logic are irreconcilable and unintelligible in terms of one another... this mutual hostility between them makes it impossible to regard magic as a form of logic or logic as a form of magic. (1934: 33)

In other words, there is a contradiction between magic and logic. And this concrete contradiction is central to Conze's argument that the principle of noncontradiction is not valid. Worse, if we say that the principle of noncontradiction does not hold, then it is equally valid to say that it does hold. If we allow contradiction, the result is nonsense. Conze wants to distinguish magical thinking from logical thinking, but a consequence of his conclusion is that logical thinking is magical thinking. In other words, Conze is deeply confused about logic. And as noted, there is nothing dialectical about this approach. Conze is not interested in synthesis; he is interested in defending magic.

Some will say that in jumping to the end I have misrepresented Conze's argument. So, even though the conclusion is self-defeating and based on false assumptions, I want to loop back for a brief look at that argument. There is some merit in locating logic, or at least reasoning, in the social sphere. Long-time readers of mine will recall my enthusiasm for the work of Hugo Mercier and Dan Sperber (2011, 2017). Like Conze, Mercier and Sperber attack the classical understanding of reasoning. However, they do not share Conze's ulterior motive.

Classically, reason is a faculty of pure logic, free from external influences such as emotions, beliefs, or social conditions. Mercier and Sperber show that evidence has been accumulating since the mid-1960s decisively showing that this faculty doesn't exist. We can use logic, but most of us do not use it routinely. In their account, what we call reasoning has two aspects. In their earlier work (2011) they made the case that reasoning evolved to assist group decision making. Members of a group will propose different courses of action and then the group will use reason to weigh the relative merits of each proposition. In this view reasoning is social and even argumentative. By contrast, individual problem-solving tends to be based "on intuitions of relevance" (Mercier & Sperber 2017: 43). The later work (2017) characterises reasoning as a process of producing reasons for actions after the fact. The evidence on reasoning shows that our decision making is based on many unconscious inferential processes. We decide and then, if need be, we produce reasons that seem to plausibly account for our behaviour.

This critique by Mercier and Sperber is compelling, but it is a critique of the classical view of reason. It is not, as I understand it, a critique of logic per se. Logic is affected by fuzziness and quantum uncertainty or indeterminacy, but it is more or less intact as a way of validating reasoning processes. It is simply that people don't actually use logic that much unless trained to do so, and then mostly in formal situations: e.g. when presented with a syllogism in a logic class. Mercier and Sperber are not interested, as Conze was, in eliminating logic in favour of magic. Logic, in its modern guise, is intact. Rather, it is the idea of humans as logic users that comes into question. The inferential processes we do use are not magic, they are heuristic: rules of thumb for surviving in the wild.


When we take Conze seriously as a philosopher we rapidly encounter all kinds of problems. It is no wonder, therefore, that his later works on Prajñāpāramitā were so confused and misleading. It is not simply that Conze did not pay attention to detail (by his own admission), with the result that his editions are faulty. Nor is it simply that his translations are execrable, barely qualify as English, and misrepresent the source texts in numerous ways. All of this is true. But taking into account his earlier work, we can see Conze as pursuing an agenda that preceded and guided all his work on Buddhist texts and his interpretation of Mahāyāna Buddhism.

His agenda is anachronistic. Conze imagines a golden age of magical thought and pines for it though he missed it by at least 2500 years. Worse, there is simply no evidence for his assertion that before logic was formalised in Athens people relied on magical thinking. Moreover, the argument is based on a now discredited understanding of what reasoning is. We evolved the capacity for speech, reasoning and inferential decision-making processes as part of becoming anatomically modern humans in Africa ca 200,000 years ago. These attributes didn't suddenly appear in Athens in 500 BC.

Magical thinking was undoubtedly present in the human intellect before the modern era and indeed well beyond it. Some people still childishly want magic to be real (and want Buddhism to be magical). An intellectual who rejects logic in favour of magical thinking now looks quaintly ridiculous. So does a Marxist who rejects materialism and dialectical arguments, still more a Marxist who was bourgeois in his bones and never lost the attitudes and values of the German upper classes.

The eleventh-century Persian polymath Avicenna (aka Ibn Sina) had a plan for dealing with people who share Conze's rejection of Aristotle's law of noncontradiction:

“The obdurate one must be subjected to the conflagration of fire, since ‘fire’ and ‘not fire’ are one. Pain must be inflicted on him through beating, since ‘pain’ and ‘no pain’ are one. And he must be denied food and drink, since eating and drinking and the abstention from both are one and the same.”—Avicenna (2005: 43).

It was this that inspired my paraphrase of the easily misunderstood old Zen maxim "If you meet the Buddha on the road, kill him." If you meet Conze on the road burn him, beat him, and starve him until he admits that burning is different from not burning, beating is different from not-beating, and starving is different from not starving. 

Unfortunately, we cannot put Conze the man through the ordeal of fire. We can, however, read and critically evaluate his oeuvre. If, as his acolytes say, Conze is a scholar of the highest rank, a veritable genius, with insight into the true nature of reality, then this critical examination can only further glorify His presence amongst us. However, if I am right, then critical evaluation will topple Conze from the pedestal that devotees have placed him on. Very few people ever take the time to read Conze at all, let alone critically. This means that his mistakes go unnoticed by the majority, even when they have been pointed out in print. I have searched in vain for any mention of his philosophical works, any attempt to compare his earlier and later phases, or any critical evaluation of his contribution.

In writing critically about Conze, I see two main responses. One, from scholars who work in or near Prajñāpāramitā studies, is "about time someone said this". However, for the most part people are unwilling to openly criticise Conze. A few examples exist of people listing faults in his editions or translations, but these are almost inevitably accompanied by supplication and homage to Conze. I don't bow before false idols.

The other response is from Conze acolytes who see my critical reflections as mere "hostility". This group appear to be shocked to discover a dissenting voice and view it as an expression of emotion rather than intellect. For true believers it seems to be difficult to imagine anyone who refuses to assent to Conze's self-confessed greatness. And this means that they don't engage with the content of my literary and philosophical criticism. In this sense, support for Conze has a cult-like quality to it. In the light of this, I have begun to see this aspect of my work as an attempt to normalise criticism of Conze so that we can get it all out in the open. 

In reality, I'm not particularly interested in Conze, Mahāyāna, or the Heart Sutra. These are simply vehicles for writing. My personal approach to Buddhism is far more rooted in Pāli texts and the understanding of early Buddhism I have gained through exploring ideas in those texts. Until discovering Conze's mistakes in the Heart Sutra, unnoticed by all and sundry for 70 years, I saw myself as following in the footsteps of Richard Gombrich and Sue Hamilton (Richard having been an informal mentor since we met in 2006). I have a certain amount of animus towards bullies, but I'm mostly just shocked by the disparity between the poor quality of Conze's work and the superlatives that continue to be heaped on him. I'm more motivated by trying to resolve the cognitive dissonance created by this disparity than by hatred of Conze.



Aristotle. 1933. Aristotle in 23 Volumes, Vols. 17, 18. Translated by Hugh Tredennick. Cambridge, MA: Harvard University Press; London: William Heinemann Ltd. Reprinted 1989. As found on the Perseus website.

Avicenna. 2005. The Metaphysics of The Healing. Translated by Michael E. Marmura. Provo, UT: Brigham Young University Press.

Attwood, J. 2020. "Edward Conze: A Call to Reassess the Man and his Contribution to Prajñāpāramitā Studies." JOCBS 19: 22–51. http://jocbs.org/index.php/jocbs/article/view/223

Conze, E. 1932. Der Satz vom Widerspruch: Zur Theorie des dialektischen Materialismus. Hamburg. Reprinted 1976, Frankfurt: Neue Kritik.

———. 1934. "Social Implications of Logical Thinking". Proceedings of the Aristotelian Society, 35, 23-44. Retrieved February 4, 2021, from http://www.jstor.org/stable/4544248

———. 1935. "The Objective Validity of the Principle of Contradiction." Philosophy, 10(38): 205-218.

———. 1937. "Social Origins of Nominalism." Marxist Quarterly (January-March 1937): 115-124. Reprinted in Further Buddhist Studies.

———. 1953. “The Ontology of the Prajñāpāramitā.” Philosophy East and West 3(2): 117-129.

———. 1979. Memoirs of a Modern Gnostic. Parts I and II. Privately published.

———. 2016. The Principle of Contradiction. Translated by Holger Heine. Lanham MD: Lexington Books.

Gottlieb, Paula. 2019. "Aristotle on Non-contradiction", The Stanford Encyclopedia of Philosophy, Edited by Edward N. Zalta https://plato.stanford.edu/archives/spr2019/entries/aristotle-noncontradiction

Heine, Holger. 2016. "Aristotle, Marx, Buddha: Edward Conze's Critique of the Principle of Contradiction." In Conze (2016: xiii-lxiii).

Mercier, Hugo and Sperber, Dan. 2011. "Why Do Humans Reason? Arguments for an Argumentative Theory." Behavioral and Brain Sciences 34: 57-111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.

Mercier, Hugo and Sperber, Dan. 2017. The Enigma of Reason: A New Theory of Human Understanding. Allen Lane.

Trueman, Carl R. 2010. Histories and Fallacies: Problems Faced in the Writing of History. Wheaton, Ill.: Crossway.
