13 January 2017

Doctrine & Reason I: Reasoning.

In my forthcoming book on karma and rebirth I cite one of the leaders of the Triratna Buddhist Order on the importance of beliefs in Buddhism. In his 2007 essay, There Are Limits, Subhuti says:
“These essential principles of the Dharma set out how existence works and are therefore the basis for a Buddhist life. Just as a Christian life is based on belief in God’s creation, Christ’s sacrifice, and salvation through faith in him and works in accordance with God’s commandments, so a Buddhist life is based on belief in conditionality, karma (including ‘rebirth’), and the Path – albeit Buddhist belief being provisional, compatible with reason, and capable of direct verification. Without conviction that these are the essential mechanics of life, one will not practice the Dharma.” [Emphasis added] Subhuti (2007)
In this essay I will focus on the phrase, compatible with reason. What is reason, or more specifically, what is the activity of reasoning? What would it mean for a religious belief to be compatible with reason? Having addressed these general questions, I will use the example of the Buddhist belief in karma. I choose karma because Subhuti mentions it and because I know the various doctrines of karma fairly well.

The first snag that we hit comes almost immediately because, based on discussions over the years, I can identify around a dozen different views on karma currently held by members of our Order, some of which are mutually incompatible. So belief in karma is not a simple matter. The threads of karma doctrine form a tangled mess that would take a book-length project to unravel (not that my book will do this). In this essay I will, therefore, take one view of karma that is fairly common in our Order and test its compatibility with reason. This view is the one deriving from Nāgārjuna's treatment of the subject in his Mūlamadhyamakakārikā, though this source is seldom acknowledged and the view is generally absorbed by reading modern-day Tibetan Madhyamaka philosophy of one kind or another.


1. Reason

The second snag is that what Subhuti means by "reason" is not entirely clear. My Dictionary of Philosophy opens its entry on reason by explaining that reason is,
"A word used in many, various, often vague senses, with complex and sometimes obscure connections with one another."
The dictionary goes on to note that one important distinction is between reason and other mental qualities such as "imagination, experience, passion or faith." I think Subhuti probably has something like this distinction in mind. The implied comparison with Christian articles of faith reinforces the impression. In other words, Subhuti seems to be referencing the common distinction between faith and reason as a basis for belief. This distinction has been a feature of Christian theology and the focus of a lot of debate about religion in modern times.

Actually, in theology, faith and reason are both authorities for belief. Faith is usually considered to be the basis of belief, but some theologians have attempted to use reason to prove articles of faith. Faith is clearly important in Buddhist life. In the classical Pāḷi texts, faith (saddhā) arises when one hears a dhamma teaching, e.g., at DN 2 (dhammaṃ suṇāti), and at AN 10.61 and AN 10.62 (saddhammassavana). Faith, here, is faith in the Buddha (tathāgate saddhā). All too often, Buddhists (and particularly Triratna Buddhists) insist that Buddhism does not involve blind faith and that saddhā (Skt śraddhā) is not blind faith. The Pāḷi texts make it clear that saddhā is precisely faith in the words of a religious teacher, lacking demonstrability, at least for the moment.

Having practised the methods of Buddhism with success (by which is generally meant becoming a stream entrant), one may also develop another quality, aveccappasāda 'confirmed confidence' (or perhaps 'perfect clarity'). So, until stream entry, until we join the āriyasaṃgha, our motivation to practise is based on faith. Beyond this we see people taking Buddhism on faith all the time. Most Buddhists take the possibility of enlightenment on faith, and have to, because there are no enlightened people around. For Buddhists, as for other religieux, belief is based on faith.

As the Internet Encyclopedia of Philosophy (IEP) says of reason, "Some kind of algorithmic demonstrability is ordinarily presupposed." Here we see why Subhuti might have included "capable of direct verification" as a criterion. To go beyond faith we have to have a procedure for testing our belief and seeing where it leads us. Note that though Subhuti's actual expression appears to be that of a logical positivist, this was probably unintentional. What he means is that the results promised in the legends are said to be attainable by anyone. Of course, this statement is also an article of faith.

Subhuti has said only that belief must be compatible with reason; not that it must be based on reason. This implies that belief may still be based on faith, as long as reason does not subsequently disconfirm it. This gives us a little more room to manoeuvre. Most rationalists see faith and reason as antagonistic at best and polar opposites at worst. However, in so-called Natural Theology, for example, "Articles of faith can be demonstrated by reason, either deductively (from widely shared theological premises) or inductively (from common experiences)." (IEP).

There is a subtle move here from reason as a faculty of the mind, to reasoning as a method for producing knowledge through the application of logical inferences (deductions and inductions). This is not necessarily problematic, because reason is often associated with the ability to employ the methods of reasoning. However, it is worth noting the tacit shift from reason as a faculty that exists (ontology) to reasoning as a method of obtaining knowledge (epistemology). The confusion of ontology with epistemology is a major problem in philosophy.

For example, before materialists ask "What is real?" they divide the world into mental and physical phenomena based, as all such divisions are based, on the epistemological differences engendered by our perceptual apparatus. They conclude that only physical phenomena are real. But this result has confused ontology with epistemology. "Mental" and "physical" are epistemological distinctions. The question is like asking: "Which is more real, hearing, vision, smell, or taste?" Which is to say the question is nonsensical. Also, the question of which kind of experience corresponds to reality is predicated on using mental phenomena to judge the truth. If mental phenomena are not real, then how can they produce accurate judgements on what is real? And so on.

If there is an equivalence between compatibility with reason and conforming to the procedures of reasoning, then we have an obvious way to test Subhuti's assertion. Can we, for example, derive the details of the Buddhist belief in karma from first principles? That is to say, can we arrive at a doctrine of karma by applying the various modes of reasoning? In order to answer this question we need to look more closely at how reasoning works.


2. Reasoning

In the passages that follow I'll look at the three most common methods associated with reasoning: deduction, induction, and abduction. All these logic words derive from Latin ducere 'to lead', hence: de-duce 'to lead down' or derive; in-duce 'to lead onwards' or predict; and ab-duce 'to lead away' or explain. We can see why abduct is a synonym for kidnap! From the verb ducere we also get Mussolini's title Il Duce, i.e., The Leader; other titles such as duke and dux; and a whole raft of other English words: adduce, conduce, conduit, douche, duct, ductile, educate, induct, introduce, produce, product, redoubt, reduce, seduce, subdue, and traduce.

Deduction, induction, and abduction are all methods of inferring new knowledge from something already known. I'll begin, as most philosophers do, by considering these activities as solo events, but I will also reconsider them as collective activities, which Mercier and Sperber (2011) have argued is the natural context for reason. Although I will not recapitulate Mercier and Sperber's arguments here, I will have them constantly in mind. Their most important observation, which is by no means new, is that in solo reasoning tasks most people score so badly that they cannot be said to be reasoning at all. Instead, they rely on cognitive biases and logical fallacies. Mercier and Sperber point out that, by contrast, when critiquing someone else's argument in a small group setting, most people do very much better. In other words, when producing arguments we don't use reason, but when evaluating someone else's argument we do. Thus, they argue that reasoning is argumentative. A corollary of this is that confirmation bias is a feature (and perhaps even a necessary feature) of argument production, though not of argument evaluation.


2.1 Deduction

Using deductions, we try to infer conclusions based on our set of axioms about how the world works. These axioms are what Justin L. Barrett (2004) has called our non-reflective beliefs.* Non-reflective beliefs include our views on such metaphysical concepts as time, space, and causation. These are the beliefs that we absorb while we are growing up, both from our experience of interacting with objects and from interacting with people. We may not know we have these beliefs and they may not even be immediately accessible to introspection. Nevertheless, these axioms are central to how we understand the world.
* I discussed Barrett's ideas in a two-part essay called Why Are Karma and Rebirth (Still) Plausible (for Many People)?

This kind of reasoning involves asking ourselves, in the light of our axioms about the world, what event, or sequence of events, could have occurred to bring about the current state of affairs. For example, most of us non-reflectively believe that there are agents behind most events. So we may try to deduce, from the available information, what kind of agent was responsible and what their motivations might have been, based on our internal models of what agents exist and what kinds of events they can cause. So we might hear an eerie cry in the night and experience horripilation, but deduce that this is the kind of noise a fox makes and conclude we are safe. Since most of us include supernatural elements in our non-reflective beliefs, it often seems intuitive, or at least minimally counter-intuitive, to conclude that an experience has a supernatural cause.
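
To make the mechanics concrete, here is a minimal sketch of deduction as forward chaining from axioms. This is my illustration, not anything from Barrett or Subhuti; the facts and rules are invented for the example, and real non-reflective beliefs are of course not consciously listed like this.

```python
# A toy forward-chaining deducer: apply rules until nothing new follows.
# The axioms and rules are invented; swap in different axioms and the
# procedure works just as happily -- deduction never audits its premises.

axioms = {"eerie cry in the night"}
rules = [
    ({"eerie cry in the night"}, "some agent made the cry"),   # agency bias as axiom
    ({"some agent made the cry"}, "it was probably a fox"),    # learned model of agents
    ({"it was probably a fox"}, "we are safe"),
]

derived = set(axioms)
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= derived and conclusion not in derived:
            derived.add(conclusion)
            changed = True

print(derived)
# {'eerie cry in the night', 'some agent made the cry',
#  'it was probably a fox', 'we are safe'}
```

The derivation is perfectly valid either way; replace the fox-rules with supernatural ones and the machinery runs just as smoothly, which is exactly the problem discussed next.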

The IEP citation above referred to "widely shared theological premises". This highlights a problem with reasoning with respect to religious beliefs. A deduction from widely shared theological premises is likely to reinforce those same widely shared theological premises. If our widely held theological premise is that the Christian God exists, then deductions we make about, for example, how the world came into being or what is moral, are predetermined by our axioms. We may well perform a perfectly logical deduction from our premise, but this signifies little because the starting premise or axiom was not arrived at by reasoning.

Similarly for our theological premise that karma is, in Subhuti's words, how existence works. What we have done is decide a priori that karma is how existence works and then set out to look for confirmation of this axiom. This is a cognitive bias called confirmation bias. As noted, Mercier and Sperber (2011) have argued that confirmation bias is ubiquitous in argument production, but seldom found in argument assessment, unless one already agrees with the argument. So, getting a believer to produce or critique arguments for karma is pointless. To get rational, objective feedback, one must get feedback from a non-believer, but not one who is so hostile to the belief that they cannot think rationally about it.

Looking for confirmation of our beliefs is not rational, because of the Black Swan Effect: no matter how many times we confirm our view, some evidence may still come along that falsifies it. Thus, a tenet of rationality is that one ought to seek falsification rather than confirmation, which for most people is counterintuitive. Most of us, set a problem in which we have a choice between seeking confirmation and seeking falsification of a belief, opt to seek confirmation. We look for evidence to support our argument. We only try to disconfirm arguments produced by others. This is an important observation: what is intuitive is not necessarily rational, and vice versa.
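
The standard laboratory demonstration of this preference is Wason's 2-4-6 task (my example; the essay does not cite it). The hidden rule is broader than the rule people hypothesise, so confirming tests can never expose the error. A minimal sketch:

```python
# Wason's 2-4-6 task in miniature: the hidden rule is broader than the
# hypothesis, so probes that CONFIRM the hypothesis can never expose it.

def hidden_rule(triple):
    a, b, c = triple
    return a < b < c                     # the actual rule: any ascending triple

def hypothesis(triple):
    a, b, c = triple
    return b - a == 2 and c - b == 2     # the believed rule: ascending by twos

# Confirmation strategy: only test triples the hypothesis says are valid.
confirming_probes = [(2, 4, 6), (10, 12, 14), (1, 3, 5)]
print(all(hidden_rule(t) for t in confirming_probes))   # True -- no challenge

# Falsification strategy: test a triple the hypothesis says is INVALID.
probe = (1, 2, 47)
print(hypothesis(probe), hidden_rule(probe))            # False True -- refuted
```

Only the probe chosen to fail under the hypothesis reveals that the hypothesis was too narrow; confirming probes could be run forever without detecting the mistake.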

If our in-group is Buddhist, then our argument is typically with out-group non-Buddhists. Within the group we tend to confirm and reinforce each other's views (which is not compatible with reason), while outside it we argue against others' views (which is compatible with reason). This suggests that most of the time in-group beliefs won't be compatible with reason; and that reasoning about our views can only happen with those who disagree with us. This failure of groups to produce an internal critique can lead to groupthink, another cognitive bias, in which the desire for harmony or conformity overwhelms reasoning in a group. In this sense, the wide range of incompatible views on karma in the Triratna Buddhist Order is a good thing, or it would be, if people were willing to argue about their views (there is some resistance to arguing with me about views, I find).

One of the buzz-words of the day is echo-chamber. This compound word was coined to refer to the mistaken view that our social media environments tend to restrict our exposure to dissenting political views, so that we end up only seeing and hearing views which seem to confirm our own. Yang et al (2017) showed that, in fact, social media exposes people to more dissent rather than less. Making deductions about the world based on widely held religious premises, however, is only ever going to result in our conclusions echoing our existing beliefs.

Deduction is a useful tool for reasoning, but it has rather severe limitations. When it comes to reasoning about beliefs, that limitation becomes catastrophic if our articles of faith are taken as axiomatic. Since articles of faith are treated as axioms and not themselves arrived at by reasoning, the danger is that our conclusions simply reflect our existing beliefs. Logic and reason are not always the same thing. Deductions logically derived from irrational axioms can and will be irrational.

So in this sense I disagree with the Natural Theology crowd that deduction enables us to reason about belief. Deduction is completely dependent on what we believe.


2.2 Induction

We use inductive reasoning to arrive at generalisations about experience and to form rules of thumb for dealing with similar experiences. Generalisations are possible because experience has patterns. Experience has patterns because the world evolves in regular ways, our minds operate in regular ways, and experience is a function of both. A lot of induction relies on the general principle that the future will most likely be like the past. Probabilities are an important form of generalisation about the future.
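
One classical way of turning "the future will most likely be like the past" into a number is Laplace's rule of succession. This is my addition as an illustration, not something the essay invokes:

```python
def rule_of_succession(successes: int, trials: int) -> float:
    """Laplace's estimate of P(next success) after s successes in n trials."""
    return (successes + 1) / (trials + 2)

# Having seen the sun rise 10,000 mornings in a row, the estimate for
# tomorrow is high, but it never reaches certainty -- the black swan
# always keeps some probability mass.
print(rule_of_succession(10_000, 10_000))  # ~0.9999
```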

For a generalisation to be valid does not require that all experiences confirm it. It's not like a law of nature. If 80% of experiences fall into known categories, then it can be more efficient to proceed as if they all will and be alert for exceptions, than to have to assess each experience individually. It's like a compression algorithm that only notes the parts of a video that are changing. There's no need to compute the whole picture every time if large chunks of the background are not changing. Of course, if we don't notice the exceptions, then we are led into error by generalisations.
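
The compression analogy can be made literal. Here is a minimal delta-encoding sketch (mine, with toy "frames"): only the pixels that changed since the last frame are stored.

```python
def delta_encode(prev, frame):
    """Record only the pixels that differ from the previous frame."""
    return {i: v for i, (p, v) in enumerate(zip(prev, frame)) if p != v}

def delta_decode(prev, delta):
    """Rebuild the full frame from the previous frame plus the changes."""
    return [delta.get(i, p) for i, p in enumerate(prev)]

frame1 = [0, 0, 0, 0, 0, 0, 0, 0]
frame2 = [0, 0, 9, 0, 0, 0, 7, 0]   # only two pixels changed

delta = delta_encode(frame1, frame2)
print(delta)                                  # {2: 9, 6: 7} -- two entries, not eight
assert delta_decode(frame1, delta) == frame2
```

The efficiency depends entirely on catching the exceptions; miss one changed pixel and every later frame is reconstructed wrongly, which is the analogue of being led into error by a generalisation.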

One thing I want to flag here is the problem of generalising from a single or rare experience. A made-up example might be that I try mint-and-licorice ice-cream and conclude that I do not like ice-cream. This is an over-generalisation, because mint and licorice is an unusual flavour and there are more conventional choices that I probably would like.

Another problem is when we combine this with confirmation bias. For example, astrology may seem to make sense if we generalise from the predictions that confirm our belief and ignore those that do not. A random prediction is likely to be right some of the time. By filtering out all the times the prediction is wrong, we come to the conclusion that astrology is generally pretty accurate. This aggressive filtering of experience is not only possible, but very likely to happen. And it explains the persistent popularity of irrational claims like those made by astrologers.
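
A back-of-the-envelope simulation (my sketch; the "forgetting rate" is invented for the example) shows how strong this filtering effect can be:

```python
import random

random.seed(1)
N = 10_000
# Each "prediction" is a coin flip: right about half the time.
hits = [random.random() < 0.5 for _ in range(N)]

# Confirmation bias as a memory filter: keep every hit, forget 90% of misses.
remembered = [h for h in hits if h or random.random() < 0.1]

print(sum(hits) / len(hits))              # ~0.5  -- actual accuracy
print(sum(remembered) / len(remembered))  # ~0.9  -- accuracy as remembered
```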

These kinds of generalisations from one experience, or just a few experiences, are extremely prone to cognitive bias. And many of our experiences in meditation are unique or unusual. But even if they are not, they tend only to coincide with being in an altered state of consciousness. Thus, we ought to be wary of generalising on the basis of them. However, Buddhists often rush to the conclusion that is supported by the norms of the group: a vision in meditation is not a hallucination, but a confirmation of the transcendent reality that our latter-day Buddhist metaphysics describes.

Another problem we Buddhists face is the premise that what applies in meditation is applicable everywhere; i.e., that features of our awareness that we identify in the altered states achieved in meditation are general features of awareness or, indeed, general features of reality. If we stop to consider this, it is quite a bizarre inference to make. The effort required to get into the altered state is considerable, and the state itself is qualitatively different from any other kind of experience. The very fact that I can describe these as altered states reflects that they are unusual rather than common. Why would we choose to infer knowledge about reality on the basis of unusual experiences instead of usual experiences? Since it is common to be completely absorbed in these states and completely cut off from sensory perceptions of the world, why would we infer that they reflect the world more accurately?

Inductive reasoning is even more susceptible to bias than deductive reasoning, or at least susceptible to more kinds of bias that skew the conclusions we come to. One of the common biases is to see ourselves as less biased than other people (the bias blind spot). Wikipedia has a list of almost 200 cognitive biases, most of which apply to the process of inductive reasoning. In his Cognitive Bias Cheat Sheet, Buster Benson has provided a more structured overview of cognitive bias that I find quite useful. Biases creep in when we have too much information, or too little, or when we are trying to arrive at an answer via a shortcut (which we do most of the time). Too much information creates the secondary problem of what to remember.

So again, induction is not a very reliable way of reasoning about belief. Beliefs themselves create cognitive biases (like confirmation bias) that distort the reasoning process in favour of what we already believe. In fact, most of the time we arrive at a belief or a decision and then, and only then, look for reasons to retrospectively justify it. So, when you ask a Buddhist why they believe in karma and the answer is, "Because it seems intuitive", the first suspicion must be that it seems intuitive because it's what that person believes. Belief itself makes the belief seem intuitive, and thus we will tend to infer that our belief is rational.

We have one more approach to reasoning. Is it any better?


2.3 Abduction

Abduction is the process by which we infer explanations from observations, and use these explanations to make predictions. Where deduction proceeds to a certain conclusion, and induction to rules of thumb, abduction seeks to produce the best explanation given some facts that do not allow for a certain conclusion. Whenever we "jump to a conclusion" we are using abduction. And in this lies the downside of abductive reasoning. Many of our shortcuts are motivated by cognitive bias or logical fallacy. So if we hoped for a fool-proof approach, we aren't going to find it in abduction.

One of the most famous statements of abduction is Sherlock Holmes's dictum:
“When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
Eliminating impossible explanations is an important process in abductive reasoning. One of the reasons philosophers frequently refer to Occam's Razor (aka the principle of parsimony) is that it places a useful limit on how we should go about the process of producing explanations. Attributed to William of Ockham (c. 1287–1347), though the principle predates him, Occam's razor takes many forms, but the basic form is that the explanation which makes the fewest assumptions is best. This is sometimes over-simplified and presented as "the simplest explanation is best". However, this version is not very useful. Sometimes a complex explanation is best because it makes fewer assumptions.

A great example of this is the reasoning behind Jan Nattier's argument for the Heart Sutra being composed in China. The larger Prajñāpāramitā text (LPT) is taken from India to China via Central Asia, where it is translated, extracted, and framed to create the Heart Sutra, whereupon it is exported back to India, back-translated into Sanskrit, lengthened, and then re-transmitted to China. This is by no means a simple scenario. But it makes very few assumptions compared with other possible explanations of the available evidence. Red Pine, for example, has to assume that in addition to the Sanskrit and Chinese versions of the texts that we still possess, a separate large Prajñāpāramitā text with different wording was composed, transmitted to China, and then lost in both Sanskrit and Chinese, leaving only the Heart Sutra as a record of it. But this is hardly credible.
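
Occam's razor in this form is easy to caricature in code. A toy rendering (mine; the assumption lists are rough paraphrases of the two positions as described above) counts assumptions rather than measuring how simple the story sounds:

```python
# Parsimony as "fewest assumptions", not "shortest story".
explanations = {
    "Chinese extraction, later back-translated (Nattier)": [
        "a Chinese redactor extracted and framed an LPT passage",
        "the extract was later back-translated into Sanskrit",
    ],
    "separate lost Indian source (Red Pine)": [
        "a second LPT with different wording was composed",
        "it was transmitted to China",
        "it was lost in Sanskrit",
        "it was also lost in Chinese",
        "only the Heart Sutra preserves any trace of it",
    ],
}

for name, assumptions in sorted(explanations.items(), key=lambda kv: len(kv[1])):
    print(len(assumptions), name)
# The convoluted-sounding scenario wins because it assumes less.
```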


Also, some assumptions are more likely than others. If our explanation of events requires a miracle, then Hume's comments on miracles become pertinent:
"...no testimony is sufficient to establish a miracle, unless the testimony be of a kind, that its falsehood would be more miraculous, than the fact, which it endeavours to establish..."
Science at its best is the epitome of abduction at work. And science at its best also involves a process which we have only mentioned in passing, i.e., comparing notes. Since this is often left out of accounts of reasoning, I want to highlight it here. But first a word about salience.


2.4 Salience

Clearly, reasoning has some limitations. On our own, we may not reason at all, but take some shortcut or invoke a rule of thumb instead. Humans are poor at solo reasoning tasks because we fall victim to many cognitive biases and logical fallacies. However, even if we were reasoning competently, there are many cases in which, given the same information, two people would come up with entirely different conclusions, generalisations, and explanations. Nowhere is this more obvious than in the case of politics.

The Political Compass website assesses political affiliation on two axes: progressive-conservative (or economic left-right) and authoritarian-libertarian. But no matter which quadrant you end up in, anyone who takes the test has access to pretty much the same information. The differences come about because of salience. George Lakoff's (1995) explanations of the different underlying political metaphors of our divisions are very salient to my understanding of politics. They relate to the kind of family that feels right for us.

For example, conservatives conceptualise the nation as a self-contained family with a strong father-figure in charge, who is strict, strong, and calm. Part of being self-contained is always paying off debts promptly. Happiness is found by everyone playing their role and following the rules. Children are taught obedience and self-reliance. Liberals, by contrast, see the nation as a family of two equal parents in which everyone is cared for and loved. Happiness is found by playing positive roles in the community and work. Children are taught to love their parents and to care for themselves and others.

These metaphors underpin reasoning in the political domain. The conclusions, generalisations, and explanations produced by political reasoning are powerfully shaped to fit. So, given, to take a topical example, a shortfall of funding in the National Health Service, conservatives will tend to want to cut costs and balance the books, whereas liberals will want to ensure that everyone is looked after.

We used to pay a lot more in tax, and the government had a number of income-generating assets and businesses. But Neoliberals (really conservative libertarians) forced the government to lower taxes and to sell off assets, because of an ideological commitment to small government and minimal government involvement in individuals' lives. As Ronald Reagan framed it, it was about relieving the tax burden, tax here being a burden imposed on the individual, rather than a way of the community taking care of its own. But with low taxes and the elimination of other sources of income comes a crisis in funding the health service. And the health service conflicts with conservative values in fundamental ways: it does not encourage self-reliance, people get something for nothing (creating a debt that cannot be paid), and people who injure themselves through carelessness or poor lifestyle choices get the same treatment as those who are careful and who make good lifestyle choices.

Here we have different people reasoning about the same situation, with the same information, but coming to very different conclusions, making very different generalisations, and explaining the situation using very different principles.

I have often written about this: when we assess information, we may go through a cognitive procedure to test its veracity, but we also register how important the information is to us emotionally. This importance is what I call salience. The emotionality of salience is what enables us to have a "gut reaction" to news or to make "intuitive" choices.

Salience is also an aggressive filter on what we consider when making decisions. Given a wealth of information, we all filter it. Dealing with too much information is a major source of cognitive bias. If we always had to evaluate every option, we would be unable to make decisions at all. The solution to the problems of bias, error, and salience is comparing notes, to which I will now turn.


2.5 Comparing Notes

The evidence is that humans are extremely poor at solo reasoning tasks (Mercier and Sperber 2011). It's not very credible to assume that humans can reason things out on their own under normal circumstances.

In fact, we seem to have evolved reasoning in the context of decision making in small groups. And this means that reasoning needs to be reframed as a group activity. After all, we are social animals; we evolved to live in communal groups. This social aspect of human beings is all too often simply left out of accounts of how our minds work. When we look at other social mammals, many social relations are fully functional without language or abstract reasoning. This has led me to suggest that, although we typically see the hierarchy of science as going from individual psychology to collective sociology, sociology is in fact more fundamental and so profoundly shapes our psyches that it ought to be the other way around; i.e., out of biology emerges sociology, which shapes the minds of individuals. Indeed, we are so attuned to our social environment that "individual psychology" may be an oxymoron.

Some years ago Sean Carroll's Twitter bio read,
"I'm sure if the blind guys had compared notes they'd have figured out it was an elephant."
For me this captured something important; not only about our search for knowledge, but the stories we tell about our search for knowledge. Comparing notes (in the form of literally comparing notes, but also of presenting results in seminars and conferences where they can be discussed, and in formal peer review prior to publication) is one of the things scientists do that makes science an effective knowledge seeking activity. Knowledge seeking is typically a collective activity. "Science is sold as facts and it's not, it's process. And that process is mainly arguing." (Edwards 2017)

It is precisely when we do not compare notes that we are most at risk of falling into some logical fallacy or cognitive bias. By comparing notes and, well, arguing about what they mean, we are more likely to be rational. Of course, groups are also prone to cognitive bias, so even then we must proceed with caution. For example, simply comparing notes in a naive way can be unproductive. We can uncritically accept the other's conclusions, generalisations, and explanations because they support our own.

When we compare notes uncritically we get a consensus reality. For example, part of consensus reality is the supernatural. If I have an experience and describe it as supernatural to someone who has come to a similar conclusion about some experience they have had, we may both reinforce the delusion of the other. Critical comparing of notes leads to what I call a collective empirical realism. In this approach there can be no unquestioned axioms. All axioms are up for discussion and criticism. Other people who participate in the comparing of notes critique methods as well as conclusions. By being sceptical about axioms, methods, and results, we can begin to eliminate the illogical and irrational elements that inevitably creep into our narratives, along with the other purely subjective elements.

What science does, that other forms of knowledge seeking do not, is to look at why different observers come to different conclusions or explanations. Scientists try to get at the underlying principles of our beliefs to see which are most consistent with reality. Hence, for the first few centuries of science, the emphasis was on reductionism. Given the human propensity for bias and error, we had to really get clear on the underlying substance and principles under discussion. And note that in the general population bias and error are still dominant forces. Supernatural beliefs are de rigueur, for example. Even within science, bias and error cannot be eliminated except by retrospectively subjecting results to collective criticism and weeding. Wrong results and claims are published all the time. But the approach of science means that before a result can be widely accepted it must be replicated and shown to fit in with the system of knowledge that has developed.


3. Compatible With Reason

The concept of reason is by no means straightforward. When we say that our beliefs are compatible with reason we are making some big assumptions. We assume we are capable of reasoning and capable of understanding when some belief of ours is compatible with reason. Looked at in the cold light of day these are doubtful assumptions. Our beliefs are much more likely to be unreflective assumptions based on bias and fallacy. Which may explain why our expectations and intentions are so very often thwarted.

Clearly, if humans are poor at reasoning, then a lot of what is said about reason is bunk. If you look up popular quotes on the subject, reason is variously supposed to be what separates us from other animals (we call ourselves Homo sapiens); our highest faculty; a kind of pure and abstract virtue; the quality that helps us triumph over nature; etc. But this is all bunk. Most of us don't reason, but instead rely on irrational rules of thumb and shortcuts. It's not that we are incapable of reason. We are certainly capable, but we have other means of arriving at decisions that we prefer to use instead. Yes, we can, if called upon, give reasons for our beliefs and decisions, but the overwhelming likelihood is that we did not use reason when arriving at them. For most of us, the best we manage on a day-to-day basis is post hoc rationalisation of beliefs we already hold or decisions we've already made without the benefit of reason.

Reason and reasoning have been widely misunderstood throughout history. For the most part they are still widely misunderstood. What is called "reason" is often something else entirely. All too often, it is simply ideology or some kind of Freudian wish-fulfilment fantasy. Those people who come across as severely rational are almost always simply good at hiding the emotional basis of their decision making and good at persuading people. Most top politicians fall into this category: irrationally committed to an ideology, emotionally self-contained (and thus impervious to criticism), and highly persuasive. All qualities we might also associate with psychopaths.

If our religious identity resides in adopting certain beliefs, and that identity is important to us, then our ability to think clearly about belief is severely compromised. If we have made great sacrifices in our religious life—the extreme example is refraining from sexual activity—then our reasoning is always motivated towards confirming the value of our sacrifice. Which is why monks are such vocal apologists for Buddhism. Others will be inspired by such sacrifices and also want to confirm the value of them, since they get status by association. Outsiders can never appreciate the true meaning or significance of religious identity and their opinions hardly matter. Thus, religious belief becomes a self-sustaining process within a religious group.

We can see that compatible with reason is a very high bar to reach. Having explored the general issues surrounding reason and reasoning, in Part II of this essay I'll begin to look at the reasoning behind morality, such as it is, because karma is the Buddhist explanation for morality. Part III will focus on assessing whether a particular version of karma doctrine is compatible with reason.


~~oOo~~

Continues | Part II | Part III |


Bibliography

Attwood, Jayarava. (2014). Escaping the Inescapable: Changes in Buddhist Karma. Journal of Buddhist Ethics, 21, 503-535. http://blogs.dickinson.edu/buddhistethics/2014/06/04/changes-in-buddhist-karma

Barrett, Justin L. (2004). Why Would Anyone Believe in God? Altamira Press.

Edwards, Tamsin. (2017). Inside Science [Interview on explaining science]. BBC Radio 4. 12 Jan 2017.

Kalupahana, David J. (1986) Nāgārjuna, The Philosophy of the Middle Way: Mūlamadhyamakakārikā. SUNY.

Lakoff, George (1995) Metaphor, Morality, and Politics, Or, Why Conservatives Have Left Liberals In the Dust. http://www.wwcd.org/issues/Lakoff.html

Mercier, Hugo & Sperber, Dan. (2011). Why Do Humans Reason? Arguments for an Argumentative Theory. Behavioral and Brain Sciences, 34: 57–111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.

Subhuti (2007) There are Limits or Buddhism with Beliefs. Privately Circulated. [This essay is not included on Subhuti's Website, nor is it included in the collection of his essays entitled, Seven Papers.]

Yang, J. H., Barnidge, M. & Rojas, H. (2017). The politics of "Unfriending": User filtration in response to political disagreement on social media. Computers in Human Behavior, 70: 22–29.