10 May 2013

An Argumentative Theory of Reason

This post is a précis and review of:
Mercier, Hugo & Sperber, Dan. 'Why Do Humans Reason? Arguments for an Argumentative Theory.' Behavioral and Brain Sciences (2011) 34: 57-111. doi:10.1017/S0140525X10000968. Available from Dan Sperber's website.
I'm making these notes and observations in order to better understand a new idea that I find intriguing. I have recently argued that the legacy of philosophical thought may be obscuring what is actually going on in our minds by imposing pre-scientific or non-scientific conceptual frameworks over the subject. I see this rethinking of reason as a case in point. 

In this long article, Mercier & Sperber's contribution occupies pp. 57-74. What follows are comments from other scholars titled "Open Peer Commentary" (pp. 74-101) and 10 pages of bibliography. The addition of commentary by peers (as opposed to silent peer review pre-publication) is an interesting approach. Non-specialists are given an opportunity to see how specialists view the thesis of the article.

The article begins with a few general remarks about reasoning. Since at least the Enlightenment it has been assumed that reasoning is a way to discover truth through applying logic. As another reviewer puts it:
Almost all classical philosophy—and nowadays, the “critical thinking” we in higher education tout so automatically—rests on the unexamined idea that reasoning is about individuals examining their unexamined beliefs in order to discover the truth. (The Chronicle of Higher Education. 2011)
Over a considerable period now, tests of reasoning capability have documented the simple but troubling fact that people are not very good at discovering the truth through reasoning. We fail at simple logical tasks, commit "egregious mistakes in probabilistic reasoning", and are subject to "sundry irrational biases in decision making". Our generally poor reasoning is so well established that it hardly needs insisting on. Wikipedia has a wonderfully long list of logical fallacies and an equally long list of cognitive biases, though of course Mercier & Sperber cite the academic literature which has formally documented these errors. The faculty ostensibly designed to help us discover the truth more often than not leads us to falsehood.

One thing to draw attention to, which is almost buried on p. 71, is that "demonstrations that reasoning leads to errors are much more publishable than reports of its success." Thus all the results cited in the article may be accurate and yet still reflect a bias in the literature. The authors attempt to address this in their conclusions, but if you're reading the article (or this summary) it is something to keep in mind.

However, given that there is plenty of evidence that reason leads us to false conclusions, what is the point of reason? Why did we evolve reason if it's mostly worse than useless? The problem may well lie in our assumptions about what reason is and does. The radical thesis of this article is that we do not reason in order to find or establish truth, but that we reason in order to argue. The argument is that, viewed in its proper context--social arguments--reason works very well.

It has long been known that there appear to be two mental processes for reaching conclusions: system 1 (intuition), in which we are not aware of the process; and system 2 (reasoning), in which we are aware of the process. Mercier & Sperber outline a variation on this. Inference is a process in which representational output follows representational input; a process which augments and corrects information. Evolutionary approaches point to multiple inferential processes which work unconsciously in different domains of knowledge. Intuitive beliefs arise from 'sub-personal' intuitive inferential processes. Reflective beliefs arise from conscious inference, i.e. reasoning proper:
"What characterises reasoning proper is indeed the awareness not just of a conclusion but of an argument that justifies accepting that conclusion." (58).
That is to say, we accept conclusions on the basis of arguments. "All arguments must ultimately be grounded in intuitive judgements that given conclusions follow from given premises." (59) The arguments which provide the justification are themselves the product of a system 1 sub-personal inferential system. Thus even though we may reach a conclusion using reason proper, our arguments for accepting the conclusion are selected by intuition.

What this suggests to the authors is that reasoning is best adapted, not for truth seeking, but for winning arguments! They argue that this is its "main function" (60), which is to say the reason we evolved the faculty. Furthermore, reasoning helps to make communication more reliable because arguments put forward for a proposition may be weak or strong, and counter-arguments expose this. Reasoning used in dialogue helps to ensure communication is honest (hence, I suppose, we intuit that it leads towards truth - though truthfulness and Truth are different).

Of course this is a counterintuitive claim, and thus strong arguments must be evinced in its favour. Working with this idea is itself a test of the idea. Anticipating this, the authors propose several features which reasoning ought to have if it evolved for the purpose of argumentation.
  1. It ought to help produce and evaluate arguments.
  2. It ought to exhibit strong confirmation bias.
  3. It ought to aim at convincing others rather than arriving at the best decision.
The authors set out to show that these qualities are indeed prevalent in reasoning by citing a huge amount of evidence from the literature on the study of reasoning. This is where the peer evaluation provides an important perspective. If we are not familiar with the literature being cited, with its methods and conclusions, it is difficult to judge the competence of the authors and the soundness of their conclusions. Even so, most of us have to take quite a lot of what is said on trust. That it intuitively seems right is no reason to believe it.


1. Producing and Evaluating Arguments

On the first point we have already mentioned that reasoning is poor. However, because we see reasoning as an abstract faculty, testing reasoning is often done out of context. In studies of reasoning in pursuit of an argument, or when trying to persuade someone, our reasoning powers improve dramatically. We are much more sensitive to logical fallacies when evaluating a proposition than when doing an abstract task. When we hear a weak argument we are much less likely to be convinced by it. In addition, people will often settle for providing weak arguments if they are not challenged to come up with something better. If an experimenter testing someone's ability to construct an argument offers no challenge, there is no motivation to pursue a stronger line of reasoning. This changes when challenges are offered. Reasoning only seems to really kick in when there is disagreement. The effect is even clearer in group settings. For a group to accept an argument requires that everyone be convinced, or at least convinced that disagreeing is not in their interest. Our ability to reason well is strongly enhanced in these settings, a phenomenon known as the assembly bonus effect.
"To sum up, people can be skilled arguers, producing and evaluating arguments felicitously. This good performance stands in sharp contrast with the abysmal results found in other, nonargumentative settings, a contrast made clear by the comparison between individual and group performance." (62) 
On the first point, then, the literature on reasoning appears to confirm the idea that reason helps to produce and evaluate arguments. This does not prove that reasoning evolved for this purpose or that arguing is the "main function" of reasoning, but it does show that reasoning works a great deal better in this setting than in the abstract.


2. Confirmation Bias

Confirmation bias is the most widely studied of all the cognitive biases and "it seems that everybody is affected to some degree, irrespective of factors like general intelligence or open-mindedness." (63). The authors say that in their model of reasoning confirmation bias is a feature.

Confirmation bias has been used in two different ways:
  • Where we only seek arguments that support our own conclusion and ignore counter-arguments because we are trying to persuade others of our view. 
  • Where we test our own existing belief by looking only for positive instances. For example, if I think I left my keys in my jacket pocket it makes more sense to look in my jacket pocket than in my trouser pockets. "This is just trusting use of our beliefs, not a confirmation bias." (64) Later they call this "a sound heuristic" rather than a bias.
Thus the authors focus on the first situation, since they don't see the second as a genuine case of confirmation bias. The theory being proposed makes three broad predictions about confirmation bias:
  1. It should only occur in argumentative situations
  2. It should only occur in the production of arguments
  3. It is a bias only in favour of confirming one's own claims with a complementary bias against opposing claims or counter-arguments. 
I confess that what follows seems to be a bit disconnected from these predictions. The evidence cited seems to support the predictions, but they are not explicitly discussed. This seems to be a structural fault in the article that an editor should have picked up on. Having proposed three predictions, they ought to have dealt with them more specifically.


In the Wason rule discovery task participants are presented with 3 numbers. They are told that the experimenter has used a rule to generate them and are asked to guess that rule. They are able to test their hypothesis by offering another triplet of numbers, and the experimenter will say whether or not it conforms to the rule. The overwhelming majority look for confirmation rather than trying to falsify their hypothesis. However, the authors take this to be a sound heuristic rather than confirmation bias. The approach remains the same even when the participants are instructed to attempt to falsify their hypothesis. However, if the hypothesis comes from another person, or from a weaker member of a group, then participants are much more likely to attempt to falsify it and more ready to abandon it in favour of another. "Thus falsification is accessible provided that the situation encourages participants to argue against a hypothesis that is not their own." (64)


A similar effect is noted in the Wason selection task (the link enables you to participate in a version of this task). The participant is given cards marked with numbers and letters which are paired up on opposite sides of the card according to rules. The participant is given a rule and asked which cards to turn over in order to test the rule. If the rule is phrased positively participants seek to confirm it, and if negatively, to falsify it. Again this is an example of a "sound heuristic" rather than confirmation bias. However "Once the participant's attention has been drawn to some of the cards, and they have arrived at an intuitive answer to the question, reasoning is used not to evaluate and correct their initial intuition but to find justifications for it. This is genuine confirmation bias." (64)
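To make the logic of the task concrete, here is a minimal worked sketch in standard notation. It uses the familiar textbook version of the materials (vowels and even numbers), not necessarily the exact stimuli of the studies Mercier & Sperber cite:

\[
\text{Rule: } P \rightarrow Q \qquad \text{is falsified only by a case of } P \wedge \neg Q
\]
% Example rule: "if a card has a vowel on one side, it has an even number on the other";
% visible faces: E, K, 4, 7. Only E (a P card that might conceal not-Q) and 7 (a not-Q card
% that might conceal P) can falsify the rule. The 4 card (a Q card) can only confirm, never
% falsify; in the standard finding participants pick E alone, or E and 4, and neglect the 7.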

One of the key observations the authors make is that participants in studies must be motivated to falsify. They draw out this conclusion by looking at syllogisms, e.g. No C are B; All B are A; therefore some A are not C. Apparently the success rate in dealing with such syllogisms is about 10%. What seems to happen is that people go with their initial intuitive conclusion and do not take the time to test it by looking for counter-examples. Mercier & Sperber argue that this is simply because they are not motivated to do so. On the other hand, if people are trying to prove something wrong--if for example we ask them to consider a statement like "all fish are trout"--they readily find ways to disprove it. Participants spend an equal amount of time on the different tasks.
"If they have arrived at the conclusion themselves, or if they agree with it they try to confirm it. If they disagree with it then they try to prove it wrong." (65) 
But doesn't confirmation bias lead to poor conclusions? Isn't this why we criticise it as faulty reasoning? It leads to conservatism in science, for example, and to the dreaded groupthink. Mercier & Sperber argue that confirmation bias in these cases is problematic because it is being used outside its "normal context: that is the resolution of a disagreement through discussion." (65) When used in this context confirmation bias works to produce the strongest, most persuasive arguments. Scholarship at its best ought to be like this.

The relationship of the most persuasive argument to truth is debatable, but the authors suppose that the truth will emerge if truth is what the disagreement is about. If each person presents their best arguments, and the group evaluates them, then this would seem to be an advantageous way of arriving at the best solution the group is capable of. Challenging conclusions leads people to improve their arguments, and thus a small group may produce a better conclusion than the best individual in the group operating alone. Thus:
confirmation bias is a feature, not a bug
This is the result that seems to have most captured the imagination of the reading public. However, the feature only works well in the context of a small group of mildly dissenting (not polarised) members. The individual, the group with no dissent, and the polarised group with implacable dissent are all at a distinct disadvantage in reasoning! Confirmation bias works well for the production of arguments, but not so well for their evaluation, though the latter seems less of a problem.

Does this fulfil the three broad predictions made about confirmation bias? We have seen that confirmation bias is not triggered unless there is a need to defend a claim (1). Confirmation bias does appear to be more prevalent when producing arguments than in evaluating them, and we do tend to argue for our own claims and against the claims of others (2 & 3). However, the predictions included the word only, and I'm not sure that they have, or could have, demonstrated the exclusiveness of their claims. More evidence emerges in the next section, which deals (rather more obliquely) with convincing others.


3. Convincing Others


Proactive Reasoning in Belief Formation


The authors' thesis is that reasoning ought to aim at convincing others rather than arriving at the best decision. This section discusses the possibility that, while we do tend to favour our own argument, we may also anticipate objections. The latter is said to be the mark of a good scholar, though the article is looking at reasoning more generally. There is an interesting distinction here between beliefs we expect to be challenged and those which are not:
"While we think most of our beliefs--to the extent that we think about them at all--not as beliefs but just as pieces of knowledge, we are also aware that some of them are unlikely to be universally shared, or to be accepted on trust just because we express them. When we pay attention to the contentious nature of these beliefs we typically think of them as opinions." (66) 
And knowing that our opinions might be challenged, we may be motivated to think about counter-arguments and be ready for them with our own arguments. This is known as motivated reasoning. Interestingly, from my point of view (because I think I have experienced this), one of the examples they give is: "Reviewers fall prey to motivated reasoning and look for flaws in a paper in order to justify its rejection when they don't agree with its conclusions." (66).

The point being that from the authors' perspective it seems that what people are doing in this situation is not seeking truth, but only seeking to justify an opinion.
"All these experiments demonstrate that people sometimes look for reasons to justify  From an argumentative perspective, they do this not to convince themselves of the truth of their opinion but to be ready to meet the challenges of others." (66)
If we approach a discussion or a decision with an opinion, then our goal in evaluating another's argument is often not to find the truth, but to show that the argument is wrong. The goal is argumentative rather than epistemic (seeking knowledge). We will comb through an argument looking for flaws: finding fault with the study design or the use of statistics, for example, or pointing out logical fallacies. Thus although there are benefits to confirmation bias in the production of arguments, confirmation bias in the evaluation of arguments can be a serious problem: it may lead to nitpicking, polarisation, or strengthening of existing polarisation.

Two more effects of motivated reasoning are particularly relevant to my interests: belief perseverance and violation of moral norms. The phenomenon of belief perseverance (holding onto a belief despite evidence that it is ill-founded) is extremely common in religious settings. The argumentative theory sees belief perseverance as a form of motivated reasoning: when presented with counter-arguments the believer focuses on finding fault, and actively disregards information which runs counter to the belief. If the counter-argument is particularly unconvincing--"not credible"--it can lead to further polarisation. And in the moral sphere, reasoning is often used to come up with justifications for breaking moral precepts. Here reasoning can clearly be seen to be in service of argument rather than knowledge or truth.

Thus in many cases reasoning is used precisely to convince others rather than to arrive at the best decision, even when this results in poor decisions or immoral behaviour. We use reason to find justifications for our intuitive beliefs or opinions.


Proactive Reasoning in Decision Making

The previous section was mainly concerned with defending opinions, while this final section looks at how reason relates to decisions and actions more broadly. On the classical view we expect reasoning to help us make better decisions. But this turns out not to be the case. Indeed, in experiments, people who spend time reasoning about their decisions reliably make choices that are less consistent with their own previously stated attitudes. They also get worse at predicting the results of basketball games. "People who think too much are also less likely to understand other people's behavior." (69). A warning note is sounded here: some of the studies which showed that intuitive decisions were always better than thought-out decisions have not been replicated. So Malcolm Gladwell's popularisation of this idea in his book Blink may have over-stated the case. However, the evidence suggests that reasoning does not necessarily confer an advantage. Which, to my mind, is in line with what I would expect.

The argumentative theory suggests that reasoning should have most influence where our intuitions are weak - where we are not trying to justify a pre-formed opinion. One can then at least defend a choice if it proves to be unsatisfactory later. In line with research dating back to the 1980s this is called reason-based choice. Reason-based choice is able to explain a number of unsound uses of reasoning noted by social psychologists: the disjunction effect, the sunk-cost fallacy, framing effects, and preference inversion.

The connecting factor is the desire to justify a choice or decision. We can see this in action in many countries today with the insistence on fiscal austerity as a response to economic crisis. Evidence is mounting that cutting government spending only causes further harm, but many governments remain committed to it. As long as they can produce arguments for the idea, they refuse to consider arguments against.


Conclusions

Some important contextualising remarks are made in the concluding section, many of which are very optimistic about reasoning. Reasoning as understood here makes human communication more reliable and more potent.
"Reasoning can lead to poor outcomes not because humans are bad at it but because they systematically look for arguments to justify their beliefs or actions.... Human reasoning is not a profoundly flawed general mechanism: it is a remarkably efficient specialized device adapted to a certain type of social and cognitive interaction at which it excels" (72) 
The authors stress the social nature of reasoning. Generally speaking, it is groups of people that use reason to make progress, not individuals, though a small number of individuals are capable of being their own critics. Indeed the skill can be learned, though only with difficulty, and one only ever mitigates, never eliminates, the tendency towards justification. Thus, though confirmation bias seems inevitable in producing arguments, it is balanced out in evaluation by other people.
"To conclude, we note that the argumentative theory of reasoning should be congenial to those of us who enjoy spending endless hours debating ideas - but this, of course, is not an argument for (or against) the theory. (73)
~o~ 

Comments

It ought to come as no surprise that a faculty of a social ape evolved to function best in small groups. The puzzle is why we ever thought of the individual as capable of standing alone and apart from their peers. It's a conceit of Western thinking that, I think, is going to come under increasing attack.

This review is also sort of a follow-up to an earlier blog, Thinking it Through, sparked by a conversation with Elisa Freschi in comments on her blog post: Hindu-Christian and interreligious dialogue: has it any religious value? I think Mercier & Sperber raise some serious questions about this issue. Reasoning does not work well in polarised environments, and religious views tend to be mutually exclusive.

I think it's unlikely that we'll ever be able to say that we evolved x for the purposes of y except in a very general sense. Certainly eyes enable us to see, but it is simplistic to say that eyes evolved in order for us to see. We assume that evolution has endowed us with traits for a purpose, even when the purpose is unclear. And we observe that we have certain traits which serve to make us evolutionarily fit in some capacity. In this case the trait--reason--does not perform the function we have traditionally assigned to it. We are poor at discovering the truth through reasoning alone, and much of the time we are not even looking for it. Therefore we must look again at what reason does. This is what Mercier and Sperber have done. Whether their idea will stand the test of time remains to be seen. My intuitive response is that they have noticed something very important in this paper.

My own interest in decision making stems from the work of Antonio Damasio, particularly in Descartes' Error. My argument has been that decision making is unconscious and emotional and that reasons come afterwards. Mercier & Sperber are pursuing a similar idea at a different level. Damasio suggests that we make decisions using unconscious emotional responses to information and then justify our decision by finding arguments. And we can see the different parts of the process disrupted by brain injuries or abnormalities in specific locations. Thus neuroscience provides a confirmation of Mercier & Sperber's theory and correlates the behavioural observations with brain function. Neither cites the work of the other.

I presaged this review and my reading of this article in my essay The Myth of Subjectivity, when I claimed that objectivity emerges from scientists working together. Mercier & Sperber confirm my intuition about how science works, including my note that scientists love to prove each other wrong. However, they take it further and argue that this is the natural way that humans operate, and they emphasise the social, interactional nature of progress in any field. After all, even Einstein went in search of support for his intuitions about the speed of light; he did not set out to disprove them. Thus we must reassess the role of falsification in science. It may be asking too much for any individual to seek to falsify their own work; but we can rely on the scientific community to provide evaluation and especially disagreement!

Those wishing to comment on this review should read Mercier & Sperber first. There's not much point in simply arguing with me. I've done my best to represent the ideas in the article, but I may have missed nuances, or got things wrong - I'm new to this subject. By all means let us discuss the article, or correct errors I have made, but let's do it on the basis of having read the article in question. OK?

~~oOo~~


Other reading. 
My attention was drawn to this article by an economist! Edward Harrison pointed to The Reason We Reason by Jonah Lehrer in Wired Magazine. (Be sure to read the comment from Hugo Mercier which follows the article). Amongst Lehrer's useful links was one to the original article.
Hugo Mercier's website, particularly his account of the Argumentative Theory of Reasoning; and on Academia.edu.
Dan Sperber's website, and on  Academia.edu.

18 Aug 2016

Hugo Mercier has uploaded a new paper to academia.edu

The Argumentative Theory: Predictions and Empirical Evidence. A Social Turn in the Study of Higher Cognition. Trends in Cognitive Sciences, September 2016, 20(9): 689-700. http://dx.doi.org/10.1016/j.tics.2016.07.001

Abstract

The argumentative theory of reasoning suggests that the main function of reasoning is to exchange arguments with others. This theory explains key properties of reasoning. When reasoners produce arguments, they are biased and lazy, as can be expected if reasoning is a mechanism that aims at convincing others in interactive contexts. By contrast, reasoners are more objective and demanding when they evaluate arguments provided by others. This fundamental asymmetry between production and evaluation explains the effects of reasoning in different contexts: the more debate and conflict between opinions there is, the more argument evaluation prevails over argument production, resulting in better outcomes. Here I review how the argumentative theory of reasoning helps integrate a wide range of empirical findings in reasoning research.

03 May 2013

The Simile of the Raft

THERE are a small number of texts which are quoted again and again by Western Buddhists. Perhaps the most common is the so-called Kālāma Sutta and I have already spent several essays trying to demonstrate that it does not support the uses to which it is put (now combined into a booklet called Talking to the Kālāmas). Western Buddhists are simply mistaken about that text.

If the Kālāma Sutta is the most cited text then the Simile of the Raft from the Alagaddūpama Sutta (MN 22; M i.130) would be a good contender for second. This is the text that tells us that the Dharma is a raft to get us to the other side, where it must be abandoned. What follows is an extract from my translation and commentary on the Alagaddūpama Sutta which I hope to publish at some point.

The Simile of the Raft
(M i.134-5)

Bhikkhus I will teach you the simile of ‘the raft for the purpose of getting across’. Pay attention and listen to what I will say. 
“Yes, Bhante,” the bhikkhus replied. 
The Bhagavan said, "Suppose a man is following a stretch of road, and he comes to a great flood. The near bank is dangerous and frightening, the far bank is safe and secure. There is no boat or bridge to cross the water. He thinks, 'what if I were to gather grass, wood, sticks and leaves and, having woven them into a raft, I should swim, and safely cross to the other side?' So he makes a raft and crosses the flood. Then once he has crossed over to the far bank he thinks: 'this raft was very helpful to me in crossing the flood, what if I were to pick it up and carry it on my head or shoulders and go on my way?'". 
“What do you think, bhikkhus, is this man acting sensibly if he takes the raft with him?” 
“No, Bhante.” 
“What would the sensible thing to do be? Here bhikkhus, he has crossed over to the far bank he thinks: ‘this raft was very helpful to me in crossing the flood, now let me haul it up to dry ground, or sink it in the water, and be on my way.’ That, bhikkhus, is the sensible way to act towards the raft. Just so, bhikkhus, I have taught the Dhamma as like a raft for ferrying, for getting across. 
Bhikkhus, through understanding the Dhamma in terms of this parable, you should renounce dhammas, and more so non-dhammas.”
~o~


In this passage the Buddha certainly says that his dhamma is like a raft for crossing a river. And it is clear that having crossed a river, it is foolish to carry the raft along with you. However, this is a simile or, really, a parable, and the interpretation of what it means hinges on how we read the last sentence, which is the critical part of this passage and also the most difficult to understand.

Now, most people take this simile as saying that we don't need the Dhamma once we are enlightened, but this was not the Buddha's view, as I will show below. The Buddha never abandoned the Dhamma as a refuge. So we can exclude this meaning. To understand what the parable is pointing at with its comparison of crossing a river, we need to understand this last sentence.


dhamma and adhamma


The passage tells us that, having understood the Dhamma in terms of the parable of the raft, we ought to renounce dhammā and, more so, adhammā (both in the plural): dhammāpi vo pahātabbā pageva adhammā. The words dhammā and adhammā have evoked a variety of renderings.

Buddhaghosa (MA ii.109) says that ‘dhammā’ here means calm and insight (samatha-vipassanā), specifically craving for calm and insight, but this does not make a great deal of sense; someone on the other shore has no craving to give up and one cannot abandon the raft before getting across. No modern exegetes seem to accept Buddhaghosa’s suggested interpretation. Horner interpreted the phrase as suggesting that we give up morality at the further shore (see Keown 1992: 93). Horner’s (1954) translation is “you should get rid even of (right) mental objects, all the more of wrong ones.” (p.173-4). Gethin (2008) interprets dhammā/adhammā as “good practices and bad practices” (p.161), which echoes Buddhaghosa but is less specific. However, ‘practice’ is hardly a usual translation for dhamma (one might even say it is a mistranslation). Also, there is plenty of evidence that the Buddha did not give up practice after his awakening.

Ñānamoli and Bodhi (2001) opt for "teachings and things contrary to the teachings", which is at least a possible translation. I am doubtful about dhammā in the plural being interpreted in the sense of ‘teaching’ (I’ll return to this). Bodhi’s footnote (p. 1209, n.255) acknowledges the ambiguity and justifies their translation with a pious homily. Thanissaro (2010) does not translate the key terms: “you should let go even of Dhammas, to say nothing of non-Dhammas." The capitalisation implies that he understands ‘teachings’, since dhammā as ‘things’ is seldom capitalised, and he therefore has the same problem as Ñānamoli and Bodhi. Piya (2003) also avoids committing himself: “you should abandon even the dharmas, how much more that which is not dharmas” [sic]. He refers to MA and Bodhi’s footnote for an explanation, and thus seems to accept Ñānamoli and Bodhi's reading.

Richard Gombrich (1996) has weighed in with support for translating ‘teachings’ and ‘non-teachings’: “The Buddha concludes that his dhammā, his teachings are to be let go of, let alone adhammā. The occasion for this whole discourse is given by Ariṭṭha, who obstinately declared that he understood the Buddha’s teaching in a certain [wrong] sense.” (p.24). The argument that dhammā in the last sentence is not the dhamma referred to in the earlier parts of the passage Gombrich declares to be “sheer scholastic literalism” (p.24), but I have been unable to locate another passage in which the Buddha uses dhammā in the plural to describe his teaching. Gombrich comments on the irony of taking literally a text preaching against literalism (p.22), with the implication that Ariṭṭha--to whom he emphasises the sutta was directed--is guilty of literalism, or of clinging to the Dhamma. However, Ariṭṭha was guilty of stubbornly refusing to relinquish a completely wrong interpretation. He is not a literalist, but simply has a wrong view. His problem is that he does not take the Buddha’s injunction literally enough! That the simile of grasping the snake at the wrong end, which immediately precedes the raft simile, applies to Ariṭṭha we cannot doubt. Ariṭṭha has misunderstood the teaching. The simile of the raft appears to be talking about something entirely different, and unrelated to Ariṭṭha. This is so striking when reading the text that I am inclined to agree with Keown who speculates that the sutta is a composite of originally separate sections (p.96).

Basing his discussion solely on Ñānamoli and Bodhi’s translation, Jonardon Ganeri has attempted to problematise the idea of abandoning the teachings. Firstly, he says that if we take dhammā to mean teachings, then the teachings only have instrumental value (p.132). Ironically, this is not really a problem from a Buddhist point of view, as we tend to see the teachings instrumentally (though there are Buddhist fundamentalists). His other argument, which relies on interpreting the Buddha’s word as ‘Truth’, is that for one on the other side “truth ceases altogether to be something of value” (p.132). Again, this is not really an issue for Buddhism, as truth as expressed in language is always provisional. The ‘Truth’ (if there is such a thing) is experiential, and on experiencing bodhi and vimutti one does not need provisional truth any more. Ganeri seems to misunderstand the pragmatic way Buddhism values truth – truth is whatever is helpful. This is epitomised in two now clichéd passages: in the Kesamutti Sutta (A i.188ff), where the Buddha tells the Kālāma people to trust their own experience in determining right and wrong conduct; and at Vin ii.10, where the Buddha tells his aunt Mahāpajāpatī that the Dhamma is whatever is conducive to nibbāna.

If we accept Ñānamoli and Bodhi’s ‘teachings and things contrary to the teachings’, then we must state the standard caveat, which is that one only abandons the teachings after reaching the further shore. Too often this passage is used to attack doctrine being applied on this shore, or in the flood. The suggestion is clearly that we absolutely need the raft until we are safely on the other side. 

Thus from various reputable scholars we get the full range of possibilities for translating dhammā: ‘teaching, morality, things, mental objects’.

This parable is also examined in depth by Keown (1992), who points out that this is the only mention of abandoning the raft (p.95) and that in other texts “it is made perfectly clear that sīla along with samādhi and paññā are part of the further shore and are not left behind on the near side after enlightenment.” (p.95). As Keown points out, in some texts the further shore is morality (e.g., A v.232 and v.252f). I would add that this idea that one abandons the Dhamma after enlightenment is flatly contradicted in the Gārava Sutta:
Yaṃnūnāhaṃ yvāyaṃ dhammo mayā abhisambuddho tameva dhammaṃ sakkatvā garuṃ katvā upanissāya vihareyyanti. (S i.139) 
“I will reverence, pay my respects, and dwell in subordination to that very dhamma to which I have fully-awakened” 
The Buddha himself does not give up on the Dhamma, so why should anyone else? This militates against interpreting dhammā as ‘teachings’. Keown’s tentative translation is “…good things (dhammā) must be left behind, much more so evil things (adhammā)”, though he affirms the ambiguity. Keown notes that in other places where dhammā and adhammā are contrasted, they seem to mean good things and bad things (p.101). He concludes that the simile has two purposes: 1. to affirm that the dhamma is for the purpose of salvation and no other purpose (this being the main point of the first part of the Alagaddūpama Sutta); and 2. that we must not become emotionally attached to particular doctrines, practices, teachings or philosophical views, and that none should assume a disproportionate status, while things which are unambiguously evil must certainly be rejected (p.102). Keown is at least thorough, pays attention to the text, and tries to take it on its own terms, but I still don't find his interpretation satisfying because, again, the Buddha does not give up good things after his awakening. 

Kalupahana (1986: 183) agrees with Keown’s interpretation of adharma in discussing chapter 8 of the Mūlamadhyamakakārikā. “While it is true that the term dharma is used in the Buddhist texts, both in an ontological sense (referring to ‘phenomena’) and in a more ethical sense (meaning ‘good’), there is no evidence at all that the negative term a-dharma was ever used in the former sense.” Thus he treats it as synonymous with akuśala. However, we have to take Kalupahana with a grain of salt, because neither the Buddha nor Nāgārjuna thought of dhamma as having an "ontological sense". Indeed, both go out of their way to deny this. Dhammas qua phenomena have no ontological status: they are neither existent nor non-existent. It is Kalupahana himself who draws attention to the role of the Kaccānagotta Sutta (S 12.15) in the Mūlamadhyamakakārikā, and it is in the Kaccānagotta Sutta that this is plainly stated. Kalupahana himself constantly rejects ontology in his discussion of the texts. His desire to squeeze Buddhism into a Western mould has mixed success. 

Despite this plethora of interpretations by leading interpreters of Buddhism, I can offer yet another. A little later in the Alagaddūpama Sutta one of the bhikkhus asks: “could one be tormented by something externally non-existing (bahiddhā asati)?” The reply is:
“You could, bhikkhu,” replied the Bhagavan. “Suppose one thought like this: ‘it was mine, [now] it is not mine; it might be mine, but I can’t get it.’ They are upset and miserable; distressed and depressed. They are tormented by something externally non-existing.”
By ‘something externally non-existing’ is meant something that they do not possess. Note here that the thing desired is not non-existent (asati) in the absolute sense, but is merely something lost, or unobtainable. In light of this I suggest that dhammā here could also be ‘things’ (that exist) and adhammā ‘non-things’ (things that don’t exist in this sense). That is to say, we must abandon attachment to what we have, and to what we wish to have. This is not a perfect answer to the problem, but it has the real advantage of not requiring the arahant to give up something that arahants were extremely unlikely to give up!

As we saw above, in the Gārava Sutta (SN 6.2, PTS S i.139) the Buddha explicitly turns to the Dhamma as his refuge.
However, no single view of this simile appears to be unproblematic. All we can say with any certainty is that the pop-Buddhism answer that one gives up the Dharma as teaching when one is enlightened is a non-starter. Nor do we give up practising the Dharma. As far as I am aware, none of the enlightened figures of history ever renounced the Dharma. 

~~oOo~~

Note: 17 Jan 2017
Na hi dhammo adhammo ca, ubho samavipākino;
Adhammo nirayaṃ neti, dhammo pāpeti suggatin ti. (Thag. 304)
          For virtue and vice do not have equal results
          Vice leads to hell, virtue causes a good rebirth.
In this view, one would give up both dhamma and adhamma because they both lead to rebirth. Albeit that virtue (dhamma) causes one to attain (pāpeti < causative from pāpuṇāti) a good rebirth (suggati), it is still a rebirth and thus still within saṃsāra. The goal of Buddhism is to end rebirth.


Note: 26 Aug 2019

The most ancient retrievable understanding of (DHp 1-2) is that 1. mental action precedes bodily and verbal actions (dharmā), 2. among them, mental action is the most important one, and 3. they are prompted by mental action. All Buddhists would accept these statements, and it will suffice to quote Vasubandhu’s Abhidharmakośa IV 1c-d: cetanā mānasaṃ karma tajjaṃ vākkāyakarmaṇī ‘mental action is volition, and what arises from it are verbal and bodily actions’. (24)
Agostini, Giulio. 2010. 'Preceded by Thought Are the Dhammas': The Ancient Exegesis on Dhp 1-2. Buddhist Asia 2: Papers from the Second Conference of Buddhist Studies Held in Naples in June 2004, 1-34.

The argument is that in this context dhammā means actions. 


Bibliography

Ganeri, Jonardon. 2002. 'Why truth? The Snake Sūtra.' Contemporary Buddhism 3(2): 127-139.

Gethin, Rupert. 2008. Sayings of the Buddha. Oxford University Press, pp. 156-167.

Gombrich, Richard. 1996. How Buddhism Began: The Conditioned Genesis of the Early Teachings. London: Athlone.

Horner, I.B. 1954. 'Discourse on the Parable of the Water-Snake.' The Collection of Middle Length Sayings. London: Luzac, pp. 167-182.

Kalupahana, David J. 1986. Nāgārjuna: The Philosophy of the Middle Way. Mūlamadhyamakakārikā. State University of New York Press.

Keown, Damien. 1992. The Nature of Buddhist Ethics. London: Macmillan.

Ñānamoli and Bodhi. 2001. The Middle Length Discourses of the Buddha. 2nd ed. Wisdom, pp. 224-236.

Piya Tan. 2003. Alagaddūpama Sutta: The Discourse on the Parable of the Water-snake [Proper grasp of the Buddha’s Teaching], Majjhima Nikāya (22/1:130-142). Online: http://dharmafarer.org/wordpress/wp-content/uploads/2009/12/3.13-Alagaddupama-S-m22-piya.pdf

Thanissaro. (trans.) 2010. 'Alagaddupama Sutta: The Water-Snake Simile.' Access to Insight. Online: http://www.accesstoinsight.org/tipitaka/mn/mn.022.than.html.