14 August 2015

Why Are Karma and Rebirth (Still) Plausible (for Many People)? Part I of II.

Plausible?
This essay summarises and explores some ideas from Justin L. Barrett's short but important and influential book Why Would Anyone Believe in God? (2004). Barrett's book is not simply an account of the psychology of theism in evolutionary terms, but goes into the evolutionary origins of religious beliefs more generally. He identifies several cognitive processes or functions that contribute to religious-style thinking and locates them within a social and psychological context that lends religious concepts plausibility for the individual. We will focus here on Barrett's ideas on the plausibility of religious beliefs, which I will apply to the two beliefs central to the Buddhist religion: karma and rebirth.

Barrett's work forms a cornerstone of my understanding of the psychology of religious belief. In my view, belief in a just world virtually entails belief in an afterlife in order to balance out all the blatant unfairness and immorality (unrewarded goodness and unpunished wickedness) that we see around us. All religious afterlife beliefs are basically the same in that they amount to a post-mortem balancing of the moral books, whether this happens in one go, or through repeated rebirth or reincarnation. Buddhism combines both of these basic approaches: rebirth, unless one does something about it, and liberation from rebirth, if one has done what needed to be done (kataṃ karaṇīyaṃ). The afterlife is attractive anyway, because the fact of inescapable death is so disturbing to a living being. Additionally, an afterlife is made to seem plausible by phenomena such as so-called out-of-body experiences and many kinds of meditative experience, which seem to point to a disembodied mind (ontological dualism). Combined, these factors suggest how religious beliefs, particularly beliefs about universal morality and an afterlife, arose and became so ubiquitous in human cultures. Karma is our Buddhist myth of a just world, and rebirth is our myth of the afterlife required to allow fairness to play out. There are more and less sophisticated versions of these two myths, but they all share these basic features.

However, we no longer live in traditional societies. We live in post-Enlightenment societies in which technological marvels are routine. When I was a child, ideas like video phones, personal communication devices, powerful personal computers, a universal repository of knowledge, automatic translation, and so on were the stuff of science fiction. Now, they are all rolled into one small handheld device. Science has transformed our understanding of the world: theories such as evolution, plate tectonics, relativity, classical & quantum mechanics, thermodynamics, genetics, and bacterial pathogenesis are not incidental or trivial. They are powerful explanatory paradigms that accurately predict the behaviour of the world at different scales, even when, as with relativity and quantum mechanics, we know the theories to be incomplete. In large measure the Ptolemaic/Christian worldview, with its false presuppositions and superstitions, has been superseded in Europe and its (former) colonies. Why then do religious beliefs continue to seem plausible to so many people, even outside the confines of classical organised religion? Why do some people abandon the superstitions of Christianity only to embrace the superstitions of Buddhism? We can situate this question inside the larger question that Barrett addresses: why do other kinds of religious belief, particularly belief in gods, persist into the modern era and resist incursions by new knowledge about the world?


Evolutionary Psychology & Mental Tools

Barrett's viewpoint comes under the rubric of Evolutionary Psychology. The basic idea is that the brain, and therefore the mind, is modular, and that these modules evolved and bestowed fitness (in the special sense meant by geneticists) on Homo sapiens as a whole. For many years now neuroscientists have observed that damage to certain parts of the brain produces deficits in the functioning of the mind. For example, damage to the occipital lobe of the brain affects vision in a variety of ways. I've often cited Hanna and Antonio Damasio's work on injuries to the ventromedial prefrontal cortex and how they affect decision making (see Facts and Feelings).

A recent article by Alfredo Ardila (2015) highlights this approach in a very interesting way. When we lose the ability to speak it's called aphasia. There are two kinds of aphasia due to brain damage: Wernicke's type, associated with damage in the temporal lobe, and Broca's type, associated with damage in the frontal-subcortical region. Wernicke's aphasia affects the lexical/semantic aspects of language, while Broca's affects grammar. This suggests that the two aspects of language evolved separately, i.e. that we have one language module that deals with words, and another that deals with how words fit together to make sentences. Ardila proposes a staged evolution of language in which animal-style communication evolved very early; it was followed by a gradual build-up of verbal signs for things or actions in hominids as our cognitive capacity increased. Only with the advent of anatomically modern humans did we begin to use grammar to create strings of words with distinctions between nouns and verbs, and so on. This is consistent with Robin Dunbar's outline of the evolution of brain capacity and social group size and the theory of language evolution that he proposes (Dunbar 2014; see also When Did Language Evolve?).

There are some vigorous critiques of the modular theory of Evolutionary Psychology, but it seems incontrovertible that the brain is divided into functional areas with different tasks and that it must have evolved to be that way. Sometimes another part of the brain can take over a function following an accident, especially in young people. There are of course the curious cases where people grow up with vastly reduced brain volumes due, for example, to childhood hydrocephalus, but have apparently normal brain function. In these cases brain volume can be as little as 5–10% of typical. The reduction in volume has to happen early in life, and it's not clear how the number of neurons is affected (might there be the same number squeezed into a much smaller volume?). We also see people with severe epilepsy surviving radical brain surgery, in which half the neocortex is removed, but again that part of the brain has been disabled from an early age and the brain has adapted to work around it. Mostly, brain damage, in adults at least, results in permanent dysfunction. Whether this physical modularity translates into the more abstract 'mental tools' that Barrett talks about is moot, but it seems plausible.

A caveat to be aware of is that while science journalists like to see these areas of the brain as operating in isolation (witness the brouhaha about the so-called "God spot", now refuted), in fact the whole brain is active all the time; the whole brain is involved in producing experience and directing our activities. Some areas clearly do perform specific tasks, but they do so as participants in a system, and often a system within a system. And not only do we have to keep the whole brain in mind, we have to see the brain as situated in a body that also contributes to experience through the peripheral nervous system and sensory organs. Recently David Chapman and I discussed this issue and he argued that we need to acknowledge that cognition has a social dimension as well. I'm sympathetic to this view: it's consistent, for example, with Mercier and Sperber's Argumentative Theory of Reasoning, with Robin Dunbar's Social Brain Hypothesis, and with other more systemic ways of thinking about life. But it takes us beyond the scope of Barrett's work, and Chapman himself has yet to commit his ideas to writing (nudge, nudge). However, once I've spelt out this part of my psychology of belief, the obvious next step would be to attempt some kind of synthesis.

Barrett outlines several different types of mental tool. Some categorise sensory information into objects, agents, or faces, for example; others describe such objects once they are detected. Barrett highlights what he calls the "Agent Detection Device" (ADD) in his writing. This is the function of the brain that allows us to distinguish an object that is an agent from one which is not: a rat from a rock; a snake from a branch. Ordinary objects follow rules of movement bound by the laws of physics, whereas agents initiate their own movement: a bird moves differently than a missile. We have an intuitive sense of the different ways that agents move compared to non-agents. An important agent describer is known as Theory of Mind (ToM). Having recognised an object as something that initiates its own actions, the ToM attributes to it a host of mental properties suitable to an agent. For example, agents have motivations or desires that set them in motion (emotion; from Latin ex- 'out' + movere 'to move'); they act to achieve goals; and so on. Understanding this allows us to interact creatively with agents in a way that is not required with non-agent objects and that enhances survival (e.g. trapping animal food or avoiding predators). As we will see, the ADD and ToM are central to Barrett's understanding of belief in gods.

In Barrett's theory, the mind is furnished with many categorising and describing tools which operate unconsciously and impose structure and order on our perceptions so that we can make sense of them. What we actually become aware of, out of the vast array of sensory inputs, is the product of considerable real-time processing that shapes how we perceive the world. 


Beliefs

Barrett identifies two kinds of belief: reflective and non-reflective. He argues that most beliefs are of the non-reflective kind. They arise from assumptions about the way the world works, automatically generated by the unconscious functioning of our various mental tools (especially categorisers and describers). We often don't even think about non-reflective beliefs, to the point where we may not know that we have a belief. And non-reflective beliefs are transparent to us, which is to say that we are not aware of the process by which we come to have a non-reflective belief. These are simply the beliefs that we deduce from interacting physically with the world and unconsciously assimilate from our family, peers, and society.

Barrett does not say anything about a relationship between non-reflective beliefs and Kant's idea of a priori judgements, but the similarity is noticeable. Non-reflective beliefs, in Barrett's view, encompass physical facts such as the belief that an object in motion will continue along its inertial path, that objects fall under gravity unless supported, or that physical objects cannot pass through one another. Such beliefs emerge, at least in part, through experience, which is then reflected in the way language works (see Lakoff & Johnson). So such beliefs are a priori in the sense that they are prior to information arriving in conscious awareness, and the process of forming them is transparent to us and therefore out of our control. We cannot help but understand experience in terms of our non-reflective beliefs. This certainly seems to correlate with Kant's idea of a priori judgements.

Reflective beliefs are the kind that we learn or decide for ourselves after consciously assessing the available information and making a decision. According to my own understanding (see Facts and Feelings, 25 May 2012), decision making involves weighing the merit of various bits of information, and the salience of information is assessed via the emotions we associate with it. Thus my understanding is not that these reflective beliefs are "rational" in the old sense of that word. On the contrary, such beliefs may well seem "intuitive", or "feel right", and this may be more important than other assessments of value. Given recent observations on the process of reasoning (see An Argumentative Theory of Reason) we need to be a bit cautious in how we understand the idea of "reasoned beliefs". Individual humans are quite bad at reasoning tasks, falling easily into dozens of logical fallacies and cognitive biases (including several dozen memory biases). If a misconception is repeated often enough it can come to seem right through sheer familiarity (politicians and advertisers rely on this fact). Reason and rationality have to be seen in this light, though Barrett was not writing with these ideas in mind.

There's nothing about reflective or non-reflective beliefs that guarantees accuracy or truth, nothing that guarantees that when we act on them they will produce expected results. However, I would add that the kinds of non-reflective beliefs that describe the way objects move, for example, are so reliable a guide to results that we need never question them, unless perhaps we are sent into space, where gravity is so much weaker that we must learn a whole new set of reflexes. Non-reflective beliefs serve the purpose of unconsciously directing our actions in ways that help us to survive. As long as the subsequent behaviour has survival value, evolution doesn't care what the belief is or whether it is true. Survival value is the primary value of the system that causes us to form beliefs. Truth is optional.

Distinguishing these two types of belief is important for Barrett's theory. He's going to argue that reflective religious beliefs, such as the belief in God, rely heavily on non-reflective beliefs. He notes that when tested with plenty of time, people give good accounts of their reflective beliefs. But put under time pressure they tend to fall back on non-reflective beliefs. So, for example, when describing God at leisure, people are consistent with mainstream theology: God is able to be everywhere at once (omnipresence), to read minds, to know without seeing (omniscience), and so forth. But under time pressure the same people were more likely to attribute human limitations to God, such as having only one location in space, not always being aware of our motivations, and needing to see in order to know.
"People seem to have difficulty maintaining the integrity of their reflective theological concepts in rapid, real-time problem solving because of processing demands (11)
The relationship between reflective and non-reflective beliefs is complex. Barrett identifies three major ways in which they are related.

1. Non-reflective beliefs may act as defaults for reflective beliefs. For example, handed an unfamiliar object and asked if we think it will fall when held up and released, our non-reflective understanding of how the world works will inform our answer in the affirmative. Barrett's other example in this category involves a girl stealing apples. Non-reflective beliefs, drawing on our mental tools for describing agents in relation to food, lead us to unconsciously conclude that she is hungry. But perhaps we also recall that the girl earlier mentioned a horse that will allow you to pet it in return for apples. In this case we might choose the alternative hypothesis that the girl is bribing the horse with apples in order to pet it. Non-reflective beliefs also form our views about the horse as an agent in relation to food, but having two options means we must reflect on the possibilities. In this case we may rule out the default option (the girl is hungry), but non-reflective beliefs still provide the default.

Something Barrett does not comment on here, but which he might have, is the phenomenon of the Attribution Fallacy (also known as the fundamental attribution error). Social psychologists note that when we assign motives to agents, we almost always assign internal motives without reference to external circumstances: we understand agents to be preferentially motivated by internal considerations. If a girl is taking apples without asking, breaking established norms, then we typically assume she's doing so deliberately and knowingly, i.e. that she is stealing the apples (a moral judgement) and that she is therefore "a bad girl". Walk along a British high street for five minutes and you're bound to hear a parent shout (or indeed scream) "naughty!" at their small child. And given the inconsistency with which the word is used, children cannot help but grow up confused about what "naughty" means (leaving aside the etymology!). Barrett's example suggesting that we might conclude that the girl is hungry is charitable at best, and perhaps a little naive. Maybe if it were only one apple. If we witness repeated unauthorised taking, our conclusion tends towards moral judgement. What we do not do is cast around for other reasons. For example, the girl may be suffering from peer pressure to steal apples, bending to the will of older peers, or trying to impress them in order to fit in. Or she may be trying to get attention from parents distracted by their marriage break-up. These may be mitigating factors once our judgement is formed, but our judgement says that the responsibility still lies with the girl (or, if she is very young, with her parents). We tend to assume that a wrong deed is carried out due to bad motivations, whatever else might be true. Even if we understand the actions of other agents through introspection - for example, by speculating what might motivate us to act in that way - we still do not seem to take environmental factors into account, but simply project our own emotions onto the agent.

In this sense the case for non-reflective beliefs being our default seems to me to be rather stronger than Barrett suggests. This could also be why first impressions are so hard to shift. First impressions are based solely on non-reflective beliefs. In the next part we will consider more closely the kind of non-reflective beliefs that make karma and rebirth seem plausible as reflective beliefs to many Buddhists.

2. Non-reflective beliefs make reflective beliefs seem more plausible. When our reflective beliefs coincide with the non-reflective beliefs generated by the mental tools that unconsciously describe the world, then there is a sense that the belief is more reasonable. When this happens we may say that it seems "intuitively right" or perhaps that it "feels right". This sense of rightness may be difficult to explain, since it is based on how well a reflective belief fits with our non-reflective beliefs (which are transparent and frequently unconscious).

In physics, classical mechanics largely coincides with non-reflective belief. Classical mechanics describes the world we can see with our eyes, and thus its mathematical expressions are likely to be intuitive (to feel right). Relativity is somewhat counter-intuitive because it involves unimaginably large magnitudes of velocity, mass, and length, and tells us that time is relative to the frame of reference. Quantum mechanics, by contrast, the description of the behaviour of subatomic particles, describes a world that no one can see or even imagine, and as a result is deeply counter-intuitive. Sometimes even scientists refer to this as "quantum weirdness".

What seems intuitive, by which in Barrett's terms we mean "that which our non-reflective beliefs make plausible", is a very significant aspect of religious belief. For example, consider the passage I have often cited from Thomas Metzinger's book The Ego Tunnel:
"For anyone who actually had [an out of body experience] it is almost impossible not to become an ontological dualist afterwards. In all their realism, cognitive clarity and general coherence, these phenomenal experiences almost inevitably lead the experiencing subject to conclude that conscious experience can, as a matter of fact, take place independently of the brain and body." (p.78)
Many of us, especially those who meditate, have experiences that lead us towards ontological dualism. One of the great meditation practitioners and teachers I have known makes exactly this point, i.e. that his meditative experience makes it seem incontrovertible to him that cognition is not tied to the body. It is this kind of non-reflective dualism, based on the "realism, cognitive clarity and general coherence" of these types of experience, in which our mind appears to be distinct from our body, that makes religious ideas (spirits, afterlife, gods) more plausible. Experience causes us to form non-reflective beliefs (e.g. mind/body dualism) that make our reflective beliefs (e.g. rebirth) seem more plausible. For many Buddhists, for example, rebirth is quite intuitive, quite an obvious proposition. It seems naturally plausible. Our non-reflective beliefs about the nature of our minds, the possibility of mental activity without a body, and the powerful desire for continuity combine to make a reflective belief in rebirth seem plausible and likely. Of course, that a view seems plausible, even to a majority, does not make it true. It's not even a valid criterion for judging the truth of the belief.

But Barrett missed out something important here. Yes, non-reflective beliefs do make reflective beliefs seem plausible, but the flipside is that they also make some reflective beliefs seem implausible. Most people, of whatever faith, find an afterlife plausible. The new annihilationists who rest their reflective beliefs on science are historically unusual, and their beliefs are powerfully counter-intuitive to most people. This is a large part of why supernatural beliefs persist despite progress in science; why, despite regular debunking, people with "psychic powers" are still able to draw crowds and make a lot of money; and why people can read detailed explanations of why an afterlife is implausible and just write them off without a second thought. One Reddit commentator took one look at my essay There is No Life After Death, Sorry and said:
"I consider this article completely and fundamentally false. The author is fairly clearly a materialist, but he does not succeed in proving anything, here." (Reddit /r/Buddhism
But when pressed, the commenter conceded that they didn't really read the essay. The title conflicts so drastically with their non-reflective beliefs that, without a considerable act of will, they come to the inevitable conclusion that I am wrong without reading the body of the essay. And rejecting my argument without ever having carefully considered it seems to them a reasonable stance; in effect, it is demanded by their non-reflective beliefs. This is all too common amongst Buddhists, who ironically tend to have a very high opinion of themselves with respect to rejecting blind faith.

Important in Barrett's theory is that the lending of plausibility to concepts is not simply a passive process, because non-reflective beliefs are actively involved in processing the information that is presented to our conscious minds. Therefore the third way that the two kinds of belief interact is:

3. Non-reflective beliefs shape memories and experiences. Our minds are actively involved in perception. It's not that we have a perception and then interpret it; interpretation and perception are simultaneous processes. In Buddhist terms, the processes that make up experience, the five skandhas, work together simultaneously to produce an experience. What presents itself to our conscious mind is partly the product of our non-reflective beliefs. This is true also of memories. Everything that we become aware of is filtered through our system of producing non-reflective beliefs. Again we see the parallel with Kant's a priori judgements. There is no experience that is not understood through our pre-existing beliefs about the world, including such "metaphysical" notions as space, time, and causality. But again this process is transparent, so that we do not realise that what reaches awareness is already a compromise.

Non-reflective beliefs, along with memories of past experiences, are the standard against which we judge all other beliefs. A conclusion that is consistent with a larger number of non-reflective beliefs is (unconsciously) judged more plausible and is thus more likely to become a reflective belief. The process by which this happens "often amounts to a crude heuristic" (15). Although Barrett's description of this process is evocative, I think Damasio has identified more accurately how it works. Damasio (2006) describes a process involving emotional weighting of facts to determine their "salience" (see Facts and Feelings). By scanning our emotional response to various conclusions we can evaluate many possibilities at once and come to a conclusion quickly and unconsciously. Even reasoning seems to involve this process of assessing the salience of information through how we feel about it. Because the decision-making process works by integrating emotional responses, it is effectively able to assess many possibilities at once and present the preferred option (the one we feel best about) to our conscious mind quickly, but transparently. We then find reasons to justify our decision.
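To make this concrete, here is a deliberately crude toy model of such a heuristic in Python. It is purely illustrative: the belief labels and salience weights are my own assumptions, and nothing this mechanical appears in Barrett or Damasio. The point is only that summing emotionally weighted consistency with non-reflective beliefs can pick out a "preferred" conclusion without any explicit reasoning.

```python
# Toy sketch (my own, not Barrett's or Damasio's): each candidate conclusion
# is scored by the emotional salience of the non-reflective beliefs it is
# consistent with; the highest-scoring one "feels right" and wins.

def preferred_conclusion(candidates, beliefs):
    """Return the candidate most consistent with our weighted beliefs."""
    def score(candidate):
        return sum(weight for belief, weight in beliefs.items()
                   if belief in candidate["consistent_with"])
    return max(candidates, key=score)

# Hypothetical salience weights for some non-reflective beliefs.
beliefs = {
    "agents act on internal motives": 0.9,
    "taking without asking is wrong": 0.8,
    "hunger drives food-seeking": 0.5,
}

# Two rival interpretations of the girl taking apples (see above).
candidates = [
    {"label": "she is stealing (a 'bad girl')",
     "consistent_with": {"agents act on internal motives",
                         "taking without asking is wrong"}},
    {"label": "she is hungry",
     "consistent_with": {"hunger drives food-seeking"}},
]

print(preferred_conclusion(candidates, beliefs)["label"])
# -> she is stealing (a 'bad girl')
```

Note that in this sketch the "decision" emerges from the weights alone; the reasons we would give afterwards play no part in producing it, which is exactly the point about post-hoc justification.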

A fascinating example of this surfaced, as I was writing this essay, on the blog of Joseph LeDoux, the world's leading expert on the neurophysiology of emotions, especially fear. His published work on the amygdala stated that damage to the amygdala weakens the ability to assess threats, and of course one of the most accessible aspects of our response to threats is the feeling of fear. But this was taken to mean that the amygdala caused fear. This is an example of the fallacy that correlation equals causation: we actually alter what we read or hear so that it fits our preconceptions. As LeDoux says, "When one hears the word “fear,” the pull of the vernacular meaning is so strong that the mind is compelled to think of the feeling of being afraid." In fact the amygdala "only contributes to feelings of fear indirectly."

As Barrett puts it, "people rarely work through a logical and empirical proof for a claim. Rather, what I call 'reflective' tools typically do their calculations rapidly." In Barrett's view it is consistency with a large number of non-reflective beliefs that tips us towards a reflective belief. To the extent that this fits with Damasio's decision-making model, I think it is accurate. However, for Barrett's theory this aspect is important because it underpins his view of what makes for a plausible supernatural belief. This brings us to the subject of minimally counter-intuitive beliefs.

~~oOo~~



Bibliography

Ardila, A. (2015) A Proposed Neurological Interpretation of Language Evolution. Behavioural Neurology. doi: 10.1155/2015/872487. Epub 2015 Jun 1.

Barrett, Justin L. (2004) Why Would Anyone Believe in God? Altamira Press.

Bering et al. (2005) The development of ‘afterlife’ beliefs in religiously and secularly schooled children. British Journal of Developmental Psychology, 23, 587–607. http://www.qub.ac.uk/schools/InstituteofCognitionCulture/FileUploadPage/Filetoupload,90230,en.pdf

Blanco, Fernando; Barberia, Itxaso  & Matute, Helena. (2015) Individuals Who Believe in the Paranormal Expose Themselves to Biased Information and Develop More Causal Illusions than Nonbelievers in the Laboratory. PLoS ONE 10(7): e0131378. doi: 10.1371/journal.pone.0131378

Boden, Matthew Tyler. (2015) Supernatural beliefs: Considered adaptive and associated with psychological benefits. Personality and Individual Differences. 86: 227–231. Via Science Direct.

Cima, Rosie. How Culture Affects Hallucinations. Priceonomics.com. 22 Apr 2015.

Damasio, Antonio. (2006) Descartes' Error. London: Vintage Books.

Dunbar, Robin. (2014) Human Evolution: A Pelican Introduction. Pelican.

Foucault, Michel. (1988) Madness and Civilization: A History of Insanity in the Age of Reason. Vintage.

Lakoff, George. (1995) Metaphor, Morality, and Politics, Or, Why Conservatives Have Left Liberals In the Dust. http://www.wwcd.org/issues/Lakoff.html

LeDoux, Joseph. (2015) The Amygdala Is NOT the Brain's Fear Center: Separating findings from conclusions. Psychology Today. 10 Aug. https://www.psychologytoday.com/blog/i-got-mind-tell-you/201508/the-amygdala-is-not-the-brains-fear-center

Metzinger, Thomas. (2009) The Ego Tunnel: The Science of the Mind and the Myth of the Self. Basic Books.

07 August 2015

Sanskrit, Dravidian, and Munda

[Image: Modern distribution of Indian languages]
In this essay, I will reiterate some important points made by Michael Witzel about the linguistic history of India. When the first anatomically modern humans reached India ca. 70,000 years ago, they almost certainly used language. But all the direct evidence for language is much more recent, the oldest being written forms of language. Comparative linguistics allows us to infer a great deal more about the history of language, so that we can get a picture of how people spoke long before writing was even invented.

Like many historians I use the term India or, sometimes, Greater India, to mean the whole of the sub-continent, taking in the political territories of modern-day Pakistan, India, Nepal, Sri Lanka, and Bangladesh. Given that the main languages of North India and Sri Lanka (Urdu, Panjabi, Hindi, Bihari, Bengali, Nepali, and Sinhala) are all modern Indic languages, the modern political divisions belie the common linguistic history they share. However, we must be a little cautious. Language, ethnicity, and geography can be independent variables when discussing culture. This essay mainly concerns languages and the speakers of languages. We cannot be sure of the ethnicity of these people.

We know with some certainty that the speakers of Old Indic languages (now represented only by Vedic) came from outside India. This is an unpopular thesis amongst Indian Nationalists, who try to make a case for Sanskrit arising in India and spreading out. Some would have us believe it is the original language (cf. Eco 1997). However, the relationship of Old Indic with Old Iranian, and a variety of other internal evidence, shows that Indo-Iranian, an early offshoot from Proto-Indo-European that further split into two sub-families, Iranian and Indic, was spoken by nomadic peoples of Southern Central Asia. Old Indic is mostly distinguished from Old Iranian by a few sound changes. Later, grammatical forms drifted apart as well, though the attested languages, Vedic and Avestan, were closely related.

Comparative linguists showed in the late 18th century that Greek, Latin, and Sanskrit are all so similar that they must have derived from a common ancestor. That hypothetical language is nowadays called Proto-Indo-European (PIE) and the language family that it spawned is called Indo-European (IE). The IE family also has a Germanic branch giving rise to all the Germanic languages (including English), a Slavonic branch incorporating all the Slavic languages, and takes in many of the languages of Iran and Afghanistan, not to mention Armenian. In addition, we have written evidence of a number of now-dead Indo-European languages such as Tocharian and Khotanese from Central Asia. By comparing the changes in many languages, linguists are able to formulate pragmatic 'rules' that describe how sounds and forms of words change. This procedure has been very successful in some areas. PIE is probably the best example, but the Sino-Tibetan language family also gives a clear view of the proto-language that underlies them all.
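To give a feel for what such a 'rule' looks like in practice, here is a minimal sketch in Python applying a simplified fragment of Grimm's law, the regular shift of PIE voiceless stops to fricatives in the Germanic branch. The spellings are ASCII-ised and the rule set drastically reduced (real sound laws are conditioned and interact with others, such as Verner's law); this shows only the flavour of the comparative method.

```python
# Minimal sketch of a regular sound correspondence: the first phase of
# Grimm's law (voiceless stops become fricatives in Germanic), applied
# to simplified, ASCII-ised word forms. Illustrative only.

GRIMM = {"p": "f", "t": "th", "k": "h", "c": "h"}  # Latin 'c' spells /k/

def apply_grimm(form):
    """Shift each voiceless stop in the form according to the toy rule."""
    return "".join(GRIMM.get(ch, ch) for ch in form)

# Latin preserves the PIE stops; the rule predicts the Germanic outcomes.
for latin, english in [("pater", "father"), ("tres", "three"), ("cornu", "horn")]:
    print(f"{latin} -> {apply_grimm(latin)} (cf. English '{english}')")
```

Running this turns pater into "father", tres into "thres" (three), and cornu into "hornu" (horn): the same rule, applied across many words, is what lets linguists project backwards to the proto-language.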

There have been efforts of varying success to cover all the languages of the world in this way. And this has naturally led some scholars to propose a further, more ancient layer of relatedness. So, for example, there is the conjectured Nostratic proto-language (or macro-family) that takes in Afroasiatic (including the Semitic languages), Kartvelian (Caucasian languages and possibly Basque), Indo-European, Uralic (including Finno-Ugric), Dravidian, Altaic (covering Turkish and the Turkic languages of Central Asia, and probably Korean and Japanese), and Eskimo–Aleut. These macro-families are still controversial, though many of the objections are ideological rather than logical.

A major branch of the IE family is Indo-Iranian, taking in languages that were spoken throughout the combined sphere of influence of Persia and India, including large swathes of Central Asia. In this essay, I will refer to the Indian branch of PIE, or Indo-Iranian, as Indic. It has previously been referred to as Aryan or Indo-Aryan, but these terms have been deprecated because of the racial overtones of the word 'aryan' and the discrediting of old ideas about race. Indic is a strictly linguistic term that gives us no information about ethnicity. We can talk about three phases of Indic: Old, principally attested as Vedic, though other variations must have existed (before ca. 500 BCE); Middle, attested by Pāḷi, Gāndhārī, and Apabhraṃśa (ca. 500 BCE – 1000 CE); and New or Modern (emerging in the last millennium).

When the speakers of Old Indic crossed the Hindu Kush and entered India, ca 1700-1500 BCE, they met people who spoke languages with a much longer history in Greater India.

There is a whole family of Dravidian languages, for example, including Tamil, Telugu, Malayalam, and Kannada. Today, the people who speak languages from the Dravidian family are a large minority (about 20%). Some linguists (e.g., McAlpin 1974, 1975, 1981) have noted a similarity between Dravidian and the language spoken in ancient Elam, near what is now the border of Iran and Iraq, on the Persian Gulf. Written records of Elamite stretch back to 3000 BCE. McAlpin and others believe that Dravidian speakers split off from Elamite speakers and entered India very early, perhaps 4000 BCE. Others are more doubtful (Blench 2008), dismissing the evidence as flimsy and pointing out affiliations with other language groups as well.

Less well known is the Austroasiatic family, which extends from the north-east of India to Vietnam. One Indian branch of this geographically widespread family is Munda, with several languages spoken in small pockets of India today, though probably more widespread in the past. In Burma there is a strong overlay of Tibeto-Burman languages that descended from the north, but there are still enclaves of Austroasiatic speakers as well. Genetic studies of Austroasiatic speakers suggest that the Austroasiatic language family may have arisen in India and spread east.

Additionally, there are a number of languages in India that appear to be unrelated to any known languages. These language isolates, as they are called, are found in the so-called tribal peoples who seem never to have been assimilated into the mainstream of Indian culture (in other words, they were never Brahmanised).

Michael Witzel's exploration of the linguistic history of India begins by establishing his parameters; most important for the purposes of this essay are the periods of composition of the Ṛgveda (1999: 3).
  • I. The early Ṛgvedic period: c. 1700–1500 BCE: books (maṇḍala) 4, 5, 6, and maybe book 2, with the early hymns referring to the Yadu-Turvaśa, Anu-Druhyu tribes;
  • II. The middle (main) Ṛgvedic period, c. 1500–1350 BCE: books 3, 7, 8.1–66 and 1.51–191; with a focus on the Bharata chieftain Sudās and his ancestors, and his rivals, notably Trasadasyu, of the closely related Pūru tribe.
  • III. The late Ṛgvedic period, c. 1350–1200 BCE: books 1.1–50, 8.67–103, 10.1–84; 10.85–191: with the descendant of the Pūru chieftain Trasadasyu, Kuruśravana, and the emergence of the super-tribe of the Kuru (under the post-RV Parikṣit).
These layers of composition have been established on the basis of "internal criteria of textual arrangement, of the ‘royal’ lineages, and independently from these, those of the poets (ṛṣis) who composed the hymns. About both groups of persons we know enough to be able to establish pedigrees which sustain each other." (1999: 3).

The Dutch Indologist F. B. J. Kuiper had already identified some 383 words in the Ṛgveda that are not Indic and must be loan words from another language family. We know this because they break the phonetic rules of Indic languages. We can use an example from English to demonstrate this. We have a word ptolemaic, which comes from the Egyptian name Ptolemy. It refers to a particular view of the world as earth-centred. Now, we know that ptolemaic cannot be a native English word because English words cannot start with /pt/; indeed, native English speakers cannot easily pronounce this sound combination and tend to just say /t/. It is clues like this that linguists use to identify loan words. And we have to take into account that loan words are often naturalised. Many loan words in English are Anglicized. So another loan word like chocolate has been altered to fit English spelling patterns from an original spelling more like xocolātl, which clearly breaks English phonetic rules. We also have a number of Yiddish loan words like shlemiel, shlep, shlock, shmaltz, shmuck, and shnoozle, etc., that defy, but also, to some extent, redefine English spelling. Similarly, no other Indo-European language has retroflex consonants (ṭ, ṭh, ḍ, ḍh, ṇ, ṣ), but Old Indic absorbed these from languages it met in India and they had become a naturalised aspect of Indic phonology by the time the Ṛgveda was composed.
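As a toy illustration of the method just described, here is a short Python sketch that flags words whose initial clusters break a tiny, hand-picked sample of English phonotactic constraints, just as /pt/ gives away ptolemaic. The onset list is my own illustrative assumption (nothing like a full phonotactic grammar), and it works on spelling rather than sounds; Kuiper and Witzel of course applied far subtler criteria.

```python
# Toy loan-word detector: flag words whose onsets native English
# phonotactics does not allow. The onset list is a tiny illustrative
# sample based on spelling, not a real phonological analysis.

ILLEGAL_ONSETS = ("pt", "ps", "tl", "dv", "shl", "shm", "shn")

def looks_like_loan(word):
    """True if the word starts with an un-English onset cluster."""
    return word.lower().startswith(ILLEGAL_ONSETS)

for w in ["ptolemaic", "psyche", "shlep", "shmaltz", "table", "chocolate"]:
    print(f"{w}: {'loan-like' if looks_like_loan(w) else 'native-looking'}")
# ptolemaic, psyche, shlep, and shmaltz are flagged; table and chocolate
# pass, showing why naturalised loans need other kinds of evidence.
```

Note that chocolate sails through, which is the point made above about naturalisation: once a loan word's spelling has been adapted, phonotactics alone can no longer expose it.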

It's not always possible to identify where a loan word has come from, but Kuiper and Witzel manage to identify most of these 383 words as belonging to Proto-Dravidian or Proto-Munda, with a few from other language families like Tibeto-Burman.

Perhaps the most striking finding that Witzel gives, repeatedly, in his essay, is that in the early Ṛgvedic period there are no loan words from Dravidian, e.g.:
"It is important to note that RV level I has no Dravidian loan words at all (details, below § 1.6); they begin to appear only in RV level II and III." (Witzel 1999: 6)
"Ṛgvedic loans from Drav[idian] are visible, but they also are now datable only to middle and late Ṛgvedic (in the Greater Panjab), and they can be both localized and dated for the post-Ṛgvedic texts." (Witzel 1999: 19)
This is an important finding. The landscape of the Ṛgveda is that of modern day Panjab. This is clear, for example, from the names of rivers that are mentioned, e.g., the Kabul, Indus, Sarasvati (now dried up) and Yamuna rivers.

Loan words from the earliest period are from the Austroasiatic language family, meaning that the people living in this area when the Vedic speakers arrived spoke a variety of proto-Munda. This is important because it is believed that the people living in this area were the descendants of the collapsed Indus Valley Civilisation (IVC). They had scattered as the climate became much drier, making their large-scale cities unlivable. The IVC had disappeared by 1700 BCE. If the people of the Punjab, ca. 1500 BCE, spoke a variety of proto-Munda, this strongly suggests that the people of the IVC also spoke an Austroasiatic language, rather than, as is usually supposed, a Dravidian or even Indic language. Indian nationalists often assume that the IVC spoke Sanskrit, but this was never plausible. Interestingly, the very name we have for the north of this region, Gandhāra, is itself an Austroasiatic loan word.

It's often suggested that, because there are northern pockets of Dravidian speakers, with whom the Vedic speakers presumably interacted, Dravidian was once considerably more widespread, and perhaps that the language of the IVC was Dravidian. The loan words in the Ṛgveda argue against this view. The north-western pockets of Dravidian could be isolated populations left behind by the migration of Dravidian speakers into Southern India from Mesopotamia. Those in the north-east are more consistent with a previously larger territory, but if Dravidian speakers were ever on the Ganges Plain they were forced out of it completely, leaving remnant populations only as far north as the mountain ranges on the southern edge of the Ganges Valley.


Conclusion

The picture that emerges is that Old Indic speaking people crossed the Hindu Kush in small numbers and met people who spoke a form of proto-Austroasiatic; and then later, perhaps as they penetrated further into the sub-continent, people who spoke proto-Dravidian languages. The Dravidian speakers, themselves probably immigrants, had lived in India for some thousands of years already, displacing and assimilating even earlier waves of human migrants. The pockets of people who speak language isolates, not related to any known language, have presumably lived in India for a very long time. Indeed, they often pursue a hunter-gatherer lifestyle that reinforces this impression.

Other authors have suggested that the Old Indic speakers had the advantage of superior technology and that this led them to dominate the original inhabitants. We can't really know how it happened at this distant time but, in any case, Indic languages came to dominate the North of India - from Afghanistan to the Ganges Delta. Again, it is worth repeating that language, culture, and location may not be correlated. To the extent that we can make comparisons, there were a few surviving similarities between the people who composed the Ṛgveda and those who composed the Avesta. But, in many respects, their cultures had diverged along with their languages. Zoroastrianism was the major innovation in Iran, although the dates of the founder are difficult to pin down; the most likely scenario places him a little after the Ṛgveda. Based on informal comments by Michael Witzel, I have argued for a trickle of Iranian tribes entering India ca. 1000-800 BCE, who ended up settling on the margins of the Central Ganges city states of the second urbanisation, especially Kosala and Magadha (Attwood 2012). Genetic studies suggest that, though their language came to be spoken throughout the Punjab and down into the Ganges Valley, the Vedic speakers contributed little to the gene pool, which is remarkably homogeneous in India. The genetic contribution is far less striking than we might expect from the patterns of culture or language family (Attwood 2012).

This poses a difficulty for Indian Nationalists who want Sanskrit to be the mother tongue of India (I'm not sure how they fit Dravidian into the picture) and for it to have originated within the subcontinent. People with this view often express their hatred of Michael Witzel, referring to him in extremely uncomplimentary terms. But, as rational people, we have to follow the evidence and allow it to guide us to conclusions, even when these are uncomfortable for us. And the evidence is abundantly clear in this case. If any language is the mother tongue, then it is probably Proto-Austroasiatic, the ancestor of the modern Munda and Austroasiatic languages. Sanskrit developed from Indo-Iranian, initially somewhere in Greater Iran, then was carried into India with Vedic speaking migrants. Since we know they were nomadic cattle herders (unlike, say, the Śākyas who were settled agriculturalists) they may have made the journey up the Khyber Pass seeking greener pastures.

In Attwood (2012) I tried to show that certain important features of early Indian Buddhist culture could be tied to Zoroastrianism and/or Iran. Unfortunately, all too often, the history of the region is divided into Indian and Iranian by academics. And thus I fear that many connections between the two regions have been overlooked. The connections that are evident seem to demand more attention from suitably qualified scholars. We know a great deal about the interactions of Greece and Persia, but far too little about relations between Persia and India.

~~oOo~~


Bibliography

Attwood, Jayarava. (2012) Possible Iranian Origins for Sākyas and Aspects of Buddhism. Journal of the Oxford Centre for Buddhist Studies. 3.

Blench, Roger. (2008) Re-evaluating the linguistic prehistory of South Asia. In Toshiki Osada and Akinori Uesugi, eds. Occasional Paper 3: Linguistics, Archaeology and the Human Past. pp. 159-178. Kyoto: Indus Project, Research Institute for Humanity and Nature.

Eco, Umberto. (1997) The Search for the Perfect Language. London: Fontana Press.

McAlpin, David W. (1974) Toward Proto-Elamo-Dravidian. Language 50: 89-101.

McAlpin, David W. (1975) Elamite and Dravidian: Further Evidence of Relationship. Current Anthropology 16: 105-115.

McAlpin, David W. (1981) Proto Elamo Dravidian: The Evidence and Its Implications. American Philosophical Society.

Witzel, Michael. (1999) Substrate Languages in Old Indo-Aryan: Ṛgvedic, Middle and Late Vedic. Electronic Journal of Vedic Studies. 5(1): 1–67.