Should we try to periodise Indian philosophy, or should we give up the attempt altogether, since every periodisation will be criticised and is in some respect flawed? Periodisation, as recently highlighted by Julius Lipner, is a form of classification and, as such, also a form of control (Lipner 2013). It is hardly ever the case that a periodisation is just a neutral act of recording what happened (Lipner mentions the case of pre- and post-Copernican astronomy). Much more often, to periodise means to superimpose what we now deem to be a decisive criterion. If you studied history in Europe, you probably learnt that the Middle Ages “end” either with the fall of Byzantium, or in 1492 (the discovery of America), or with Luther’s theses in 1517. Apart from the Eurocentrism of all three, it is interesting to note how much less impact the discovery of America had on its contemporaries than one might expect. The fact that there were so many human beings who could not have heard Jesus’ message for more than 1400 years, for instance, did not shake Christian theology to its foundations (for more on this lack of change, see P. Armandi 1982). A similar case is the relatively small impact of the Islamic invasions on Indian philosophy, which we already discussed in the comments to this blog post.
Thus, periodisation is a risky enterprise. It is, however, hard to avoid, since one needs some structure when approaching the unwieldy mass of uninterpreted historical events.
A similar case is that of the interpretation of the history of a given philosophical school. It is fascinating to look at Kumārila’s and Prabhākara’s main philosophical innovations as replies to Dignāga (as McCrea 2013 does), and we as scholars need to have, and to provide, some interpretative cues unless we want to end up in a Library of Babel, where the critical edition of every 20th-century school paper counts as that of a manuscript crucial for the history of Nyāya. However, great theses are also dangerous, insofar as we tend to cling to them and to become blind to other hypotheses (cf. on this point Andrew’s comment on this post).
Remember that relative who would not listen to your revolutionary ideas and would just say, “You think like that because you are young, but you will change your mind in ten years”? Do you remember hating that frame of mind, which did not allow for any other possible explanation? I, for one, do not want to exercise the same kind of violence on the texts I read. Nor do I want to read texts only in order to find confirmation of my theory (and to have to disregard blatant counterexamples).
Long story short: we need interpretative frames as orienteering tools, and because otherwise we would just fall prey to an even more dangerous implicit methodology. But, if you ask me, all such interpretative schemes should be constantly revised. Let us attempt great theories, general periodisations and classifications of authors and ideas, but only if we are not merely willing, but also ready, to question them. The great interpretative frame is not a goal to be reached once and for all. It is “always to be revised”.
When did the Middle Ages (or Antiquity, or the Modern Age…) “end” according to your school teachers? And according to your grown-up self? And did you ever radically change your interpretation of something?
On implicit methodologies, see this post. On various hypotheses for a periodisation of Indian philosophy, see this post and its comments, where Franco 2013 (which contains the papers by Lipner and McCrea mentioned above) is also discussed.
(cross-posted also on my personal blog.)
I agree: at best, a periodization is a hypothesis that can be revised or thrown out if necessary. (For example, Sircar’s “age of Prakrit” and “age of Sanskrit” in Indian inscriptions, which most scholars accept in some form, has mostly been held up by recent discoveries—e.g., there are very few Sanskrit inscriptions before the 2nd century, and very few Prakrit inscriptions after the 4th.) At worst, a periodization is an uncritical transposition of one set of terms onto cultural phenomena to which they have no internal connection whatsoever.
You can see something unfortunate about the Canadian school system (and I don’t think the American one is any better) from the fact that they… didn’t. I don’t recall any of my history courses talking about the end of the Middle Ages. It just wasn’t covered.
What about the beginning or end of any other age (be it the Modern Age or the Enlightenment)? Or was there just no place for history at all?
The only required history courses were on Canadian history, and even those left out the most important parts (the Seven Years’ War, the Loyalists fleeing the American Revolution). There was an optional “history of Western civilization” course I took in high school, which was intended to deal with the premodern West (because there was a second “modern Western” course for the modern era) – but I don’t remember what it said about periodization, if anything. I certainly don’t remember being given specific years as transitions from one period to another.
[The following may be more useful for the budding graduate student than the seasoned scholars frequenting this blog.]
Perhaps because I’m not an expert in this field, I’ve never been academically or psychologically invested in (or attached to) any particular periodization scheme, although I do indeed think these are helpful (and thus unavoidable). As with all such categorization, there are phenomena that may be excluded or missed as a consequence, and one will invariably fall back on something analogous to economists’ frequent invocation of ceteris paribus clauses. I suspect some of these schemes are strongly influenced by our understanding of the role played by non-philosophical variables in interaction with philosophical ones: the former including existing historical schemes, and anthropological, economic, political, “cultural,” legal, and linguistic variables, for example. Research in these various disciplines will inevitably affect our construction of periodization schemes, including the respective weight we explicitly or implicitly accord to such variables. Analogies and metaphors both within and across the periods will enable us to assess the methodological virtues and vices of our contingent schemes.
Personally, I’m not interested in the periodization of philosophy per se, but rather something christened “Indic civilization” across time, which includes both continuities and discontinuities and will undoubtedly be influenced by philosophical production. I’ve thus come to rely on the periodization scheme found in Gerald James Larson’s book, India’s Agony over Religion (1995):
1. The Indus Valley (c. 3000 – 1500 BCE)
2. The Indo-Brāhmanical (c. 1500 – 600 BCE)
3. The Indo-Śramanical (c. 600 BCE – 300 BCE)
4. The Indic (Hindu, Buddhist, Jain, Cārvāka) (c. 300 – 1200)
5. The Indo-Islamic (c. 1200 – 1757)
6. The Indo-Anglian (c. 1757 – present)
If compelling reasons are proffered to demonstrate the disutility of this particular scheme, I can’t imagine why I should hesitate to abandon it in whole or in part. I like the fact that Larson himself made an effort to point out some shortcomings of his own periodization. I’ve yet to digest the schemes found in the earlier post on this topic, but I hope I’ll be open to an honest assessment of their comparative benefits (or lack thereof). (Our appreciation of the inherent limits of our periodization schemes should also prevent us from spending an inordinate amount of time obsessing over them.) It might also be helpful to see what happens in other fields, say Islamic Studies by way of example, when faced with similar questions and controversies over periodization.
All taxonomic schemes are “theory-laden” to one degree or another, so it is important to be clear and frank with regard to our theoretical commitments in comparing competing periodization schemes. There may be sufficient reason to rely on one scheme for one exercise or narrative, and a different one for another such enterprise, if only because the abstraction, classification, and simplification intrinsic to such schemes are both interpretive and functional (in which case, we’re striving to be epistemically modest and perspectival). Hilary Putnam somewhere invoked the following from A.E. Singer, which reminds us of the “entanglement of fact and value” in such efforts (which of course does not mean the ‘confusion’ of fact and value or a denial of the importance of the distinction between the two):
1. Knowledge of facts presupposes knowledge of theories.
2. Knowledge of theories presupposes knowledge of facts.
3. Knowledge of facts presupposes knowledge of values.
4. Knowledge of values presupposes knowledge of facts.*
Finally, I think it’s also helpful to bear in mind the following from Michael Lynch: “[T]he conditions under which a proposition is true are partly determined by the conceptual scheme in which the proposition is expressed. But what makes a proposition true is not its relation to a scheme but whether or not the conditions in question obtain. For a claim to be true (or false), the conditions must be relative to a scheme. Yet the reason that the claim is true is not because it is relative to a scheme (as the truth relativist must hold); it is true because it is the case. [….] A fact, in the human sense, is simply what is the case.”
* We circumscribe a particular descriptive domain of facts because we believe it has function, purpose, and value for the kinds of questions we (or others) are asking (we have contestable assumptions and unavoidable presuppositions as to what are the interesting and relevant questions that would motivate this particular descriptive task). It is a map designed to guide us to getting a grip on what is relevant: it is a RELEVANCE MAP. The act of description is at the porous boundary betwixt and between the domains of information and knowledge; it aims to enable comprehension of something significant. We need to justify this taxonomy rather than that one (i.e., ask ourselves ‘why this, and not that?’). Any description must select from a virtually innumerable number of possible “facts” and perspectives on such facts, so much so that much hinges in the first instance on the initial choice or act of circumscription with regard to what we decide to describe as objectively or impartially as possible or practicable (this ‘objectivity’ may be simply or largely consensual in nature). And any particular descriptive endeavor involves (presupposes, assumes, and/or posits) any number of values (some would prefer here the less helpful concept of ‘interests’) and must exemplify certain cognitive virtues (coherence, elegance, abstractness, functional simplicity, or perhaps economy or parsimony and the like) to be a worthwhile description. With regard to values, Robert Nozick wrote: “Values enter into the very definition of what a fact is; the realm of facts cannot be defined or specified without utilizing certain values. Values enter into the process of knowing a fact; without utilizing or presupposing certain values, we cannot determine which is the realm of facts, we cannot know the real from the unreal.”
Or, as Putnam puts it, our knowledge of the world presupposes values, indeed, what comes to count as the real world depends upon our values (and these need not—and I believe should not—be construed in merely emotivist, subjectivist or conventional terms, nor should they be viewed as irrational or non-rational). This is evidenced in the “implicit standards and skills on the basis of which we decide whether someone is able to give a true, adequate, and perspicuous account of even the simplest perceptual facts….” It is Putnam who also reminds us that insofar as facts (or truth) and rationality are interdependent notions, a descriptive statement of fact entails “criteria of relevance as well as criteria of rational acceptability, and…all of our values are involved in our criteria of relevance.” So, should we want to proffer a description that is factual and thus true (that is, ‘true by our present lights,’ or as ‘true as anything is’), we will be answering the relevant questions that motivate the descriptive enterprise, and at the same time revealing (intentionally or otherwise) our values or system of value commitments. Putnam elaborates:
“The way in which criteria of relevance involves values, at least indirectly, may be seen by examining the simplest statement. Take the sentence ‘The cat is on the mat.’ If someone actually makes this judgment in a particular context, then he employs conceptual resources—the notions ‘cat,’ ‘on,’ and ‘mat’—which are provided by a particular culture, and whose presence and ubiquity reveal something about the interests and values of that culture, and of almost every culture. We have the category ‘cat’ because we regard the division of the world into animals and non-animals as significant, and we are further interested in what species a given animal belongs to. It is relevant that there is a cat on the mat and not just a thing. We have the category ‘mat’ because we regard the division of inanimate things into artifacts and non-artifacts as significant, and we are further interested in the purpose and nature a particular artifact has. It is relevant that it is a mat that the cat is on and not just something. We have the category ‘on’ because we are interested in spatial relations. Notice what we have: we took the most banal statement imaginable, ‘the cat is on the mat,’ and we found that the presuppositions which make this statement a relevant one in certain contexts include the significance of the categories animate/inanimate, purpose, and space. To a mind with no disposition to regard these as relevant categories, ‘the cat is on the mat’ would be as irrational as ‘the number of hexagonal objects in this room is 76’ would be, uttered in the middle of a tête-à-tête between young lovers. Not only do very general facts about our value system show themselves in our categories (artifacts, species name, term for a spatial relation) but our more specific values (for example, sensitivity and compassion) also show up in the use we make of specific classificatory words (‘considerate,’ ‘selfish’). To repeat, our criteria of relevance rest on and reveal our whole system of values.”
Putnam also reminds us that norms and standards of a kind are intrinsic to our descriptive projects:
“(1) In ordinary circumstances, there is usually a fact of the matter as to whether the statements people make are warranted or not. [….]
(2) Whether a statement is warranted or not is independent of whether the majority of one’s cultural peers would say it is warranted or unwarranted.
(3) Our norms and standards of warranted assertibility are historical products; they evolve in time.
(4) Our norms and standards always reflect our interests and values. Our picture of intellectual flourishing is part of, and only makes sense as part of, our picture of human flourishing in general. [On this as it relates to the social sciences, please see Andrew Sayer’s Why Things Matter to People: Social Science, Values and Ethical Life (Cambridge University Press, 2011)]
(5) Our norms and standards of anything—including warranted assertibility—are capable of reform. There are better and worse norms and standards.”
Patrick, thanks again for your interesting comments. While I see your point concerning the interaction of values and facts, I am not convinced that in the case of periodisations or interpretations of Indian Philosophy as a whole we are really dealing with univocal facts. After all, there is no solid “Indian Philosophy” apart from innumerable thinkers whose work could be interpreted and conceptualised in different ways according to what the interpreter believes to be more relevant. In this sense, I am afraid that in this case we are more on the side of the first part of Lynch’s quote: “[T]he conditions under which a proposition is true are partly determined by the conceptual scheme in which the proposition is expressed.” Thus, I approve of your attitude as you described it at the beginning of your comment: we must use periodisations (etc.), but we need to do so while being aware that they are our constructions, and while being ready to give them up as soon as contradictory evidence arises.
Thank you, Elisa. I don’t see where I expressed the notion that “in the case of periodisations or interpretations of Indian Philosophy as a whole we are really dealing with univocal facts”; indeed, the letter and spirit of my comment was rather the converse of such a claim.
Patrick, I did not mean to say that you said it. I just wanted to say that in the case of the history of ideas, the balance between values and facts weighs more on the side of values.
I was told at school that the end of every historical period was connected with a change in social and political structure; i.e., the end of the Middle Ages was the end of feudalism and the beginning of a new economic and (as a result) political system. This methodological pattern, no doubt, inherits much from Marx.
But compared to what you were taught, I think it was not bad.
Still, the same periodisation does not seem to make much sense when applied to the history of Indian culture.
Perhaps the problem with periodisation is similar to the one I have been exploring with evolution. The metaphors and structures we employ are outdated and under-sophisticated. I’ve been critiquing the idea of a linear tree with binary divergence as an over-simplified model of development. In its place I propose the braided river as a metaphor. It allows for multiple divergence and convergence, and rather than tracing origins to a singularity, it allows for multiple tributaries entering the stream at many points, each of which may itself be complex. Ideas and texts almost always involve some hybridisation – and the fact that we tend to call this “contamination” highlights that we are using unhelpful metaphors to conceptualise the process.
It’s easy to see that ideas change over time. And we know something of the dynamics of this process from modern history. Once ideas begin to be more routinely documented and dated, we can follow Foucault and do meaningful archaeology on them. I see no reason to believe we cannot do archaeology going back to ancient times, as long as we fully acknowledge the uncertainties involved. The recent debacle over the Lumbini “shrine” shows that archaeology is open to misuse, especially when patronage is at stake (and this is a very old issue in the history of Indian ideas, eh?). But the flaws in method and reasoning are obvious even to a non-specialist. If we make serious mistakes as individuals, our colleagues will be pleased to point them out to us!
It is clear that rates of development vary over time and place. What takes centuries to develop progressively in one place might be adopted all at once in another. Hybridisation is the norm rather than the exception. Periodisation has to take into account local conditions.
We can take a leaf from the book of physical archaeology. For example an archaeologist may talk about the “Iron Age”, but the Iron Age began at different times in different places. In India ca. 1000 BCE. In Australia or my home in New Zealand, there was no natural transition from Stone Age to the use of metals, no Bronze or Iron Age. This does not mean that Iron Age as a general category is meaningless. As a general term it is useful since cultures change in predictable ways when they discover how to work with iron and steel.
If we have anxiety about categories then it might be well to read about how they work. George Lakoff’s Women, Fire and Dangerous Things is an excellent introduction to a useful approach to categories and what they represent. One of the useful things about Lakoff’s approach is that, yes, to some extent categories are imposed by us on experience, but in other ways categories grow naturally from our experience as embodied loci of consciousness. The metaphors we use for abstract thought are not random. We “grasp” an idea (in modern European languages as well as in Classical Sanskrit) because grasping is one of the most fundamental ways we interact with the physical world (and one of the first motor skills we master as babies). Lakoff’s critique of classical theories of categories is helpful for loosening rigid categorical thinking too.
As long as we faithfully describe what we see and are open about our methods then the process of conjecture and refutation will result in the accumulation of useful knowledge about our subject (having trained in science I have faith in the scientific method). As scholars we individually and in small groups pursue our pet theories and develop our own ideas – we plant seeds and tend what grows. And then as a community we thresh and winnow the ideas so that the chaff is blown away and only the grain is left. This is consistent with Mercier and Sperber’s account of how reasoning works. In their view reasoning evolved to facilitate small groups making decisions on a course of action. For anyone putting forward a view confirmation bias is a feature, not a bug. Hence individuals are quite hopeless at reasoning tasks as measured by psychologists. But when challenging an idea in a discussion we are far less likely to fall into conceptual bias or logical fallacy. We work as part of a community and make progress collectively. One of my observations is that the best articles and books always thank a large number of people for discussions and input into the final product. The best scholars are usually well connected and have good colleagues to argue with.
It seems to me that categorisation is just one of the tools we use to create maps of our experience and our abstractions from experience. Time categories (periods) are no exception. If we have a moderately rational approach to categories they need not be so problematic that we have to spend a lot of time justifying them or apologising for using them. How are we supposed to think without them?
Thank you for your thoughtful answer, Jayarava. I think you are right in pointing out the risk of remaining trapped in the wrong metaphors. If we, for instance, speak of “contamination” in the case of manuscripts, we might be unconsciously inclined to think of it as a negative phenomenon, and to act accordingly. Similarly, categories such as “Medieval” or “Classical” are intrinsically normative and, thus, risky.
As usual, we partly disagree concerning the existence of “solid facts”, and this probably has to do with our different backgrounds. I am not a relativist, but I think that the humanities cannot be interpreted along the lines of the natural sciences, unless one wants to lose exactly the specificum of the humanities (i.e., humans’ original contribution to the world, which cannot be accounted for in the same way as a random evolutionary change). In this sense, I am ready to accept that fire is dangerous for all human beings, but I am strongly inclined to think that this basic commonality is comparatively small when compared to the different mythical/conceptual values “fire” has acquired in different cultures. To me, it is as if you were to use the fact that I happen to have one X chromosome more than you to describe me (instead of saying what I read, whom I grew up with, who influenced me and so on).
As for your last question: We need to use categories. But we also need to rethink them constantly. This is perhaps a main difference between, say, biology and philosophy. In the case of the former, one would expect a scholar to stop discussing the tools she is using after a short time—after all, they are just tools, to be used in order to go somewhere else. In the case of philosophy (which includes, in my understanding, the history of philosophy), by contrast, the purpose is not going somewhere else, but rather deepening our understanding. Thus, we cannot stop rethinking the very categories we are using.
I’m not sure if it’s clear that “Women, Fire and Dangerous Things” is a reference to one of four noun classes in an Australian Aboriginal language, i.e. nouns referring to these categories are declined similarly. The subtitle is “What Categories Reveal about the Mind.” As with the noun declensions of our own languages, such classes are not meant to imply an underlying reality, or indeed any similarity between the members of the category. Indeed, as an example it is partly intended to show how arbitrary noun classes are. Neither George Lakoff nor I, in citing him, imply anything else as far as I am aware. Our respective genders have never been a factor in how I have processed and responded to your writing. I’m not at all interested in your chromosomes.
All categories are “intrinsically normative and, thus, risky.” Some scholars are always going to be at work reinforcing norms and some are always going to be undermining them. Norms tend to shift anyway (one funeral at a time, according to Max Planck).
The history of scholarship dating well back into the ancient world is full of paradigm shifts in which categories are questioned, reformed, or abandoned. I see no reason to think that we are especially prone to fallacies in thinking compared to scholars throughout history. And those paradigm shifts are some of our layer boundaries.
I would argue that we should go a lot further than simply rethinking our categories from time to time, and follow Lakoff in rethinking the very concept of categorisation.