In the last post, Part I of this series, I introduced the idea of a text language and noted that modern linguistic methodologies face some unique constraints when applied to such languages. In this post I will summarize an insightful book chapter by Suzanne Fleischman entitled “Methodologies and Ideologies in Historical Linguistics: On Working with Older Languages.”[1] Her work specifically targets Old French, but the principles, opportunities, and issues she lays out apply readily to the study of Greek.

Approaches of Historical Linguistics

Fleischman breaks historical linguistics up into three main approaches: first, the diachronic approach, which analyses the changes a language has undergone over time; second, the synchronic approach, which aims to “produce linguistic descriptions of dead languages or archaic stages of living languages” (34); and, lastly, the theoretical approach. The first two deal with the data that exists in the form of texts. By contrast, the theoretical approach “extrapolates from the data in order to identify general principles and mechanisms of language change” (34). Approaches like grammaticalization theory and many forms of cross-linguistic analysis fit in this camp: they aim to provide descriptions of language change that fill in the blanks, as it were, between the points of data that actually exist. Fleischman’s specific concern in her essay is with synchronic study.

Key theoretical issues

In the paragraph quoted below, Fleischman lays out some of her guiding concerns in question form:

First, are so-called descriptive statements about text languages, whether in the form of rules or otherwise, truly descriptive? Or are they ultimately norms masquerading as value-neutral statements? Second, to what extent is it reasonable for historical linguistics to claim ideological neutrality, i.e. that grammars are value-neutral, as practitioners of autonomous linguistics would have it? (I realize, of course, that most historical linguists would probably not pitch their tents in the autonomous linguistics camp, but that doesn’t invalidate the question.) Third, can linguistics rightfully claim for itself the status of a science, given that its object of study has been constructed through linguistic practices, out of the incessantly variable data of real–and in the case of historical linguists, also vanished–language activity? (35)

In short, she is asking how we can write grammars of text languages at all, and what the absence of native speakers means for the process of understanding those languages. What assumptions allow us to fill in the blanks left by our lack of access to the living language tradition?

Grammarians as linguistic dictators

Grammarians write grammars, and a fundamental problem faces all of them: there is no self-evident way to write a grammar of a language. Though some categorizations and groupings recommend themselves, or are at least strikingly better than others, there is always a variety of possible approaches to take. A problem in Old French grammars, according to Fleischman, is the dominance of the taxonomic approach to grammar. By this she means that grammarians have taken the grammatical categories bequeathed to them by the Latin grammarians and described the language in those terms. This is endemic to grammars written in the Western tradition, not just those of Old French, for obvious historical reasons. Problems abound with this approach, ranging from words and constructions which do not really fit the categories laid out by the Latin grammarians to the more basic question: whose categories should be used to describe a language, and why?

Not everyone is content with a taxonomic approach, though. Fleischman traces two major responses and one minor response to this issue in the Old French tradition (anyone familiar with the Greek grammatical tradition will doubtless see the similarities).

Historicist Reflex

Under the title “the Historicist Reflex,” Fleischman describes the approach in which the grammar of a relatively elusive stage of a language is defined in terms of the grammar of an earlier or later period that is better established. In the case of Old French, the stages appealed to will obviously be either Latin or some version of Modern French. As a specific example from the Old French grammatical tradition, Fleischman points out that critical editions of Old French texts have been standardized to remove “declension errors” present in the manuscripts. These corrected texts then become the basis for descriptive grammars, which in turn make claims about the sort of declension system that existed in Old French. The obvious problem is that the actual data provided by the manuscripts is no longer informing the description of the language; rather, the data has been filtered through a norm from a different period of the language. To spell out the process: a noun declension system for Old French was assumed based on how the noun system functioned at some other point in the language’s history. Next, deviations from this assumed norm were labeled as errors and culled from the texts. Finally, the grammars describe the noun system exactly as it was assumed to exist before the textual data was ever consulted. The result is descriptions that are disconnected from the actual data about the language that survives in the texts. In its most overt instances, this is analogous to modern linguists forming a theory of how speakers use a language, asking them what they say, and then telling them they are wrong because their usage does not conform to the theory.

Conceptual Inertia

Second, the Conceptual Inertia response “involves the straightforward application of the linguistic concepts or grammatical categories of a modern language to the data of an older stage of that language” (39). This is an increasingly common approach. Philologists and specialists in Old French, to use Fleischman’s example, may find a category like “switch reference” odd, but such categories are well attested across languages and can be expected to have existed in Old French in some form. Attention to them points up that the time-hallowed categories of grammatical description are not self-evident and may not be the best way to describe the language at all. Yet because the traditional categories have been locked in, they come to be assumed as self-evidently correct descriptions, or, if not completely correct, at least good enough for the task at hand. In this way, traditional taxonomic grammars may actually be obscuring the description of a text language in key ways rather than aiding it.

However, the conceptual inertia approach is also rife with problems: once again, categories of language use from other stages of a language can be foisted upon the text language even when they do not belong.

Formal Linguistics and the “ideal speaker”

The minor response to traditional taxonomic grammar that Fleischman mentions is the application of formal linguistics (such as generative grammar) to text languages. It is a minor approach because formal linguistics has not been widely applied to text languages, for the simple reason that its methods are designed to elicit what native speakers intuitively know about their language by “generating” the formal “rules” of the language. The obvious issue with text languages is that there are no native speakers to query. If the textual remains of a language are vast, it is often possible to find “minimal pairs” with which to test theories of various sorts, though this is not without problems of its own. To compensate for the lack of speakers and of evidence germane to a particular course of study, scholars can, and sometimes do, make up examples in text languages. But at the end of the day, unless we are producing a sentence that already exists in our written corpus (and in a similar context), we can never be sure whether it actually conforms to the “rules” of the language in question such that a native speaker would have considered it a well-formed utterance.

This problem is made more complex by the fact that real language users routinely avoid expressions which are perfectly well-formed in terms of grammatical rules. Fleischman points to the example that in English we say “car-dealer” and “drug-dealer,” but not “newspaper-dealer.” The last expression violates no rule of well-formedness; it simply is not used. While we have a great deal of textual data for a variety of text languages (Old French and Greek being in the same boat here), we lack the intuition needed to know when a grammatically well-formed utterance would nonetheless have been rejected by native speakers as simply unidiomatic. To quote Fleischman’s conclusion to this section:

“In short, we have no reliable guide to usage. And this, to my mind, poses a serious obstacle to the application to older languages of generative or other formal methodologies that (a) rely crucially on native-speaker judgments to determine the acceptability of constructed data, and (b) have as a primary agenda to produce grammatical descriptions capable of accounting for any and all possible utterances of the language in question. For a language whose data corpus is finite, i.e. it can only be expanded by the discovery of previously unknown texts, a grammar designed to generate any and all possible sentences would seem to be, in a word, overkill.” (43)

Following this brief summary of the history of historical linguistics and some key camps within it, Fleischman turns to a meta-question: is linguistics properly speaking a science at all?

Is linguistics a science? How about historical linguistics?

The exact status of linguistics as a science is open to debate, sometimes quite heated debate. Here is Fleischman’s parsing of the birth of linguistics vis-à-vis the older discipline of philology:

Philology is interested in linguistic facts solely as a key to understanding the literary monuments of earlier ages. But the new ‘science’ of linguistics insisted that language be studied for its own sake, for intellectual interests of its own, and for no ulterior purpose. (44)

A characteristic feature of a science is that it studies stable objects in a stable manner. Yet language as used in actual communities contains significant variation, which poses difficulties for any attempt to systematize its description. What exactly, then, does linguistics study? Fleischman writes:

To sum up my point here, the task mainstream linguistics has set for itself has been to describe the language of an ideal speaker-hearer in a homogeneous speech community. This formula, which readers will surely recognize, is Chomsky’s… the crucial point, i.e. that linguistics has to idealize its object in order to describe it, goes back through the structuralist paradigms to Saussure.

The obvious problem with this approach is that the ideal speaker-hearer does not exist in actual spoken-language communities, and there is absolutely no reason to believe that an ideal “speaker-hearer” is available to us in our extant documents. The “native speakers” of text languages are the texts themselves, and they consistently (in both Old French and Greek) differ from the way we conceive of and write grammars for those languages. This leads nicely into the last major point of her analysis.

Bashing the scribe: language change as decay

One significant limitation in the study of text languages is that the extant texts are rarely the direct products of an author; they are mediated by any number of copyists. A question must therefore be asked of our texts: from whom do the “errors” come, the authors or the copyists? Fleischman points out that the authors of Old French texts have often been assumed to be the Chomskyan ideal speaker-hearers of their language community: they produced texts which were perfect examples of the language, free of “corruptions” or evidence of “linguistic decay.” But of course the extant texts often show otherwise. They are full of “errors” of spelling, of grammar, of surprising usages, and so forth. The general solution of scholars has been to assume that the errors belong to the dunces who served as the scribes copying the works, and that the language of the author himself (of course, most ancient authors we know of were men) must have been of the sort described in our reference grammars.

This assumption, which in most cases is impossible to prove since we have no access to the originals, is problematic in that it privileges an abstraction. The textual critics who build the base texts scholars generally use have tended to normalize them in a variety of ways. Sometimes this has been the specific aim; sometimes it is merely a byproduct of making the texts more readable. At this level textual critics introduce spelling changes, regularize ambiguous noun and verb forms, smooth over grammatical constructions, and so forth. The correction of these “errors” certainly makes the texts much easier to read, but it also produces a text that is an abstraction. We have no way of knowing whether the text a critic produces ever actually existed.

Obviously, textual criticism as practiced in Old French and in Greek has its own history, practices, and ideological commitments. Fleischman’s point is rather simple: textual criticism tends to produce normalized texts that have been “cleansed” of the errors of the scribes and restored to the “purity” of the original author, and it is these normalized texts which serve as the basis for our grammars, histories of languages, cross-linguistic analyses, and so forth. The whole enterprise conceals a variety of assumptions and ideologies which have themselves been normalized and which sometimes, perhaps many times, work directly against the actual textual evidence that survives.

Fleischman, Old French, and Greek

This summarizes the main points I wish to pull from Fleischman’s stimulating chapter. As already noted, she is a specialist in Old French and draws all her examples from that tradition of language study. I trust that a reader familiar with the history of Greek scholarship in any detail will readily recognize parallels between the concerns she raises and the way that Greek study has progressed over the years. In the next post, I will tease out a few of these implications as I think out loud about some of the difficulties of using modern linguistic methods to analyze ancient texts.

[1] Fleischman, Suzanne. “Methodologies and Ideologies in Historical Linguistics: On Working with Older Languages.” In Textual Parameters of Older Languages, edited by Susan C. Herring, Pieter van Reenen, and Lene Schøsler, 33–58. Amsterdam Studies in the Theory and History of Linguistic Science, Series IV: Current Issues in Linguistic Theory 195. Amsterdam: John Benjamins Publishing Company, 2000.