Discussing framework-free grammatical theory

A few months ago, there was a lively discussion at Languagehat of my proposal that grammatical theory should be framework-free (Haspelmath 2010), and that each new language should be approached without preconceptions. Some discussants agreed, but others disagreed. As so often, the physics analogy came up:

I cannot imagine a physicist publishing an article saying, “We have no chance of discovering the Higgs Boson, and even if we did it would not solve the problem of where gravity comes from. So we ought to close CERN and go back to tabulating meticulous measurements of planetary motion.” And it seems that this kind of retrenchment is what is proposed for linguistics.

But note that I was not suggesting that linguists should tabulate paradigms in a theory-free manner. The concept of a “framework” is often equated with that of a “theory”, but all I was objecting to was the idea that our theoretical work should take the form of general frameworks that constrain our language-particular descriptions. Physics is not a good analogy, because we are now pretty certain that the same physical laws apply everywhere, whereas it’s clear that different languages have different categories and rules. If we are not born with specific innate categories and architectural formats for grammars (as I assume), then there is no reason to think that a category posited for one language should exist in another one – how would it get there? Another discussant resorts to an analogy from botany:

this seems like an argument that a botanist should describe a newly-discovered flower “on its own terms” without reference to any preexisting knowledge base of accumulated understandings of how previously-studied flowers typically work and what sorts of variations over what ranges have previously been observed. That seems like some sort of weird ascetic desideratum that would entail a lot of wheel-reinvention.

Since each organism is built in a different way, this analogy is much more apt, and I would think that botanists do not assume that they are restricted in their descriptions by pre-established categories such as petal, petiole, stipule, bract, ochrea. These are often helpful labels, but I would think that they are more like comparative categories that allow botanists to compare plants. If a new flower were discovered that has novel shapes, this would not be regarded as a problem for the “framework”.

Framework-bound linguists typically see their frameworks as hypotheses about what could possibly be acquired by human learners; botanical textbook categories are certainly not hypotheses about what kinds of plants could possibly exist. I would think that restrictions on possible plants mostly come from the requirement of survival (i.e. a plant must be adapted to its environment), not from some kind of “universal plant structure” that is instantiated somehow in the plants’ DNA (surely DNA can build many more kinds of plants than we observe, but most of the genetically possible plants have no chance of survival).

Mark Liberman writes in the same discussion:

I’m sympathetic to the idea of approaching new languages — and for that matter familiar ones — without (clinging to) preconceptions. But there’s another danger here, which anyone who has looked seriously at more than a few grammars will recognize. It’s common to find that the same phenomenon is described in superficially quite different ways, to the point that it takes quite a lot of work to see the relationship. This is just as likely to happen because different preconceptionless authors develop their ideas in random directions, as because different ideologically-committed authors bow to different theoretical idols.

Liberman is right that ignoring similarities between languages may be a problem, because we can learn a lot from descriptions of similar phenomena in different languages. Languages do exhibit plenty of similarities, of course, just like flowers. And the grammatical descriptions by American structuralists, who followed Boas’s imperative of describing each language in its own terms, are often difficult to read. This was because typology didn’t exist at the time. Nowadays, it is much easier to describe a language both in its own terms and in such a way that the similarities with other languages are apparent to the experienced reader, e.g. by using familiar labels such as Antipassive or Incorporation for phenomena that are similar to phenomena elsewhere that have been described in these terms. But the identity of labels should not mislead us to think that the categories are identical. The labels we have at our disposal are usually tailored to a few well-studied languages; they rarely even deserve the qualification of “hypotheses” about what is universal in language.

Update: See now also the discussion of framework-free grammatical theory with Richard Larson.

Reference

Haspelmath, Martin. 2010. Framework-free grammatical theory. In: Heine, Bernd & Narrog, Heiko (eds.). The Oxford handbook of grammatical analysis. Oxford: Oxford University Press, 341-365.

 

 




5 thoughts on “Discussing framework-free grammatical theory”

  1. Pingback: Confusing p-linguistics and g-linguistics: Philosopher Ludlow on “framework-free theorizing” | Diversity Linguistics Comment

  2. I’m sorry for writing this comment so long after the post was published; I didn’t see it until now.

    I fully support this notion that we need to approach languages with fewer, or hopefully no, preconceptions. It might be extremely hard and cumbersome, perhaps even impossible, but surely that shouldn’t stop us from having it as our aim?

    If there is indeed “a common nature of language”, a “shared structure” or whatever we want to call it, should we not let that be proven in its own right, by not presupposing it but letting it reveal itself through framework-free descriptions? Surely such an approach would do more good than harm, unless we count the extra time and energy devoted to study and description as a harm. It might be hard to see the similarities at times, as Liberman said, but surely that is preferable to only seeing what we were looking for in the first place?

    I’m noticing more and more grammars that are described as being created primarily for typologists. Often I feel that they miss this point of keeping language-specific terminology separate from the concepts of cross-linguistic comparison. I don’t want a translation grammar from “typologese” to a specific lg; I might even prefer a translation grammar from an IE lg to a specific undocumented lg, because it might actually reveal more nuances at times.

    What is also worrying is when cross-linguistic labels are used and it is doubtful whether the author fully understands the literature on the subject. Perhaps we need to ask ourselves: what can we expect from a person documenting a language? Are they to be typologists and language experts? Should the goals be more modest in nature? Should the description of a language be done by teams with a broad spectrum of competence rather than by lone field linguists?

    Either way, this is primarily a discussion of the nature of second-hand sources. There are more language experts and speakers available to typologists than we sometimes think. In Linguist List’s directory of linguists there are experts on 1200+ topics (most of them specific languages), and the members of ALT speak 112 different mother tongues and are experts on 839 lgs. (NB: many of these entries are duplicates, Deutsch – German etc.) Not to mention the multilingual treasure that is large modern cities.

    If we are worried that a descriptive work is biased or based on preconceived ideas, we should turn to other works, language experts and speakers, and from that multitude of sources draw our own conclusions.

    I’m sorry if this was a very obvious comment; I understand that this is not really news.

    • Yes, I agree that there’s also the danger of grammars written in “typologese” (nice neologism!). They are highly readable, but don’t capture the peculiarities of the language well. One has to find a middle ground…

      • Hm, I have found several errors in my comment and wish there were an edit option. I’m glad that someone managed to understand what I meant (next time I’ll have lunch before I press “send”).

        Some further thoughts (forgive me for repeating some issues):

        What I understand as the point of the article and the blog post (and Bybee & Dahl’s gram-types) is that we should keep the two sets of terms and concepts for language-specific description and cross-linguistic comparison separate. They might go by the same labels at times, but we should make explicit how they differ, be highly aware of these differences, and not use the cross-linguistic terms directly (or the French/German/English terminology, for that matter) or in other ways let pre-established categories influence our perception and description of a new language.

        Essentially: if the data we have access to is already influenced by typology or other preconceived ideas then how are we ever to draw truly interesting and relevant conclusions about reality? (And even if our frameworks are “true”, shouldn’t we let them prove that by standing on their own legs?)

        This somewhat relies on the pessimistic premise that when we attempt to do typology we are not good enough at seeing the differences when they are masked by similar descriptions (and considering that many grammars do not present enough examples supporting their points, this is probably quite true). In other words, it is worth increasing the differences by promoting more language-specifically tailored descriptions, even if this “hides” the similarities and makes the job of a typologist harder, because the alternative is (perhaps) only finding what we were looking for.

        Either way (seeing similarities in different descriptions or differences in similar descriptions), the life of a typologist is not easy… (natural examples supporting the points being argued in grammars are always very, very, very welcome).

        What sometimes happens though is that a grammar written “from a typological perspective” becomes limited by what typologists have found interesting to study so far and what has been given a label. The separation of these two sets of terms and concepts becomes very blurry (or simply non-existent), the very opposite of what typology needs!

        In the ideal scenario we would like all descriptive linguists to first attempt to describe the lg bottom-up, with rigorous proof of each proposition (in these days of PDF appendixes there is little need to worry about length). This ideal descriptor wouldn’t let pre-established categories influence the work, but at the same time would know enough linguistics to proceed with the description in an effective manner. Perhaps basic knowledge of some very strong implicational hierarchies would be good? In the super-ideal scenario, the frequency of constructions in a corpus of natural language would be included. (Ah! I’m very much liking this dream scenario!)

        It is the job of the typologist to see the patterns (or lack of patterns); they don’t have to be present already in the language-specific description.

        Linguistic typology might be hard at times (but that doesn’t mean we should make it easier or not do it) and some hints from the individual descriptor together with a little standardized terminology might be nice when one is reading the 15th cryptic grammar – but perhaps this shouldn’t be expected? What can be expected? Should language descriptors care what we think?

        Now, the task of creating a language-specific description totally bottom-up is not easy either, perhaps impossible. I hope that while we strive towards framework-freedom, descriptors are able to openly acknowledge the preconceptions they do carry with them (perhaps best summarized for some people in BLT and Shopen 1985?).

        What do we expect of field linguists, and what can we expect? Is this too heavy a burden to carry? Should descriptive works be devoted to narrower subjects and/or done by a larger team of linguists? Perhaps the days are over when one person does one description of one language in 4 years…

        Perhaps the middle ground is to be found in a multitude of sources and grammars with motivation through lots of examples, and, perhaps most importantly, more informant-questionnaire-based-typology.

        There are lots of speakers out there that are accessible to us; there are even lots of speakers quite sympathetic towards linguistics as a field… shouldn’t we take more advantage of that in linguistic typology than we do? We are discussing grammars here, but this issue of framework-freedom is equally relevant to questionnaire design.

  3. “If it is not done in physics, it cannot be done in linguistics.” This argument has a long pedigree. More recently it has been advanced against the idea that grammatical descriptions should be (or indeed are) formulated in framework-free terms. But how cogent is this argument? It disregards the fundamental dissimilarity between linguistics and physics. The history of linguistics is SHORT in comparison with the history of physics. Whatever was done in physics more than 1000 years ago is absolutely outdated today. But the same is not true of grammatical descriptions. Panini’s grammar (c. 400 BC) is still today “the most complete generative grammar of any language yet written” (Kiparsky in 1993). Apollonius Dyscolus (c. 200 AD) made use of the ‘deep structure – transformation – surface structure’ apparatus and formulated the ‘performative hypothesis’. Sibawaihi (c. 790 AD) gave the quasi-definitive description of Classical Arabic, in a format closely similar to dependency grammar. Furthermore, functional explanations as employed in current typological research are based on the idea that “people use language to achieve purposes and goals” (B. Heine in 1997); and when spelled out, this idea is seen to go directly back to Aristotelian psychology.
    Of course, what these examples show is not that there has been no descriptive framework at all. But they do show that behind the apparently bewildering variety of different frameworks that have been put forward during the last 2500 years, there is something like a COMMON BASIS that just needs to be uncovered. In sum, the analogy from physics to linguistics is not valid.
    On the other hand, some legitimate analogies are being missed. As Martin Haspelmath has pointed out, within physics there is no counterpart to Chomsky-type ‘restrictive explanations’. The differential calculus can be used to describe any kinds of (continuous) motions, not only those that conform to the laws of Newtonian mechanics. To give another example: “the idea of a generative grammar emerged from an analogy with categorial systems in logic. The idea was to treat grammaticality like theoremhood in logistic systems and to treat grammatical structure like proof structure in derivations” (Katz in 1981). But notice that in formal logic theoremhood (= validity) is a property possessed only by a tiny subset of well-formed formulae. Thus, formal logic is not ‘restrictive’. ‘Restrictive formalism’ was a bad idea from the start. Its uncritical acceptance has never ceased to astound me.
    It is not just the case that linguistics is claimed to be analogous to physics. Even more strongly, it is claimed that linguistics IS (reducible to) physics. But, as argued by Putnam (1999), such attempted reductions have – literally – the status of science fiction. What they claim to achieve is NOW totally incomprehensible to us. Perhaps it will be comprehensible in the future. But this does not mean that we understand it today. Similarly, from the fact that I may someday learn to play the violin, it does not follow that I can play it today.
