What is the role of innate universal categories in grammatical theorizing? A conversation between David Adger and Martin Haspelmath

Martin Haspelmath:

David, you criticized a blogpost that I wrote a while ago, where I said that Chomsky apparently changed his mind and no longer assumes a rich universal grammar (UG). I didn’t quite understand what you meant in your brief Twitter comments. I have been under the impression that, at least in 20th-century Principles & Parameters linguistics, the idea was that innate grammatical knowledge explains limits on diversity, and that therefore analyses of particular languages should make use of the innate grammatical categories that we have hypothesized to exist (e.g. V, N, A according to Baker (2003), or the various functional heads hypothesized to be innate by Cinque (1999), and the various operations, such as head movement and vocabulary insertion, that are routinely used by practitioners of mainstream generative grammar, MGG).

David Adger:

My criticism was fundamentally about the claim you made in that post that the Faculty of Language in the narrow sense consists only of recursion, as there are also the mappings to the interfaces, which is where issues of category come in. I wanted to make clear that there’s a distinction between the substantive content of what’s in UG, which is not discussed in the Hauser et al. paper, and the architectural (formal) side, which is about Merge and the mappings. The operations you mention are just Merge; the operation Agree is also used, but we’d like to reduce that to something else. But crucially there are various principles mapping these to the interfaces (linearization algorithms, binding theory, etc.). That’s the position I took in my 2003 Core Syntax book, I still think it, and I think it’s pretty mainstream.

Martin:

But some people seem to think that the constructs of MGG don’t have to be innate, e.g. Jason Merchant in his statement for the Athens 2015 conference:

“We can stop tying our analytical proposals to old debates about Universal Grammar, innateness, and learnability, and stop even paying lip service to positions in these debates. These are independent issues, orthogonal to the central theoretical issues we face, and a wonderful red herring for those who would seek to ignore or dismiss all generative syntax work. One can argue for or against UG as a theory of the language faculty, but it makes no difference to whether our proposals about selection, agreement, movement, phrase structure, etc. are right.” 

David:

I think the first thing is to read “can” here as epistemic possibility, and not as any kind of injunction. Jason is, I think, saying that people dismiss insightful work in GG because they think it’s tied to particular claims about UG. But it could logically be the case that none of this stuff is language-specific (say it’s all learned through distributional learning with some Bayes chucked at it and some kind of general cognitive constraints that force tense to be marked hierarchically higher than aspect, or whatever), and that wouldn’t mean that the specifics of the analysis would have to change. So Jason’s point is not about explaining the limits on cross-linguistic diversity; it’s a logical point: an analysis of, say, English sluicing doesn’t need to concern itself with the claim that its posits are innate.

Martin:

Yes, a particular analysis of English sluicing could be right regardless of what we think about innate knowledge (= UG). But how can one claim to explain limits on cross-linguistic diversity if one’s analytical proposals are not claimed to be proposals about UG? How can one argue that a language X should be described in terms of notions such as C, T and v (which are not widely understood outside of MGG circles) unless one has good independent reasons (independent of the facts of the language) to think that the language probably uses them? In other words, can one use different symptoms for identifying elements like C or T or v in different languages and constructions unless these are part of our innate knowledge?

David:

Sure. Continuing the point I think Jason was making, the categories could be right, but learned. You could say that the whole set of GG mechanisms is a side effect of domain-general capacities interacting with learning algorithms. (I don’t think that’s correct, for poverty-of-stimulus reasons, but it could be, philosophically speaking; I don’t think Jason thinks it’s correct either.) It’s just that the correctness of the analyses doesn’t depend on the idea that the categories and operations are innate. Perhaps the data available to learners across languages (because of historical and functional principles) is very similar, and domain-general mechanisms acting on these lead to Merge and Move and all the rest. The arguments against that view are not about the specifics of sluicing or whatever. They are about the direction of inductive inferences, the unlikeliness of the mapping from apparently diverse data to similar analytical outcomes, poverty-of-stimulus effects, the structural uniformities of unrelated languages, etc. Particular analyses (of sluicing, or whatever) almost always have direct consequences for these issues, of course, because they raise the question of how the posits of the analysis can be learned. The real arguments for GG are, I think, mismatches between what is available to a learner and what is learned (both in terms of poverty and surfeit of the stimulus), and a general view that abstraction provides such good explanations of the phenomena that syntax is probably a natural as opposed to a social phenomenon. And if you buy that, that of course legitimizes the use of the theory across data it was not developed for (indeed, that’s what a theory is for).

Martin:

It seems to me that the traditional view is represented, for example, in the recent NLLT article by Smith and colleagues (“Case and number suppletion in pronouns”), who say:

“the unattested patterns do not arise as they cannot be generated in a manner consistent with Universal Grammar”

I fully understand this – it makes perfect sense if linguistics is like chemistry and its task is to find the ultimate building blocks – the innate categories of the human grammatical mind.

David:

Right. So I think Jason also believes this (though you’ll have to ask him!); it’s just that he thinks particular analyses don’t need it.

Martin:

Good, so I understand that not all generative analyses are meant to imply claims about limits on cross-linguistic diversity. But I’m still wondering why generative grammarians rely so heavily on a set of pre-established notions that are thought to be applicable to all languages (e.g. specifier-head-complement patterns, functional categories like v and T), and so far the most plausible explanation that I can think of is that generative grammarians generally presuppose that there are causal factors in the brain that create the same categories in all languages (whether they are specific to Language or not) – this is what I have been calling the presupposition that the universally available categories (or features, or operation types) are natural kinds. Would you agree with this?

David:

I’m not sure I think that they are natural kinds, in the classical philosophical sense (like gold, or water or hydrogen). But I’d agree that there is something about the architecture of the mind that makes certain concepts co-optable by the syntax, while making others inaccessible to it.

There are some nuances here. You say Language, while I say syntax. And you say “create the same categories in all languages”, while I used “co-optable”. I think these differences matter. I think most generative grammarians think that there are concepts that are not available to the syntax as a category (though they may be lexicalized in a word). That entails that there is a group of concepts that are available to the syntax as categories. It doesn’t entail that all languages will pick the same group, or that all languages will pick the whole group, or that there are members of the group that are always co-opted. The crucial thing for generativists is the idea that certain concepts can never be categories. So this is a weak claim, but it is interesting in two ways, I think. (i) It makes the claim that humans couldn’t learn languages built on the non-co-optable concepts, which is possibly testable through both artificial language learning paradigms and through neuroscience.

(ii) If minimalism is right, then there is a further claim that there should be a very small set of such concept/category pairs, which raises the likelihood that the same ones will keep cropping up in language after language and possibly in context after context (which is what I think – cf. Daniel Harbour’s approach to number or person features).

I think this is a difference between the two approaches you sketch that is maybe more important than the natural kinds issue. The task in generative grammar is to keep trying to get down to the fundamental atoms and their modes of combination. The theoretical approach is abstraction: as we get smaller in the basic atoms, and it’s their interaction that does the work, we achieve a deeper level of explanation. So I’d expect the same feature (say, something that corresponds conceptually to a boundary) to be at play in grammatical number, in aspect, in mass-count, in mirativity, in tense, etc. The categories get smaller and more abstract, but each does more explanatory work. Ditto for modes of grammatical combination: you start with myriad PS rules and transformations, and you keep pushing and pushing to see what these are built up out of, what the data tells you is wrong about them, what generalizations they may be missing, until you reduce them, first to X-bar schemata and eventually to Merge, which turns PS rules and transformations into a single operation, while factoring out both order and category. It may even be that Merge itself is just a way of describing what nature generally does with hierarchies, which is organise them via self-similarity. Once we get down to these atomic units and the principles of their organisation, they would be artifacts of the way that human cognition is set up (so ultimately biological). They may also not be uniquely human (apes probably have a notion of boundary), but the human thing is that they are co-opted by the syntax to build meanings (so there is an interface between the syntactic and conceptual systems), and the syntax itself (the capacity to impose hierarchy on sequences of words) is probably uniquely human, and hence again biological. So the natural kinds, if that’s how we want to think of them, are very abstract.
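To make the point about factoring out order and category a bit more concrete, here is a minimal sketch of Merge as nothing more than unordered set formation; the representation is purely illustrative and assumes a toy encoding of lexical items as strings, not any particular published analysis:

```python
# Purely illustrative sketch: Merge modelled as unordered, label-free set
# formation. Strings for lexical items and frozensets for syntactic objects
# are assumptions made only for the illustration.

def merge(alpha, beta):
    """Combine two syntactic objects into an unordered set.

    The operation itself imposes no linear order and no category label;
    on the view sketched above, those come from separate mapping principles
    (e.g. a linearization algorithm at the interface).
    """
    return frozenset({alpha, beta})


# Building "saw the cat" bottom-up: because merge applies to its own output,
# this single operation already yields hierarchical, self-similar structure.
dp = merge("the", "cat")
vp = merge("saw", dp)
print(vp)  # e.g. frozenset({'saw', frozenset({'the', 'cat'})}) - printed order is arbitrary
```

The point of the toy sketch is just that the combinatorial operation does no ordering and no labelling of its own; everything else has to be handled by the mappings to the interfaces.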

I think, but I may be mistaken, that the functionalist approach doesn’t expect this, as the categories are fundamentally learned from the data, and so are closer to the “surface”. So in construction grammar, there are many, many modes of syntactic combination, and the variety of human languages is seen as an argument for myriad constructions. Ditto for categories in, say, Croft-style construction grammar. The abstraction is then done by a different mechanism: not the syntax itself, but the learning mechanisms and mechanisms of change (chunking, analogy, frequency, etc.). But abstraction is still required somewhere to give an explanation of what we find.

Martin:

Yes, I would say that it’s very unlikely a priori that there are uniform natural causes behind cross-linguistic similarities, because languages are as diverse as human societies and cultures, and nobody assumes uniform natural causes for the similarities between these either.

David:

Not if the natural kinds are abstract enough, and I think the syntax of language isn’t as diverse as you think (though its morphophonological realizations may be).

Martin:

I understand what you meant earlier by saying that:

“the task in generative grammar is to keep trying to get down to the fundamental atoms and their modes of combination. The theoretical approach is abstraction: as we get smaller in the basic atoms, and it’s their interaction that does the work, we achieve a deeper level of explanation.”

But I’m still puzzled that generative grammarians should think that they are close to having discovered the atoms of human syntax – at least that’s the impression that I get when reading MGG papers.

David:

I don’t think anyone thinks this. If you are following the method I sketched, you adopt proposals as hypotheses, ideally keeping a lot constant, as you vary one property, to see whether you can gain any insight into the phenomenon by doing that. That’s why people try to keep the categories, which have been justified in other languages, constant. If you do get insight into the phenomenon (by predicting new patterns that were previously unobserved, or providing an explanation for a different part of the grammar that wasn’t directly under investigation, etc.) then the whole set of hypotheses gains some extra justification, though it requires further work to understand exactly how – which hypotheses are doing the real work?

I think most generative grammarians think we’re very very far from having discovered the atoms (Chomsky always goes on about us being at about the stage of 18th century chemistry). It’s just a matter of working with what you have. But that does assume that the same categories are available to different languages.

(My own, probably uncommon, view is that we can’t ever understand the world/phenomena as they are. All we can do is create theories of the world which are hopefully intelligible to us, and these theories hopefully bear a close relationship to the world through their descriptive and predictive capacities. But our theories are always partial, selective and oversimplified. They are just the best we have.)

Martin:

Let me give an example of what I mean by saying that generative syntacticians seem to think that they are close to having discovered the atoms of human syntax. Recently I saw a paper on Maori subject extraction which simply assumes that Maori has things like CP, SpecCP, FP, F', vP, v', and so on, and without understanding this assumption (which, in my view, is extremely unlikely to be correct), it’s hard to understand the paper.

David:

I can’t really speak for Maori, but if I take the work that Daniel Harbour and I did on Kiowa (Adger et al. 2009), I think we got a lot of understanding of the language by adopting a certain set of hypotheses about categories and their organization (there’s V, v, Appl, Asp, Neg, Modality and Evidentiality). They are probably wrong (isn’t everything?) but they allow us an advance in our understanding, both of the phenomena, and of the theory. So we used our analysis of Kiowa clause structure to argue against particular hypotheses in generative grammar (that there is roll-up movement), and for others (that the relationship between morphological complexity and syntactic height is looser than most current theory would predict).

Martin:

The general “abstraction” programme that you sketched makes a lot of sense, of course – I think this is how chemists finally discovered the chemical elements in the 19th century. But would you say that there is actually good evidence for the universality of things like CP, vP, as well as C’ and v’ and so on?

David:

I think there’s evidence that they are available to languages (though whether there is C' as well as C is a theoretical question; I’d say no, but then I’m a minimalist ;-)). I don’t think there’s much danger if they are basically used descriptively, much the same as functionalist grammarians will use “basic linguistic theory”.

Martin:

That’s interesting to hear, because I recently wrote a blogpost about this, asking whether it makes sense to use generative grammar as a descriptive tool. My answer in that blogpost, and my thinking so far, has been that using the same categories for all languages makes sense only if one makes strong assumptions about a rich innate UG: If one had a firm conviction that the categories must be innate (derived from whatever source), then it would make sense to proceed in this way, even in the face of very limited evidence and slow progress.

David:

Well, I think that there are two issues here: innateness and universality. I argued above that some categories were co-optable, others not. So the co-optable ones are innately available to UG, but not necessarily universal (present in every grammar). I do think that V and N are almost certainly universal (for empirical, not theoretical reasons, though my own pet theory also makes these universal for theoretical reasons), and I do think that there is only a small number of categories (for the reasons I gave earlier), and I do think that the pairwise ordering is broadly universal though partial (if you have T it is above Asp, if you have C it is above v, Neg is not ordered with respect to these categories, etc.) – which is basically observational, based on Bybee/Rice/Cinque etc., so it follows that most languages will end up having similar clause structures. That’s probably the justification you’re wondering about.

Martin:

I don’t have the impression that assuming universally available categories like C, T, Asp, and v has led to robust new insights. What I want is an explanation of cross-linguistic tendencies in grammatical patterns, and I don’t seem to get any explanations from assuming the innateness of these categories. You probably see things differently, but how do we measure our success? For me, the measure is that the predictions of my theories should hold in any representative sample of the world’s languages and that the theories should be consistent with other things that we know.

David:

I agree with that (that the predictions of theories should hold!), but I’m not sure about consistency with other things we know, as I don’t think we know them. I find it quite bizarre, for example, how impressed cognitive grammarians are with the results of (gestalt) psychology. We know more about language than about general semantic cognition, so it seems backward to me to insist that syntax looks like something we actually know little about. I actually think the same about processing (connecting to a separate Twitter conversation with Stefan Müller). We really don’t have good theories of syntactic processing. We have a few observations, and a lot of guesswork. I have no idea why Stefan thinks that the parser is preeminent, if indeed he does. What’s it using to parse things? If we build everything into the parser and have no grammar, we’ve lost any hope of understanding cross-linguistic variation. The parser has to be sensitive to the grammar. I’m afraid I also think that we don’t know very much about how the human mind uses frequency. My sense is that grammar is sensitive to rank frequency (à la Zipf), not token frequency, but I don’t think we know that. So I’m not keen on the kinds of frequency explanations you provide in your work. Or the stuff about extravagance and how it affects language change (Haspelmath 1999). I know from my work with sociolinguists that huge amounts of language change don’t follow that pattern, so I think it’s fairly minor compared to broader social effects on language.

Anyway, I agree we want predictive theories that tell us about how languages (all of them) work, and how they can vary. But I’m not compelled by the notion that we “know” anything well enough to make that thing a condition on our theories of grammar. The theory of grammar has an important explanatory task of its own: what is human language?

Martin:

Many thanks, David, for this exchange of views and ideas! Till next time…


References

Adger, David. 2003. Core syntax: A minimalist approach. Oxford: Oxford University Press.
Adger, David, Daniel Harbour & Laurel J. Watkins. 2009. Mirrors and microparameters: Phrase structure beyond free word order. Cambridge: Cambridge University Press.
Baker, Mark C. 2003. Lexical categories: Verbs, nouns, and adjectives. Cambridge: Cambridge University Press.
Cinque, Guglielmo. 1999. Adverbs and functional heads: A cross-linguistic approach. New York: Oxford University Press.
Haspelmath, Martin. 1999. Why is grammaticalization irreversible? Linguistics 37(6). 1043–1068.

3 thoughts on “What is the role of innate universal categories in grammatical theorizing? A conversation between David Adger and Martin Haspelmath”

  1. Wow, thank you both very much for this very informative debate. I tend to agree with Dr. Haspelmath in this debate, as the majority of my background is in functionalist syntax as used by researchers like Tom and Doris Payne. However, I have read every word of Dr. Adger’s ‘Core Syntax’, and I am very interested in the current direction of generative literature, especially regarding ergative-absolutive constructions in Austronesian (although in my opinion these are usually driven by voice phenomena rather than case, as in Tagalog and other "symmetrical voice" languages).
    I am curious though, what do you both think about the longevity of the papers written in these respective frameworks? Is it dangerous to put all of your eggs in one basket and publish using a generative framework that may be unintelligible in 50 years? Is using a functional framework even worth it if it is simply description with no explanatory power whatsoever?

    • There are two types of explanations: Language-particular explanations (as in a descriptive grammar) and general explanations (at the level of Human Language or cognition), and both have their value. When you work on an individual language, the former are more important, and the latter may be quite irrelevant, because the phenomena you study may be historical accidents. In most generative work, grammatical phenomena are taken to reflect cognition, as if grammars were not shaped by long histories and functional-adaptive factors.

  2. Thanks for posting this informative debate!
    Could you provide a link to the Twitter debate with Stefan Müller that David Adger mentioned? I tried to find it but wasn’t able to.
