Dear José-Luis,
Many thanks for your open letter of March 2020 (on your blog and on Lingbuzz), where you discuss a number of recent contributions of mine, and where you argue that in contrast to what my 2020a paper on linguisticality and recent blogposts imply, there are no problems with generative grammar (GG) once one adopts a correct general perspective, because GG does not assume innate building blocks. [There is also a Lingbuzz version of this response.]
You are making many valid and clear points, and I greatly appreciate your attempt to bring more clarity into these discussions. I understood almost all of what you say (which is a great starting point), and there may actually be some scope for agreeing on some key issues. But this would apparently imply that most of the actual current practice of generative grammar (e.g. most of the papers in journals such as Linguistic Inquiry or Glossa) is irrelevant to the key concerns of generative grammar as you portray them from your philosophical point of view. There is no contradiction here, just an obvious tension – which is probably more relevant to me (because I am a practicing grammarian) than to you (because a philosopher can afford to dismiss much of what people do or say in practice). Incidentally, there was an earlier occasion on which I encountered (what looked like) a wholesale dismissal of a large amount of actual practice, and I was totally shocked: This was when I saw Newmeyer’s (1998) paper on “the irrelevance of typology for grammatical theory”, which bears some resemblance to what you are saying. But then I read Newmeyer’s 2005 book, and by that time both he and I had realized that we agreed on most things (namely that grammatical theory is mostly irrelevant to typology, because so little is innate). So maybe the current exchange with you, José-Luis, will have such a happy ending, too.
Mental grammars and social grammars
I was glad to see that you highlighted the difference between Chomskyan mental grammars (“competences”, “I-languages”) and Saussurean social grammars, because this difference is underappreciated in current linguistics. Most linguists seem to think that their task is to describe mental grammars, even though (as you agree) the idea of describing grammars as systems of social conventions is perfectly coherent. The two research programmes are different, but related, and one could imagine linguistics departments with different linguists specializing in either one or the other, but happily cooperating and learning from each other – because ultimately, the social conventions that the nonmentalist linguist describes and the mental grammars that the psychologist posits must be compatible. The actual practice, however, is very far from this – and why is this? (“Why can’t we talk to each other?”, as I put it in 2000.)
In practice, the vast majority of generative linguists do not do things that are very different from other (non-Chomskyan) linguists. (Sorry, I don’t have a label for the kind of linguistics that I am advocating – I call it just “linguistics” or “language science”; your label “functional comparative linguistics” gives the right idea, because I do happen to have a particular interest in functional explanations, but these do not define my thinking. I’m interested in all kinds of explanations, and all kinds of theories.)
So for example, many Africanists use generative grammar notions in their papers on syntax (e.g. in the recent volume edited by Clem et al. 2018), and even many sign language researchers use generative grammar notions (e.g. in the recent book by Bross 2020). There are also some well-known dialectologists, e.g. the Italianist Roberta D’Alessandro, who work primarily on dialects of European languages, but still use generative grammar notions (cf. her recent LangSci paper on topic agreement in the Ripano dialect). Are all of these linguists uninterested in “externalization interfaces” (as you also call the social grammars)? Some generative linguists work on language change, and some even work on language variation (for example, David Adger gave a plenary talk at the 2015 NWAV conference). So in actual practice, generative grammar notions (such as X-bar theory, vP vs. VP, c-command, movement) play a role far beyond the concerns of philosophically oriented authors like Chomsky, Hornstein, and yourself.
Moreover, in actual practice, we cannot distinguish between mental grammars and social grammars, even though conceptually, it is of course clear that they involve different scientific goals. In practice, we always describe social grammars, whether we use generative grammar technology or not. Linguists sometimes use the term “introspection” to describe acceptability judgements, but we do not of course introspect our mental grammars – what we do when we make such judgements is to ask ourselves what the social acceptability of an expression is. Grammatical ill-formedness always boils down to social ill-formedness – and language acquisition means acquiring the social rules of the surrounding communities. So yes, there are mental grammars and social grammars, and it is not trivial to get from the social grammars to knowledge of language. But social learning is what humans are especially good at – we learn a huge number of social rules, and we can make ill-formedness judgements in all these domains (for example, cheese for breakfast is ill-formed in French culture, and a cappuccino after lunch is ill-formed in Italian culture; both are perfectly fine in German culture, and in fact highly frequent).
How can we study mental grammars?
Since Chomsky (1957), many linguists (including anti-Chomskyans such as Lakoff, Langacker, Bybee and Goldberg) have adopted the goal of describing mental grammars, but how can we actually find out how our knowledge of languages is mentally organized? Linguists rarely address this question directly, but generally content themselves with simply assuming that their theories correspond to some mental reality. This may seem reasonable, but again, it provides no way of distinguishing mental grammars from social grammars, and thus it is understandable that “mental linguistics” and “social linguistics” have not become two separate fields of study (“sociolinguistics” is really social variation linguistics, not the study of social conventions per se). You say yourself that social grammars are “a (privileged) access route” to mental grammars, but you do not say how else one would identify aspects of mental grammars that are not evident from social grammars.
(Of course, some grammarians do take psycholinguistic evidence into account in their general models of Human Language, e.g. Sag & Wasow 2011 for syntax, and Jackendoff & Audring 2020 for morphology; but interestingly, these tend to be the kinds of linguists who are least likely to call themselves “generativists”.)
In your letter, you say that GG assumes that “Human language must be studied with a methodology of natural sciences, that is, with a hypothetico-deductive model”, but I do not see how this might help with mental grammars. You compare language (as part of human nature) with memory and vision, but crucially, memory and vision are not culture-specific. To understand the non-culture-specific part of Human Language, we need to somehow subtract what is culture-specific, but how? There does not seem to be any specific method that is different from traditional structural linguistics, plus perhaps ordinary psycholinguistics.
I would think that a hypothetico-deductive approach is also used by anthropological scientists who study social behaviours (e.g. the prevalence of human sacrifice, or the foraging behaviour of males and females). Of course, literary scholars who evaluate the beauty (or other subjective impact) of a work of art operate in a different mode (even though linguists sometimes share departments with them), but I do not see any deep difference between social and cognitive sciences in their methodologies.
I also fail to see why the study of social grammars should be “inductive”, while the study of mental grammars is necessarily “deductive: from language to languages, and not from languages to language”, as you say in your letter. I would think that induction and deduction are necessary aspects of any science, whether cognitive or social. However, if there is a special causal factor that is peculiar to cognition, namely an innate grammar blueprint, this would indeed call for a more deductive approach to particular languages: One would start out with the hypothesis that some element of grammars is innate, and one would test it on new languages. This is what I have called the Mendeleyevian vision (or natural-kinds programme), which you agree is incompatible with Chomsky’s recent thinking.
Human Language and p-languages
I have been observing a certain lack of awareness, on the part of many (generative) linguists, of the distinction between Human Language (studied in general linguistics, or g-linguistics) and particular languages (or p-languages). Linguists often jump to conclusions about Human Language in general from the study of particular languages, but as you rightly say:
“only some properties of particular languages are historical accidents, but not all. We must exclude all the properties which are not subject to historical change”
But how do we know which properties are, and which ones are not, subject to historical change, other than by studying a wide range of social grammars? I do not know of any way of finding out what is a general property of Human Language other than studying a range of p-languages (or studying aspects of language use that are not rooted in conventions, such as slips of the tongue, as I mentioned in my paper on general linguistics, Haspelmath 2021; but I leave psycholinguistics aside here, because it plays no role in your letter either).
One example of the insufficient distinction between p-languages and Human Language appears in your letter when you say that
“When generative grammarians construct a theoretical model of a mental grammar, they do so by including in their model all the necessary and sufficient components of the Faculty of Language, that is, their object of study is the famous Faculty of Language in the Broad Sense (FLB), not just the FL in the Narrow Sense (FLN).”
Since you first talk about “a mental grammar”, I take this to be about the linguistic knowledge of a particular (maybe idealized) speaker. But then you talk about the “Faculty of Language” (in the Broad or Narrow Sense), a term that I always took to refer to human linguisticality (= the biological capacity for language). But linguisticality is a general trait of the human species, and I find it odd to use it to refer to a capacity of a particular speaker. Certainly Ferdinand de Saussure, who is often credited with popularizing the term faculté du langage, would not have used it to refer to an individual’s capacity. (Note that you talk about “a mental grammar”, with an indefinite article, but not about “a (particular speaker’s) faculty of language” – why is this?)
This is relevant here because Chomsky has often described “universal grammar” as “the initial state of the language faculty”, which is mapped onto the mature state (= a mental grammar) through a process of parameter setting. This mode of description is completely non-social and non-cultural, as if language acquisition were not one aspect of the child’s enculturation. And it leaves aside the possibility of learning two or more languages – would one say that a bilingual has two mature language faculties? Are we thus all born with two language faculties? (or with more, maybe dozens in the case of polyglots?)
It is my strong impression that the non-social perspective on human language has always led Chomsky to equivocate on the actual goal of his research programme. For decades, he basically pretended that English is the only relevant language for his concerns, and after the Principles & Parameters period of the 1980s-1990s, he seems to have reverted to this view in the 21st century. This equivocation can also be found in your letter: On the one hand, you say that the goal of a generative grammarian is to “predict (in the simplest and most empirically adequate way) the form and meaning of the expressions of the analysed language” (= p-linguistics), but on the other hand, you say that the objective is to find “the common formal principles that limit or channel the development of each knowledge system that we call ‘I-language’”.
So what is the goal? P-linguistics (= individual mental grammars) or g-linguistics (= common formal principles, presumably innate)? Personally, I find both of these interesting, and they should not conflict with what we know about social grammars and their regularities. So all of them must be studied together – but they are distinct scientific goals, probably achieved with distinct methods.
Why generative grammar in practice assumes building blocks
In your letter, you insist that “generative grammarians do not assume that there is a rich innate grammar blueprint” – you claim that all they assume is that there is a biological capacity for language (something that is not in doubt, of course). You say that generative grammarians make use of
“elements and principles necessary to account for the form and meaning of linguistic expressions, independently of whether these are innate or not, language-specific or not, universal or not”
But if this were the case, the descriptions of individual languages would look as different as the languages themselves, not as uniform as they appear in current generative textbooks. For example, linguists would not have the idea that all languages have DPs (even if there is no definite article, or no single determiner position for articles/demonstratives), or that all clauses are CPs (even in languages that lack complementizers, like Chinese), or that all phrases conform to X-bar theory, or that all anaphors must be c-commanded by their antecedents.
But if these things (DPs, CPs, c-command, etc.) are innate building blocks, then they may occur in the same way in all languages, even though this is not immediately apparent. In other words, if there is an innate grammar blueprint, then the actual practice of generative grammar makes sense (basing comparison of languages on building block uniformity, not measurement uniformity). The Mendeleyevian vision can be regarded as a concrete plan (as laid out in Baker’s 2001 book), and one can be optimistic about its long-term success, even if big successes are not apparent to everyone yet.
You are right that there is nothing about Chomsky’s more recent philosophy that requires a rich innate universal grammar (he doesn’t seem to be worried about Plato’s Problem anymore, which at one time was said to require it). But of course, Chomsky is not a practicing generative grammarian. Practicing generative grammarians cannot assume that different languages miraculously have the same building blocks – there must be a causal mechanism for this cross-linguistic identity, and it can only be innateness, as far as I can see.
So as I said earlier: Since you don’t seem to have a stake in any particular views of how generative grammar should be done (whether with DP or NP, whether with binary branching or multiple branching, whether with HPSG or Dependency Grammar, etc.), you may agree with me that actually existing mainstream generative grammar (with its Mendeleyevian vision) is probably on the wrong track. As you say yourself in your letter: “Where would [the rich innate grammar blueprint] have come from?” Innate building blocks are possible, but inherently unlikely, so Chomsky’s 21st century view (that “merge” is the only part of human linguisticality that is specific to language) is more likely to be on the right track.
Conclusion
I often hear that non-Chomskyan comparative grammar and Chomskyan linguistics have different goals, but as I tried to make clear, I cannot see this: Everyone in practice studies social grammars, and everyone asks more general questions, about cognition and universality. These questions are difficult to answer, because we have no good access to mental grammars, and we do not know which of their aspects are general across speakers and populations (as opposed to culture-specific). So I think that you are wrong in dismissing the study of social (or “externalization”) grammars as somehow irrelevant to the Chomskyan enterprise – though we may actually agree that much of actually existing generative grammar is on the wrong track.
Yours sincerely,
Martin Haspelmath
Jena and Leipzig, March 2020
References
Baker, Mark C. 2001. The atoms of language. New York: Basic Books.
Bross, Fabian. 2020. The clausal syntax of German Sign Language: A cartographic approach (Open Generative Syntax 5). Berlin: Language Science Press. (https://langsci-press.org/catalog/book/256)
Clem, Emily & Jenks, Peter & Sande, Hannah (eds.). 2018. Theory and description in African Linguistics: Selected papers from the 47th Annual Conference on African Linguistics (Contemporary African Linguistics). Berlin: Language Science Press. (https://langsci-press.org/catalog/book/192)
D’Alessandro, Roberta. 2020. Agreement across the board: Topic agreement in Ripano. In Smith, Peter W. & Mursell, Johannes & Hartmann, Katharina (eds.), Agree to Agree: Agreement in the Minimalist Programme (Open Generative Syntax 6). Berlin: Language Science Press. (https://zenodo.org/record/3541757)
Haspelmath, Martin. 2000. Why can’t we talk to each other? (A review article on F. Newmeyer’s “Language form and language function”). Lingua 110(4). 235–256.
Haspelmath, Martin. 2020a. Human linguisticality and the building blocks of languages. Frontiers in Psychology 10(3056). 1–10. (doi:10.3389/fpsyg.2019.03056)
Haspelmath, Martin. 2020b. Two methods for comparative grammar: Measurement uniformity and building block uniformity. Diversity Linguistics Comment (Blog). (https://dlc.hypotheses.org/2305)
Haspelmath, Martin. 2021. General linguistics must be based on universals (or general aspects of language). (to appear). (https://www.academia.edu/42175669/General_linguistics_must_be_based_on_universals_or_nonconventional_aspects_of_language)
Jackendoff, Ray & Audring, Jenny. 2020. The texture of the lexicon: Relational Morphology and the Parallel Architecture. Oxford: Oxford University Press.
Newmeyer, Frederick J. 1998. The irrelevance of typology for grammatical theory. Syntaxis 1. 161–197.
Newmeyer, Frederick J. 2005. Possible and probable languages: A generative perspective on linguistic typology. Oxford: Oxford University Press.
Some further discussion has taken place on Twitter (https://twitter.com/haspelmath/status/1238141549629845504):
José-Luis: “though we may actually agree that much of actually existing generative grammar is on the wrong track”. If you want to imply that I believe that, you are wrong. It’s your opinion, not mine. My letter states just the opposite.
Martin: But much of actually existing generative grammar is based on the “Mendeleyevian vision”, i.e. on the idea that we need “in-depth analyses” to find the true building blocks. This makes sense only if the building blocks are innate natural kinds, a claim that you reject. I’d like to see some actual examples of generative claims which you regard as valid but which do not rely on pre-established universal building blocks. I can’t think of any.
José-Luis: Virtually every concept, principle or category used in GG research meets this description. E.g. I can claim that CP is universal without assuming it’s a pre-established building block. UG is not like preformationist homunculi. That’s a caricature. In any case, the fact that I cannot convince you that UG is not an innate grammar does not mean that I have to agree with you that current GG is on the wrong track.
Martin: How can CP be universal but not innate if children (and linguists) don’t have compelling evidence for it? What causal mechanism could make it universal as a hidden structure, if not its presence at birth? I have not seen any proposal.
José-Luis: A fertilized egg does not have a pancreas. But all human beings have one (although slightly different from others). What is innate is not the pancreas, but the multiple factors that determine its formation in development. The same applies to cognitive organs, including language.
Martin: That’s just what I mean: just as there’s a pancreas blueprint (that needs to mature) right from the start, there must be a uniform grammar blueprint, given by human biology. Otherwise generative grammar makes no sense. This is what I call “natural kinds”. Unlikely, but possible.
Martin: What I mean by “innate” is “determined biologically, and thus invariant across cultures”. This is the only way in which we can have the “hidden invariance” that is so typical of generative grammar. Without innateness, cultural invariance cannot be “hidden”, but must be measurable.
José-Luis: Here we agree. But again: this does not imply that the innate component of language is a set of pre-established grammatical categories, or a “rich innate grammar blueprint”, as you call it. Sorry, but I don’t know how to explain it better.