One prominent way of expressing the goal of what is often called “grammatical theory” (or “linguistic theory”) is to say that it aims to establish an innate architecture and a set of features and categories that are rich enough to account for everything we find in the world’s languages, but restrictive enough to explain the gaps in what we see and to explain why we can acquire languages despite the poverty of the stimulus. I always found the first goal absolutely compelling (of course each language must be cognitively representable), and the second and third goals at least coherent: Yes, it could be that the limitations on diversity that we find are due to innate representational constraints, and yes, it could be that language acquisition is guided by the same kinds of constraints.
For example, it could be that a property of the innate UG (a lexicalist architecture) specifies that syntactic rules cannot “look inside” words, and that this not only explains why word parts are never accessed by syntactic rules, but also why we can readily learn the morphological and syntactic systems of our languages.
But how do we make progress in understanding cross-linguistic diversity and the possibility of language acquisition? Progress, it would seem, consists in identifying more and more UG constraints and in finding more and more evidence for them, converging from different domains. So are we making progress? Strangely, few people seem to be asking this question.
I do not see much evidence for more and more restrictiveness in the generative literature – on the contrary, I keep seeing papers that argue for a richer UG that is less restrictive. Here are a few cases in point:
- Baker (2015) argues that a case feature can either be assigned via agreement (as in traditional Chomskyan syntax since the 1980s) or through a novel, powerful mechanism called “dependent case” that considers only the configuration of nominals (see Haspelmath 2018 for a critical review and further discussion of restrictiveness).
- Bruening (2018) argues that the Lexicalist Hypothesis, according to which the grammar of words and the grammar of sentences are separate, is wrong, thus removing one possible source of restrictiveness. (Of course, many other authors have argued the same point over the last two decades, but few of them so directly; my 2011 paper makes the same point but does not assume an innate UG to begin with.)
- Citko (2005) argues that in addition to external Merge and internal Merge, there should be a third type, Parallel Merge.
- G. Müller (2007) argues that Distributed Morphology should include not only a mechanism of impoverishment, but also the opposite mechanism of enrichment.
- G. Müller (2017) argues that generative syntax should include not only a mechanism of Merge, but also the opposite mechanism of Structure Removal. (I have not looked into the details, but it seems that this is quite similar to David Pesetsky’s mechanism of exfoliation.)
- In phonology, there have been various proposals for gradient constraint effects, especially Boersma & Hayes’s Stochastic OT, and more recently Smolensky & Goldrick’s Gradient Symbolic Representations.
- Jenks & Rose (2015) argue that the Kordofanian language Moro shows that phonological constraints can indeed precede morphological placement rules, thus making the phonology-morphology interaction less restricted.
- More generally, van Oostendorp (2018) found in his overview of the last 25 years of Optimality Theory that most proposals for modifying OT have amounted to making the system less restrictive.
I already noted in an earlier paper (Haspelmath 2008: §2.4) that the idea of macroparameters, prevalent in the 1980s and 1990s, was basically given up around 2000.
Nobody can blame scientists for rarely admitting openly that an approach they have followed for a long time is not working. Things are rarely so clear-cut, and who knows, maybe there are successes around the corner after all.
But I would still like to see more discussion of the central point: What does it mean that generative grammatical theory is (apparently) getting less and less restrictive? In what sense does this constitute progress, if at all? Isn’t this rather a gradual retreat from the idea of an innate UG that explains acquisition and limits on cross-linguistic variation?
I have always been open to the idea of innate UG constraints (most recently, I have discussed three constraint types explaining cross-linguistic patterns, among them innate representational constraints, although the paper primarily focuses on distinguishing between functional-adaptive and mutational constraints). I also understand that linguistics works in terms of communities, and that in some subcommunities, formalisms of the generative sort are an unquestioned part of the community standard.
But we also want to do objective science and move closer to the truth as a discipline. I don’t see how this can work if we don’t say clearly what the goal of (framework-bound) “grammatical theory” is. A very large part of the discipline seems to be doing business as usual, even though the foundations of the enterprise are less and less clear (at least to me).
Postscript: After sketching a draft of this paper, I had a conversation with two prominent generative syntacticians of the younger generation whose work I had criticized. They told me that they were not committed to the idea of an innate UG, or that at least they wanted to separate their work on grammar from innateness claims. But without the restrictiveness claim (and without restrictions coming from innate knowledge), I don’t see why one needs all the specificities of universal frameworks. I am truly puzzled.
References
Baker, Mark C. 2015. Case. Cambridge: Cambridge University Press.
Citko, Barbara. 2005. On the nature of Merge: External Merge, Internal Merge, and Parallel Merge. Linguistic Inquiry 36(4). 475–496. doi:10.1162/002438905774464331.
Haspelmath, Martin. 2008. Parametric versus functional explanations of syntactic universals. In Theresa Biberauer (ed.), The limits of syntactic variation. Amsterdam: Benjamins.
Jenks, Peter & Sharon Rose. 2015. Mobile object markers in Moro: The role of tone. Language 91(2). 269–307. doi:10.1353/lan.2015.0022.
Müller, Gereon. 2017. Structure removal: An argument for feature-driven Merge. Glossa: a journal of general linguistics 2(1). doi:10.5334/gjgl.193. http://www.glossa-journal.org//articles/abstract/10.5334/gjgl.193/
van Oostendorp, Marc. 2018. History of Phonology: Optimality Theory. http://ling.auf.net/lingbuzz/003827
Comments
I feel that this post confuses “loss of restrictiveness” with normal scientific progress. Take the parallel merge case, for example. Parallel merge does not need to be stipulated: it is what happens when you merge the same element into two non-c-commanding positions. It is predicted to exist by Merge, and we might rather wonder what rules it out. The answer, according to Citko, is linearization, in most but not all cases: parallel merge is allowed as long as it can be linearized. This perspective provides an explanation for phenomena like antecedent-contained deletion (ACD) and right node raising, which are otherwise enormously problematic for traditional phrase-structure-based approaches. Great!
To summarize, syntacticians have a theory which makes testable predictions. Working out these predictions allowed us to discover a natural account for data that is otherwise problematic, without any additional machinery. This is not only normal scientific progress; it is a striking illustration of the success of minimalist machinery in accounting for natural languages.
Well, maybe sometimes scientific progress means that we have to abandon earlier strong claims and accept that we actually know less than we used to think. – But I admit that I have understood neither “parallel merge” nor the more fundamental notion of “merge/Merge” very well. The claim seems to be that “merge” and “linearization” are separate mechanisms, and that a particular way in which they interact makes the right predictions about what can occur in languages. All of these mechanisms are part of the innate grammar blueprint. All of this would be well and good if it were clear to all linguists what exactly is predicted, so that they could test it. So it seems that what we need is someone to explain work written in LI-style jargon to non-insiders…
I think one of the difficulties here lies in trying to draw a clear line between richness and restrictiveness, which perhaps shouldn’t be done. The main issue is that it isn’t only richness that is used to account for variation; its opposite, sparseness, is also invoked when underspecification is argued to permit multiple options. Now, if sparseness can explain variation (as it does these days in Minimalism), then richness amounts to the same thing as restrictiveness, and we arrive at a point where we’re unable to distinguish them sensibly.
As frustrating as this may seem, it motivated P&P in its earliest days and continues to motivate what I agree is post-P&P theory. The mark of progress was never taken to be just the discovery of UG constraints, but that their nature should make the requirements of richness and restrictiveness turn out to be two sides of one coin. That is, the aim is not to start with an empty grammatical theory and fill it with as many representational possibilities as you need to account for variation, with restrictiveness coming from *not over-filling* the theory (this wouldn’t get you much past descriptive adequacy). Rather, the aim is to fill it with representations that give you variation and its limits at one and the same time, the specification of a representation being a kind of constraint in itself.
Realistically, we’re unable to achieve all these goals at once, so here and there we divorce richness from restrictiveness and hope that they’ll one day be unified. So what does it mean that generative grammatical theory is (apparently) getting less and less restrictive? I would say its significance is this: the Minimalist program aimed for a restrictive unification, and in so doing its methods and its empirical coverage changed, so the less restrictive theories we see now (if that’s what they are) are renewed attempts to account for variation without, as yet, a vision for how they might one day receive a more unified treatment (though the chances are that most of these ideas in their present form won’t be current in 50 years).
Certainly, none of this is a retreat from the idea of an innate UG, if only because the examples you gave relate to the character of a domain-specific syntactic module. What we’re seeing is not something peculiar to our period of history, it’s an inevitable swing in one direction in our constant toing and froing between data coverage and theoretical unification.
Having said this, I do have concerns about the issues you raised in the last few paragraphs. In particular, though linguistics has its communities as does any other science, it seems bizarre to me that the formalisms of generative theory should be adopted without commitment to a UG and, indeed, a commitment to giving a theory of UG, rather than theories of particular linguistic phenomena couched in UG terms. If I were to criticise some of the recent attempts to make UG richer, though not necessarily the ones you listed, it would be that they add new formalisms to the vocabulary of generative grammar in the hope of explaining variation without at all considering what these new formalisms mean for the language faculty as a cognitive system.
Sometimes, then, it seems that description masquerades as explanation. Yet there’s nothing wrong with description, so why can’t we present it for what it is? Of course, if you find a phenomenon that seemingly can’t be explained by the computations presently supposed to characterise UG, you might have the urge to posit an additional computation to cover it. The issue there, of course, is that in positing a computation to account for this loose end, you create a formalism that does little more than technically redescribe an observation, rather than actually explaining it. It might *one day* be the basis of a deeper explanation, but I think at the moment there’s a lot of reaching around in the dark. Linguists who use the formalisms of generative theory, but are not so much concerned with UG as a psychologically real object, are too quick to think that every linguistic phenomenon should be treated in terms of innate structural principles until proven otherwise. More pluralism is needed there, without doubt.
I am puzzled to see my paper (Bruening 2018) on the Lexicalist Hypothesis listed as one that argues for less restrictiveness. I thought it was arguing for MORE restrictiveness. The Lexicalist Hypothesis has two components of grammar with very different rules and constraints: a syntax, and a lexical word-formation component. My paper argues that we should get rid of the second and have only a single component, a morphosyntax. This is a MORE restrictive theory: word-formation obeys exactly the same rules and constraints as the syntax. There is no second component with completely different rules and constraints. The model with only one component countenances fewer possible principles, rules, constraints, etc. In that sense it is more restrictive.
It also seems to me that this blog post, and the field in general, is confusing two issues with the term “restrictive”. A model of grammar can be more restrictive in permitting fewer structural options or derivational mechanisms, but it can still be less restrictive in not actually restricting the output of those mechanisms. For instance, Kayne’s Antisymmetry (Kayne 1994) claims to be very restrictive in permitting structures of only one particular form, but it does not actually restrict surface outputs, because it allows all kinds of derivational operations. So it is not restrictive at all in what it generates; in fact, it wildly overgenerates.
It is important to keep these two senses of “restrictive” apart. Restricting a theory is useless in the abstract; one also needs to see whether the posited restrictions actually make a difference in generated outputs. My 2018 paper also restricts possible surface occurrences: the model with only one component says that “words” can never involve structures that could not have been built by the same rules and constraints that we see operative in the phrasal syntax.
Thanks, Benjamin, for this comment – and thanks for confirming that the field as a whole is confused (not only me). OK, so “restrictive” can mean two different things: (1) few mechanisms, and (2) few generated outputs. The first conclusion I would draw from such a situation is that we need two different terms if we want to be less confused. Any suggestions?
Personally, I’m mostly interested in proposals for explaining why certain language types are not found (or found very rarely), and it seemed to me that the Lexicalist Hypothesis did make some interesting proposals there – though I agree that they failed (and they were never very clearly formulated). It seems that in the new system without a lexicalist UG, MORE language types are allowed, so it’s “less restrictive” in sense (2) (even though it may be “more restrictive” in sense (1), just like Kayne’s Antisymmetry).