In a number of publications over the years, Stephen Anderson has advanced the idea that phonological and morphosyntactic phenomena should often be explained diachronically, rather than with reference to the innate Language Faculty (a.k.a. Universal Grammar) (cf. Anderson 2005; 2008; 2016). For someone who has been a very prominent generative phonologist and morphologist (cf. Anderson 1974; 1992), this is remarkable. In the generative mainstream, very few linguists have even entertained the possibility that core properties of grammars (such as distinctive features and alternations in phonology, or case-marking rules in syntax) might be explained by anything other than UG. The notion that “linguistic theory” (= what generative linguists are engaged in) consists in elucidating the constraints of our cognitive apparatus on possible mental grammars is still widely taken for granted. Thus, Anderson’s arguments are interesting and deserve consideration (see also Plank 2007 and Cristofaro 2012 for similar arguments).
In the 2016 paper, Anderson focuses on phonology and goes so far as to claim that “there are at present no convincingly demonstrated substantive universals governing the set of possible regularities”, citing work by J. Blevins (e.g. 2004) for phonological alternations (such as final devoicing) and by Mielke (2008) on distinctive features. For morphosyntax, he gives the example of the association between ergative alignment and perfective aspect, and between accusative alignment and imperfective aspect, and he also cites Aristar (1991) for a diachronic explanation of word order correlations.
But even after reading the three papers by Anderson, it was not quite clear to me what exactly he is claiming (or whether he is really making a claim at all, rather than just voicing serious doubts about the importance of UG for understanding grammatical patterns; cf. the pessimistic conclusion of the 2008 paper).
In the 2005 paper, he distinguishes between three possible explanatory factors: (i) input data, (ii) the learning process, and (iii) what is cognitively possible (“the Language Faculty”; these distinctions reappear later, but somewhat less clearly). Because he takes a narrow view of diachronic change, he regards it as belonging to the first factor, and he stresses how important this factor is. He concludes that “things are as we find them in substantial part because that is the outcome of the shaping effects of history, not because the nature of the Language Faculty requires it” (2016: 18).
I think this constitutes progress over the mainstream generative view, but Anderson doesn’t make it fully clear what exactly it is that he wants to explain, and one doesn’t get a sense of how the historical developments can have “shaping effects” if they are as “contingent” as he implies they are.
Linguists want as many explanations as possible, of course – not only explanations of universals (and universal tendencies), but also explanations of particular phenomena of particular languages. For example, why does Lezgian have final voicing, as in gat-u [summer-OBL] vs. gad [summer]? It turns out that a diachronic explanation of this quirk is available. A similar example from English is the voicing in singular/plural pairs like thief/thieves, which is synchronically puzzling but turns out to be a well-understood remnant of an earlier phonetically grounded sound change. The great discoveries of 19th century linguists (whose abandonment Anderson regards as premature) were mostly of this type: Explanations of synchronic idiosyncrasies as remnants of earlier patterns. And indeed, quite possibly, a very large part of the patterns we see are idiosyncratic and no more than remnants of accidental developments of the past (just as many other cultural patterns that surround us are arbitrary remnants of the past: the layout of streets in our old cities, the ownership of the land, the way we dress (e.g. men’s ties), the games we play, etc.).
But the programme of linguists in the second half of the 20th century became more ambitious: They wanted to explain general properties of human language(s). They wanted their explanatory theories to have universal scope. This applied equally to generative theories such as Chomsky & Halle’s Sound pattern of English and the Principles & Parameters framework in syntax, and to functional theories such as Bybee’s Morphology and Hawkins’s Efficiency and complexity in grammars. Generative theories achieve universal scope by hypothesizing that the gaps in attested systems are due to innate constraints on possible grammars, while functional theories achieve universal scope by explaining universal tendencies through functional and cognitive biases in language use (which might also be innate but are not domain-specific).
Now my question to Anderson is: Can diachronic explanations also account for universal tendencies (not only for idiosyncrasies, as seen earlier)? Anderson is not as clear about this as he should be. One of his examples in the 2016 paper, the lack of a nominative reflexive in Icelandic, clearly concerns an idiosyncratic phenomenon of one language, Icelandic. The other examples do seem to concern universal tendencies (and the 2005 paper is explicitly about morphological universals), but Anderson does not say clearly that the universal tendencies are due to universal factors. He repeatedly talks about “common paths of diachronic development”, but “common” is not the same as “universal”. If tone “commonly” develops from the loss of a syllable-final consonant, then this may lead us to expect that tone languages have open syllables, but if there is another, equally (or more) common source for tone (maybe a syllable-initial consonant quality), then the expectation disappears. In order to explain a universal tendency by diachrony, one needs to claim that there is a diachronic asymmetry: Not only is the diachronic path A > B common, but the reverse diachronic path B > A is impossible (or uncommon). In other words, we need a notion of universal directionality of language change if we want to explain universals by diachrony.
It seems that in the case of final devoicing, there are good phonetic reasons for postulating such a universal, but what about the association between ergative case and perfective aspect? Anderson notes that there are multiple ways in which perfective constructions give rise to ergative patterns and imperfective constructions give rise to accusative patterns, which he says “happen to converge”, resulting in a “synchronically accidental correlation”. But if the correlation is accidental, the prediction would be that we should see an equal number of cases of the opposite development, once we look at enough languages. In other words, we would not be explaining a universal (or universal tendency), but rather a set of idiosyncrasies that happen to be shared by a few languages.
Thus, in order to convince me that we should pursue diachronic explanations of universals, I need to see evidence that the diachronic mechanism is universal, too. Accidental diachronic developments cannot give rise to universal patterns, by definition.
But what would a universal diachronic mechanism look like? Generative linguists have repeatedly denied that there are universals of diachrony (cf. Haspelmath 1999, a review article on a book by David Lightfoot), and when functional linguists have proposed them (e.g. universals of grammaticalization, Lehmann (2015[1982])), they have generally regarded these universals of change as consequences of universal cognitive and functional biases of language use, not as due to the “transmission of grammars across generations” (the only mechanism of change that Anderson recognizes, 2016: 5).
How would this fit into Anderson’s three explanatory factors ((i) input data, (ii) the learning process, and (iii) what is cognitively possible)? Probably it would be part of (i), but from a functional perspective, “input data” sounds terribly impoverished. There are a large number of general patterns in language use, and it is these that one can (and should) relate to language change. But if the diachronic changes that result in universal patterns are themselves due to universal usage patterns, how helpful is it to say that synchronic patterns are due to diachronic change, rather than to synchronic factors? Wouldn’t it be more insightful to say that universal usage biases result in universal grammatical patterns via diachrony?
Hi Natalia,
sorry for returning to this only now. Your point about children being possible agents in language change, potentially in both innovation and propagation, is well taken and I do not disagree with it. In fact, my statement was somewhat more hedged, namely that children have no PARTICULAR role to play in change. This formulation is taken literally from Bybee and Slobin’s (1982) study, in which they tested children of different ages and adults on their innovative productions of past tense forms. What they found was that children’s behaviour in the experiment does conform to the changes we see historically, but so does the adults’, and the adults’ behaviour even accounted for more of the historical patterns than that of the children. Their conclusion, therefore, was that children do not have a PARTICULAR role to play in change, just as I phrased it. This simply argues against the traditional generative claim that ONLY children can instigate grammatical change by resetting parameters during the period of early acquisition. That’s exactly the kind of position that Anderson’s paper reminded me of. In the generative camp, explanations always involve reference to UG, and since UG is invoked as an acquisition device helping particularly infants during their early learning processes, it places the locus of change on early childhood. That’s the position I tried to argue against. It is undisputed that especially older children contribute to the propagation of change, and this is what a lot of sociolinguistic studies target (cf. also Labov’s “transmission with incrementation”) – the period between 6 and 11 years appears to be crucial here. But again, just as in innovation, they do not seem to have an exclusive prerogative in the propagation of changes as opposed to adolescents and adults, that’s all. Finally, I know too little about the case of Nicaraguan Sign Language, but I’ve seen it described as a ‘special case’ in the acquisition literature (e.g. in Tomasello 2003, if I remember correctly); if so, it may follow somewhat different dynamics, perhaps not unlike what Slobin (2002) argued for pidgin and creole communities, where he claims that children play a more vital role in the development of the language than ‘usually’ (e.g. by fostering processes of grammaticalization, making grammatical marking obligatory, etc.). But again, I know too little about this case at the moment, and I’m more than willing to accept that – whatever the circumstances – children can indeed instigate and propagate changes; it just seems to be neither the typical nor the exclusive road to grammatical innovations.
I have two general comments inspired by Anderson’s paper and previous comments:
Theoretically, one may assume that there are two types of universals/principles (of the Language Faculty, or whatever): (i) a “super-universal” and (ii) a statistical universal (I think B. Bickel uses this term in various papers). The first one is always the strongest principle constraining a particular linguistic domain and overrides all other possible factors. The evidence for this kind of universal is “primitive” (but probably not easy to find): the superficial picture must fully adhere to such a universal in any language and under any circumstances. The second kind of universal, in turn, is different. It might be always valid, so to say, but it need not be the only principle that constrains a particular linguistic domain or category, admitting other similar principles and case-specific constraints. In this case, one would have a complicated mechanism consisting of different principles which work “in conspiracy” together or in competition with each other. The evidence for this case is extremely difficult to provide because it requires that one has reasonably disentangled all these principles in a particular case, while the superficial picture might even be very contradictory. One way to avoid this kind of proof is statistical, i.e. saying that one deliberately neglects other potential factors and just looks for statistical tendencies. Be that as it may, it seemed to me that Anderson was looking for the super-universal type only and not for the statistical-universal type, when he argued, for example, that if final consonant devoicing has counterexamples then it is not a universal, because there are languages (such as Lezgian) that do not adhere to this principle. Probably there also exist super-universals such as “no syllables without a nucleus”, but there might also be statistical universals – a possibility that Anderson doesn’t engage with. In other words, if a particular linguistic domain in a particular language does not adhere to a (statistical) universal, this is not sufficient evidence for me that this universal is not operative at some level/diachronic step (and even less so that this universal does not exist); it is only sufficient counter-evidence for a super-universal.
As to the distinction between diachronic and synchronic universals, could we perhaps think that these are merely one and the same thing? Speakers don’t remember, so to say, what the source (construction) was, and they don’t think about what the target (cxn) will be, and if they impose some universal principles, these principles must be, so to say, synchronic (for that particular step). However, these principles sometimes become visible to us linguists only in the diachronic perspective. So the distinction would just be a distinction between synchronic and diachronic evidence, not between synchronic and diachronic universals.
Thanks, Martin, for this nice review and discussion of Anderson’s paper. I fully agree, especially with your conclusion that diachrony is itself explained by cognitive and functional usage biases (more on this below); incidentally, it’s great that Natalia brings up Hermann Paul in her comment – I think his Principles of Language History is sort of the first book on the usage-based approach as we now know it, or at least anticipates many of its central ideas (notably the effect of usage frequency on the cognitive organization of grammar)!
Let me briefly add two comments of my own on Anderson’s paper. On the plus side, I am, of course, generally sympathetic to the historical, anti-UG line of argumentation he develops, so I fully agree with his statement that “languages as we find them are the complex product of a complex history, and that diachrony has shaped them in particular ways that persist beyond the effect of the original determining conditions” (p. 5.18). Also, the specific examples he discusses (e.g. ergativity) seem to be nicely in line with, e.g., Cristofaro’s (2012) recent argumentation, according to which different diachronic paths may converge on similar outcomes, without it being necessary to invoke any architectural ‘supraprinciple’ (functional or formal).
On the minus side, however, I disagree with Anderson’s second important conclusion, namely that “generalizations that we do observe […] may usefully be attributed to biases in the way the learning algorithm constructs and assesses the hypothesis space in grammar construction” (ibid.). In several places, Anderson causally links typological generalizations/statistical universals to ‘grammar induction’ by children, which clearly betrays his origins in the generative camp, where such child-based theories of change still predominate. However, there is now a rather extensive usage-based literature arguing against this assumption, so it seems neither adequate nor necessary to attribute diachronic changes in grammatical patterns to shifting probabilities in the primary data that children are exposed to, and from which they would construct grammars different from those of the previous generation. In fact, it may be argued that child language acquisition has no particular causal role in diachronic changes, neither in innovation nor in spread. The diachronic changes that lead to universals (e.g. in the domain of split ergativity) are due to processes like reanalysis (just as acknowledged by Anderson) among adults rather than children. What is missing in the discussion is that reanalysis is not simply a “diachronic” process, but in the first place a synchronic process with a whole lot of cognitive underpinnings (analogy, inferencing, etc., cf. Bybee 2010 or Fischer 2007), and it is these (synchronic) underpinnings which seem to (probabilistically) constrain the states of grammars. To echo M. Tomasello, languages are not constrained by what is ‘learnable’ – children learn all kinds of structural complexities that are handed down to them – but Anderson suggests precisely that: “if recurrent change shapes grammars so that they will usually conform to some regularity, regularity could profitably be incorporated into the learner’s expectations (and thus into the Language Faculty) as a bias in the learning algorithm that would facilitate the rapid and efficient learning of the languages likely to be encountered” (p. 5.7).
In sum, then, I think that Anderson has many important usage-based points to make about universals (which, frankly, I did not really expect given his generative origins), but I don’t agree with his alternative proposal of the “learning algorithm” being the locus of explanation.
Thanks for your comment, Karsten! I totally agree with your evaluation of Anderson’s proposal. However, I feel a bit uneasy about placing all responsibility for language change on adults. You write:
“In fact, it may be argued that child language acquisition has no particular causal role in diachronic changes, neither in innovation nor in spread. The diachronic changes that lead to universals (e.g. in the domain of split ergativity) are due to processes like reanalysis (just as acknowledged by Anderson) among adults rather than children.”
As a counterexample, I’d like to mention the work of Ann Senghas and her colleagues on Nicaraguan Sign Language, which emerged relatively recently, so that linguists could actually study its development in real time. She shows that young children (especially those who joined the signing community early) are the most important innovators and essentially the creators of the new grammar, and that they are responsible, among other things, for the switch from more iconic and holistic signs to more symbolic and discrete ones.
Also, many variationist linguists (most prominently, Labov) believe that there are two kinds of processes that result in language change: transmission and diffusion. Transmission is incremental and roughly corresponds to change according to the family tree model. It can manifest itself in the increase in the frequency, extent or scope of linguistic variables (in the sociolinguistic sense). Diffusion corresponds to the wave model and is best exemplified by borrowing. Usually, transmission is associated with children’s language acquisition, whereas diffusion relates to contact between communities and speakers (i.e. mostly between adults). Although I personally think that this dichotomy should not be perceived as something rigid, there are quite a few empirical variationist studies that have managed to pin down both processes, e.g. in phonetic variation in American dialects (Labov 2007) and grammatical variation in the geographic varieties of Dutch (de Vogelaer 2009). So, children ARE agents of language change.
I think it’s worth pointing out that Paul’s maxim, “The only truly scientific study of language is historical”, which is quoted in Anderson’s paper, is often misunderstood (also by Anderson, not surprisingly). “Historical”, in Paul’s works, is not equivalent to what we nowadays call diachronic as opposed to synchronic. Rather, it involves all previous facts of language use (including what happened a moment ago) in their context. In this way, Paul’s approach can legitimately be regarded as usage-based and emergentist (Hopper 2015). As Marga Reis (1978) put it, the opposition between synchronic and diachronic reaches Aufhebung (in Hegel’s terms) in Paul’s concept of language history, that is, usage. Thus, Paul’s “historical” includes both synchrony and diachrony. I think Martin’s proposal is Paulian in the sense that it also contains this Aufhebung. It’s interesting how history repeats itself – and again, Leipzig is involved!
References
Hopper, Paul J. 2015. Hermann Paul’s emergent grammar. In Peter Auer & Robert W. Murray (eds.), Hermann Paul’s “Principles of Language History” revisited, 237-256. Berlin: Mouton.
Reis, Marga. 1978. Hermann Paul. Beiträge zur Geschichte der deutschen Sprache und Literatur 100 (2): 159-204.