At least since Greenberg’s seminal work on grammatical universals, comparative linguists have often talked about worldwide preferences in probabilistic terms. For example, Greenberg (1963) noted that “With overwhelmingly greater than chance frequency, languages with normal SOV order are postpositional” (Universal 4).
Since the 1970s, there has been increasing awareness that it is not sufficient to look at a few dozen languages that we happen to have easy access to (e.g. Bell 1978, Bakker 2011). “Convenience samples” of the type that Greenberg used may not be representative of worldwide linguistic diversity, because the languages are not independent of each other (as is required for statistical representativeness). They may show the observed similarities because they inherited them from a common ancestor (and not enough time has passed for them to change away from the inherited pattern), or they may have borrowed them from each other. Thus, Dryer (1988) showed that some of Greenberg’s universals (e.g. those about adjective-noun order) hold in Eurasia, but not in a truly worldwide sample (Greenberg’s sample of 30 languages contained 16 languages from Eurasia).
From the 1990s onwards, all typology textbooks and handbooks have contained a section or chapter on language sampling, so the level of methodological sophistication has been fairly high in the field (though this has not led to any changes in generative typology (e.g. Baker 2011), where sampling continues to be considered irrelevant, because only absolute universals count in this approach, for some reason). Typologists have thus known for quite some time that probabilities are not easy to determine, given that sampling cannot completely eliminate the possible biasing effects of borrowing and of genealogical relatedness at a deeper level. On the other hand, many robust universals have not really been in doubt – for example Greenberg’s Universal 35: “There is no language in which the plural is not sometimes overtly marked, whereas there are languages in which the singular is expressed only by zero.” We cannot test this claim by examining all languages, so we need to estimate its probability – and since only one potential counterexample (Imonda) has come up, everyone is confident that there is an extremely high probability for languages to conform to the generalization.
Now an alternative way of estimating probabilities has recently been proposed (ultimately based on Felsenstein (1985) and Mace & Pagel (1994), it seems): Instead of sampling many different families, the idea is to study the trees of a few large language families and to look at transitions (changes from one type to another) instead of family-independent distributions. Let’s call this approach “tree-based probability estimation”. It is exemplified in a recent paper by Manuel Widmer and colleagues (Widmer et al. 2017) on “NP recursion over time” in 59 Indo-European languages. The paper examines adpossessive constructions with a full nominal possessor, and it takes the Indo-European data to provide evidence for the claim that languages have a general tendency “to maintain or develop syntactic recursion in NPs”. Now this is not a surprising conclusion at all – we have not heard about languages lacking recursive nominal structures (again, with a single possible exception: Pirahã).
Widmer et al. point out – and this is indeed very interesting – that there are a variety of cases in Indo-European (IE) languages where an adpossessive nominal is not recursive. This is the case in the Russian “adjectivized” construction (e.g. mam-in-a kniga [Mom-POSS.ADJ-F.SG book(F)] ‘Mom’s book’), in the German prenominal genitive construction (e.g. Bernards Buch), and in some further constructions in a variety of other IE languages. The availability of recursion has disappeared over time in a number of different adpossessive construction types (see their §3.3), and Widmer et al. suggest that this raises “the possibility that at many hypothetical times in the history of the family, recursive embedding could have been lost simultaneously”, so that an IE language would have ended up having no recursive nominals (like Pirahã). They use a high-tech quantitative method (“Bayesian phylogenetic analysis”) to estimate the probability that an IE language will end up without any recursive adpossessive type, and they find that this probability is close to zero.
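To make the logic of such a tree-based estimate a bit more concrete, here is a minimal toy sketch (in Python) – not Widmer et al.’s actual Bayesian phylogenetic model: it follows a single lineage in which each adpossessive construction type gains and loses recursion independently at made-up rates, and it counts how often the lineage ends up with no recursive type at all. The rates, the number of construction types and the time span are all hypothetical; the only point is to show what “estimating the probability of ending up without recursion” amounts to under an independence assumption.

```python
import random

# Toy sketch only (hypothetical rates, not Widmer et al.'s actual model):
# each adpossessive construction type switches between "recursion available"
# and "recursion unavailable" as an independent two-state Markov process.
LOSS_RATE = 0.1   # assumed rate of losing recursion in one construction type
GAIN_RATE = 0.5   # assumed rate of (re)gaining recursion

def ends_without_recursion(total_time, n_constructions, rng):
    """Simulate one lineage; return True if every construction type
    has lost recursion by the end of the simulated time span."""
    final_states = []
    for _ in range(n_constructions):
        t, recursive = 0.0, True          # start with recursion available
        while True:
            rate = LOSS_RATE if recursive else GAIN_RATE
            t += rng.expovariate(rate)    # exponential waiting time
            if t > total_time:
                break
            recursive = not recursive     # switch state
        final_states.append(recursive)
    return not any(final_states)

rng = random.Random(42)
runs = 100_000
hits = sum(ends_without_recursion(5.0, 3, rng) for _ in range(runs))
print(f"Estimated probability of no recursive nominals: {hits / runs:.4f}")
```

The number that this prints is of course meaningless as such; it only makes explicit that an estimate of this kind depends on treating the construction types as evolving independently – which is exactly the assumption I question next.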
What does this tell us that we didn’t already know? This is not so clear to me. The first serious problem that I see with the paper is that the authors seem to assume that the loss of a recursive adpossessive pattern is independent of the existence of other recursive patterns. But this assumption is quite likely false. To take an example from a different domain: One would not conclude from the possibility of losing [i], [u] or [a] (events that are repeatedly attested in IE languages and elsewhere) that a language could lose all these vowels simultaneously and end up without any vowels. Similarly, the fact that a negation strategy can be lost (e.g. Modern Greek lost the Classical Greek negator ou, and German lost the Old High German negator ne) would not lead us to conclude that a language can end up with no negative marker. Thus, even though individual adpossessive constructions occasionally lose the ability to be recursive, this does not tell us anything about possible earlier (unattested) IE languages without any nominal recursion.
But more generally, why should tree-based probability estimation be preferable to the obvious alternative, namely minimizing genealogical bias and borrowing bias by using a maximally stratified sample, e.g. 100 languages from unrelated families in many different parts of the world? This is the method that has long been used, and it has given very significant and mostly undisputed results (e.g. Dryer 1992; Siewierska 2004; Hawkins 2014; Dryer 2018).
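For comparison, stratification is conceptually quite simple. The following sketch (again Python, with a purely illustrative hand-made language list, not a real sampling frame) picks at most one language per family and then limits the number of picks per macro-area:

```python
import random
from collections import defaultdict

# Sketch of genealogical + areal stratification; the entries are purely
# illustrative and do not constitute a real sampling frame.
LANGUAGES = [
    # (language, family, macro-area)
    ("Turkish",    "Turkic",         "Eurasia"),
    ("Sakha",      "Turkic",         "Eurasia"),
    ("Basque",     "Basque",         "Eurasia"),
    ("Yoruba",     "Atlantic-Congo", "Africa"),
    ("Swahili",    "Atlantic-Congo", "Africa"),
    ("Hausa",      "Afro-Asiatic",   "Africa"),
    ("Pirahã",     "Mura",           "South America"),
    ("Mapudungun", "Araucanian",     "South America"),
]

def stratified_sample(languages, per_area, seed=0):
    """Keep at most one language per family, then at most `per_area`
    languages per macro-area, to reduce genealogical and areal bias."""
    rng = random.Random(seed)
    by_family = defaultdict(list)
    for entry in languages:
        by_family[entry[1]].append(entry)
    # one randomly chosen representative per family
    family_picks = [rng.choice(group) for group in by_family.values()]
    by_area = defaultdict(list)
    for entry in family_picks:
        by_area[entry[2]].append(entry)
    sample = []
    for area_languages in by_area.values():
        rng.shuffle(area_languages)
        sample.extend(area_languages[:per_area])
    return sample

print(stratified_sample(LANGUAGES, per_area=2))
```

Real samples of this kind are of course built from full genealogical databases such as Glottolog, not from a hand-made list.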
Genealogical bias cannot be excluded by the tree-based method, because this method can take into account only very few large families (a single family in Widmer et al. 2017; four families in Dunn et al. 2011). It could be that for some reason these families behave in an unusual way – and this possible source of bias is excluded by the stratified-sample method, because it allows us to include languages from families of any size. And borrowing bias cannot be excluded by the tree-based method either, because the changes that we see in the trees are not independent of borrowing. This can be seen clearly in Widmer et al.’s results: They find that recursive genitives are preferred in the diachrony of IE languages (“Genitives favor evolution toward being available for recursion”, p. 819), but this is of course not independent of the areal setting: Recursive genitive nominals are also found in Basque, Uralic, Turkic, two Caucasian families, Dravidian, and Munda, i.e. in almost all of the families that IE languages have been in contact with over the last few millennia (as far as we know). It is well known that northern Eurasian languages strongly favour flagged (dependent-marked) constructions, as a general attribute of the area. (The worldwide picture is rather different, so this is a macro-areal effect.)
Another argument for the tree-based method that has been suggested is that we do not know all the families yet, so languages thought to be unrelated may turn out to descend from a common ancestor after all. Thus, there may be hidden genealogical bias (this was mentioned by Cysouw 2011: 417). But while this is true in principle, I have not seen any case where structural types that vary significantly across the world’s languages could be so super-stable that they might be inherited from an unidentified protolanguage, thus skewing the estimation of universal probabilities. In practice, almost all those types that we see varying across families have also been observed to vary commonly within large families. Thus, I do not see that this consideration, while certainly interesting in principle, has much practical relevance.
I am very sympathetic to Widmer et al.’s conclusion that languages probably show a very strong tendency to have recursive nominals because “they are preferred by our processing system”, but why should one study a set of closely related languages for this? Maybe another reason for this choice is that Widmer et al. think of the causal factor as an “evolutionary bias”:
“This suggests that languages in general prefer the evolution of structures that allow recursion over the evolution that ban recursion. That is, there is an evolutionary bias that favors maintenance of recursion…” (Widmer et al. 2017: 800)
To be sure, causal factors that impinge on variable language structures take effect only in diachrony – but this does not mean that the bias (= the causal factor) is itself somehow “evolutionary” or “diachronic”. Languages tend to have a set of fairly different vowels (e.g. [i], [u], [a]) because this is efficient for communication, not because of an “evolutionary bias”. On the contrary: there is a range of diverse changes that can give rise to these vowels – what is uniform across languages is the outcome, not the changes leading toward it. In a recent paper (Haspelmath 2019), I emphasized that multi-convergence of pathways of change is the best indicator that the causal factor is functional-adaptive and does not reside in the pathway of change itself (though I do not deny the existence of such “mutational constraints” – they just seem to play a very limited role in explaining language structures). Likewise, as beautifully documented by Widmer et al. (2017), there is a wide range of changes that can result in recursive nominal constructions – so we have multi-convergence toward a preferred situation. We see evolutionary diversity, but uniformity of outcome – an excellent reason to think that there is a functional-adaptive force pulling all languages in the same direction.
References
Bakker, Dik. 2011. Language sampling. In Jae Jung Song (ed.), The Oxford handbook of linguistic typology, 100–127. Oxford: Oxford University Press.
Bell, Alan. 1978. Language samples. In Joseph H. Greenberg (ed.), Universals of human language, vol. 1, 123–156. Stanford: Stanford University Press.
Dryer, Matthew S. 1988. Object-verb order and adjective-noun order: Dispelling a myth. Lingua 74(2–3). 185–217.
Dryer, Matthew S. 1992. The Greenbergian word order correlations. Language 68(1). 81–138.
Dryer, Matthew S. 2018. On the order of demonstrative, numeral, adjective, and noun. Language 94(4). 798–833. doi:10.1353/lan.2018.0054.
Dunn, Michael, Simon J. Greenhill, Stephen C. Levinson & Russell D. Gray. 2011. Evolved structure of language shows lineage-specific trends in word-order universals. Nature 473(7345). 79–82. doi:10.1038/nature09923.
Greenberg, Joseph H. 1963. Some universals of grammar with particular reference to the order of meaningful elements. In Joseph H. Greenberg (ed.), Universals of language, 73–113. Cambridge, MA: MIT Press.
Haspelmath, Martin. 2019. Can cross-linguistic regularities be explained by constraints on change? In Karsten Schmidtke-Bode, Natalia Levshina, Susanne Maria Michaelis & Ilja A. Seržant (eds.), Competing explanations in linguistic typology, 1–23. Berlin: Language Science Press. http://langsci-press.org/catalog/book/220.
Hawkins, John A. 2014. Cross-linguistic variation and efficiency. New York: Oxford University Press.
Mace, Ruth & Mark Pagel. 1994. The comparative method in anthropology. Current Anthropology 35(5). 549–557.
Siewierska, Anna. 2004. Person. Cambridge: Cambridge University Press.
Widmer, Manuel, Sandra Auderset, Johanna Nichols, Paul Widmer & Balthasar Bickel. 2017. NP recursion over time: Evidence from Indo-European. Language 93(4). 799–826. doi:10.1353/lan.2017.0058.