Since Caldwell (1856: 271), linguists have thought that the universal tendency of differential object marking (DOM) is explained by the pressure for languages to converge on efficient coding patterns, i.e. to concentrate their coding on the most atypical objects (namely referentially prominent, e.g. animate and definite, object nominals). This explanation was formulated clearly by Bossong (1991) (and also by Comrie 1989), and it seems that more and more evidence for it has been accumulating (e.g. Jäger 2007; Iemmolo 2011; Seržant & Witzlack-Makarevich (eds.) 2018). Even the OT-based formalization by Aissen (2003a) is explicitly functionally based. A particularly clear statement can be found in Aissen’s (2003b) paper, where she says that differential argument coding is found when the associations between roles (A and P, or subject and object) and referential prominence of the arguments are “deviations from the norm” (this is very similar to my formulations in my 2021b paper, where I should have cited Aissen’s second paper too, in addition to the better-known 2003a). The following is an excerpt from Aissen (2003b: 2).
Differential object marking is thus explained in the same way as many other kinds of asymmetric differential marking patterns (Haspelmath 2021a). So have there been serious challenges to this very successful theory of efficient coding?
On the one hand, one might cite Bickel et al. (2015), who claimed that they found “Typological evidence against universal effects of referential scales on case alignment”, but I think that Schmidtke-Bode & Levshina (2018) have shown convincingly that this evidence is very slim, and that the data cited by Bickel et al. should be interpreted quite differently.
What about challenges from generative grammar? Bárány & Kalin recently edited a book on Case, agreement and their interactions (2020a), whose “Introduction” chapter (2020b) gives a good overview of different kinds of approaches, including the functional approach taken by Caldwell, Bossong, Aissen and many others. It is thus a “fair and neutral” overview, but it is also a bit frustrating, because one does not get a good sense of what the generative approaches are adding. In their discussion of Aissen (2003a), they say that “OT is particularly useful… because it is able to model variation through constraint re-ranking as well as capture the effects of universal prominence scales”. But what does it mean to “model” something, or to “capture” something? It is not clear to me that “modeling/capturing” is different from describing, whereas the issue in the present context is how to explain a universal pattern. If the OT constraints (and constraint hierarchies) were pre-established building blocks (part of an innate blueprint for grammars), then the OT analyses would amount to explanations, but nobody seems to believe that they are innate (at least Aissen doesn’t, it seems; see also §11.4 in my 2021b paper).
A challenge might come from Baker’s (2015) dependent-case theory, where he discusses some types of differential object marking and derives them from his blueprint-based theory (see my 2018 review for a critical assessment). But here there is a challenge to Baker from within generative grammar: as Kalin & Weisser (2019) have shown, quite a few languages have “asymmetric” differential object marking in coordinations (i.e. only one of the conjuncts gets the object marker), e.g. in Romanian.
They present this as counterevidence to Baker’s analysis of DOM in terms of object movement, because it would presuppose that the second conjunct is moved while the first object isn’t (which goes against the idea that movement out of coordinations is never allowed). So if Kalin & Weisser are right, movement-based analyses are not a challenge to the functional explanation.
Where might challenges come from? Bárány & Kalin (2020b) distinguish “morphological” (§2.3), “syntactic” (§2.4) and “information-structure” (§2.5) approaches. The first operates with the Distributed Morphology concept of “impoverishment”, on which B&K comment that “the default on the impoverishment approach is that objects have Case but markedness results in this Case not being realized”. However, there does not seem to be a clear causal connection between the constraints and the outcomes that would be different from Aissen’s, so this seems to be merely a notational difference. Maybe the most salient difference is that the “morphological approach” can subsume the alternation between shorter and longer accusative-case markers (as found in Yukaghir), but this is of course quite expected on the functional explanation, too (in fact, zero/overt is just a special case of shorter/longer in my 2021b formulation). In the syntactic accounts surveyed by B&K, they mention the intuitions that “objects lacking certain features are invisible for case”, or that some types of objects “need (special) licensing” (that is somehow different from Case). But how would this contribute to the explanation? This is not clear to me, and I hope that generative syntacticians will take up the challenge of explaining it better. (This is also why I wrote earlier blogposts about generative papers on DOM, e.g. here, and here, and here. I keep hoping for more dialogue about the explanatory scopes of functional and generative theories, but progress seems to be slow.)
In their discussion of “information structure approaches”, B&K note that information-structural conditioning “poses challenges for a unified analysis of DOM”.
“Cases where information structure directly conditions DOM pose a number of additional challenges for a unified analysis of DOM. First, not all theories of grammar allow modeling the influence of information structure in syntax in a straightforward way. [Unacceptability of a sentence may follow] from its use in a certain context, not ungrammaticality per se. In other words, pragmatics plays a certain role in determining the felicity of utterances with and without DOM. Indeed, Danon (2006) suggests that DOM might follow a grammaticalization path from more pragmatic to more formal in the history of languages.” (Bárány & Kalin 2020b: 14)
This is precisely what one expects from the perspective of the functional explanation: The conditioning is first of a more pragmatic or variable type and it then gets rigidified and syntacticized to varying degrees. This is thus a further argument for the functional view. But characteristically, B&K talk about a “unified analysis”, which seems to be the usual mode for generative explanations of cross-linguistic generalizations: Languages are thought to be similar if they are made up of the same building blocks, which means that they are analyzed in the same way. In practice, this does not seem to work, though – again and again, generative grammarians have concluded that there are different ways in which a particular broad typological pattern can come about (e.g. different kinds of VSO languages). For ergative constructions, Deal (2015) presents a similar kind of overview of different kinds of generative analyses, and she says:
“What accumulates from ergativity studies then is not an overarching theory of ergativity as a single parameter or a primitive. From a theoretical perspective there is no particular reason why this should exist.” (Deal 2015: 655)
Similarly, there is probably no deep reason why a single overarching formal system for analyzing DOM patterns should exist – different languages may have different analyses (in general, different languages have different systems and different categories, as has long been known, at least since Boas). The explanation for the cross-linguistic trend thus seems to be of the functional-adaptive kind. But again and again, we see authors seemingly pursuing the unrealistic goal of “unified analyses” for cross-linguistic purposes. It may be that we first need a thorough understanding of the crucial difference between language-particular analyses (p-analyses) and general (universal) findings and explanations (g-linguistics; see Haspelmath 2021c).
P.S. Here is the passage from Caldwell (1856), cited after Filimonova (2005):
References
Aissen, Judith. 2003a. Differential object marking: Iconicity vs. economy. Natural Language & Linguistic Theory 21(3). 435–483.
Aissen, Judith. 2003b. Differential coding, partial blocking, and bidirectional OT. Annual Meeting of the Berkeley Linguistics Society, vol. 29, 1–16.
Bárány, András & Kalin, Laura (eds.). 2020a. Case, agreement, and their interactions: New perspectives on differential argument marking. Berlin: De Gruyter Mouton. (doi:10.1515/9783110666137) (Accessed March 3, 2021.)
Bárány, András & Kalin, Laura. 2020b. Introduction. In Bárány, András & Kalin, Laura (eds.), Case, agreement, and their interactions: New perspectives on differential argument marking, 1–26. Berlin: De Gruyter Mouton. (doi:10.1515/9783110666137-001)
Bickel, Balthasar & Witzlack-Makarevich, Alena & Zakharko, Taras. 2015. Typological evidence against universal effects of referential scales on case alignment. In Bornkessel-Schlesewsky, Ina & Malchukov, Andrej & Richards, Marc D. (eds.), Scales and hierarchies: A cross-disciplinary perspective, 7–43. Berlin: De Gruyter.
Bossong, Georg. 1991. Differential object marking in Romance and beyond. In Kibbee, Douglas & Wanner, Dieter (eds.), New analyses in Romance linguistics, 143–170. Amsterdam: Benjamins. (https://www.rose.uzh.ch/dam/jcr:ffffffff-c23e-37d9-0000-00006e1a9200/Bossong_80.pdf)
Caldwell, Robert. 1856. A comparative grammar of the Dravidian or South-Indian family of languages. London: Harrison.
Comrie, Bernard. 1989. Language universals and linguistic typology: Syntax and morphology. Oxford: Blackwell.
Danon, Gabi. 2006. Caseless nominals and the projection of DP. Natural Language & Linguistic Theory 24(4). 977. (doi:10.1007/s11049-006-9005-6)
Deal, Amy Rose. 2015. Ergativity. In Kiss, Tibor & Alexiadou, Artemis (eds.), Syntax: Theory and analysis (Volume 1), 654–708. Berlin: De Gruyter Mouton. (doi:10.1515/9783110377408.654)
Filimonova, Elena. 2005. The noun phrase hierarchy and relational marking: Problems and counterevidence. Linguistic Typology 9(1). 77–113.
Haspelmath, Martin. 2021a. Explaining grammatical coding asymmetries: Form-frequency correspondences and predictability. Journal of Linguistics 57(3). 605–633. (doi:10.1017/S0022226720000535)
Haspelmath, Martin. 2021b. Role-reference associations and the explanation of argument coding splits. Linguistics 59(1). 123–174. (doi:10.1515/ling-2020-0252)
Haspelmath, Martin. 2021c. General linguistics must be based on universals (or nonconventional aspects of language). Theoretical Linguistics 47(1–2). 1–31. (doi:10.1515/tl-2021-2002)
Iemmolo, Giorgio. 2011. Towards a typological study of differential object marking and differential object indexation. University of Pavia. (PhD dissertation.)
Jäger, Gerhard. 2007. Evolutionary game theory and typology: A case study. Language 83(1). 74–109. (doi:10.1353/lan.2007.0020)
Kalin, Laura & Weisser, Philipp. 2019. Asymmetric DOM in coordination: A problem for movement-based approaches. Linguistic Inquiry 50(3). 662–676. (doi:10.1162/ling_a_00298)
Schmidtke-Bode, Karsten & Levshina, Natalia. 2018. Assessing scale effects on differential case marking: Methodological, conceptual and theoretical issues in the quest for a universal. In Seržant, Ilja A. & Witzlack-Makarevich, Alena (eds.), Diachrony of differential argument marking (Studies in Diversity Linguistics 19), 463–489. Berlin: Language Science Press.
Seržant, Ilja A. & Witzlack-Makarevich, Alena (eds.). 2018. Diachrony of differential argument marking (Studies in Diversity Linguistics 19). Berlin: Language Science Press.
Comments

I don’t think the “functional” and “generative” approaches necessarily conflict here.
Assuming that at the level of speech communities, there is a tendency to converge to more efficient coding patterns [for example, as seen in typical patterns of Differential Object Marking], the question remains: how do so many speech communities manage to converge on these more efficient patterns?
A generativist might say that speech communities manage to achieve these efficient patterns because languages with efficient patterns are innately easier to learn, or something like that. [Presumably, the bias towards learning languages where atypical objects are marked was selected for, rather than the reverse bias, because if enough people have such a learning bias, they are more likely to end up with “efficient” languages, which is what is ultimately selected for.]
The question is, are there alternative potential mechanisms for how speech communities manage to regularly converge on efficient coding patterns? From a functionalist perspective, perhaps something related to the social/cultural transmission of languages might be the solution? I feel like whatever mechanisms explain how languages converge on Zipfian word length/word frequency distributions are likely to be useful for explaining how languages converge on efficient DOM patterns, but unfortunately I don’t really remember any such mechanism off the top of my head.
How does efficient coding become conventional, i.e. how do speakers converge on a set of regularities? That’s indeed an interesting question, but how does the generative perspective contribute to answering it? That is not clear to me. Your formulation is: “How do so many speech communities manage to converge on these more efficient patterns?”, but in this formulation, I don’t understand the question. These communities do not “converge” with each other, because they develop these patterns independently, and “manage” isn’t appropriate either. The convergence happens *within* each community, through some kind of conventionalization process that linguists rarely discuss, probably because we know so little about it.
Thanks for your response!
“How does efficient coding become conventional, i.e. how do speakers converge on a set of regularities?” is exactly what I was trying to ask, but more clearly worded.
“The convergence happens *within* each community, through some kind of conventionalization process that linguists rarely discuss, probably because we know so little about it.”
I think your 2019 paper, “Can cross-linguistic regularities be explained by constraints on change?”, is relevant to how and why multiple languages independently develop efficient marking patterns.
“[…] our analysis provides a motivation for DOM that is different from the claims of much previous research. Most work on DOM assumes that object marking originates from the need to differentiate the object from the subject. However, we claim that DOM actually marks similarities rather than differences between subjects (canonical topics) and topical objects: topics tend to bear grammatical marking, no matter what their grammatical function. Thus, our analysis does not relate the formal markedness of objects with their functional markedness, at least if the latter is assessed in terms of frequency or typicality. Instead, it highlights the coding or indexing function of marking as an indicator of topicality. Our approach stands in opposition to the common view that objects are prototypically aligned with the focus function: we have argued that SUBJ/topic, OBJ/(secondary) topic alignment is equally likely, where both core arguments are topical. In support of this view, we have discussed evidence which shows that topical objects are at least as frequent in discourse as focused objects, and in this sense the former cannot be considered functionally marked.

[…] Another crucial difference between our work and many previous analyses of DOM is that we do not discuss only grammatical (morphological) marking of objects, but also pay special attention to their syntactic behaviour. Typologically based work does not usually address the syntax of objects, while most generative research concentrates on positional differences. Our analysis does not define DOM in terms of object position because we do not assume that syntactic roles are defined configurationally: following the standard LFG view, we take grammatical functions to be primitives which are not defined in terms of their syntactic position. In our investigation of the grammatical behaviour of grammatically marked and unmarked objects, we found that languages differ: in some languages they are both primary objects, while in other languages they bear different object-like functions. In languages like Ostyak, Khalkha Mongolian and Chatino, grammatical marking of objects may seem to depend on information structure: topical objects are marked, while nontopical objects are unmarked. However, closer examination reveals that, in fact, marking patterns in these languages are defined in completely syntactic terms, just as in English or Latin. The distinguishing characteristic of these languages is the obligatory linkage between grammatical functions and information structure: primary objects are always topical, while secondary or restricted objects are nontopical. This implies that in some cases grammatical structure may arise diachronically under pressure from information structure constraints. The need to distinguish two types of information structuring (with and without a topical object) has led to grammatical differences that go beyond patterns of agreement or casemarking.”

(Dalrymple & Nikolaeva, Objects and information structure, 2011)
Many thanks, I should reread that part of your book! It looks more relevant than I had remembered…