Rigour is more important than depth: Why language universals should not be based on in-depth analysis

Many linguists think that broad cross-linguistic comparison is sometimes “too shallow”, and that instead, language universals can be detected only if they are based on “in-depth”, “abstract” and “detailed” analyses. Here I give reasons to think that this is the wrong approach. This discussion is not new (cf. Comrie 1981; Coopmans 1983), but it needs to be revisited, because this erroneous idea remains very strong in the discipline. Rather than in-depth analyses, I suggest that we need rigorous objective testing of generalizations, as elsewhere in science. Of course, there is no a priori reason to think that in-depth analyses should be incompatible with objective testing, but below I explain why, in practice, they do not go together (for the current generations of linguists – maybe the distant future will bring changes).

The idea that we need in-depth analyses was recently expressed by David Pesetsky (in an interview published on this blog), and similarly by Luigi Rizzi in a 2014 paper:

“In conclusion, assessing the configurational (or other architectural) properties of language requires much detailed analytical work on individual languages: the simple scrutiny of superficial properties will not allow us to reach firm comparative conclusions, such as the proper assessment of hypotheses on the universal structure of language. As soon as a detailed analytical work is undertaken, much as in the cases just quoted, a rich invariant structure always emerges from the variability of surface arrangements.” (Rizzi 2014: 29)

Anders Holmberg puts it as follows:

“as linguistic theory progresses…, the more confident we can be that the observations are accurate, and the more abstract the properties can be that are subject to typological research” (Holmberg 2016: 363)

These authors are generative syntacticians, and these quotations are typical of the emphasis on detailed and deep language-particular studies in the generative tradition. Instead of describing entire languages in reference grammars, generative linguists usually encourage their students to work on small corners of a single language. In the past, the focus tended to be on the bigger languages (e.g. English, Italian, Japanese, Chinese), but more recently, the in-depth study of particular phenomena in smaller languages (including dialects of languages such as Italian or Dutch) has become much more prominent in this tradition.

Now there is certainly nothing to be said against the in-depth study of particular phenomena in small languages – one of my first papers (in 1991) was an in-depth study of the ergative construction in Lezgian, which I recalled in a recent invited talk at the TMP conference in Moscow. However, how do we get from language-particular phenomena to universals? This is a question that is rarely addressed by generative grammarians.

By contrast, comparative grammarians who do not have a Chomskyan pedigree have often discussed methodological issues of comparability (e.g. Levinson et al. 2003; Slobin 2008; Croft 2009; Rijkhoff 2009; Stassen 2011; Dryer 2016; Haspelmath 2018; Round & Corbett 2021). It is widely recognized that each language is structurally unique (has its own emic categories), so that language comparison must be based on concepts derived from conceptual or phonetic substance (etic concepts). As Levinson & Evans (2010: 2737) put it: “comparison of languages has to be undertaken in an auxiliary language designed to generalize over language-specific categories” (Evans’s views are discussed further here). Such comparative concepts are not literally “a language”, but they can be of diverse kinds: nonverbal stimuli in experiments, Bible verses in parallel-text studies, elicitation questionnaires, and category-like comparative concepts (Haspelmath 2018). These concepts can be applied uniformly in all languages, and they allow rigorous and objective testing of universal claims.

Generative grammarians, by contrast, usually presuppose that language comparison must be based on the same categories that are used in language description (i.e. they make no distinction between emic and etic units). These categories must be part of a rich innate grammar blueprint – in other words, this approach is incompatible with Chomsky’s 21st century view that there is no such rich grammar blueprint (as discussed earlier in this blogpost, and in Haspelmath 2020). The idea is that there is a set of innate building blocks of grammar of which all languages are composed, just as all stuff is made of the chemical elements. I have also called this the “Mendeleyevian vision” in my Moscow talk, because the idea is that by analyzing different languages, we will slowly converge on a set of innate building blocks, analogous to the Periodic Table of Elements, which allow both elegant description and comparison, with the comparison resulting in constraints on possible languages (see Baker 2001 for the analogy with chemistry).

But why should this innatist approach require “in-depth analysis”, and why am I saying that it is less rigorous and objective?

The short answers to these questions are: (i) in-depth analyses are thought to be necessary because generative grammarians think that comparisons must be based on “true analyses”, which require a complete and maximally general picture of each language, and (ii) this method is not rigorous because it involves many subjective decisions by individual researchers.

What is a “true analysis”? For generative grammarians, this means an analysis that reflects the mental grammars of speakers, and these are assumed to be maximally general. These assumptions are so widespread that many people are not even aware that they are making them. But it is not clear that there is such a thing as a “uniform mental grammar”, because different speakers may have different mental grammars, and we have very limited access to these mental grammars. By contrast, the social grammars that speakers use are very uniform, and we can readily describe these (in fact, this is what we do in practice, because acceptability judgments are about social acceptability). But there is no such thing as “a single true social grammar”. Generality of description is a practical issue, not a matter of truth. Many pedagogical grammars are not maximally general, because they need to be readily comprehensible. For the description of social grammars, this approach works very well too – descriptive grammars are meant to be comprehensible, not “true” in a sense that goes beyond the social rules.

Now it is true that there are many beautiful generative analyses that manage to derive a surprising set of disparate phenomena from a single general concept. I take it that this is the key experience of generative grammarians, and that it is this that they often miss in non-generative comparative work. For example, David Pesetsky says:

“You first need an “in-depth” analysis. That’s the whole point of our work, and why what passes for “typology”, while sometimes useful in generating guesses about promising generalizations or correlations, often does not look like maximally useful research to people like me – since uncovering the real generalizations does require in-depth prior analysis.”

It is clear that by looking closely at a wide range of facts, linguists are often able to discover generalizations that would not have been apparent at first glance. So indeed, “in-depth” analyses often have a certain beauty. The emphasis on such non-obvious language-internal generalizations is a hallmark of the structuralist approach since the 1920s, and few linguists find such generalizations irrelevant. In generative grammar, they were extended from phonology and morphology to syntax, which has been the most important contribution of this approach.

Coopmans (1983: 466) made very similar comments on Comrie’s (1981) book about syntactic universals:

“Recent theories of generative grammar show that detailed analysis of a number of well-studied languages may shed light on a range of problems, giving rise to more explanatory theories with more abstract principles and covering a wider range of data.”

Such optimism expressed almost forty years ago can perhaps be understood, but what is the status of these “explanatory theories” at present? Are there strong converging ideas about how we can infer “universal grammar” from the study of particular languages?

Despite the large generative literature, there do not seem to be any specific findings about “universal grammar” that are robust and generally recognized. Adger’s (2019) book focuses on the importance of recursion, which was never in doubt, and the ambitious proposals of Baker (2001) have mostly been abandoned (just as all the specific proposals cited by Coopmans 1983).

I think that this is perhaps inevitable, because the abstract concepts that such in-depth analyses result in (movement, zero, VPs, underlying forms, blocking mechanisms, etc.) make it very difficult to compare languages, because these concepts are not directly observable. They must be inferred by complex processes, and as all linguists know, there are no unique results of these inferences (though the specific proposals are often beautiful, as I noted). Different linguists arrive at different solutions, and it is often poorly understood what makes one linguist choose one solution and another linguist prefer a different one. There are too many “moving parts”. (Moreover, linguists often associate with particular research traditions or “frameworks”, and these seem to determine the outcome as much as empirical considerations.)

Thus, there is not only a lot of subjectivity in decisions about abstract categories, but also a large amount of social baggage. So in practice, in-depth analyses are not a good basis for rigorous cross-linguistic comparison, despite all their (language-particular) beauty. In my 2019 paper on “Ergativity and depth of analysis”, I elaborate on this a bit more, giving examples from the study of case-marking patterns. (The most fundamental problem is the use of different criteria for different languages, or “diagnostic-fishing”, discussed in §7 of my 2018 paper on comparative concepts. This process is quite unconstrained and thus often subjective.)

In principle, there is nothing wrong with the idea of a rich set of innate building blocks (if we leave aside Darwin’s Problem, which according to Berwick & Chomsky (2016) makes this a priori unlikely). And many linguists seem to think that we are sufficiently close to knowing what these innate building blocks are (Baker (2001) suggested that the “Mendeleyev of linguistics” may be just around the corner, though Baker (2008) offers a more sober assessment). But to turn this into a serious testable hypothesis, what we need is a specific proposal – a list of morphosyntactic features and categories that are proposed to be innate, analogous to Wierzbicka’s list of semantic primes, or to Chomsky & Halle’s list of phonological features. Once we have such a list, this proposal could be taken very seriously, and one could attempt to test it. But as long as all we have from generative grammar is “mid-level generalizations” (D’Alessandro 2019), it seems more advisable to adopt a less speculative approach in terms of comparative concepts that are defined uniformly at the level of observation, not at a level of abstract analyses in terms of innate building blocks.

References

Adger, David. 2019. Language unlimited: The science behind our most creative power. Oxford: Oxford University Press.

Baker, Mark C. 2001. The atoms of language. New York: Basic Books.

Baker, Mark C. 2008. The macroparameter in a microparametric world. In Biberauer, Theresa (ed.), The limits of syntactic variation. Amsterdam: Benjamins.

Berwick, Robert C. & Chomsky, Noam. 2016. Why only us: Language and evolution. Cambridge, MA: MIT Press.

Comrie, Bernard. 1981. Language universals and linguistic typology: Syntax and morphology. Oxford: Blackwell.

Coopmans, Peter. 1983. Review of Comrie, Bernard (1981) “Language universals and linguistic typology.” Journal of Linguistics 19(2). 455–473.

Croft, William. 2009. Methods for finding universals in syntax. In Scalise, Sergio & Magni, Elisabetta & Bisetto, Antonietta (eds.), Universals of language today, 145–164. Dordrecht: Springer.

D’Alessandro, Roberta. 2019. The achievements of Generative Syntax: A time chart and some reflections. Catalan Journal of Linguistics Special Issue. 7–26. (doi:10.5565/rev/catjl.232)

Dryer, Matthew S. 2016. Crosslinguistic categories, comparative concepts, and the Walman diminutive. Linguistic Typology 20(2). 305–331. (doi:10.1515/lingty-2016-0009)

Haspelmath, Martin. 1991. On the question of deep ergativity: The evidence from Lezgian. Papiere zur Linguistik 44/45(1–2). 5–27. (doi:10.5281/zenodo.225289)

Haspelmath, Martin. 2018. How comparative concepts and descriptive linguistic categories are different. In Van Olmen, Daniël & Mortelmans, Tanja & Brisard, Frank (eds.), Aspects of linguistic variation: Studies in honor of Johan van der Auwera, 83–113. Berlin: De Gruyter Mouton. (https://zenodo.org/record/3519206)

Haspelmath, Martin. 2019. Ergativity and depth of analysis. Rhema 2019(4). 108–130. (doi:10.31862/2500-2953-2019-4-108-130)

Haspelmath, Martin. 2020. Human linguisticality and the building blocks of languages. Frontiers in Psychology 10(3056). 1–10. (doi:10.3389/fpsyg.2019.03056)

Holmberg, Anders. 2016. Linguistic typology. In Roberts, Ian (ed.), The Oxford handbook of Universal Grammar, 355–376. Oxford: Oxford University Press. (http://www.oxfordhandbooks.com/view/10.1093/oxfordhb/9780199573776.001.0001/oxfordhb-9780199573776-e-14)

Levinson, Stephen C. & Evans, Nicholas. 2010. Time for a sea-change in linguistics: Response to comments on ‘The myth of language universals.’ Lingua 120(12). 2733–2758.

Levinson, Stephen C. & Meira, Sérgio & The Language and Cognition Group. 2003. “Natural concepts” in the spatial topological domain – adpositional meanings in crosslinguistic perspective: An exercise in semantic typology. Language 79(3). 485–516.

Rijkhoff, Jan. 2009. On the (un)suitability of semantic categories. Linguistic Typology 13(1). 95–104. (doi:10.1515/LITY.2009.005)

Rizzi, Luigi. 2014. On the elements of syntactic variation. In Picallo, Carme (ed.), Linguistic variation in the minimalist framework, 13–35. Oxford: Oxford University Press.

Round, Erich & Corbett, Greville G. 2021. Comparability and measurement in typological science: The bright future for linguistics. To appear.

Slobin, Dan I. 2008. Breaking the molds: Signed languages and the nature of human language. Sign Language Studies 8(2). 114–130.

Stassen, Leon. 2011. The problem of cross-linguistic identification. In Song, Jae Jung (ed.), The Oxford handbook of language typology, 90–99. Oxford: Oxford University Press.



3 thoughts on “Rigour is more important than depth: Why language universals should not be based on in-depth analysis”

  1. There was quite a bit of further discussion of this on my Facebook page (https://www.facebook.com/martin.haspelmath/posts/10219780369272088), some of which I summarize here (without giving the names of the colleagues):

    Colleague: Self-styled typologists also assume cross-linguistically applicable categories not observable without some degree of analysis. How else do they manage to discuss VO vs. OV, or variation in relative clause construction, if they’re not presupposing some non-observables? What puzzles me, and in my view holds up progress, is the weird acceptance of non-observables as the basis of cross-linguistic comparison just so long as they weren’t discovered by “generativists”. “Direct object” yes, “Ergative” sure, “Movement to C” no. Why?

    Martin: So how do we distinguish VO and OV? By looking at the (observable) position of patient nominals and action words. These are not p-categories (“language-particular categories”), but concepts designed for comparison.

    How do we identify relative clauses? By looking at clauses (constructions minimally involving an action word) that are used adnominally to restrict the reference of a noun which has a semantic role in them. This does again involve some fairly abstract concepts, but crucially, it does not involve abstract p-categories. All these notions are comparative concepts.

    This has nothing to do with who first used (or discovered) these concepts, but with how they are defined. If “unaccusative” is defined as “event word denoting a telic change-of-state event”, it can be used cross-linguistically, because the same criteria work for all languages. (But if it is defined by auxiliary selection or ne-cliticization, that doesn’t work, because these diagnostics cannot be applied uniformly to all languages.)

    Likewise, there is no way to identify “movement to C” by using the same criteria in all languages. (We CAN identify wh-fronting, of course – see Dryer’s WALS chapter about this.)

    Colleague: I am also puzzled by the “argument from they-don’t-agree-with-each-other” that you bring up here and elsewhere concerning categories and claims about universality in so-called “generativist” work. From where I stand, at least, it looks like there is massive agreement about how languages work in general, going way beyond mere reference to “recursion” – because our in-depth analyses have discovered clear and unmistakable signs of cross-linguistic similarities of some depth and non-obviousness. But of course we don’t have it all figured out, and new discoveries do surprise us from time to time. That’s what’s meant by a living field, isn’t it?

    Martin: If there is “massive agreement about how language works”, why is it so hard for you to express what this agreement consists in? What I see is massive agreement on certain kinds of notation and certain technical words (binary branching, little v, probes and goals, etc.), but I cannot recognize anything that resembles Chomsky & Halle’s distinctive features, or Wierzbicka’s semantic primes.

    Colleague: You begin with a question worded as follows: “If there is ‘massive agreement about how language works’, why is it so hard for you to express what this agreement consists in?” Your wording of course presupposes that it is in fact hard for me or my colleagues to express what the agreement consists in. So the first task, if one is to reply, is to deny that presupposition, because in the form intended it is not in fact hard to express what the agreement consists in. It’s not a table of distinctive features that fits on a half-page, but fills a textbook, or a two-semester sequence of courses – as is typical of any serious field.

    Likewise your characterization of binary branching, little v, and probes and goals as “certain kinds of notation and certain technical words”, rather than what they are: substantive hypotheses about how language works on which there is considerable agreement about the main points, coupled with competing hypotheses about details (with a slightly different research picture for each of the items that you mention). So a meaningful reply would first have to explicitly dispense with the negative presuppositions of terminology like “technical words”, present the actual picture of knowledge and non-knowledge, before turning to its significance for the broader issue of how one should investigate universals, the human language faculty, etc.

    Martin: Thanks – these exchanges may be useful, because many people simply don’t seem to understand what it means to have concepts that can be applied to all languages using the same criteria (rigorous comparative concepts). I think most people know that generative grammar is very complex and you can’t express your insights in a table of features – but this is what I mean: In this two-semester course you will teach your students many things that are not at all shared by a substantial segment of the discipline. Everything is constantly under discussion – maybe it’s a “serious living field”, but it’s not a field that has clear successes.

    Colleague: I am baffled by the claim that generative syntax is a field with no successes. To me it looks like an enormously fertile project. What do you count as a success?

    Martin: There are many local successes, but I’m interested in global successes, because generative grammar makes strong claims about language universals (UG). A global success would be a widely understood claim that is testable without making a large number of very uncertain assumptions. Or even better, a claim that is widely thought to be true because it has been supported by a lot of evidence. I see a vibrant field that keeps making very general suggestions and claims, but they are not being subjected to systematic tests. It seems too difficult, because such tests presuppose in-depth analyses.

    Colleague: This strikes me as a very odd way of evaluating scientific evidence/progress. In every scientific field, we make progress by making precise predictions that might hold only in very unusual circumstances (think smashing atoms together, restricting access to horizontal lines, dichotic listening, etc, etc). These extremely narrow tests allow us to test for aspects of the underlying systems that are not easily seen through analysis of surface features of the world. The highly abstract constructs of generative linguistics have shown themselves to be valuable scientifically because their use continues to lead to new insights and discoveries of highly specific properties of particular grammars. Therein lies one major success of the field. Using its tools to analyze new languages leads to new ways to test their predictions. And they are pretty consistently borne out. And when they aren’t, we try to figure out why. I can’t imagine any other way to proceed.

    Martin: The alternative is to describe each language in its own terms, and to use a distinct set of comparative concepts for comparison, as is usual in all other sciences that study cross-cultural patterns (anthropology, musicology, economics, sociology, archeology).

    Maybe the use of notions like vP and feature-checking “leads to insights” in the sense that people who know of no other way of doing syntax still learn something about the languages they study. But I don’t see any “predictions that are consistently borne out”. In the blogpost, I noted that Chomsky & Halle and Wierzbicka gave lists of innate categories, so these can be said to entail predictions (in a weak sense). But there is nothing like this in MGG, as far as I can see. What predictions does the claim that all languages have vP make? Or that all branching is binary? Or that all reflexives must be c-commanded by their antecedent, if the position of reflexives is habitually used to support a particular tree structure? And where is “progress”? What is it that MGG knows now that it didn’t know in the 1990s? These are genuine questions, because I know that many smart people practice MGG. So I keep asking these questions, because I hope that I will get concrete answers.

    Colleague: Your answer of what counts as success reveals that generative linguistics and (your conception of) typological linguistics are really engaged in a different enterprise. Generativists are aiming to understand a cognitive capacity by examining the many ways that that cognitive capacity is expressed in human languages. Your description of social science is about studying patterns. That’s just a different enterprise. We study the patterns as a window into cognitive architecture and computational capacity, not simply to look for patterns of patterns. This is why universal claims like “all syntactic dependencies involve c-command” or “movement dependencies are cyclic” have real content – they reveal something about the basic computations that underlie the human capacity for language. And they make quite clear predictions for particular languages. If you want to know whether a given dependency is syntactic, check to see if the two elements necessarily stand in a c-command relation. If they don’t, then either (i) it’s not a syntactic dependency, (ii) the premise was false or (iii) there is some other language-particular property that is masking the c-command relation. All three of those possibilities suggest avenues for further research and can lead to further insight about the computational system that underlies (the syntactic component of) human language. I have no idea what the problem with this approach is, unless you take the aims of the field to be different. That’s ok, but if we have different aims, then we’re not engaged in the same discipline, and while we might learn from each other, we shouldn’t insist that the standards of one be applied to the other.

    Martin: Yes, many people have said that we study different things, but unfortunately, that isn’t the solution. Because necessarily, everyone is trying to understand BOTH the cognitive capacity AND the cultural patterns. If you don’t care about cultural patterns, you won’t be able to understand cognition, because most of what we find in languages is culture-specific. You have to somehow subtract culture, which is very hard. And likewise, if you want to understand cultural patterns at worldwide scale, you have to take into account cognition, because we’re not born with a blank slate. So unfortunately, we cannot peacefully divide the territory.

    What I highlight in the blogpost is that your proposal does not work in the general case: “If you want to know whether a given dependency is syntactic, check to see if the two elements necessarily stand in a c-command relation.” – That doesn’t work, because there is no general way of checking this. One has to hope that somehow the multiple pieces of evidence will all point in the same direction in the end. In the remote future, this may well happen (just as Columbus did reach the Americas), and clearly, you remain optimistic. So I wish you good luck, and I’ll keep paying attention (unlike most of my community, who think that I’m crazy paying so much attention to generative linguistics).

    Colleague: Of course there is a general way of checking. The general way is to conduct an analysis.

    Martin: “Conduct an analysis” – of what? Of one phenomenon in one language? I don’t understand how that could be taken to support a universal claim. General linguistics must be based on universals.

    Colleague: I really don’t understand the issue. The universals are stated in terms of structural features, not surface features. So, if you need to look for c-command between two positions, then you first have to figure out constituency and hierarchical structure, since those will be the clues to c-command relations. Those things will express themselves in slightly different ways in different languages, but a good analyst can determine much about constituent structure and hierarchy by deploying general principles and trying out stuff that has worked before (e.g., coordination tests, proform tests, variable binding, etc) and by deploying what they understand about other features of the language. Maybe there’s some phonological rule which only applies within a certain syntactic/prosodic domain, or whatever. One does not conduct an analysis of one phenomenon in one language by only looking at that phenomenon, but by looking at how it interacts with everything else they know about the language and about linguistic structures in general. The reason generative linguistics has been so fertile is because the structures it proposes turn out to be relevant in many different languages. When those structures turn out not to be relevant, they get jettisoned from the theory. I don’t really see any other way to do scientific inquiry. I’m really trying to understand your point of view, but I’m not seeing the force of the general complaint, only a reluctance to engage at a grain size that will allow for constructive investigation.

    Martin: Thanks for explaining it again so clearly – because this is precisely what I’m saying doesn’t work if you want to be objective. I’ve been following MGG since 1983, and I see one subjective choice after another. They gain prominence via prestige, not via objective testing. Objective testing would mean that all languages are compared by exactly the same criteria, as in economics. You may not find this so interesting, but it can lead to lasting insights (see my 2019 paper cited in the blog for an example – a theory of split ergativity that has stood the test of time since it was developed in the 1970s).

    Colleague: I know you keep saying stuff about objectivity. I don’t really see a fundamental difference in objectivity. Pressure is measured differently in gases and liquids, even though pressure is essentially the same concept in both. Does the difference in measurement imply that physics/chemistry is somehow not objective? Similarly, constituency can be measured in different ways in different languages, even though it is a single concept.

    Martin: Yes, I see your points, and maybe “objectivity” isn’t quite the right word. Physics has managed to arrive at lasting insights in this way, and that’s why I’ve called your approach the natural-kinds approach: You are hoping to find some basic innate building blocks for grammars, analogous to the chemical elements. That’s perfectly coherent, and I wish you good luck (and I’ll keep paying attention). But I haven’t seen global successes so far. This whole thing started with PCC effects and Anagnostopoulou’s critique of my 2004 paper. But that paper itself started with a local success of generative grammar (Bonet 1991). I proposed a cultural-evolution explanation, which does not rely on innate natural kinds or on in-depth analyses. Anagnostopoulou misunderstood my paper. That’s why I thought it might be helpful to contrast natural-kind explanations with cultural-evolution explanations more generally. The former require in-depth analyses, the latter merely require measurement uniformity. For PCC effects, I think I’ve shown quite well how they can be explained (also for differential object marking and many other argument coding splits). But maybe other things are better explained by a natural-kinds approach. (Which ones?)

    Colleague: I don’t think ‘in-depth’ analysis is only executed by generative syntacticians, and I think surface generalizations based on the average grammatical descriptions are prone to all kinds of really problematic errors that corrupt the rigor of the typological project in a really near-fatal way.

    Only after a long time of trying to understand various specific problems from a language-internal (and incidentally generativist) grammatical perspective have I come to understand, e.g., that active voice is overtly marked in Hiaki by a final vocalic suffix, and that complex locative expressions are actually relative clauses formed over aspectually-marked finite verb phrases.

    Both of these features can be robustly motivated by an in-depth analysis within the language – the former by attempting to understand the morphophonology of verb stems while at the same time trying to understand the voice system, and the latter by trying to understand the case-marking properties of subjects in non-locative relative nominalizations – but neither of them would leap to the eye from any of the fairly extensive documentation of the language by good descriptive linguists that currently exists.

    These ‘hidden’ features of Hiaki grammar would affect Hiaki’s place in a typological classification in a number of ways; for example, if you were trying to identify Hiaki as an equipollent-Voice-marking language or a marked-Passive language, you’d choose the latter category based on existing descriptions, but it would be a mistake. Similarly, if you were trying to identify whether Hiaki allowed relativization of obliques in the Accessibility Hierarchy, you’d also make a mistake if you were basing your generalization on existing grammars.

    And the more in-depth understanding I get of Hiaki grammar, the more I find cases like that. And I don’t think my experience of Hiaki is an outlier here. I feel like that is going to be the case for ANY language whose main grammatical description is based on just one or two years of field work — they’ll get a lot right, but there’ll be a lot of generalizations that are flat-out wrong and no one will ever know if the in-depth work isn’t done. Proposing ‘universals’ based on typological patterns in such descriptions strikes me as the opposite of rigorous… my two cents.

    Martin: Why would a marked-passive classification be a “mistake”? I assume that (since we don’t know the innate categories of UG) we need to identify the comparative concepts by the same criteria in all languages. “Hidden” categories cannot play a role in such comparisons – they would be too subjective. – I fear that most of these super interesting features of Hiaki will not be very relevant for worldwide language comparison. Most phenomena in most languages are historically accidental, and hard to compare with other languages. That may be a pity, but I think it’s worth studying Hiaki also for Hiaki’s sake. General linguistics is not the only kind of theoretical linguistics.

    Colleague: But how can it not be relevant for typologies of relativization whether Hiaki does or doesn’t have an oblique relativizer? And how can it not be relevant for categorizing Hiaki as equipollent or marked-passive whether Hiaki does or doesn’t have an active Voice suffix? I put ‘hidden’ in scare quotes not to suggest that they aren’t detectable (a learner surely would detect them) but to suggest that the grammatical descriptions to date hadn’t detected them…

    Martin: Of course typologies must be based on complete descriptions, but they need not be based on fully general descriptions. This distinction is not often recognized. Pedagogical works must be complete, but only linguists usually strive for maximal generality. We don’t need this for general comparisons, and it’s often distracting.

    Colleague: [the ‘general’ vs ‘complete’ description distinction] is indeed pretty unrecognized by me… like I would have thought typology should be based on *correct* descriptions? Like, it’s correct to say that Hiaki has an active voice suffix, and incorrect to say that it doesn’t. And that doesn’t depend on anything about generative syntactic categories (though I would disagree that the categories posited don’t translate from lg to lg, obviously, since we have so much fruitful discussion based on them, but that’s neither here nor there). The fact is, Hiaki has an active voice suffix that I only found through ‘in-depth’ analysis. Isn’t that typologically relevant?

    Martin: I find “correct” too vague. What I mean by “complete” is observationally adequate, but many linguists strive for much more: descriptive adequacy, or mental reality. Or they want Jakobsonian general meanings (you find this in Goldberg’s work, for example). Many such analyses involve abstract elements (underlying forms, zeroes, movement, etc.) which are not necessary for observational adequacy, and which typically make crosslinguistic comparison impossible. So by “in-depth”, I mean analyses that are more general than necessary to explain speaker behaviour. Most analyses that linguists find “revealing” fall under this – and they are indeed revealing at the p-level, but irrelevant for the g-level.

    Colleague: I can’t really understand your aversion to in-depth analyses and complex assumptions. Do you think that the Schrödinger equation is simple and immediately understandable? That also describes reality – but it requires a huge number of assumptions and a VERY in-depth analysis of the facts. The same holds for even the most elegant mathematical equations, like Einstein’s E = mc². It looks simple, but it is very complex. So why should we treat language differently from anything else in the world and only observe the surface?

    Martin: There’s no aversion at all. I like the beauty of depth. But in practice, it doesn’t work, as I explain in the blog. There are no testable universals based on in-depth analyses, as far as I can see. I’m not saying that they’re impossible in principle, but perhaps too difficult in practice.

    Colleague: Difficult: for sure. TOO difficult: so is the world. But this shouldn’t stop us from trying.

    Martin: Yes, why not. My main point is that the approach based on measurement uniformity has greater promise, and is fully legitimate (even though few textbooks mention this).

  2. Thought-provoking as always! I think there is often a perception that there are no deep linguistic universals that generative syntacticians agree on, but this is not my experience. Below I list 16 universals that I do not think would be considered controversial.

    The theoretical framework in generative syntax serves exactly the function of an auxiliary language that Evans lays out; it turns out that proper comparison needs this kind of abstract language, because a sentence in a language is not just a string of words. I disagree with Dryer’s assessments about conflating description and explanation: the explanation comes from the particular claims within the context of that metalanguage. In either case, I have a hard time believing that these universals would have been found if the metalanguage was not formal.

    1. Syntactic operations are structure dependent (Merge) and category dependent (Agree), meaning that syntactic rules and constraints must make reference to structure and categories.
    2. All languages have at least N and V as lexical categories.
    3. All languages make use of formal features, i.e., N and V are operative in the grammar.
    4. All languages have endocentric structures, where the head provides the label for the phrase.
    5. All languages have some notion of a functional projection.
    6. In all languages C>T/Asp/Pol>Voice>V, at least when these elements are realized as heads.
    7. In all languages Case>D>Num>Adj>N, at least when these elements are realized as heads.
    8. In all languages, Superlative>Comparative>Adj (Bobaljik 2014: https://mitpress.mit.edu/books/universals-comparative-morphology)
    9. All languages have movement to specifier position.
    10. In all languages with movement, this movement can be modeled as copy/remerge of the lower copy.
    11. In all languages with syntactic agreement, this agreement can be modeled with Agree, which itself is structure (and feature) dependent.
    12. Phi-agreement is finite clause-bound, though long-distance agreement into the edge of an embedded clause is possible (Polinsky & Potsdam 2001: https://link.springer.com/article/10.1023/A:1010757806504)
    13. Case assignment is finite clause-bound, with the same caveat for edges of embedded clauses (Baker 2015: https://www.cambridge.org/us/academic/subjects/languages-linguistics/grammar-and-syntax/case-its-principles-and-its-parameters?format=PB)
    14. Obligatory control, when found, is impossible into a finite clause with agreement (Landau 2015: https://mitpress.mit.edu/books/two-tiered-theory-control).
    15. Wh-movement, when found, is subject to island constraints and subjacency.
    16. Syntactic structure building precedes semantic and phonological interpretation, at least at a cyclic level.
