Affixes are bound forms of a special kind – they are not defined by their phonological properties

A few weeks ago, a new paper of mine on bound forms and affixes was published by the journal Voprosy Jazykoznanija. This has long been the most important Russian linguistics journal, and it now publishes articles in English as well. Here’s an introduction to some of the key points of the paper (Haspelmath 2021a). Continue reading

On David Adger on reduced innateness and “placeholders for a better understanding”

In a recent blogpost, David Adger replied to my earlier post about “abandoning innateness”, trying to explain to me how one can be a mainstream generative grammarian (MGGer) and still say that most of the technical devices of one’s analyses are not innate. (I’m saying “MGGer” here, because practitioners of HPSG have long been explicit that they do not assume that the devices of their framework are innate; cf. Borsley & Müller (2021), in the forthcoming HPSG handbook). Continue reading

Some (ex-)generative grammarians who are abandoning innateness

In the 1960s, a view of language became famous according to which key aspects of grammatical structures are innate and “grow” in the child (rather than being learned). This came in two prominent versions: the “formal and substantive universals” of Chomsky (1965), and later the “principles and parameters” of Chomsky’s (1981) Government and Binding (GB) approach. But in the 21st century, there seems to be less and less certainty about the idea of innate grammatical structures (called “universal grammar”, … Continue reading

We are all constructionists

Many linguists use ideological-sounding labels to identify themselves (or their colleagues), and I keep wondering about the purpose and content of these labels – what exactly is a cognitive linguist, for example? Is it someone who shares Lakoff’s (1991) “cognitive commitment” (“to make their account of human language accord with what is generally known about the mind and brain from other disciplines”)? But why “commitment” – isn’t this simply a general and uncontroversial principle of science? Continue reading

The innate grammar blueprint: What is it, and why isn’t it a crazy idea?

Many linguists assume that languages are made up of the same basic building blocks – not the same words, of course, but the same phonological features (e.g. Chomsky & Halle 1968), the same morphosyntactic features (e.g. Corbett 2012), the same semantic primitives (e.g. Goddard (ed.) 2008), the same types of rules or constraints, and the same components (e.g. syntax vs. morphology), as well as the same overall architecture (e.g. Jackendoff 1997). Continue reading

How prominence scales help us explain differential object marking: A reply to Ormazabal & Romero (2019)

Scales of referential prominence (animate > inanimate, definite > indefinite, locuphoric (1st/2nd) > aliophoric (3rd), topical > non-topical) are known to play a role in differential object marking generalizations. But what exactly is their role? Do they merely “capture descriptive generalizations”, or is there “explanatory power” in theories that invoke them (such as Aissen 2003; Haspelmath 2021)? Continue reading

Acceptability judgements tell us about social norms, not about internal systems

Since the 1960s, many works on syntax have primarily relied on acceptability judgements, rather than on examples attested in corpora, as was common in earlier times. In Jespersen’s Essentials of English grammar (1933), there were many invented examples, but also quite a few observed corpus attestations (from authors such as Shakespeare, Austen, Thackeray, Carlyle). But over the last five decades, syntacticians have relied much more on experimental methods, which allowed them to make great progress in exploring the incredibly rich patterns of the major languages. (Note that I include acceptability judgements of all kinds under “experiments” here, because they all go beyond pure observation of naturally occurring speech.) Continue reading

Locuphoric person forms and speech act participants

Grammarians often have occasion to distinguish between 1st/2nd person forms on the one hand, and 3rd person forms on the other – that these two behave differently in many languages has been well-known since Benveniste (1947) and Forchheimer (1953). Over the last two decades, the term “SAP” (= speech act participant) has become fairly common in typological circles when reference is needed to 1st/2nd person forms (e.g. Zúñiga 2006; Jacques & Antonov 2014; DeLancey 2018). Continue reading

Zeroes and transformations: Good for p-analyses, useless for g-linguistics?

Since the mid-20th century, structural linguists have often made use of two types of abstract devices that were not part of the earlier arsenal (which did of course include rules and paradigms): zero elements (or empty positions), and transformations (or derivations, or operations). Continue reading

Do modern grammars retain traces of Proto-World?

We know almost nothing about the earliest language(s) of humans, because humans had language(s) over 100,000 years ago, and there are no records or other good methods for learning about those languages. But there is a lot of interesting speculation, and some of this is potentially relevant to understanding similarities among present-day languages. In particular, one might ask whether some similarities or universals are due to inheritance from Proto-World (the language from which all modern languages descend). Continue reading

Some issues with the correlated-evolution method for testing causal hypotheses in comparative linguistics

While comparative grammar research in the 20th century based its universal claims on stratified sampling (e.g. Bell 1978, Bakker 2011), in the 21st century, some authors have emphasized that sampling does not solve the issue of non-independence because all languages probably derive from a common ancestor or ancestral bottleneck (e.g. Maslova 2000; Levinson et al. 2011: 512). They have therefore given preference to the correlated-evolution method (Felsenstein 1985; Mace & Pagel 1994) that is firmly established in biology. Some representatives of this trend are Dunn et al. (2011), Widmer et al. (2017), and Jäger (2019). Continue reading

We are all structuralists

Linguists who study the structures of languages in a systematic way are structuralists (or structural linguists) – so this label basically applies to all linguists who are interested in language structures (not necessarily to those who only study the social roles of languages, or who only study psychological correlates of a narrow range of phenomena, e.g. word meanings). Continue reading

Long live the morph, down with the morpheme!

If you ever wondered what’s the difference between a morph and a morpheme, this blogpost contains an easy answer: Your stereotypical “morpheme” is actually a morph! No need to worry about all the problems with morphemes anymore: We can simply say “morph”, and continue to live happily. A morph is a minimal form, and if anyone has further questions about this definition, I’ve answered them in a forthcoming paper (“The morph as a minimal linguistic form”). Continue reading

The peculiar flag-article suffixes in Circassian and general linguistics: Comments on Arkadiev & Testelets (2019)

How can our general understanding of Human Language contribute to making peculiar language-particular patterns more comprehensible? This is what I keep wondering about, and in the recent excellent paper by Peter Arkadiev and Yakov Testelets, we find a really interesting case for discussion: the flag-article suffixes in the Circassian languages (Kabardian and Adyghe, two very closely related languages of the northern Caucasus region). These suffixes (Absolutive suffix -r, Oblique suffix -m) occur on a nominal only when it is specific, and may be omitted when it is nonspecific: Continue reading

Rigour is more important than depth: Why language universals should not be based on in-depth analysis

Many linguists think that broad cross-linguistic comparison is sometimes “too shallow”, and that instead, language universals can be detected only if they are based on “in-depth”, “abstract” and “detailed” analyses. Here I give reasons to think that this is the wrong approach. This discussion is not new (cf. Comrie 1981; Coopmans 1983), but it needs to be revisited, because this erroneous idea remains very strong in the discipline. Continue reading