Syntax and didactics (A reply by Koeneman and Zeijlstra)

The following text is a reply by Olaf Koeneman & Hedde Zeijlstra to Martin Haspelmath’s earlier post (“Confused by syntax”).

We thank Martin Haspelmath for allowing us to reply to his review of our book. We have divided our reply into two parts. In the first part, we make explicit what choices we made in writing this textbook and why we made them. We believe that quite a few of Martin’s criticisms relate to these (often didactic) choices. In the second part, we reply to some of the more detailed comments Continue reading

Confused by syntax: Some notes on Koeneman & Zeijlstra (2017)

(See also a reply to this critical review by the authors: “Syntax and didactics”)

A new authoritative textbook on Chomskyan syntax

Papers in the framework of current mainstream generative grammar (MGG) are often difficult, or even impenetrable, to read, even when the reader is well-versed in syntax and in other models of generative syntax. They are mostly written for the community of practitioners, who naturally do not see a need to motivate their choices. I was thus happy to see a new textbook (“Introducing syntax”, Koeneman & Zeijlstra 2017), published by an authoritative publisher, and approved by Noam Chomsky himself Continue reading

Does less restrictiveness mean progress in grammatical theory?

One prominent way of expressing the goal of what is often called “grammatical theory” (or “linguistic theory”) is to say that it aims to establish an innate architecture and a set of features and categories that are rich enough to account for everything we find in the world’s languages, but restrictive enough to explain the gaps in what we see and to explain why we can acquire languages despite the poverty of the stimulus. I always found the first goal absolutely compelling Continue reading

What’s the point of the negative reviews?

Scientists don’t get a lot of positive feedback for their work: Often it’s just two or three questions after a conference talk, by friendly colleagues who only partly understood the talk – and all this after months of work that went into it. And reviewers of journal papers are often downright negative – getting one’s journal-paper reviews back can be a depressing experience. Continue reading

More on universals of case-marking from the perspective of nanosyntax: Van Baal & Don (2018)

In a recent blogpost, I promised that I’d pay more attention to the nanosyntactic approach if the authors looked at more representative samples of the world’s languages, and it turns out that this is not difficult, because the fair open-access journal Glossa regularly publishes papers in this vein. A recent paper is van Baal & Don (2018), on universals of possessive pronouns, based on a sample of 50 languages. Continue reading

Asymmetric coding in grammars and frequency-induced predictability

Over the last decade, I have often argued that grammatical coding patterns can be explained by frequency of use. In this blogpost, I provide a short summary of the claims for those who are not familiar with the argument.

What I’m claiming is not that I can explain language-particular patterns – the claim is entirely at the level of general linguistics, i.e. I am proposing an explanation of cross-linguistic tendencies. The tendencies that can be explained in this way are coding asymmetries, i.e. pairs of grammatical meanings that stand in paradigmatic opposition, where one of the members shows a strong cross-linguistic tendency to be expressed by a longer form (which often means that the shorter form is zero). Continue reading
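
To make this kind of prediction concrete, here is a minimal sketch in Python. The corpus counts below are purely hypothetical placeholders, not real data; the point is simply that, for each opposition, the more frequent (and hence more predictable) member is expected to receive the shorter (often zero) coding.

```python
# Minimal sketch of the form-frequency prediction for coding asymmetries.
# The corpus counts below are purely hypothetical placeholders.

hypothetical_counts = {
    ("singular", "plural"): (850, 150),        # occurrences of each member
    ("present", "future"): (700, 300),
    ("nominative", "accusative"): (600, 400),
}

def predict_short_coding(pairs):
    """For each opposition, predict that the more frequent member
    receives the shorter (possibly zero) coding."""
    predictions = {}
    for (a, b), (count_a, count_b) in pairs.items():
        shorter = a if count_a >= count_b else b
        longer = b if shorter == a else a
        predictions[(a, b)] = {"shorter_coding": shorter, "longer_coding": longer}
    return predictions

for pair, prediction in predict_short_coding(hypothetical_counts).items():
    print(pair, "->", prediction)
```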

Coexpression patterns of complementizers, nanosyntax, and productivity

Since the 1980s, typologists have often summarized coexpression patterns (or “polysemy patterns”, or “syncretism patterns”) by semantic maps, as illustrated here for case expression (Narrog & Ito 2007: 282):

[Semantic map of case functions from Narrog & Ito (2007: 282); figure not reproduced here]

(For general introductions to semantic maps, see Haspelmath 2003; Georgakopoulos & Polis 2018.) The claims about possible coexpression patterns that a semantic map makes Continue reading
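
The claim that a semantic map makes about possible coexpression patterns is usually formulated as a connectivity requirement: a language-particular coexpression pattern should cover a connected region of the map. The following is a minimal sketch in Python; the toy map below is purely illustrative and is not Narrog & Ito’s actual map of case functions.

```python
# Minimal sketch of the connectivity claim behind semantic maps.
# The map below is a toy illustration, not Narrog & Ito's (2007) actual map.

toy_map = {
    "instrument": {"comitative", "ergative"},
    "comitative": {"instrument", "conjunctive"},
    "conjunctive": {"comitative"},
    "ergative": {"instrument"},
}

def is_connected_region(semantic_map, functions):
    """Check whether `functions` forms a connected subgraph of the map,
    i.e. a possible coexpression pattern under the connectivity claim."""
    functions = set(functions)
    if not functions:
        return True
    start = next(iter(functions))
    seen, stack = {start}, [start]
    while stack:
        node = stack.pop()
        for neighbor in semantic_map.get(node, set()):
            if neighbor in functions and neighbor not in seen:
                seen.add(neighbor)
                stack.append(neighbor)
    return seen == functions

print(is_connected_region(toy_map, {"instrument", "comitative"}))  # True: contiguous region
print(is_connected_region(toy_map, {"ergative", "conjunctive"}))   # False: skips a node in between
```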

Facing the challenge of general linguistics when nature doesn’t help us

The following is a summary of an invited talk I presented at NoSLiP 2018 in Oslo in February 2018. I used the subtitle “Toward an IPA of morphosyntax”, echoing some remarks from an earlier post, though this is still a fairly distant goal. But in this talk I say more to motivate the need for it.

1. The general linguistics problem: Human Language is unobservable

In the 19th century, most linguists were particular linguists Continue reading

An interview with Dan Slobin on diversity of categories, acquisition, and sign language

Martin Haspelmath: Dan, you have shown an interest in my distinction between comparative concepts and descriptive categories, and you told me that you recently read my new paper “How comparative concepts and linguistic categories are different”. Can you say what you liked about it and how it relates to your own work?

Dan Slobin: I read your paper with great enthusiasm and pleasure. It makes your familiar argument precise, elegant, and, in my opinion, strongly convincing. Continue reading

A plea for pronounceable language names

Suppose you hear that a colleague is working on a language called “@t~q^M#%”. What is your reaction? What’s wrong with the language name “@t~q^M#%”? It’s perfectly unique, it consists only of ASCII characters (so it is eminently typable), and it has a certain beauty. But of course it lacks pronounceability – so is it a good name? In general, we expect a language to have a name that we can use in speaking about the language, not only in writing. Continue reading

Why should we bother about terminology in linguistics?

Those who know me better will be aware that I keep insisting on careful use of terminology in linguistics, especially in grammar (my main area of research), but also in other areas – for example, I often point out that it’s very problematic to use the term “borrowing” only for cases of copying (of words and other features from a donor language) that were NOT due to imperfect learning of a second language (i.e. substrate effects). The reason is that we need a general term for all kinds of copying, because in many or most cases we don’t know the circumstances under which the copying took place. Continue reading

Dictionaria: Farewell to linear dictionaries

Dictionaries are structured databases, and they are linear only because of the inflexible paper medium of earlier times. Like linear phonebooks, linear timetable books, or antiquarian book catalogs, they are bound to disappear, but in their case the process seems to be much slower. I’ve been wondering if there is a reason for this – does linearity perhaps serve a useful purpose in dictionaries? Continue reading
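
To illustrate what “structured database” means here, the following minimal sketch in Python shows dictionary entries stored as structured records; the entries are invented placeholders, not Dictionaria data. A linear, alphabetically ordered list is then just one view that can be derived from the records, alongside any number of non-linear views.

```python
# Minimal sketch: a dictionary as a structured database rather than a linear list.
# The entries below are invented placeholders, not Dictionaria data.

entries = [
    {"headword": "zebra", "pos": "noun", "senses": ["striped equid"]},
    {"headword": "amble", "pos": "verb", "senses": ["walk at a leisurely pace"]},
]

# A linear, alphabetical dictionary is just one view derived from the database ...
for entry in sorted(entries, key=lambda e: e["headword"]):
    print(f"{entry['headword']} ({entry['pos']}): {'; '.join(entry['senses'])}")

# ... while other views (e.g. all verbs, or an index by part of speech)
# are equally easy to derive from the same records.
verbs = [e["headword"] for e in entries if e["pos"] == "verb"]
print("verbs:", verbs)
```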

Do we need a “framework” for syntax? A conversation between Richard Larson and Martin Haspelmath

(The following is a slightly edited conversation that took place on Facebook recently, on Roberta D’Alessandro’s page. There’s also one comment by Roberta. It’s reproduced here with permission.)

Martin Haspelmath (Reacting to a Facebook comment that it’s hard to understand the syntax of human languages): Syntax suddenly starts working if it’s framework-free! But I admit it may not be so cool…

Richard Larson: Same is true for physics! It suddenly “starts working” if you toss out all this silly theorizing about forces, particles and least effort principles! 🙂

Martin Haspelmath: Have you read my paper about framework-free theory, Richard? I’d be curious to hear what you’d say about it. I find the reasoning quite compelling, and I’d like to hear counterarguments. Continue reading