The following is a summary of an invited talk I presented at NoSLiP 2018 in Oslo in February 2018. I used the subtitle “Toward an IPA of morphosyntax”, echoing some remarks of an earlier post, though this is still a fairly distant goal. But in this talk I say more to motivate the need for it.
1. The general linguistics problem: Human Language is unobservable
In the 19th century, most linguists were particular linguists – they studied particular languages, and especially their particular histories. Holger Pedersen’s (1924) account of 19th century linguistics hardly mentioned general linguistics but concentrated on the findings of philologists and historical-comparative linguists. Two prominent Norwegian linguists of the 20th century (Georg Morgenstierne and Hans Vogt, the latter at some point rector of the University of Oslo) contributed by describing languages that had previously been little known.
But since the 1950s, general linguistics has become much more prestigious – so much so that the term general linguistics is hardly used anymore. Apart from some historical linguists who work on reconstructing the human past, most people who call themselves “linguists” nowadays are trying to make a contribution to the study of Human Language.
However, Human Language is not observable – what we can observe is how humans use particular languages, and from such observations, we can construct grammars and dictionaries. So there is a huge challenge: without further assumptions, we cannot make statements about Human Language, because it is not immediately clear which statements carry over to all other languages. Consider the following claims, which one might base on the study of Latin, French and German:
– “Language” has vowels of different lengths, but not vowels of different pitch (pitch is only used at the utterance level for intonation) (?)
– conditional clauses in “Language” can precede or follow the matrix clause, but relative clauses always follow the head noun (?)
– when there is a split in case-marking in “Language”, special marking may occur on animate and/or definite objects, but not on inanimate and/or indefinite ones, as seen in Spanish:
Veo a Juana. ‘I see Juana.’ (animate object / P-argument)
Veo (*a) la casa. ‘I see the house.’ (inanimate object / P-argument)
The first two statements are by now widely known to be wrong, but only because we looked at many more languages. The third claim has turned out to be very largely confirmed (even though there are a few exceptions), and it will play a role further below.
2. Nature might be able to help us
In the challenge of jumping from particular languages to Human Language, nature could help us by restricting the possible building blocks of grammar to a few dozen or a few hundred – as it helps us in chemistry by restricting the number of elements to about 100, and as it possibly helps us in the psychology of emotions, by restricting the number of emotions that humans may have to six (anger, sadness, fear, disgust, happiness, surprise; argued by psychologist Paul Ekman, and popularized in the Pixar movie “Inside Out”).
It could also be that there are tens of thousands of natural building blocks of grammar, just as there are tens of thousands of plant species and animal species (as listed in the Encyclopedia of Life). And similarly, genomes may consist of tens of thousands of genes, which are also discrete natural building blocks.
In other words, “nature could help us” by providing a set of building blocks, or natural kinds, out of which more complex structures are built. Natural kinds are universally presupposed in chemistry (and physics), and widely assumed for biological species. They are also being discussed seriously in psychology (Barrett 2006). Thus, it is a priori perfectly reasonable to think that nature restricts the possible categories and features (as well as grammatical architectures) to a manageable number, and that linguists should look for the natural kinds of grammar. But unfortunately…
3. Linguistics has no research programme for finding the natural kinds of grammar
Universal features (as natural kinds) are presupposed
In linguistics, it is widely presupposed that categories and features are natural kinds, i.e. aspects of the innate language faculty. For phonological features, this was explicit in Chomsky & Halle’s (1968) The sound pattern of English. For syntax, it was explicit even earlier:
“We require that the grammar of a given language be constituted in accord with a specific theory of linguistic structure in which such terms as “phoneme” and “phrase” are defined independently of any particular language.” (Chomsky 1957: 50)
A well-known example from Chomsky (1970) is the use of the universal features [±N], [±V]:
noun: [+N, –V]
verb: [–N, +V]
adjective: [+N, +V]
adposition: [–N, –V]
In other words, universal grammar is thought to provide a “toolbox” of categories that languages may use (Jackendoff 2002).
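To see what such a “toolbox” claim amounts to in practice, here is a minimal sketch (in Python; the encoding and the helper function are mine, purely for illustration, not part of any published formalism) of the [±N, ±V] system: the four major categories fall out as the possible combinations of two binary features, and the inventory is fixed in advance by the feature set rather than emerging open-endedly from each language.

```python
# A minimal sketch of the [±N, ±V] feature system of Chomsky (1970).
# The names FEATURES, CATEGORIES and decode() are illustrative only.

FEATURES = ("N", "V")

# Each major category is one combination of the two binary features
# (True = '+', False = '-').
CATEGORIES = {
    (True, False): "noun",         # [+N, -V]
    (False, True): "verb",         # [-N, +V]
    (True, True): "adjective",     # [+N, +V]
    (False, False): "adposition",  # [-N, -V]
}

def decode(n: bool, v: bool) -> str:
    """Map a pair of feature values to its category label."""
    return CATEGORIES[(n, v)]

# The essentialist point is the second assertion: the number of
# possible categories is fixed in advance by the feature inventory.
assert decode(True, False) == "noun"
assert len(CATEGORIES) == 2 ** len(FEATURES)
```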
But while many linguists who work on grammar (probably the majority) share this overall view, it is not so clear what the true natural kinds of language structure are (the “categories of universal grammar”, as they are often called, or aprioristic categories, as I have also called them). For phonology, we have some specific proposals which have found their way into the textbooks (there is even a list on Wikipedia, in the article distinctive feature; but all serious phonologists will acknowledge that this list is very controversial, and Mielke (2008) argued against a universal set of distinctive features on the basis of large-scale cross-linguistic evidence).
For morphosyntax, there are no comprehensive proposals, and there are few if any small-scale proposals that have been generally accepted. For example, for the feature “person”, a variety of feature-value systems have been proposed: ±I, ±II, ±III, or ±ego, ±tu, or ±author ±hearer ±participant (cf. Harbour 2016). For syntactic parameters, Baker’s (2002) brilliant popular book made some far-reaching proposals about parameter hierarchies (explicitly modeled on the periodic table in chemistry), but he never published any technical work to justify them, and he never referred back to these proposals in his later work. Thus, even though many linguists presuppose natural kinds of morphosyntax, they rarely make specific proposals (and textbooks such as Koeneman & Zeijlstra (2017) spend a lot of energy on specific language-particular analyses, but say little about the cross-linguistic justification of the categories that are assumed). But how would we judge whether a proposal is successful?
Linguistics has no good criteria for success
To my mind, a more serious problem than the lack of comprehensive proposals is that linguistics has no clear criteria for assessing whether a feature or category should be assumed to be a natural kind (= part of the innate language faculty).
The typical linguistics paper considers a narrow range of phenomena from a small number of languages (often just a single language) and provides an elegant account of the phenomena, making use of some previously proposed general architectures, mechanisms and categories. It could be that this method will eventually lead to convergent results, and many linguists apparently have this hope, but I do not see much evidence for this over the last 50 years (convergence seems to come primarily from the impact of fashions and some influential individual scholars). But linguists who have observed the scene over the last four decades might object: Isn’t there convergence in some areas, e.g. the issue of configurationality and lexical categories? One can observe the following general trends:
– 1980s: some languages are nonconfigurational and lack a VP (e.g. Warlpiri, Hungarian, German)
> 2000s: all languages have a VP
– 1800s through 1990s: some languages lack a distinction between verbs and nouns, or at least between verbs and adjectives
> 2000s: all languages have a verb-noun distinction, and a verb-adjective distinction (cf. Haspelmath (2012) for ample references)
Is this shift indicative of true convergence of results? I don’t think it is, because “having a category” is not a claim that can be falsified. Let us look at the specific example of word-classes in the Austronesian language Chamorro, discussed a few years ago by Chung (2012).
Does Chamorro “have” a noun-verb-adjective distinction? (Chung 2012)
According to Topping’s (1973) pre-generative grammar, Chamorro has two word-classes: Class I (transitive verbs / ‘see’-type roots), and Class II (intransitive verbs, nouns, adjectives / ‘go’-type, ‘person’-type and ‘big’-type roots). Class I is defined as combining with preposed subject person forms (cf. preposed hu in 1a), while Class II is defined as combining with postposed subject person forms (cf. postposed yu’ in 1b).
(1) a. Hu li’i’ i dångkulu na tåotao. (Class I)
1SG see the big LK person
‘I saw the big person.’ (Chung 2012: 11)
b. Håhanao yu’ gi chalan. (Class II, action-root)
go.PROG 1SG LOC road
‘I was going on the road.’ (Chung 2012: 11)
Not only action-roots, but also thing-roots and property-roots combine with postposed subject person forms in this way, and they do not require a copula, so by this salient criterion, Chamorro roots fall into two broad classes.
Now Chung (2012) claims that on closer inspection, Chamorro has nouns, verbs and adjectives after all – because if we consider further phenomena, more distinctions emerge. She discusses the six criteria in Table 1.
Clearly, if all these criteria have the same weight, then five different ways of setting up major classes are possible, and it is not immediately clear which of the major-class divisions, if any, is better than others. But Chung does not even ask which is the most elegant way of describing the language – she merely asks whether there is SOME (however flimsy) evidence for grouping Chamorro words into verbs, nouns and adjectives (as suggested by Baker (2003)). Chung thus asks primarily whether the noun-verb-adjective classification could be made to work for Chamorro, following the general thinking of the Uniformity Principle:
(6) Uniformity Principle
In the absence of compelling evidence to the contrary, assume languages to be
uniform, with variety restricted to easily detectable properties of utterances.
(Chomsky 2001: 2)
But of course, Chamorro and English ARE different in their grammar, and these differences must be expressed somehow – saying that Chamorro has nouns/verbs/adjectives just pushes the non-uniformity elsewhere. And more importantly, the Uniformity Principle would also be satisfied if we said that all languages are like Chamorro (in having Class I and Class II words), because all languages have SOME difference between transitive verbs and all other words (e.g. that only transitive verbs take objects). I conclude that there is currently no method for convergence of findings concerning the natural kinds of language structure – and there are very few researchers who are making an attempt to pull various findings from around the world together (I think the same could be shown for the configurationality debate – if any diagnostic can be used to argue for a VP, one would not be able to prove that there is no VP).
Since in this approach, languages can only be classified typologically after in-depth study, large-scale comparison is very difficult, and very few natural-kind linguists make claims of world-wide scope. Broadly cross-linguistic works such as Harbour (2016) and Baker (2015) are quite exceptional (for Baker 2015, I have argued that the attempt fails in this review).
Thus, linguistics currently has no promising research programme that would give us reason for optimism that we can find the natural kinds of language structure. Grammatical structures seem to be similar to sound inventories (cf. the 2160 segment types of Phoible.org) and lexical structures in that they allow an open-ended and highly variable range of features and categories, which generally need to be described in language-particular terms.
4. But languages do not vary randomly (as Greenberg noted)
But languages do not vary randomly in their grammatical structures, and we want to formulate the limitations, because they can truly be said to characterize Human Language. Ultimately, we would also like to be able to explain them. Joseph Greenberg famously formulated universals like (3).
(3) The subject-before-object universal (Greenberg 1963)
In declarative sentences with nominal subject and object, the dominant order is almost always one in which the subject precedes the object.
Also in my own research, I have tried to formulate universals of this sort. Three examples from recent papers are given in (4), (6) and (9).
(4) The split P flagging universal (“differential object case-marking”)
If a language has an asymmetric split in P flagging (case or adpositional marking) depending on some prominence scale, then the special flag is used on the prominent P-argument (Bossong 1985; cf. Haspelmath 2018a).
This was already exemplified by the Spanish example seen earlier, and here is another example:
(5) Purepecha (Mexico; Capistrán-Garza 2015: 31)
a. (indefinite P)
xuchá arhá-s-ka kurúcha(*-ni)
we ingest-PRF-1.IND fish
‘We ate fish.’
b. (definite P)
xuchá arhá-s-ka kurúcha-ni
we ingest-PRF-1.IND fish-OBJ
‘We ate the fish.’
(6) The synthetic causative universal
If a language has synthetic causatives of transitive verbs, it also has synthetic causatives of intransitive verbs (Haspelmath 2017a).
This is exemplified by Indonesian, where the causative suffix -kan cannot be used with transitive verbs, so (8b) is not possible with a causative meaning.
Indonesian (Cole & Son 2004: examples 1, 2, 5)
(7) a. Cangkir-nya pecah.
cup-DEF break
‘The cup broke.’
b. Tono me-mecah-kan cangkir-nya.
Tono ACT-break-CAUS cup-3
‘Tono broke the cup.’
(8) a. Dia meng-goreng ayam untuk saya.
he ACT-fry chicken for I
‘He fried chicken for me.’
b. *Dia meng-goreng-kan saya ayam.
he ACT-fry-CAUS I chicken
(‘He made me fry the chicken.’)
(9) The inalienable coding universal
If a language has an adnominal alienability split, and one of the constructions is overtly coded while the other one is zero-coded, it is always the inalienable construction that is zero-coded, while the alienable construction is overtly coded (Haspelmath 2017b).
This is exemplified by the following pair of adpossessive constructions, where the first shows an alienable construction, and the second an inalienable construction (with a possessed body-part term).
(10) Lango (Nilotic; Noonan 1992: 156-157)
a. gwôkk à lócə̀
dog of man
‘the man’s dog’
b. wì rwòt
head king
‘the king’s head’
Clearly, then, there are non-trivial similarities between languages, and universal patterns that need to be explained. How do we deal with these similarities if they are not attributed to natural kinds of Human Language structure? My proposal is to shift the perspective from an essentialist approach to a measurement approach.
5. From an essentialist approach to a measurement approach: Mendeleev vs. Passy
The idea that similarities between languages are due to the basic building blocks provided by nature can be called an essentialist approach, because it looks for some underlying uniform reality beneath the superficial diversity. This is not a priori unlikely, because an essentialist approach has been successful in chemistry (as nicely illustrated by Mendeleev’s periodic table of elements) and also in biology (where genes were shown to be discrete elements of heredity).
But in linguistics, as in other social sciences such as anthropology, a measurement approach seems to be more promising (as first argued forcefully by Bickel 2007). So instead of
– looking for the “correct analysis” of a language-particular pattern
– debating the “status” of a phenomenon
– proposing “underlying” entities that allow a more elegant description
general linguists should “measure” the differences between languages and use these measurements to find general patterns.
Units of measurement are not claimed to correspond to anything in nature – they merely serve as yardsticks to allow scientists to compare related phenomena and to state possible generalizations. Science should “cut nature at its joints”, but when nature appears to be continuous, a measurement approach may be fruitful.
In comparative linguistics, we use comparative concepts as yardsticks to compare languages (Haspelmath 2010). They are artificial concepts designed for the purpose of “measuring” (= identifying and assessing) differences and similarities between languages. They are not claimed to correspond to “nature’s joints”, but merely serve the practical purpose of allowing comparison, just like the symbols of the IPA.
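To make this concrete, here is a minimal sketch of what “measuring” with comparative concepts could look like (the language sample and the classifications below are invented placeholders, not real typological judgments): each language is measured for two properties, and an implicational universal like (4) then becomes a directly testable claim about the resulting table.

```python
# Sketch: testing an implicational universal ("if P, then Q") over a
# language sample described with comparative concepts. The sample is
# made up for illustration.

SAMPLE = [
    # (language, has_split_P_flagging, special_flag_on_prominent_P)
    ("Spanish",   True,  True),
    ("Purepecha", True,  True),
    ("Latin",     False, None),  # no split: the universal is vacuously true
]

def violates(has_split, flag_on_prominent):
    """A language is a counterexample only if it has the split but
    places the special flag on the non-prominent P-argument."""
    return bool(has_split) and not flag_on_prominent

counterexamples = [lang for (lang, p, q) in SAMPLE if violates(p, q)]
print("counterexamples:", counterexamples or "none in this sample")
```

Nothing in this procedure requires that “P-argument” or “flag” correspond to natural kinds – the comparative concepts only need to be applicable to every language in the sample.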
The contrast between an essentialist approach and a measurement approach can be likened to the contrast between Dmitri Mendeleev and Paul Passy. Both developed famous tables in the second half of the 19th century. Mendeleev’s periodic table of elements shows the natural kinds of chemistry (“nature’s joints”). But the IPA table (developed by Paul Passy and his colleagues since the 1880s) does not show the natural kinds of phonology – it shows comparative concepts (as is now widely accepted in phonology, cf. Ladd 2011). It seems that we cannot identify the natural kinds of phonology and syntax, as I noted earlier – if there are any. So even though the IPA table looks similar to the periodic table of elements, it is very different in nature.
The situation in linguistics is more like that in comparative anthropology, where we cannot identify natural kinds either (maybe with the exception of kinship): categories like “tribe”, “chief”, “trade”, “land ownership”, “religion”, “god”, “art”, “taboo” are highly variable across cultures, so in order to compare societies, anthropologists must work with artificial comparative concepts such as “moralizing high god” (e.g. Botero et al. 2014). (I say more on the conceptual parallels between categories and comparative concepts in linguistics and in other disciplines in Haspelmath 2018b). But let us get back briefly to the issue of explaining language universals.
6. Explaining language universals
Quite a few language universals can be explained in the way in which anthropological universals and ecological universals in biology are routinely explained: through functional adaptation. For example, it turns out that the three universals mentioned earlier (in (4), (6) and (9)) are all special instances of the meta-universal in (11).
(11) The grammatical form-frequency correspondence universal
When two minimally different grammatical patterns (i.e. patterns that form an opposition) occur with significantly different frequencies, the less frequent pattern tends to be overtly coded (or coded with more coding material), while the more frequent pattern tends to be zero-coded (or coded with less coding material).
As the papers cited note, corpus evidence from a variety of languages shows that
– referentially prominent arguments (definite, animate, etc.) are less frequently in P-role than non-prominent arguments
– transitive verbs occur less frequently in caused situations than intransitive verbs
– inalienable nouns (such as body-part terms) occur more frequently in possessed contexts than alienable nouns
The three universals can therefore be explained by Zipfian economy, or Hawkins-style efficiency of grammatical coding, via the following causal chain:
Meanings that are expressed frequently are more predictable and can therefore occur with zero or shorter coding.
This does not make any reference to innate “natural kinds”, nor to language-particular categories. It is entirely based on comparative concepts. And I would argue that evidence for the universal in (11) has been accumulating gradually over time from various directions (starting with Zipf in the 1930s, Greenberg 1966, and much other work that is scattered over diverse traditions), so that we are dealing with true convergence here.
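As a minimal sketch of how the prediction in (11) works (the corpus counts below are invented, purely for illustration), the predicted coding asymmetry for each of the three oppositions can be derived directly from frequency counts:

```python
# Sketch of the form-frequency prediction in (11), with invented
# counts: in each opposition, the more frequent member is predicted
# to be zero-coded (or coded with less material).

OPPOSITIONS = {
    "P flagging":   {"inanimate/indefinite P": 900, "animate/definite P": 100},
    "causatives":   {"causative of intransitive": 800, "causative of transitive": 200},
    "adpossession": {"possessed inalienable noun": 700, "possessed alienable noun": 300},
}

for name, counts in OPPOSITIONS.items():
    frequent = max(counts, key=counts.get)
    rare = min(counts, key=counts.get)
    print(f"{name}: zero/short coding predicted for {frequent!r}, "
          f"overt/longer coding for {rare!r}")
```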
7. Toward an IPA for morphosyntax
In order to formulate testable universals, we need to have basic concepts for elements that can be identified in any language, regardless of its structure. In the generalizations seen earlier, I made use of comparative concepts like the following:
P-argument
flag (case-marker or adposition)
definite argument
transitive verb
causative construction
synthetic causative
inalienable possessive construction
etc.
The meanings of these terms should ideally be standardized, i.e. there should not be any need to define such basic concepts each time one uses them. Linguists often act as if it were OK that a single term has multiple meanings (e.g. they use an old term in a deliberately new meaning, or they treat terminological polysemy as somehow inevitable), but it should be easy to see that polysemy of grammatical terminology is not any better than polysemy of letter symbols. Before the IPA’s standardization, the letter “j” could be understood as [j] or [dʒ], and the letter “y” could be understood as [j] or [y], and likewise with quite a few other letter symbols. Standardization required some painful compromises (we can imagine Passy having favoured a different value for the letter “j” because of his French background, and Jespersen having favoured the [j] value), but it was evidently possible and highly beneficial.
But standardization across the discipline is of course very difficult to achieve, because one would probably need a nomenclature committee, which presupposes that the discipline generally recognizes the need for standardized definitions of some basic comparative concepts that everyone uses in their work. For grammatical terminology, we are not there yet, but it does not seem too early to advance this as a vision for the future.
For lexical comparison, by contrast, we already have an IPA-type standardization, the Concepticon, pioneered by Johann-Mattis List and Michael Cysouw, and implemented by Robert Forkel. The comparison meanings (called concept sets) of the Concepticon are not a small set that fits into a table and can be memorized, but rather an open-ended set of currently 2632 meanings. I envisage the counterpart of the Concepticon (to be called “Grammaticon”) in similar terms, because there are no clear limits to the kinds of comparisons of grammatical patterns that could be fruitful.
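What a Grammaticon record would look like is of course an open question at this point; the following sketch (with entirely hypothetical identifiers, field names and definitions) merely illustrates the kind of standardized entry – a stable identifier plus a fixed definition – that IPA-style standardization presupposes.

```python
# Entirely hypothetical sketch of standardized comparative-concept
# records, loosely modeled on the Concepticon's concept sets.

GRAMMATICON = {
    "GC0001": {
        "label": "P-argument",
        "definition": ("the patient-like argument of a transitive "
                       "construction"),
    },
    "GC0002": {
        "label": "flag",
        "definition": "a case-marker or an adposition",
    },
}

def cite(concept_id: str) -> str:
    """Refer to a concept by its stable ID, so that different
    authors' uses of the same term can be linked unambiguously."""
    entry = GRAMMATICON[concept_id]
    return f"{entry['label']} [{concept_id}]"

print(cite("GC0001"))  # -> P-argument [GC0001]
```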
References
Baker, Mark C. 2003. Lexical categories: Verbs, nouns, and adjectives. Cambridge: Cambridge University Press.
Baker, Mark C. 2015. Case. Cambridge: Cambridge University Press.
Barrett, Lisa Feldman. 2006. Are emotions natural kinds? Perspectives on Psychological Science 1(1). 28–58. doi:10.1111/j.1745-6916.2006.00003.x.
Bickel, Balthasar. 2007. Typology in the 21st century: Major current developments. Linguistic Typology 11(1). 239–251.
Bossong, Georg. 1985. Differenzielle Objektmarkierung in den neuiranischen Sprachen. Tübingen: Narr.
Botero, Carlos A., Beth Gardner, Kathryn R. Kirby, Joseph Bulbulia, Michael C. Gavin & Russell D. Gray. 2014. The ecology of religious beliefs. Proceedings of the National Academy of Sciences 111(47). 16784–16789. doi:10.1073/pnas.1408701111.
Capistrán-Garza, Alejandra. 2015. Multiple object constructions in P’orhépecha: Argument realization and valence-affecting morphology. Leiden: Brill.
Chomsky, Noam A. 1957. Syntactic structures. ’s-Gravenhage: Mouton.
Chomsky, Noam A. 1970. Remarks on nominalization. In R.A. Jacobs & Peter S. Rosenbaum (eds.), Readings in English transformational grammar, 184–221. Waltham, MA: Ginn.
Chomsky, Noam A. 2001. Derivation by phase. In Michael Kenstowicz (ed.), Ken Hale: A life in language, 1–52. Cambridge, MA: MIT Press.
Chomsky, Noam & Morris Halle. 1968. The sound pattern of English. New York: Harper & Row.
Chung, Sandra. 2012. Are lexical categories universal? The view from Chamorro. Theoretical Linguistics 38(1–2). 1–56. doi:10.1515/tl-2012-0001.
Cole, Peter & Min-Jeong Son. 2004. The argument structure of verbs with the suffix -kan in Indonesian. Oceanic Linguistics 43(2). 339–364.
Cristofaro, Sonia. 2007. Deconstructing categories: Finiteness in a functional-typological perspective. In Irina Nikolaeva (ed.), Finiteness: Theoretical and empirical foundations, 91–114. Oxford: Oxford University Press.
Greenberg, Joseph H. 1963. Some universals of grammar with particular reference to the order of meaningful elements. In Joseph H. Greenberg (ed.), Universals of language, 73–113. Cambridge, MA: MIT Press.
Greenberg, Joseph H. 1966. Language universals, with special reference to feature hierarchies. The Hague: Mouton.
Harbour, Daniel. 2016. Impossible persons. Cambridge, MA: MIT Press. doi:10.7551/mitpress/9780262034739.001.0001.
Haspelmath, Martin. 2010. Comparative concepts and descriptive categories in crosslinguistic studies. Language 86(3). 663–687.
Haspelmath, Martin. 2012. How to compare major word-classes across the world’s languages. In Thomas Graf, Denis Paperno, Anna Szabolcsi & Jos Tellings (eds.), Theories of everything: in honor of Edward Keenan, 109–130. (UCLA Working Papers in Linguistics 17). Los Angeles: UCLA.
Haspelmath, Martin. 2013. Argument indexing: A conceptual framework for the syntax of bound person forms. In Dik Bakker & Martin Haspelmath (eds.), Languages across boundaries: Studies in memory of Anna Siewierska, 197–226. Berlin: De Gruyter Mouton.
Haspelmath, Martin. 2017a. Universals of causative and anticausative verb formation and the spontaneity scale. Lingua Posnaniensis 58(2). 33–63. doi:10.1515/linpo-2016-0009.
Haspelmath, Martin. 2017b. Explaining alienability contrasts in adpossessive constructions: Predictability vs. iconicity. Zeitschrift für Sprachwissenschaft 36(2). 193–231.
Haspelmath, Martin. 2018a. Role-reference associations and the explanation of argument coding splits. To appear.
Haspelmath, Martin. 2018b. How comparative concepts and descriptive linguistic categories are different. In Daniël Van Olmen, Tanja Mortelmans & Frank Brisard (eds.), Aspects of linguistic variation, 83–113. Berlin: De Gruyter Mouton.
Jackendoff, Ray. 2002. Foundations of language: Brain, meaning, grammar, evolution. Oxford: Oxford University Press.
Koeneman, Olaf & Hedde Zeijlstra. 2017. Introducing syntax. Cambridge: Cambridge University Press.
Ladd, D. Robert. 2011. Phonetics in phonology. In John A Goldsmith, Jason Riggle & Alan C. L Yu (eds.), The handbook of phonological theory, 348–373. Chichester: Blackwell.
Mielke, Jeff. 2008. The emergence of distinctive features. Oxford: Oxford University Press.
Noonan, Michael. 1992. A grammar of Lango. Berlin: Mouton de Gruyter.
Pedersen, Holger. 1972. The discovery of language: Linguistic science in the nineteenth century. Bloomington: Indiana University Press. (Originally published in 1924 as Sprogvidenskaben i det Nittende Aarhundrede: Metoder og Resultater. København: Gyldendalske Boghandel.)
Topping, Donald. 1973. Chamorro reference grammar. Honolulu: University Press of Hawaii.
One noteworthy exception to this, I think, is the category generally known as ‘noun’, which I think can be defined by a two-step process, first, an assertion that could be false but apparently isn’t, and then the definition that depends on the truth of the assertion.
The assertion is that there is always a grammatical category into which the overwhelming majority of names of kinds of people, places, things and living things fall.
The definition is that ‘nouns’ are this category.
Something that this definition does *not* imply is that ‘nouns’ so defined contrast with any other category; but if we find them failing to contrast clearly with ‘demonstratives’, ‘adjectives’, etc., we tend to call them ‘nominals’, and if they fail to contrast with ‘verbs’ we might call them ‘content’ words, but until we have decent definitions of these other categories, we should think of these terms as merely convenient traditions.
An interesting near exception is gender classes, which often split up the noun category in crazy ways, but, afaik, gender classes (almost?) never affect the basic principles of word order that involve nouns. So there also seems to be a basis for distinguishing between ‘parts of speech’ and ‘inflectional categories’, but at the very least, the gender categories come out as subcategories of the noun category.
I think it might be possible to come up with something similar to this but a bit more complicated for verbs, but, of course, the more complicated the definition gets, the more questionable the motivation for trying to produce it becomes. About adjectives, adverbs, and prepositions, I’m not presently inclined to suggest anything!
But I think that ‘Noun Phrases’ can also be defined in a reasonably solid way, as pursued in my 2011 lingbuzz paper “Guessing rule 1: towards a modest UG”.
Thanks, Avery – yes, nouns are generally easy to identify, both within and across languages. As I recognize in my forthcoming paper (“How comparative concepts…”), some concepts/categories (“portable concepts”) are really very similar across languages, and not much goes wrong if we don’t worry much about definitions. “Noun” is certainly in this group. But I would still take issue with your assertion that “there is always a grammatical category such that…”, not on factual grounds, but on methodological grounds – because there are a huge number of “grammatical categories” if one does not put any limits on the criteria by which they could be identified. In English, count nouns and mass nouns behave strikingly differently, but only count nouns fall into the class delimited by your semantic definition – so does this mean that English mass nouns are not nouns? There are other kinds of nouns that could be singled out (picture nouns, kinship terms, nouns that can serve as epithets, etc.), and again, if one does not specify the criteria, it’s unclear whether they would fall under “noun” or not. Hence, I think a rigorous but still cross-linguistically applicable approach must specify the criteria by which particular expression types are classified the way they are.
Indeed, but I wanted to start with something reasonably short where there wasn’t much doubt about the main facts. I see establishing some categories in a somewhat sensible way as primary, whereas ascertaining their boundaries and working out subcategories and the right criteria for subcategories comes second.
In the original post I suggest a bit indirectly that word order constraints are of particular importance for the main ‘Part of Speech’ categories, on the basis that gender-subclasses really are crazy but don’t seem to interact with such constraints. (But they do, massively, with inflection, suggesting that this traditional basis for categorization is not really as solid as it has often been thought to be.)
Extending this, I note that I am not aware of any languages where count/mass distinctions affect positions of adjectives, demonstratives, etc., but on the other hand what they do appear to affect is selection of determiners. Causing me to think that workable criteria and empirical facts associated with them can be sorted out with a sustained effort (involving a lot of bottom-up work, I suspect). Ditto picture nouns. Location nouns would be another subclass that often seems to exist, with biggish differences, and then there’s that language where kinship terms appear to be verbs, so not everything that is usually some kind of noun is always some kind of noun.