To compare many different (including little-studied) languages around the world, comparative linguists need access to good data, which is often difficult to get. Many research questions cannot be answered easily by consulting reference works such as dictionaries and grammars. We often see some interesting variation between a number of languages we know well, and we’d like to know how the parameter in question is distributed elsewhere, ideally throughout the world. What should we do?
The patient ones among us will publish a paper about the variation in the small set of languages that they started out with, hope that other linguists will be intrigued by the question, and wait for papers on other languages to be published. If enough specialists of diverse languages take an interest and publish their results, then we may see the bigger picture after a few years or decades. This has mostly been the approach in formal generative typology (for this term, see Baker 2010). The problem with this is that it is normally not sufficient to simply say “My language has such and such properties” to get a paper published – one needs to make a specific proposal of one’s own. And one gets a fairly dramatic reporting bias: if the original paper’s suggestion is confirmed by other languages, specialists of those languages cannot publish their papers; only papers that provide counterevidence are easily publishable. So this is not a very good way of finding out whether a hypothesis concerning linguistic variation is true.
So some kind of systematic data-gathering seems to be called for. If the question is relatively easy, we can ask our colleagues personally. For example, David Gil was interested in the occurrence of “para-linguistic” click sounds in the world’s languages (like tsk-tsk or tut-tut in English), and since information about this is almost impossible to find in grammars or dictionaries, he simply asked many colleagues – see the WALS chapter on “Para-linguistic usages of clicks” (Gil 2005). The sources on the data page of WALS Online make it clear that the data almost all come from personal communications. Now saying whether your language makes use of clicks, and roughly in what sense, is not so difficult. But what if a typologist has more complex research questions, e.g. if they want to know about tense-aspect usage in a language?
Since Dahl’s (1985) ground-breaking comparative study of tense and aspect systems, the power of the translation questionnaire is well known. If you have translations of a set of sentences in a large number of languages, you can say a lot about world-wide variation. A closely related method is to make use of parallel texts, i.e. sets of sentences that were translated for independent reasons (e.g. Stolz 2007 on the use of Harry Potter and Le Petit Prince translations, Wälchli 2009 on New Testament translations).
One disadvantage of these methods is that you don’t easily get answers to highly specific questions, or to questions about rare phenomena. Another disadvantage is that you need to know quite a bit about a language to understand what is going on, so at a minimum the translations should be provided with a morpheme-by-morpheme gloss – and there are no glossed parallel texts.
So comparative linguists sometimes make use of description questionnaires as well, asking their specialist colleagues to provide answers to a set of descriptive questions. This presupposes that the specialists understand the questions and know their language well, but above all it assumes that the specialists have an interest in helping their comparativist colleagues. But why should they be so altruistic? Of course, I’ll be happy to help a close colleague, but the less close the comparativist colleague is, the longer it will take to find the time to fill in the questionnaire (I am speaking from personal experience – not that I didn’t want to help my colleague when I once got such a request, but it never seemed to get sufficient priority to make it to the top of my to-do list). So should one simply post a query on the LINGUIST List or the LINGTYP list? That way one makes sure not to impose on anyone, but the results are usually quite haphazard, featuring mostly the bigger languages that one knows about anyway.
Thus, there are good reasons to try out another method: turn your more distant colleagues into close colleagues (i.e. collaborators) by asking them to join a specialist consortium. Offer them the opportunity to publish the answers to your questionnaire, thus giving them a strong incentive to prioritize it. Of course, if you happen to have a big lab with access to plentiful funding, you can pay them, or indeed employ them (see, e.g., some of the collaborative work at the MPI for Psycholinguistics, e.g. Enfield et al. 2012+). But even with less funding the method works: language specialists are typically very happy to be part of a bigger project, because they want their expert knowledge to be useful to the world at large, i.e. to comparative and general linguistics.
Here at the MPI for Evolutionary Anthropology, we have the privilege of generous funding to invite people to workshops, and we have worked on three fairly big projects involving specialist consortia over the last half-dozen years: the Loanword Typology project, the Atlas of Pidgin and Creole Language Structures (APiCS) project, and the Valency Classes project. With these projects, the difficulty has not been finding first-rate linguists willing to participate, but rather managing the amount of data that comes in. The workshops that we organized to discuss the projects with the consortium members have certainly helped to motivate them, but the approach should work without such workshops as well – or with cheaper workshops organized around a big conference such as SLE or LSA that many people attend anyway.
Originally we called our collaborators simply “project contributors”, but then we realized that what we were doing was not all that different from what large groups of scientists in other fields such as genetics do when they work together on big projects like the ENCODE project. When such projects publish their results, the author is sometimes simply the relevant project consortium.
Of course, as long as there aren’t too many consortium members (say, 18, as in Enfield et al., or 12, as in Hengeveld et al. 2012), they can all be listed as coauthors, but more than 30 authors is probably impractical, and it may be better to adopt the concept of a consortium that can be an author. Or a coauthor: in the forthcoming APiCS chapters, the first author is always the editor who was in charge of the feature and who actually wrote it, but the second author is “the APiCS Consortium” (e.g. Michaelis & the APiCS Consortium 2012+). The APiCS Consortium consists of 88 linguists, whose names will be listed prominently in the Atlas. In the case of the Loanword Typology project, we didn’t do this, but maybe we should have: it would have been easy to list “the Loanword Typology Consortium” as the coauthor of our results chapter (Tadmor 2009).
But how does one publish answers to a questionnaire? Normally we publish papers, and one can of course invite the authors to publish a prose version of their answers; the St. Petersburg typology school has a long tradition of doing this (e.g. Nedjalkov 2007). But questionnaire answers can be used more easily when they are in the form of a database, such as the World Loanword Database, one of the results of the Loanword Typology project. How does one publish such a database? The World Loanword Database is one attempt, WALS Online (http://wals.info/) is another, and a related site is the Electronic World Atlas of Varieties of English (eWAVE). This is a very innovative publication format, and many things are still unclear. But it seems a promising path to me.
Note that another precursor to the consortium-as-author strategy is Levinson, Meira & The Language and Cognition Group (2003). Reportedly, at the time it wasn’t easy to convince the journal Language that a consortium could be a viable author. Also, authors citing this study sometimes have trouble with the correct citation. But the idea behind it was exactly what you outline: the results were the outcome of a large-scale project of the L&C Group, and so the group was the relevant entity.
Levinson, Stephen C., Sérgio Meira, and The Language and Cognition Group. 2003. “‘Natural Concepts’ in the Spatial Topological Domain – Adpositional Meanings in Crosslinguistic Perspective: An Exercise in Semantic Typology.” Language 79 (3): 485–516.