Let’s invest more time in research, and less time in reviewing

Over the last three decades, the amount of time linguists spend on reviewing seems to have increased significantly. Reviews of journal papers seem to be getting longer, we spend more time on grant reviewing, and most strikingly, we spend much more energy on abstract reviewing. Maybe this increase in reviewing is a good thing and I’m just nostalgic for the old days, but I feel that there’s too little discussion of this development. Here I will argue that less reviewing would be better for science, and maybe others will react and make the case for increased reviewing (or argue that things are fine and should stay the way they are currently).

Abstract reviewing

In the 1990s, at least in my subcommunities, conference abstracts would be graded by reviewers, and the abstracts with the best average grades would be accepted. There was no justification, and you weren’t told what your grades were. Nowadays, reviewers are expected to write comments on abstracts, and it seems to be fairly common to receive lengthy comments, sometimes half as long as the abstract itself. The old practice seemed arbitrary to many, and the new practice looks fairer at first blush: if you are rejected, at least you know what the reviewers didn’t like about your abstract.

But I find the old practice much better, for several reasons:

(1) Abstracts are much too short to really reveal the quality of a contribution, so by reviewing them extensively, one attributes too much significance to them (and one invites the strange practice of teaching students to write good abstracts, rather than teaching them to do good research).

(2) Since reviewers are now asked to do a lot more work on each abstract, each of them is assigned far fewer abstracts, so their judgements cannot really take the average quality of the submissions into account. As a result, judgements are more haphazard than in the past, and every abstract is read by fewer people.

(3) The negative comments that we get from critical reviewers are almost never constructive in the sense that they would help us improve our research; they primarily serve to remind us how stupid our colleagues are, or how misguided their views are. This creates bad feelings, not only if our abstract is rejected, but even if it is not. My last SLE abstract had two very good grades and one very bad grade (“clearly reject”) – which shows that SLE abstract readers are a diverse bunch of people, and if there had accidentally been two reviewers of the latter sort, I might have been terribly offended.

Trying to create greater abstract-acceptance justice is a noble goal, but since abstracts are a mere shadow of the research that we are doing, it seems much better to focus on justice in other areas. If it turns out that justice is very hard to achieve in these other areas (see below for grant reviewing), then maybe we have to live with the fact that getting an abstract accepted is a bit like a lottery. After all, the next conference is just around the corner, and nobody is always unlucky.

Another development is that for each abstract, you have to say whether you are an expert in the research area, and some conferences even have a process of “abstract bidding”, which distributes the abstracts in a complicated way. The assumption that experts in the area will judge an abstract more fairly seems reasonable, but shouldn’t a good abstract speak equally to every potential conference participant? In my perception, these additional complications have been introduced without any evidence that they lead to improvements, and I would challenge their advocates to show that they lead to better results (e.g. greater conference satisfaction among participants, including unsuccessful submitters). Personally, I find that one of the less satisfactory aspects of conference talks is sometimes the extreme specialization, which makes it almost impossible to follow a talk if you don’t belong to a very narrow subcommunity. It seems to me that at least the abstracts should be written in such a way that they are comprehensible to a wide range of readers (including at least all conference participants).

Paper reviewing

Peer review of journal papers has been prevalent only since the 1960s (I think), and in linguistics, the systematic practice is probably much more recent. While many people have voiced the concern that peer review suppresses innovative science by making scholars prefer more conventional approaches, the practice is rarely questioned in linguistics.

On the contrary, it is my impression that people expect to invest more and more energy into peer review. Revise-and-resubmit is increasingly becoming the default decision by editors, in my perception, and some editors even tell the author that their paper needs several rounds of revision. Reviewers are routinely asked to look at a revised version of a paper, even if they recommended rejection or straightforward acceptance of the paper – this can be seen in “Commandment No 5” in this picture, posted on Facebook by Johan Rooryck:

https://www.facebook.com/photo.php?fbid=1537724949689077&set=a.116241035170816.12150.100003547610207&type=3&theater

It’s certainly nice that our colleagues invest so much into helping us improve our work, but many linguists complain about the process, something that has become much more visible to me because of Facebook. So clearly, there is a lot of suffering – but is it worth it?

One big problem is that peer review is typically associated with wielding power. If a review is critical, the paper will get rejected, at least for the current journal, or the author can be forced to make changes in the direction desired by the reviewers – which may not be changes that the author herself finds useful. Publication in a prestigious journal is often considered a prerequisite for career advancement, even though it is well known that it is not good scientific practice to rely on journal impact for assessment of research quality (as is codified in the San Francisco Declaration on Research Assessment). Thus, editors and reviewers wield power, and it is often more advantageous to follow their suggestions, even if they are not reasonable.

But this is bad for science. Sifting out the truth from all the ideas and findings that are floating around is difficult, and there are already too many incentives to follow a fashion rather than a more promising path. Revise-and-resubmit discourages unconventional ideas and stifles innovation. Moreover, it uses up a lot of energy – the author’s, the reviewers’, and also the editor’s. The resulting papers are often watered down so much that the author’s original intention is hard to recognize anymore. (And sometimes the process makes paper titles more boring: for example, I have heard that a paper originally entitled “Asymmetric DOM in coordination and why this is fatal for movement-based approaches” was accepted only after the title had been watered down to “…why this is a challenge for movement-based approaches”.)

Moreover, the process of paper reviewing is frustratingly slow – it seems that it is still not unusual in linguistics to wait more than half a year for a first decision. And there is almost no discussion of alternatives. Some editors even defend the lengthy process, because they feel that it is their job not only to select the best papers from among the submissions, but also to improve the papers, i.e. to serve as senior supervisors for the authors. I find this a very questionable attitude, and maybe these journals should be called “supervisor-reviewed journals”, rather than “peer-reviewed journals”.

Here’s a radical alternative: Reviews are short (maximally 2 pages) and can limit themselves to evaluating the paper’s quality plus a few suggestions. They are due after three weeks. If the suggestions to the author exceed two pages, the paper is rejected. If not enough reviews come in after a month, the paper is rejected as well. Each month, the best papers are accepted and the author is given another month to revise the paper, taking the comments into account. An accepted paper is published four months after submission. (If someone gives me funding, I will start a journal called Linguistics Monthly that uses these principles.) This will save a lot of work and lead to much less frustration (here is an earlier statement of the same ideas).
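To make these rules concrete, here is a minimal sketch of the decision logic in Python. Only the thresholds (two pages, three weeks, one month) come from the paragraph above; the review quorum and all names are my own hypothetical choices, since the post does not fix them.

```python
from dataclasses import dataclass

# Thresholds from the proposal above; MIN_REVIEWS is an assumption,
# since the post does not say how many reviews count as "enough".
MAX_REVIEW_PAGES = 2       # reviews are maximally 2 pages (due after three weeks)
QUORUM_DEADLINE_DAYS = 30  # too few reviews after a month -> reject
MIN_REVIEWS = 2            # hypothetical quorum

@dataclass
class Review:
    pages: float                # length of the review
    days_after_submission: int  # when the review arrived

def decide(reviews):
    """Apply the proposed rules to the reviews of one submission."""
    arrived = [r for r in reviews if r.days_after_submission <= QUORUM_DEADLINE_DAYS]
    if len(arrived) < MIN_REVIEWS:
        return "reject (not enough reviews came in after a month)"
    if any(r.pages > MAX_REVIEW_PAGES for r in arrived):
        return "reject (suggestions to the author exceed two pages)"
    return "rank among this month's candidates for acceptance"

# Example: two short, punctual reviews pass both filters.
print(decide([Review(pages=1.5, days_after_submission=20),
              Review(pages=1.0, days_after_submission=28)]))
```

How the “best papers” are ranked each month is left open here, just as in the proposal.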

Grant reviewing

I will say less about grant reviewing here, because this is a matter that concerns primarily the funders, i.e. rich organizations with their own agendas, in contrast to abstract reviewing and journal reviewing where I feel that we poor ordinary linguists can easily make a difference.

But the problems of expert reviewing in grant selection are by now well known. In a recent high-profile essay, psychologist Dorothy Bishop makes the case for a funding lottery. She notes that there are implicit human biases that make the reviewing system inefficient and unfair, and she suggests, in all seriousness, that everyone who submits a serious (methodologically sound) proposal should have an equal chance. This would also put an end to the increasingly widespread practice of paying academics more if they get more outside funding. I am not advocating this here, but the article shows that there are serious problems with grant reviewing as well.

Thus, it seems to me that there are overall very good reasons to spend less time on reviewing and more time on research.


3 thoughts on “Let’s invest more time in research, and less time in reviewing”

  1. I like all of these ideas, except the plan to have papers rejected when there are not enough reviews. I read your 2014 arguments for this, but I do not find them convincing: it is still too harsh to make authors resubmit. My suggestion would be different: the editors would publish a list of all papers received on a webpage, plus the date when they arrived and some indication of where they are in the process. This would be enough of an indication for authors to see whether they like the speed.

    I also would like it if the decision were always simply ‘accept’ or ‘reject’. When a paper is accepted, the editor can still work with the author on improving certain aspects of it.

    • Yes, more transparency would be very good in any event. Many years ago, I wrote to the editor of “Language” to ask that they publish not only the dates of first submission and acceptance, but also the date of the first decision – so “Language” is now unusually transparent in this respect.

  2. When I discussed similar issues on Facebook a while ago, someone made the following comment, with which I wholeheartedly agree (I don’t remember who it was): “Reviews are too long in our field across the board. Both for papers and for abstracts. I would much prefer to have 10 abstracts to review, where all I have to do is give a grade out of 5, and write 5 lines in justification, than have 4 abstracts that I have to write a whole page about. I think that better judgements get made both if (i) a single reviewer gets enough abstracts to get a sense of the spread and (ii) if there are more reviews for a single abstract than 2 or 3. The latter point in particular helps to eliminate reviewer-idiosyncrasy related noise MUCH better than forcing people to spend more time reading and commenting on abstracts.” (an anonymous Facebook comment)
