The abyss of anonymity and the struggle for an open reviewing practice
Published on: 13.11.2017, 12:47
The academic publishing system is broken, and everyone who works in academia knows it. We publish far more than can be read, we postpone deadline after deadline due to chronic overload, and many of us don’t even find the time to read at least part of what is published in our specialized subfield anymore. Then there is the ongoing discussion about the value chains of academic publishing houses, including publication fees, access models, and the accessibility of publicly funded research. A third aspect of this highly dysfunctional practice relates to journal publications and the reviewing system. Peer-reviewed journals are still the gold standard for publications, and many of them use the so-called double-blind procedure for reviewing submissions. This basically means that neither the author nor the reviewer is supposed to know who is behind a text or a review, respectively.
Theoretically speaking, this is a good mechanism that guarantees objectivity and fosters open criticism. In practice, however, it often turns out to be a bad one. First, it tends to be unfair, simply because while authors will never know who reviewed their submission, in many cases it is pretty obvious to reviewers who wrote a text. Many papers are made available on pre-print servers before publication (which means that the names of the authors are public as well), or they tackle problems and projects that can easily be linked to a person, as in my case. I run a citizen-science project called Lingscape, so whenever a text discusses specific aspects of the project, it is fairly easy to identify me as the author. And second, anonymous reviewing often produces contemptuous criticism without any need to take responsibility for it. Let me give you a recent example of what I mean by this.
A review is a review is not a review
Some months ago, I submitted a paper to a special volume dedicated to methodology in linguistic landscapes research. The editors were looking for cutting-edge approaches to the study of visual multilingualism, so I thought a project like Lingscape that introduces a citizen science approach to linguistic landscapes research while making use of mobile app technology for data collection would be a perfect match for such a volume, not only because citizen science brings about a lot of methodological and epistemological changes to our established research routines, but also because the use of a mobile app could be an interesting addition to the field’s toolset. At least that is what I thought…
I wrote the paper along the lines of this blog post in which I have tried to outline the methodological foundations of a sustainable, community-driven linguistic landscapes research. I took the five concepts from the post – participatory research, lifeworld orientation, societal engagement, computational analysis, and open research practice –, refined and deepened the related aspects a bit, and introduced them as the methodological framework for Lingscape as it directly emerges from the empirical work in the project. This may sound a bit unusual at first, but keep in mind that Lingscape is a citizen-science project, which basically means that the participants co-create not only most of what is done in the project but also many aspects of how it is done and how the project’s work is framed theoretically. All of this is part of what Frisch (1990) has called “shared authority”.1 As I said before, citizen science fundamentally challenges the ways we define and pursue academic research and negotiate authority in discourse. And that is one of the things that makes it so exciting, at least for me.
In a second step, I developed an analytical matrix for comparing crowdsourced projects that employ mobile apps for linguistic landscapes research, to lay the groundwork for follow-up studies. Since both aspects – participatory research and mobile app technology – represent novelties in the field’s arsenal of methods, my impression was that such a discussion could be a fruitful contribution to the mentioned special volume.
Some weeks later, I got the (anonymous) reviews for my paper, together with the notification from the editors that it had been rejected due to the negative assessments in the reviews. So far, so normal. Bad reviews are part of the game, and of course many papers get rejected by journals, mine as well. So nothing to complain about in this regard. In fact, both reviews were critical, but they differed vastly in their approach (and maybe motivation):
- The first review was very critical but in a detailed and constructive way. It highlighted several passages in the text, discussing potential problems, proposing alternatives, and recommending changes for the text. Although it is never pleasant to be criticized, especially by an anonymous colleague, I was happy with this review because it a) tackled my submission in a serious way and b) offered a perspective for improvement. People who know me know that I’m very open to criticism and actively ask colleagues and students to help me improve my work. So with this review it would have been easy to refine and improve the text to make it suitable for publication.
- The second review was also very critical, but not in a helpful way. Since they are anonymous, I’m posting some of the reviewer’s comments here so that you can see what I’m talking about:
“However, I do not quite see how the creation of the app and its use by the public is meant to be a “scientific” endeavour in its own right, which the submission seems to argue without giving any specific evidence of how this may be achieved (or has been achieved).”
“The paper purports to outline the “methodological challenges and scientific chances” (sic, “opportunities” intended?). It uses a lot of grand terms such as “citizen science approach”, “knowledge production”, “empowerment”, and so on, but it does not say anything about what these terms actually mean or how the aim of turning citizens into scientists is meant to work.”
“I think the main problem with this no doubt well intentioned initiative is that it does not address the central question of what it means to do (scientific) research. Clearly, it is not just collecting masses of (raw) data. The main aim of research is theory building. And I do not think that citizens can do just that unless they receive long-term, formal training. But then, they are no longer just “citizens”, they are researchers, and the whole idea of switching the roles between scientists and citizens no longer holds.”
“In its Outlook on p. 17, the author states: “The discussion of the research rationale for the Lingscape project…has shown how radically a citizen science approach to linguistic landscape research impacts all phases of a research project.” No, it has not. The discussion has only asserted that without any convincing evidence.”
“The submission demonstrates the author’s IT expertise but it lacks any sociolinguistic sophistication. It’s social mission sounds naïve when it repeatedly mentions meeting the “needs” of citizens, and almost offensive, when it mentions the idea of citizen empowerment – why and how can an app ever empower anyone? Can some of these assumptions ever be reflected on before people assume the authority to claim that they are able to meet anyone’s needs and empower them?”
“Last but not least, the paper is poorly written with seriously flawed word choice and expression.”
Sounds pretty bad, right? Apparently, I’m not only a bad sociolinguist and writer, but also naïve because I think citizen science could be a feasible (and somewhat innovative) scientific approach. And offensive because I think that actively integrating citizens in the co-creation of a research project could be an act of social empowerment.
To be fair here, I saw flaws in the first version of the paper myself. And of course I’m the only one to blame for these weaknesses. But be that as it may, I got the impression that this review was negative on purpose and written by someone who doesn’t see citizen science as a “scientific endeavour in its own right”. For example, the author apparently thinks that citizens cannot contribute to what he/she defines as the “main aim” of science without formal training. Of course I can’t expect everyone to share my enthusiasm for shared authority and the co-creation of a research project together with the public. But to me it is apparent from the text that the author is not willing to even consider the possibility that citizen science could bring innovation to academic research because apparently it has nothing to do with “what it means to do (scientific) research”…
Just to be clear, all the negative things the author has to say about my paper and me as a researcher may be true. I’m not a native speaker of English (although normally I get along pretty well). I have a relatively strong profile in sociolinguistics, but it was not the purpose of this paper to discuss sociolinguistic theories. Instead, I wanted to discuss methodological aspects of mobile app technology and develop a framework for a citizen science approach to highlight its innovation potential for linguistic landscapes research. As a consequence, I focused mainly on things that do not tackle all the interesting sociopragmatic and semiotic aspects related to linguistic landscapes.
What made me angry about this review is not the fact that I’m being criticized; it’s the fact that I’m criticized by someone who clings to a very traditional idea of what research has to be, who feels the urge to get personal in his/her criticism, and who maybe didn’t even take the time to read the paper closely – many points of criticism are in fact discussed extensively in the paper. Take for example the idea of working together with citizens to develop a shared research perspective that fits the specific “needs” of a community regarding certain aspects of their everyday practice. Although the term “needs” might be a bit too strong in this context, it could for instance relate to the Portuguese community in Luxembourg, which makes up 16% of the entire population but is barely present in official signage. Working together with members of the community to discuss ways in which Portuguese could be promoted in official communications in Luxembourg would not only be an exciting study in applied sociolinguistics but could also be seen as an act of empowerment that directly emerges from the citizen-science context of the project. But well, if you don’t see the point of citizen science in the first place, I guess this approach to societal engagement will never match your (scientific) standards.
Open your eyes, open your mind, proud like a god don’t pretend to be blind
So, what happened next? I decided to protest against this review because it was not only inappropriate but also severely lacked the professional attitude one should expect in academic reviewing. The journal editors replied swiftly, and the review was in fact annulled. As a consequence, I was able to submit an improved version of the paper that will be published as part of the mentioned volume soon. So it was good that I stood up for myself, also because actually publishing the paper might be the best possible answer to this type of review. In any case, this incident is a good example of current erroneous trends in academic reviewing, and I’m not talking about the fact that a reviewer tried to ward off citizen science as a threat to the established construction of authority in academia. Such behavior is pretty common across all social groups, as people in general are wary of innovations that may threaten their social status, and it has been a recurring figure of thought in modern social philosophy ever since its beginnings.2
My point here is the way many journals organize their reviewing system, i.e., creating an often imbalanced anonymity between author and reviewer and thereby fostering contemptuous criticism. As I said before, it should be fairly easy for any researcher in the field to find out who is behind the Lingscape project, especially since the community is still small and I have spent quite some time promoting the project at conferences and on social media over the last year. In contrast, I have no means of finding out who criticized my paper in such an unfriendly way. And even if the reviewer did not know who I am when reviewing the paper, my name will be public once it gets published. As we have seen in the example, this reviewing model may lead to some very unpleasant comments, which in the long run could even be harmful to the academic quality of the journals themselves. Naturally, there is also a good side to anonymous review. Reviewers can write critical and sometimes decisive (with regard to funding) reviews without risking being publicly associated with the decision. This can be an important safety mechanism for ensuring quality standards in academic research. However, the often imbalanced anonymity remains a fundamental problem of this system.
How about an alternative then? Of course it’s not easy to change a highly institutionalized (and profitable) system that shapes publishing practice to a considerable extent. But why not try and open up the reviewing process, as we currently see it happening in the open data and open access movements? If we are willing to rethink the ways we make our data sets and publications accessible to the public, why not do so with one of the most important steps in the pre-publication phase? Let us not forget that reviewers are in a position of power: They decide on the scientific merit of papers and proposals. In many cases they contribute substantially to the improvement and overall structure of scientific publications. In a way, they are co-authoring the texts they are reviewing. So why not make their names and contributions public?
One possibility would be to publish the names of the involved reviewers together with the authors’ names when publishing a book or paper. This could be a good way to prevent inappropriate or even offensive criticism because reviewers won’t know in advance whether or not a text gets published. Moreover, it could be seen as a reward for their contribution to the final version of a text. But of course that does not cover cases in which a text doesn’t get published after reviewing. Therefore, another option that takes this into account would be an open-identity approach, where the names of authors and reviewers are known during the process. This could also include direct interaction between authors and reviewers to foster discussions about strengths and weaknesses of the text. Other possible models include public review by the community on dedicated pre-print servers like arXiv or the publication of the reviewers’ comments together with the actual text.3
All of these models try to offer a solution for the problems of the current (anonymous) reviewing system. Personally, I would prefer a reviewing process that is
- open, i.e., both sides know who they are dealing with,
- interactive, i.e., it offers the possibility to discuss critical points directly during the process,
- transparent, i.e., reviews and names are made public together with the text,
- public, i.e., texts are discussed publicly on dedicated pre-print servers, and
- continuous, i.e., texts can be improved and updated based on criticism in a citable version control system.
This is of course a bit unrealistic. However, such a system could help fix some of the most notorious problems of anonymous peer review and contribute to an open and responsible research practice. Of course, there are potential shortcomings to an open review system as well, e.g., in relation to the quality standards for reviews or the attribution of contributions to a text. Plus, in some cases it would be more difficult for reviewers to express fundamental criticism, simply because they would have to take public responsibility for it. However, compared to the current system, and against the background of an unpleasant experience like the one above, these sound like luxury problems to me.
In conclusion, and speaking as an enthusiastic citizen scientist, I would say that maybe we not only need to rethink public engagement and the negotiation of authority in academic research but should also foster the idea of open review in academic publishing. An open reviewing system might have helped to establish an equal footing between me and my cantankerous reviewer for a critical but fruitful discussion about how to improve my flawed paper. But maybe that is only me being naïve again.
ps. This text was read, criticized, and vastly improved by Fabienne Gilbertz and Dirk Hovy. Fabienne helped temper my writing style, and Dirk suggested focusing more on a constructive discussion of open review models instead of poring over the reviews. Thank you both!
Frisch, Michael (1990): A Shared Authority: Essays on the Craft and Meaning of Oral and Public History. Albany: SUNY Press. ↩︎
See for example Honneth, Axel (1994): Der Kampf um Anerkennung. Zur moralischen Grammatik sozialer Konflikte. Frankfurt am Main: stw. ↩︎
See for example the typology of open review models in Ford, Emily (2013): Defining and Characterizing Open Peer Review: A Review of the Literature. Journal of Scholarly Publishing 44(4), 311–326 or Ross-Hellauer, Tony (2016): Defining Open Peer Review: Part Two – Seven Traits of OPR, OpenAIRE Blog. blogs.openaire.eu (Retrieved 2017-11-09). ↩︎
Last updated: 10.04.2021