1 The Starting Point: An “Ill” Method of Assessment
Complaints from colleagues about blind-peer-reviewed law journals, in particular about the lack of transparency of the evaluation process and the absence of accountability of the evaluators, were among the first and most common scenarios I encountered when I began my training as a young researcher in a Latin-American law school several years ago. Stories about receiving the final anonymous reviewer's remarks almost a year after submission, consisting of only a few words that could be summarized as "yes, publish" or "no, do not publish"; stories about harsh and disparaging comments, some of which were not even related to the manuscript; and stories about the influence of non-substantive quality criteria on evaluators' decision making all helped me notice a contradiction: the method that aims to select high-quality research articles seems itself to be of poor quality. The question arises: what is causing this? The literature, as well as discussions among the author's colleagues, suggests that there is likely not one but several causes behind what is plaguing the current blind peer review system. Two causes are pointed out frequently: anonymity as an essential characteristic of the method, and the lack of consensus among legal scholars regarding the quality criteria to be applied in the evaluation process. This article focuses on the former cause, because anonymity is at the core of the traditional evaluation method applied by several law journals, especially in Latin America, where it is the standard method of review.1 Moreover, anonymity frequently facilitates the occurrence of what the author calls technical and social issues in the scholarly publishing world, such as those mentioned above.
Consequently, another question arises: what can legal journals do to deal with this particular cause, namely anonymity? Extant research offers an alternative: the so-called Open Peer Review (OPR) method. A vast body of literature discusses how journals in other disciplines (e.g. Science, Technology, Engineering and Mathematics, known as the STEM disciplines) began implementing OPR models, gradually and in most cases successfully, more than ten years ago. However, only a small number of articles focus on law journals, and none of them discuss the potential use of OPR models by Latin-American law journals. This is remarkable for several reasons. First, in Latin America the dominant evaluation method for academic legal publications is double-blind peer review (BPR). Hence, the problems regarding the transparency of the evaluation process and the accountability of reviewers are much more salient there than in other jurisdictions where BPR is not the only (or the common) method in use. In fact, it is a common perception that Latin-American legal scholars are dissatisfied with BPR. Yet no serious debate has been started about the fundamental aspects of BPR, which are the root cause of the problems experienced with this model, let alone a discussion of the OPR method, which offers a solution. This stands in sharp contrast with the ongoing debates in North-American law journals concerning the essential characteristics of student editing and in European law journals about editorial review, the two dominant methods used in those parts of the world to evaluate the substantive quality of academic legal contributions.2
The current article addresses this gap in the literature by attempting to start such a debate. It builds on insights from the STEM disciplines and their experience with OPR as an alternative method to overcome the issues that anonymity seems to entail when evaluating the substantive quality of scientific research.3 It is important to note that this article is directed neither against BPR in legal publishing, nor against peer reviewers, nor against law journals. This work presents and systematically compares the pros and cons of BPR and OPR as identified in the existing literature, mainly from the STEM disciplines. By objectively comparing both assessment methods, this paper explores whether unmasking the identities of authors and evaluators is a suitable way to counteract the drawbacks that BPR has experienced in the legal domain, or whether another alternative method is needed. To achieve this aim, this research follows a "medical diagnosis" structure inspired by the medical field within the STEM disciplines. The key assumption is that the traditional blind method is not unsuitable as such, but that it is ill (i.e. it is a poor-quality evaluation method) due to shortcomings that affect both the transparency of the evaluation process and the accountability of the actors involved in it.4 Thus, Section 2 is devoted to studying the "patient" itself: the BPR method in general terms. Section 3 reviews the "medical record" of BPR, which in our case consists of its different "symptoms": the technical and social issues that this evaluation method has suffered across disciplines for a long time, with anonymity as their possible cause. Here, the principal flaws of BPR are pointed out. In Section 4, the "diagnosis" is presented. This section analyzes the OPR evaluation method as the "antidote" in the "treatment" that BPR has received in the STEM disciplines.
In this sense, the experience of the STEM disciplines in applying OPR to evaluate the substantive quality of scientific publications may inspire the path that blind-peer-reviewed law journals need to take to guarantee transparency in the evaluation process and the accountability of the parties that participate in it, specifically the accountability of the evaluators. Furthermore, the "benefits" as well as the "contraindications" of OPR are discussed in that section. Section 5 concludes and provides recommendations for further research.
2 Knowing the Patient: The “Ill” Gatekeeper of Science
Peer review is one of the most commonly applied systems for evaluating the substantive quality of scientific research. It has been used across disciplines for more than 200 years.5 In its initial stages, peer review was used as a mechanism of censorship and later as a filter system in charge of selecting high-quality research. For this reason, Ross-Hellauer has labelled the peer review method the "gatekeeper of science".6 In the field of legal research, peer review has been one of the preferred methods for assessing the substantive quality of manuscripts because of the limitations of a purely metrics-based system in this area of knowledge, which, as Peruginelli puts it, has an ambiguous identity for being '[…] somewhere between humanities and social sciences, sharing characteristics of both.'7
There are numerous definitions of peer review in the existing body of literature.8 However, a closer examination reveals that almost all of the current definitions converge on some specific aspects.9 For virtually all disciplines, peer review is a quality assurance mechanism that aims to identify and validate high-quality and original research.10 At the same time, the ideal peer review serves as a tool to scrutinize researchers' intellectual work in order to improve the scientific knowledge of a given discipline.11 Accordingly, Ziman states that peer review is one of the ways in which the scientific method tests and verifies the ideas developed in each field.12 In this vein, Maharg and Duncan argue that although disciplines often have '[…] their own specific processes, and with these, journals have their own procedures', the characteristics of peer review mentioned above are the most common across all the areas that apply this assessment method. Legal academia is not an exception to this.13
As far as history is concerned, Kronick argues that peer review emerged with modern science, while for other authors these practices were already in place before then.14 For Weller, however, the relevant feature of peer review is that the system became generalized across the various scientific disciplines after the Second World War, when the codes and written norms governing the requirements for applying peer review were created.15
Nowadays, peer review in scholarly publishing involves two or more referees (also known as peer reviewers or evaluators) who are selected by the publisher, for instance by the editors-in-chief or the editorial board, under its own standards, to provide an objective opinion about potentially publishable research.16 For some scholars in the fields of the Humanities and Social Sciences, the criteria and mechanisms applied by journals to select referees are most of the time a real mystery.17 Consequently, knowing what the peer review practices of scientific journals actually are has become an imperative need.18 Nevertheless, peer reviewers are, by definition, experts in a specific field of knowledge.19 They have, or at least this is what the scientific community hopes, the necessary expertise to evaluate the quality of a contribution in an impartial and disinterested way.20 It is common that, in peer-reviewed journals, referees decide whether a submitted article should be accepted, accepted with revisions or rejected.21 In other words, peer review is oriented towards contributing to scientific research by selecting the best contributions in a specific area and by suggesting improvements to those that need amendments in order to have an impact on the scientific world.22
As van Gestel has pointed out, peer review comes in different shapes and sizes.23 From the beginning, the most common have been those models in which anonymity plays the leading role. This model of peer review is called blind, closed or anonymous because the identity of the author (single-blind) or of both the author and the evaluator (double-blind) is covered by the veil of anonymity. According to Tavares de Matos, the first, labelled "single-blind peer review", is predominant in the Social Sciences, while the second, "double-blind peer review", has been preferred by journals in the field of the Humanities. As mentioned above, in the area of law, double-blind peer review is commonly used in Latin America, where it is a standard quality-assessment method among the most representative law journals,24 but also in the UK, where it is gaining more and more ground.25
3 Symptoms: The Veil of Anonymity as a Possible Cause – The Pros and Cons of BPR
Defenders of the BPR method argue that anonymity is important for the evaluation process from both the author's and the evaluator's perspective because it creates a safe place.26 From the author's side, a blind system provides confidence regarding the impartial treatment that his paper is receiving or is going to receive.27 Likewise, it is believed that masking an author's identity may help to avoid potential publication bias among referees. In particular, reported evidence suggests that the publication biases that can be prevented with anonymity are those against female authors, against authors who work in less prestigious institutions or against authors from non-English speaking regions.28 Furthermore, Godlee and Ford have stated that anonymity is perceived as a protective tool for junior researchers who are taking their first steps in academia.29 Masking the identity of a junior researcher can thus save him from public humiliation because of his lack of experience or, in the case of high-quality work, may also increase the chances that his work will be accepted for publication.30
From a reviewer's perspective, a BPR system has been argued to be the most suitable way to help referees focus only on the strengths and flaws of the submitted manuscript.31 Especially for junior researchers who are reviewing the work of senior scholars, a BPR system may offer more freedom to give honest and constructive criticism without fear of future retaliation or of the end of their careers.32 Similarly, Ross-Hellauer points out that senior researchers serving as evaluators can provide "candid feedback" without concern about reprisals from upset or dissatisfied authors.33
Nevertheless, the current state of BPR has been compared to Churchill's statement on democracy: 'almost fatally flawed but better than any alternative'.34 For this reason, different studies have been undertaken to try to understand the effects of BPR in practice. These studies, from the classic research conducted by Peters and Ceci in 1982 and the classic randomized controlled trial undertaken by van Rooyen and others in 1997, to the survey implemented by Ware in 2007 and his follow-up in 2015, show that BPR is a helpful tool to evaluate and improve the substantive quality of scientific research, but also that changes are needed to achieve these aims.35 Maharg and Duncan summarize the situation in legal publishing by saying that:
‘[P]eer review is necessary, but the process could probably be made more transparent than the current black box of many legal journals. This does not mean that we run the risk of no quality control -in effect, opening up a Pandora’s box of poor quality research upon the legal community. The process of review can be improved if we use procedures and processes that are customised for the job in the way that a tool is customised for a task.’36
The question then arises: what are the symptoms of the current system suggesting that it is not a suitable "tool" for the task of assessing the substantive quality of academic legal research? For the purposes of this article, I have synthesized them into two principal groups, technical and social issues, where the former relate to the transparency of the process and the latter to the person of the referee.
3.1 Technical Issues
There are seven specific issues that are frequently mentioned in the existing body of knowledge about BPR. In no particular order, these are: 1) delays in the different stages of the evaluation process; 2) costs of the evaluation process; 3) lack of incentives for reviewers; 4) poor-quality reviewers' reports; 5) lack of a real dialogue among the actors involved in the evaluation process; 6) lack of reviewers' expertise to evaluate the manuscript; and 7) authors' identification. Each of them is briefly discussed below.
1) Delays in the different stages of the evaluation process. In most cases, the use of the BPR model by journals generates long periods between the three principal stages of the process, namely submission, review and publication.37 In some journals, including Latin-American law journals, the period from submission of a manuscript to its publication can exceed one year, as mentioned before.38 Sumner and Buckingham have argued that this is because of an "intellectual meal" that is led by a small number of reviewers. As a consequence, a long period can elapse before there is an "acceptance", a "rejection" or an "acceptance after revisions". Ross-Hellauer has therefore aptly stated that '[…] this delay slows down the availability of results for further research and professional exploitation.'39
2) Costs of the evaluation process. The cost of the evaluation process has two faces: on the one hand, the cost seen purely as a pecuniary loss and, on the other, the time that evaluators spend reviewing manuscripts and writing reports. An example of the former can be found in the study conducted by the Research Information Network (RIN) in the UK. The study shows that the global cost of reviewers was 1.9 billion pounds in 2008; however, other costs, such as the time that publishers and authors spend checking, modifying and resubmitting works, are not included in that calculation.40 For Ross-Hellauer, the amount can be even higher considering that a manuscript may be submitted and resubmitted several times until it is accepted.41 This is not to mention the costs that a scientific community has to pay when, for instance, an innovative work or a relevant contribution is held up for a long time.
3) Lack of incentives for reviewers. It is commonly argued that human beings are moved by incentives that can be material, immaterial or both. In scholarly publishing, a material incentive may be an economic reward, and an immaterial one an increase in reputation. However, in most blind-peer-reviewed journals the work of referees is almost exclusively unpaid, and the anonymity of the process means that their contributions are not recognized by the scientific community.42 It is because of this ad honorem character that Khan argues that a large sector of scholars perceives the work of reviewing more as a burden than as a gift.43 In the case of Latin-American law journals, it is common for the name of the evaluator to be published in a general list of referees on the website of the journal. At the same time, certifications to include in the evaluator's academic Curriculum Vitae can be provided on request. But are these real incentives for investing time and effort in evaluating an academic legal manuscript?
4) Poor-quality reviewers' reports. A lack of incentives, in turn, may lead to another issue. Boldt states that the principal reason why reviewers' reports are of poor quality is the lack of incentives: according to him, it generates a lack of motivation to spend long hours reviewing, validating knowledge and writing high-quality, in-depth reports that contribute to the improvement of the author's research.44 In a cost-benefit analysis, then, anonymity does not seem to be a good deal. As a consequence, very vague or run-of-the-mill comments, sometimes not even related to the research under assessment, as well as plain "yes" (i.e. acceptance for publication) or "no" (i.e. rejection for publication) reports, become the order of the day.
5) Lack of a real dialogue among the actors involved in the evaluation process. To achieve its aims, the peer review method needs to generate a dialogue between authors and evaluators. This relationship between an author and an evaluator, which one might call horizontal, enables the improvement of a specific piece of research and, with it, the scientific knowledge of a discipline. For the defenders of an open method, anonymity turns this horizontal relationship into a vertical one that merely enables an imaginary debate between distant parties rather than a dialogue in which evaluator and author engage and work together towards a high-quality publication. Additionally, the anonymous method requires the intervention of a third person, in most cases the editor. This triangular communication also makes the review process last longer and means that '[…] questions go unanswered, confusions go unclarified, criticism go undefended.'45
6) Lack of reviewers' expertise to evaluate the manuscript. The question of how referees are selected is tough to answer, in particular because it is not common to find any reference to, or explanation of, the process or the criteria applied to select a referee in the publishing policies and practices of blind-peer-reviewed law journals. Travis and Collins thus hold that in most cases editors have the power to shape reviews by selecting reviewers according to their own preferences, given the reviewers' affiliation with specific theories and methods.46 The masked identity of a peer in the anonymous model may thus allow referees who do not have the required expertise to assess the substantive quality of a publication, which is patently unfair.47 With this in mind, Sekhar and Aery have considered that the problem is that the author, not knowing who the reviewer is, cannot know either whether he is a real expert who can understand what the author is talking about.48 The same scholars have provided a graphic example to describe the situation of a lack of referees' expertise, saying that the author can '[…] just believe that the journal and its editor knew the expertise of the peer, the same way one believes the knowledge of a priest about God.'49
7) Authors' identification. The experiment conducted by Godlee, Gale and Martyn in 1998 shows that some factors, such as close disciplinary communities and reviewers' technological skills, allow a reviewer to identify the author of a manuscript. Therefore, the anonymity of the parties (i.e. authors and evaluators) may only be partially effective in the blind version of peer review. Anonymity may not be truly effective, principally, in places such as Latin America, where legal communities are usually small. Likewise, a double-blind model may not be successful when the reviewer, through the use of technological developments such as the Internet, can lift the veil of anonymity on the author's side.
3.2 Social Issues
‘Human beings have feet of mud’, as a common Colombian saying goes. Despite their goodwill, all human beings make mistakes. Reviewers are no exception to the rule. Nevertheless, this is no justification for abuses in which non-substantive criteria (e.g. reviewers' power or bias) influence the decision of whether a manuscript can be published or not. As several experiments have shown and as academia has reported, the anonymity of reviewers in the BPR method has for years caused social issues across disciplines, such as the presence of bias and empowerment in the decision-making publication process. Biases and empowerment are social issues because they have an impact not only on the quality of the review but also on society, as evaluators are actors within a social, political, economic and institutional environment. Both issues, 1) reviewers' biases and 2) reviewers' empowerment in the blind method, are presented as follows.
1) Reviewers' biases. From a theoretical point of view, BPR as a method of assessing substantive quality requires an objective referee. However, authors such as Groves, Boldt, and Maharg and Duncan point out that evaluators are not always unbiased when evaluating research, especially when the manuscript under assessment affects their own research.50 Moreover, Epstein argues that some evaluators also have a greater tendency to reject submissions that affect the research of their friends, what the author calls "peer cartels".51 In this context, BPR may facilitate this type of behavior, mainly because, no matter what the reviewer does, his reputation, which may be the most valuable asset at stake when talking about accountability in the evaluation process, is well protected under anonymity.
The most common biases that may influence evaluators' decision-making concern gender,52 nationality,53 institutional affiliation,54 authors' reputation55 and even authors' moral virtues.56 Another bias often mentioned in the literature is language: some referees prefer a sophisticated vocabulary over a plain one and use this preference as an evaluation criterion in their review.57 The same applies to methodology.58 For instance, Travis and Collins hold that a complex methodology is favored by reviewers over a basic one, even if the former is inappropriate.59 In this sense, the systematic study carried out by Ross-Hellauer, published in 2017, shows that across disciplines evaluators often dismiss innovative methods, approaches or results that run contrary to those predominant in the respective discipline.60 Furthermore, the same systematic study reveals that positive results are more easily accepted by reviewers than negative or neutral results.61 Additionally, the classic research conducted by Kerr, Tolliver and Petree in 1977, and later by Campanario in 1998, shows that a large proportion of referees are against replication studies.62
2) Reviewers' empowerment. BPR may function as a black box by enabling many evaluators and editors to retain their power, creating a vertical relationship out of one that, as the name of the method itself suggests ("peer" review), was originally meant to be horizontal. This hierarchical relationship, which emerged in the STEM disciplines decades ago and is not alien to the academic legal publishing world either, has produced several problems, in particular problems related to ethics and justice. Boldt and Pöschl identify two phenomena as the main consequences of the reviewers' empowerment conferred by the anonymity of the blind method: the phenomenon of abuse and the phenomenon of unprofessional referees' comments.63 The first phenomenon has led authors such as Smith to denounce practices of stealing ideas from an author's submission and afterwards passing them off as one's own.64 Ross-Hellauer further mentions that the empowerment of anonymous referees in the current BPR method allows '[…] intentional blocking or delaying publication of competitor's ideas through harsh reviews.'65 Ford and Gould argue that elitism is another consequence of this phenomenon.66 In this vein, Ford summarizes the thinking of Roh, stating that a '[…] blinded review continues a cycle of exclusion, retaining the social power of scholarly publishing in the hands of the majority voice, which is often white and male.'67 The second phenomenon (i.e. unprofessional referees' comments) threatens the objectivity, reliability and consistency of BPR. Various studies, for instance, have shown the weak levels of agreement found among reviewers when their reports are compared.68
4 The “Diagnosis” and the Suggested “Treatment”: Unmasking the Identities of the Peers
Transparency of the peer review process and accountability of referees are two essential aspects that BPR seemingly lacks, with anonymity identified as the cause. Therefore, OPR has emerged in the STEM disciplines as the treatment that BPR requires. The concept of OPR is not new. According to Ross-Hellauer, the first scholar to use the expression "open peer review" was Armstrong in 1982.69 However, the concept did not enter the scholarly debate until the early 1990s, and only from the 2000s onwards did it start being discussed more often.70 Ross-Hellauer has associated this significant upswing in the discussion of OPR with the rise of academia's openness agenda.71 There are different definitions of OPR. Nevertheless, the above-mentioned systematic study conducted by Ross-Hellauer shows that, among the variety of meanings of OPR found in the literature, open identities peer review is the most common understanding of the concept.72 The definition proposed by Ford in 2013 illustrates this: Ford defines OPR '[…] as any scholarly review mechanism providing disclosure of author and referee identities to one another at any point during the peer review or publication process.'73 Thus, OPR is also known as "unmasked review", "signed peer review" or "unblinded review".74
Some well-known, high-quality journals in the STEM disciplines have, in most cases successfully, experimented with OPR trials.75 Groves, the editor of the British Medical Journal (BMJ) in 2010, stated that the BMJ adopted the OPR model because reported evidence had shown that there were few benefits in using BPR.76 At the same time, the editor described the traditional model as an "unfair" and "Kafkaesque" system that limits the transparency of the assessment process.77
4.1 The Technical Benefits of the Open “Antidote”
In order to facilitate the comparison with BPR, the benefits that OPR has shown in its application in the STEM disciplines are presented in the same order in which the drawbacks of the blind model were described in Section 3.
1) A faster evaluation process. According to Prug, McCormack, Pöschl, Cope and Kalantis, OPR shortens the time between article submission and publication because it enables more direct interaction among the actors involved in the evaluation process, promoting the exchange of different opinions, interpretations and experiences.78 In this vein, the efficiency and equality of the system may improve and, with them, the credibility of journals as well.79
2) Lower costs of the evaluation process. For the defenders of an unmasked method, OPR contributes not only by shortening the stages of the publication process but also by reducing its pecuniary costs in comparison with the traditional BPR. The reason is that OPR generally relies on technologies, such as online platforms, that facilitate contact and interaction between authors, evaluators and editors, and in some cases with the scientific community of a specific field as well. Moreover, some authors have suggested that non-anonymity in this alternative method of quality evaluation can also operate by disclosing the reviewers' reports online, for instance on the website of the journal or in a virtual repository. As a result, the time that reviewers spend writing their reports may be compensated by recognition within the scientific community for the exhaustive and devoted service rendered to a specific area of knowledge.80
3) Short- and long-term incentives for reviewers. OPR may be seen as a suitable tool to deal with the lack of reviewers' incentives. For authors like Horn, anonymity is inconvenient for most evaluators because their work in blind traditional systems cannot be taken into account in metrics and other kinds of assessment methods used to obtain a promotion.81 In contrast, in an open system referees may be motivated to write good-quality reports because they would receive compensation for their work in the short or long term. This compensation, as suggested in the previous point, is not restricted to economic remuneration but extends to non-economic rewards such as an increase in reputation.
4) Enriching, high-quality reviewers' reports. According to Ford, Boldt and Prug, professional reviews could improve the quality of scientific research because they contribute to promoting constructive criticism.82 In the same spirit, Groves argues that through OPR it is possible to avoid arbitrary judgments for publication, such as plain "yes" or "no" verdicts, thereby also enhancing scientific research.83
5) A potential real dialogue among the actors involved in the evaluation process. Once the author and the evaluators know each other, the horizontal relationship at which peer review, as a method of assessing substantive quality, aims can be achieved. Or at least there are more possibilities of establishing a real relationship between "peers", in which the exchange of ideas may represent a true enrichment for a given discipline, for the scientific community and, in consequence, for society.
6) Referees with enough expertise to evaluate the manuscript. Knowing the identity of the evaluators also allows authors to scrutinize whether or not the referees have enough expertise to provide constructive criticism. It has thus been argued that authors' confidence may increase with regard to the knowledge, and in some cases also the professional behavior, of the evaluator in a specific scientific domain.
7) Authors identified. The first of the issues that is evidently solved by the use of the OPR method is the identification of authors in small communities. As mentioned in Section 3.1, the use of BPR in small scientific communities is criticized for being only partially effective because the evaluator can usually recognize the author of a manuscript through the references and writing style. In an open system, Lipworth and others argue, it is not necessary to play the game of "not knowing each other". Instead, authors and reviewers may establish an honest academic dialogue like true peers.84
4.2 The Social Benefits of the Open “Antidote”
OPR is seen by its defenders as an effective mechanism to overcome the abovementioned social issues, as follows.
1) Reviewers' biases are more visible. The scholars who promote the OPR method argue that it makes reviewers' biases more visible and subject to further direct scrutiny.85 In this sense, in an open system it is more difficult for reviewers to reject a piece of academic work because its author, for example, is a woman or a junior researcher, works at a lesser-known university or comes from a non-English speaking country. Thus, with OPR, referees' accountability may be ensured because their names and, in consequence, their reputations are at stake.
2) Less room for reviewers’ empowerment. ‘With great power comes great responsibility’ is a phrase attributed to illustrious personalities like Voltaire, Churchill, Roosevelt and Lord Melbourne. The expression might be applied to OPR as well. The power conferred by journals on evaluators in the blind method has been denounced as the cause of abuses and a lack of professionalism, both during the evaluation process and at the stage in which its results are reported. This is why Sekhar and Naresh consider that OPR may make reviewers more responsible because -again- their credibility and reputation are at stake.86 In Ford’s view, this could ameliorate both reviewers’ abuses and the hierarchical relationship between author and evaluator.87 In the same way, OPR may replace elitism by establishing -as Ford has said- ‘an open and inclusive discourse of ideas within a community’.88 Therefore, OPR ‘[…] strengthens efforts to create a socially just culture of publishing’, thus helping academia develop into a diverse and inclusive place.89
4.3 The Potential “Contraindications” of the “Treatment”
Although up to this point everything seems to be in favor of an open system, OPR is not a magical solution to all of the “symptoms” suffered by the traditional blind assessment method. It has its own limitations; the six most relevant ones are discussed below.
1) Unclear advantages with regard to the time spent in the evaluation process. Several studies have addressed the advantages that the OPR process may provide concerning the time spent.90 However, the results are mixed. In the well-known randomized controlled trial conducted by van Rooyen in 1999, the author observed that OPR did not have any ‘[…] adverse effect on the quality of the review, the recommendation regarding publication, or the time taken to review.’91 This essential aspect still needs to be tested today, when technology plays a fundamental role in the communication of knowledge. In other words, from a theoretical perspective, OPR may contribute to a shorter quality-evaluation process; from a practical perspective, however, neither the contribution of OPR in this regard nor the variation that its use may present among disciplines is very clear. Thus, empirical research is needed in this arena.
2) The potential refusal of the reviewing task. Horn and Pöschl note that in an ideal world OPR could be very advantageous. In some circumstances, however, these advantages are minimal, such as when a referee does not want to risk appearing ignorant or disrespectful before the author or the scientific community.92 For that reason, both scholars advocate “optional anonymity”, which entails giving authors and evaluators the option to decide whether or not they want to disclose their identities to each other. This interesting proposal deserves more attention in the academic legal publishing forum, as does the design and implementation of OPR trials based on the (un)successful experiences of scientific journals in STEM disciplines. In 2006, for example, Nature conducted an OPR trial to explore the preferences of authors and evaluators regarding the implementation of the open identities and open report models. The results showed that, at the time, OPR was not widely popular either among authors or among the scientists invited to comment: of the 64 people contacted, 27 responded, of whom 11 expressed a preference for OPR.93 Four years after Nature’s attempt to implement an OPR system, van Rooyen, Delamothe and Evans conducted a new empirical study.94 The researchers found that open reports correlated with the refusal rates among potential reviewers, and that the correlation extended to the time referees spent writing their reports. The same study, however, revealed no effect on the quality of the reviews.95 Nonetheless, Ross-Hellauer points out that these studies derive from only one disciplinary area, namely medicine, and that ‘[…] the results cannot be taken as representative and hence, further research is undoubtedly required.’96
3) Poor-quality or fake reviewers’ reports. Van Rooyen, Godlee, Evans and others have also argued that with OPR, reviewers may ‘[…] blunt their opinions to fear of causing offence.’97 Along these lines, Khan states that OPR has a double effect on evaluators: on the one hand, the open system keeps reviewers from accepting the task -as outlined above- and, on the other, it leads those who do accept it to modify their reports. According to Khan, OPR clearly produces an inferior outcome in both cases. Nevertheless, the results of an experiment carried out in the special issue of the Shakespeare Quarterly in 2010 show that the inhibition of frankness that an OPR model can generate in evaluators’ reports was smaller in practice than expected.98 Similarly, Groves states that The BMJ has used OPR for more than a decade and that its successful experience ‘[…] has shown that the critics that argue that with this model referees do not say [anything] much, are wrong.’99
4) The identification of junior researchers. This article has mentioned that anonymity can also be a protective tool for junior researchers, saving them from public humiliation over their lack of experience or, conversely, in the case of excellent research, increasing the chances that their work will be accepted. An OPR method, in which junior researchers are identified, may therefore expose them to uncomfortable situations or make their research an object of disparagement.100 Against this, authors like Sekhar have argued that for young researchers an OPR system is an excellent -although sometimes painful- opportunity to build their confidence.101
5) The reviewers’ biases remain. Khan argues that biases are not eradicated with OPR, perhaps because the reviewers remain the same human beings. For him, it is part of the editor’s work to suspect when non-substantive factors are influencing referees’ decision-making. In such cases, editors can ask for additional reviews and, if that is not feasible, blind-peer-reviewed journals can develop a robust system of appeal.102 Although interesting, Khan’s proposal runs the risk of generating further delay in the peer review process, thereby undermining the shorter timelines that defenders of OPR have pointed out as one of its advantages.103
6) Empowerment and elitism are conceivably stronger. Van Gestel argues that in legal scholarship an OPR model may affect the evaluation of a contribution due to the so-called “halo-effect”, in which ‘[…] researchers with a good reputation are scrutinized in a less critical way than less well-known scholars.’104 Khan has also argued that OPR ‘[…] provides more scope for power relationships to favor the great and the good’ because it facilitates reciprocity, which is -according to Robert Cialdini- one of the most powerful forces influencing human behavior.105 Following Cialdini, Khan states that in the open method reviewers are more inclined to act according to the principle ‘give what you want to receive’. For Khan, OPR’s assumption that reciprocity plays no role constitutes its main flaw.
5 Conclusion and Recommendations for Further Research
By studying the literature on traditional BPR and OPR in both legal and STEM disciplines, I was able to build a general picture of the current state of the two methods of quality assessment in academic journals. At the same time, the comparison between the two methods of assessing the substantive quality of scientific publications shows that the technical and social issues experienced in legal publishing are common across the scientific disciplines. Hence, the solutions that have emerged in STEM disciplines -although not perfect- may inspire potential solutions in the legal domain. In this vein, from the comparison presented in this article, one can say that unmasking the identities of authors and evaluators may be a suitable tool to counteract only some of the flaws of BPR in academic legal publishing, such as the identification of authors within small communities (or through their writing style and the sources consulted), the lack of a real dialogue among the actors involved in the evaluation process, and the lack of reviewers’ expertise to evaluate a specific manuscript. However, OPR is not a magical “antidote” that can radically cure the “ill” method.
The experience of STEM disciplines with OPR reveals that some of the technical and social issues of BPR seem to persist in the open model (e.g. dubious-quality reviewers’ reports and reviewers’ biases). Concurrently, some studies have shown that certain drawbacks of BPR can have conceivably even stronger effects (e.g. reviewers’ empowerment and publishing elitism) or opposite effects (e.g. fake reviewers’ reports) when using OPR. Furthermore, other problems still require more empirical evidence in each particular discipline (e.g. the time invested in the evaluation process and its respective costs).
Further research is imperative, and Latin-American law journals may be a suitable case study for the reasons pointed out throughout this paper, chiefly that the most representative Latin-American law journals use BPR. In light of this, the most important topics that legal scholars should address include: 1) a mixed quality-evaluation method and “optional anonymity”; 2) the mechanisms and criteria followed by (Latin-American) blind-peer-reviewed law journals for selecting reviewers; 3) reviewers’ potential incentives to guarantee a high-quality evaluation and to provide high-quality feedback; 4) training programs for reviewers and the development of an ethical code for the actors involved in the evaluation process; 5) the use of new technologies to facilitate the assessment of manuscripts and the detection of fraud; and 6) the implementation of an appeal action against the decision provided by the reviewer.
A mixed quality-evaluation method that balances the pros and cons of both BPR and OPR could be the fairest alternative to guarantee the transparency of the assessment process and the accountability of reviewers in currently blind-peer-reviewed law journals. It may also be relevant to consider the “optional anonymity” proposed by Horn and Pöschl in the design of such a mixed method. In my view, providing authors and reviewers with the choice of unmasking their identities could have positive effects. However, the design and implementation of trials by law journals may provide better insights into this arena.
The mechanisms and criteria followed to select reviewers are also key determinants of transparency and accountability in the evaluation of a publication. It is imperative to determine the rationale behind the lack of (clear) information available to authors regarding the process that blind-peer-reviewed law journals follow to select peers. Making public -or at least available to authors- the selection mechanisms, the criteria applied, and the list of reviewers for each area of law in which the journal specializes could contribute to building a fairer evaluation method. In particular, this would make it more difficult for editors to shape the quality evaluation of a manuscript by selecting evaluators according to their own preferences or their affiliation with specific theories or methods, one of the drawbacks of BPR indicated by Travis and Collins.
The incentives for reviewers, as pointed out in this article, play a decisive role in determining the most suitable method for assessing an academic paper. In the current blind method, the work of the reviewer is anonymous, which makes it more of a burden due to the lack of reputational incentives. How, then, can the current lack of motivation of reviewers be counteracted, so that they are willing to spend long hours reviewing, validating knowledge and writing high-quality, in-depth reports that contribute to the improvement of the author’s research and, with it, of the scientific community? Alternative ways of recognizing the time and effort required to write a high-quality review have to be explored, such as a virtual repository of reviewers’ reports. But is the idea of a virtual repository attractive enough for reviewers? Moreover, what are the advantages and disadvantages of disclosing reviewers’ reports?
Training as an evaluator is another aspect to consider in future research. The expertise to review an intellectual work is often associated with expertise in a specific field of knowledge. In practice, however, it becomes apparent that having experience in a scientific area does not make one a good reviewer. Although academic journals offer guidelines to reviewers on how to evaluate a manuscript, and all manner of tips on how to be a high-quality reviewer can be found online, in reality reviewers are not trained for this role. Only trial and error, and the time elapsed between one attempt and another, shape their capability as evaluators. Yet in that trial-and-error exercise, unethical and unfair situations can take place. How, therefore, should journals deal in practice with problems of ethics and justice manifested in the two phenomena denounced by Boldt and Pöschl, namely the “phenomenon of abuse” and the “phenomenon of unprofessional reviewers’ remarks”?
The use of technology, such as virtual platforms and forums, can make the evaluation process shorter and more direct. In this way, a dialogue among peers may be established and, with it, an enriching experience of teamwork. Additionally, involving the scientific community in an open evaluation through the Internet may help detect fraud quickly. This may be an exciting new research arena, as it opens the possibility of finding new ways to protect the gates of legal scholarly publishing from erroneous or unreliable research that the current blind method seems unable to keep out.
Finally, what is to be done when there are suspicions that the reviewer has followed non-substantive criteria in his or her decision-making? The proposal of an appeal action sounds attractive. However, empirical data are needed to analyze whether or not the “remedy” is more expensive than the “illness” and, if so, what alternative solutions could be provided.
In sum, it is urgent that legal scholars -in particular Latin-American ones- begin exploring new ways to assure the transparency of the process of evaluating academic legal publications and the accountability of reviewers. In this way, the tool will be customized for the task; the scientific community can flourish and, hopefully, the quality of academic legal research can be enhanced.