1 Introduction

Law schools, law journals, and legal publishers have become the dinosaurs of today’s academic world. Despite the forces of globalization, law is still primarily a nationally oriented discipline, heavily intertwined with legal practice and without an explicit scholarly methodology. In terms of publication culture, law deviates from most other (social) sciences. In Europe, there is no lingua franca in legal research, no commonly recognized ranking of law schools, law journals, or legal publishers,1 no uniform system of peer review, no practice of quantitative research evaluation (e.g. impact assessment via bibliometric indicators), and no transnational system of research assessment that enables cross-border benchmarking of law schools, law journals and other legal publication outlets.2

Pressure from governments, funding bodies, and university managers to introduce more harmonized systems of quality management in European legal academia is mounting.3 In many respects, the state of affairs in the humanities is similar to the situation in law. In the humanities, however, there has been far more activity to introduce centralized research evaluation mechanisms: attempts have been made to introduce a ranking of journals,4 to develop quality indicators,5 and to measure the impact of research.6 Although it is debatable whether law as a discipline is drifting away from the arts and humanities and moving closer to the social sciences,7 the increased attention to research evaluation in the humanities may indicate what lies in wait for law in the near future if the scholarly legal community does not take action.

One of the things that can be learned from the debate about quality management in the arts and humanities is that, in attempts to develop proper research evaluation methods, it is essential to take a bottom-up approach and to involve the scholarly community in the process.8 Legal scholars, however, have rarely been surveyed about what the quality of academic legal research entails and how it could be measured or assessed.9 Not only is the involvement of legal scholars in the process a precondition for the acceptance of quality indicators and evaluation methods, but stakeholder participation is also important to discover what the academic forum considers poor, average and excellent research, and to what extent and how this could be measured or weighed.10 This is why we have undertaken national surveys in Switzerland and the Netherlands to investigate how legal scholars feel about research evaluation and quality management.11

Hereafter, we will present a comparative analysis of some of the results. First, we will summarize the aims and methodology of the surveys.12 Next, the most important outcomes will be given with special attention given to similarities and differences. After that, we will reflect on how one might explain the outcomes and which new questions and debates arise. Finally, a research agenda for the future is presented since we believe the debate should be broadened to other European countries. Before going into the aims, methods and limitations of both surveys, though, it might be good to mention that the origins of the projects in Switzerland and the Netherlands are quite different.

2 Origins, Aims, and Methodology

2.1 Origins

One of the main reasons for conducting a study in Switzerland was the entry into force, on January 1, 2015, of the new law on higher education (“Hochschulförderungs- und Koordinationsgesetz”), which contains a chapter on quality management.13 This is why the Rectors’ Conference of the Swiss Universities (CRUS)14 launched a research programme on research evaluation in the humanities and social sciences, legal research included. The law faculties of Bern and Geneva used the occasion to propose a pilot study in the field of law, which was then accepted by CRUS.15 In other words, legislators and university managers in Switzerland initiated the debate on research evaluation in the humanities and social sciences.

In the Netherlands, the law faculties of all universities take part in the periodic national research assessment exercise guided by a Standard Evaluation Protocol (SEP). Like the Research Assessment Exercise in the United Kingdom (UK),16 its Dutch counterpart is based on peer review by specialized panels. There has, however, been ongoing criticism that this system is time-consuming and burdensome in the absence of a recognized ranking of law journals and legal publishers. The College of Law Deans (CLD) has set up several expert committees to come up with proposals for reform, but none of these initiatives received support from the academic community.17 In reaction, the Dutch Jurists Association (NJV) discussed research quality and evaluation during its annual conference in 2015.18 Simultaneously, two law professors undertook a survey to fuel the debate.19

Hence, whereas in Switzerland policy makers and university rectors outsourced to the academic community the task of conducting research on quality management and evaluation methods and of organizing the debate, in the Netherlands the College of Deans has kept this topic to itself instead of leaving it to the academic community.

2.2 Aims

We will briefly outline the aims of both surveys. In Switzerland, the aims of the first phase of the research trajectory (2013–2015) were to: (i) identify relevant methods and procedures for the evaluation of scholarly legal research; (ii) discover quality criteria and indicators that can be used to predict successful legal research; and (iii) find out more about how important stakeholders, e.g. law professors and practicing lawyers, perceive potential evaluation methods and quality indicators. The overall goal was to describe the state of the art of research evaluation, to explore and critically analyze existing practices, and to identify possible challenges.20

In the Netherlands, the survey was meant to counterbalance the past emphasis on expert committees reporting about quality management. The basic idea was to find out how Dutch legal scholars feel about: (i) the direction in which scholarly legal research is moving; (ii) the way in which faculties, law journals, and legal publishers evaluate the quality of scholarly publications; and (iii) the extent to which changes are deemed necessary.21 This idea was spurred by the lively debate about the academic nature and methodological rigor of Dutch legal scholarship.22 Of course, this also raised the question of the extent to which different perspectives on the nature of legal scholarship affect the way we think about the quality of scholarly legal publications.

2.3 Methodology

Both in Switzerland and in the Netherlands, surveys were used to collect relevant data. However, the design of the surveys was not identical. The Swiss survey, for example, primarily asked questions about ‘how things are’, whereas the Dutch survey also asked about ‘how things should be’. In the Swiss survey, only respondents who completed the entire questionnaire were taken into account, while in the Dutch survey, questionnaires that were largely completed were also included in the sample. These differences, however, should not have a significant impact on the key results. In the Netherlands, for example, these results, and the ensuing debate about quality assessment procedures such as rankings, peer review and bibliometrics, were largely confirmed by the general report of the 2016 national research assessment exercise (RAE).23

Moreover, the categories of respondents in the Swiss and Dutch surveys were not completely identical. Unlike the Swiss survey, the Dutch survey did not include practitioners, but it did cover a broader range of legal academics than professors alone (e.g. associate professors, postdocs, PhD candidates), as well as non-legal academics (sometimes working in law schools), such as criminologists, political scientists, and experts in public administration. Another difference was that in Switzerland the electronic questionnaire was accompanied by interviews and expert meetings, which was not the case in the Netherlands.24

For the technical details concerning the methodology of the surveys, we refer to the separate publications.25 Here, we only present the key characteristics and findings of both surveys. Despite their different designs, we are convinced that the surveys lend themselves to a comparative analysis. The aim of both was to gain a better understanding of how legal scholars perceive research quality and which criteria, indicators or standards they would like to see applied in the evaluation of their own work. Besides, a survey primarily needs to be consistent and valid in itself. Hence, we see no reason why the outcomes would not allow for a descriptive comparison. Given the sensitivity of a topic that affects the academic legal community as a whole, misinterpretations in the original surveys that the scholarly community did not recognize would almost certainly have provoked strong protest from critical legal scholars.

2.3.1 Switzerland

The aim of the Swiss survey was to investigate the procedures used to evaluate the quality of legal research and the indicators and standards being applied. The Swiss research team conducted surveys among law professors,26 law journal editors, juries for scientific prizes and practicing lawyers,27 in order to find out how they feel about research evaluation. In addition, semi-structured interviews were conducted with the deans of the nine law schools and a representative of the Swiss National Science Foundation (SNSF), the main institution supporting scientific research in Switzerland. The surveys were conducted between December 2013 and June 2014. The first questionnaire was sent in December 2013, via the online survey program LimeSurvey (https://www.limesurvey.org), to 397 law professors. The questionnaire had previously been revised and evaluated by experts in two pre-tests. Email addresses were provided by the law faculties, and the lists were checked against the addresses on the faculty websites. Only legal scholars with the status of professor working in law schools were included in the survey (e.g. full professors, assistant professors, associate professors, honorary professors). In total, 137 survey forms were filled out in full (response rate 34.5 percent). The response rate was quite evenly distributed over the different faculties (between 31 percent and 42 percent), the law faculties of Luzern and Freiburg being the exceptions with response rates of only 20 and 21 percent, respectively. Individual statements were anonymized.

Two limitations of the chosen methodology should be borne in mind when interpreting the results presented hereafter. First, only law professors employed in a law faculty at the time of the survey took part. Naturally, the views of junior legal academics and of law professors in other faculties on research evaluation would also be of interest. Second, the answers given in a survey always depend on the procedure chosen. Because the survey aimed to study the opinions and assessments of legal scholars, their answers may also include views the respondents consider socially desirable. We did, however, test the Swiss sample for differences by language, university affiliation, age, sex, field of research and type of research, and found no significant differences between the categories.28
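The published reports do not state which statistical test was used for this check; a chi-square test of independence between respondent category and answer distribution is one common choice. The sketch below, using invented counts, merely illustrates the type of check involved:

```python
from scipy.stats import chi2_contingency

# Hypothetical cross-tabulation: rows are respondent categories (e.g. German-
# vs French-speaking professors), columns are the answer options on one survey
# item. All counts are invented for illustration only.
observed = [
    [18, 30, 12, 4],   # category A
    [10, 22,  9, 3],   # category B
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p_value:.3f}")
# A p-value above the conventional 0.05 threshold would be consistent with the
# reported finding of no significant differences between the categories.
```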

A second survey was addressed to a sample of practicing lawyers in Switzerland. Some 10 percent of all lawyers registered in the Swiss bar associations’ database were contacted between May and June 2014.29 For each of the 26 cantons, a randomly generated sample of lawyers was drawn, its size proportional to the number of lawyers practicing in that canton. The questionnaire was distributed to 873 lawyers, 231 of whom filled it in entirely (response rate 26.3 percent). Again, respondents who abandoned the electronic survey partway through were not included in the final sample.30
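A minimal sketch of the canton-proportional sampling described above is given below; the canton names, lawyer counts and the 10 percent sampling fraction are illustrative placeholders rather than the actual figures from the bar associations’ database:

```python
import random

# Illustrative register: canton -> list of registered lawyers (placeholder data).
register = {
    "ZH": [f"ZH-{i}" for i in range(2500)],
    "GE": [f"GE-{i}" for i in range(1800)],
    "BE": [f"BE-{i}" for i in range(1200)],
    # ... the remaining cantons would follow here
}

SAMPLING_FRACTION = 0.10  # roughly 10 percent of all registered lawyers


def proportional_sample(register, fraction, seed=2014):
    """Draw a random sample per canton, sized in proportion to the number of
    lawyers practicing in that canton."""
    rng = random.Random(seed)
    sample = {}
    for canton, lawyers in register.items():
        n = round(len(lawyers) * fraction)
        sample[canton] = rng.sample(lawyers, n)
    return sample


sample = proportional_sample(register, SAMPLING_FRACTION)
print(sum(len(s) for s in sample.values()), "lawyers contacted across", len(sample), "cantons")
```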

2.3.2 The Netherlands

The Dutch survey was conducted in February 2015 via the online survey program SurveyMonkey (https://www.surveymonkey.com).31 The questionnaire was distributed to 2768 email addresses of all known legal scholars with a research position (professors, assistant professors, postdocs, PhD candidates), but also to non-lawyers working in law faculties, such as criminologists and experts in public administration. This is because, in the Dutch tradition, criminologists cooperate closely with criminal lawyers, and experts in public administration with administrative lawyers, for example. Unlike in certain other countries, these groups are not part of separate social science departments. Here we took a formal approach: researchers working in law schools do (at least partly) legal research. Moreover, these groups are also included in the national research assessment exercise as part of the discipline of law.

The email addresses were collected manually from the public websites of the different Dutch law faculties.32 Due to the manual collection of the addresses, some errors were made. Certain emails could not be delivered because the address was no longer in use or otherwise invalid. In addition, due to the design of the different law school websites (departments, research centers, graduate schools and so on), the possibility that some scholars were overlooked cannot be excluded. Moreover, the online survey program did not invite people who had opted out of electronic surveys on an earlier occasion. As a result, some 2733 scholars received the invitation to participate in the survey. A few individuals who did not receive a notification expressed an interest in participating; they were given a separate link to the survey and were included. Overall, despite the aforementioned limitations, a broad coverage of academic staff at Dutch law schools was accomplished. After screening the dataset, 665 survey forms turned out to have been filled out in full or to a substantial extent. Open comments, which were allowed for in special text boxes, were immediately separated and put in a different database in order to preserve anonymity. In total, 24 percent of the addressees completed the questionnaire. The response rate was quite evenly distributed over the different faculties, almost always between 21 and 29 percent, the exception being the University of Amsterdam with a response rate of only 13 percent.

3 Empirical findings

3.1 Outline

Hereafter, we will mainly focus on the overlap between the surveys. The first cluster concerns the focus of legal research activities. This is important because the way in which the quality of legal scholarship is valued depends, to a large extent, on how we envisage the purpose of legal research. Different quality indicators would apply if, for example, one sees legal research primarily as a service to legal practice rather than as a contribution to the body of academic knowledge.

The second cluster is about research quality and in particular about the quality standards and indicators that apply to the evaluation of academic legal publications. How do legal scholars (and practicing lawyers) perceive research quality, which criteria or indicators do they prefer for the evaluation of their research, and are there important differences between Swiss and Dutch legal scholars in this respect? How is the relationship between academia and legal practice perceived, and is this reflected in the way scholars look at the originality, profundity and thoroughness of legal research?

The third cluster is about research assessment. What is considered to be the purpose of research assessments? Who should evaluate academic legal publications (e.g. editorial boards or independent peers) and which assessment methods are preferred by the academic community in Switzerland and the Netherlands? With regard to the latter, how does this relate to current practice? Is there a difference between which objectives and methods researchers deem useful and how research evaluation is currently organized in both countries?

Finally, we will compare the most important findings from the explorative Swiss and Dutch surveys in order to identify the most important lessons that can be drawn from them. Based on this, we will propose some suggestions for further research. It should be kept in mind that we cannot compare the results with previous studies, because the Swiss and Dutch surveys were the first of their kind conducted among legal scholars in both countries. On the one hand, this limits the possibilities for generalization. On the other hand, it provides unique data that future researchers can build upon.

3.2 Research and publication behavior

Although there are differences between the Swiss and Dutch academic legal research cultures, the two seem to have a lot in common. Compared to the United States (US), where interdisciplinary ‘law and’ scholarship has become dominant, particularly at the elite law schools, and some legal academics argue that legal research has distanced itself too much from legal practice,33 in Switzerland and the Netherlands academia and practice are still heavily intertwined.

Apart from that, the “scientific” nature of legal scholarship has also been debated in Switzerland and the Netherlands.34 There appears to be consensus that academic research is supposed to be more independent and situated at a higher level of (theoretical) abstraction than the research that practitioners undertake. Nevertheless, when it comes to individual research activities (e.g. advisory opinions) and publications (e.g. case notes or commentaries), the distinction between academic and professional publications is blurred in both countries. This is not only due to the fact that Switzerland and the Netherlands hardly have any purely academic law journals and legal publishers, but also because there is no strict separation between the fora of legal scholars and practitioners. Judges, for example, quote academic legal publications on a regular basis and attend university conferences, while academics cite court cases, comment upon judicial opinions and often function as part-time judges or part-time lawyers.

A further complication is that academics publish their work in different languages and in different media (e.g. national and international law journals and publishers), which makes assessing the quality of such work even harder, quite apart from differences in audience and hence in impact (e.g. English-language publications may reach a broader international audience) and in the level of competition (foreign journals are usually not very interested in descriptive, nationally oriented work). How does one, for example, compare a publication in Dutch for a general interest law review with a publication in English for a specialized law journal? As difficult as these questions may be, both surveys show that the academic legal community should start addressing them in order to avoid forced external interference from policymakers and organisations responsible for research funding.

3.2.1 Focus of research activities

3.2.1.1 Switzerland

More than 75 percent of the Swiss legal academics state that their research activities have at least a partly international or transnational focus; almost 30 percent of respondents say that their research is mostly inter- or transnationally oriented (Table 1).

Table 1

Professors indicate to what extent their research activities have an international/transnational focus.

Inter-/transnational focus N %

Mostly 40 29.2
Partly 66 48.2
Seldom 27 19.7
Never 4 2.9
Total 137 100

3.2.1.2 The Netherlands

Large numbers of Dutch respondents state that their research is focused on European and international debates with other scholars. Almost 75 percent indicate that their own research is directed at an international academic legal audience, which would imply that Dutch academics are more focused on the debate with foreign scholars than on the debate with their direct colleagues from other Dutch faculties. With regard to the focus on legal practice, it is interesting that respondents are clearly more oriented towards a debate with Dutch practitioners (55.3 percent) than with European and international practitioners (40.2 percent).

3.2.1.3 Comparison

Although the literature suggests that Swiss legal academics are more focused on systematizing the law and looking for underlying principles using hermeneutic (interpretative) methods,35 Dutch legal scholars seem to be moving more towards the social sciences and towards multidisciplinary research.36 In both countries, however, legal research is becoming more international. Likewise, in both countries around 75 percent of the legal academics claim to focus on publishing for an international audience. If these survey results reflect everyday reality, academic legal research is likely becoming less of a service to legal practice than it used to be. After all, we cannot expect legal practitioners to follow every international publication outlet. At the same time, Swiss or Dutch legal scholars who want to publish in international or European outlets will, to a certain extent, have to abstract from the particularities of their own national legal system in order to be of interest to a foreign audience. This implies that authors need to either incorporate comparative elements and extra-legal insights or focus more explicitly on theory building.

3.2.2 Types of publications

3.2.2.1 Switzerland

Which publication outlets legal scholars prefer depends largely on the (academic) legal culture in a country. In France, for instance, case notes and commentaries have a higher status than in the United Kingdom, where law journal articles and books published by academic publishers (e.g. Oxford University Press (OUP) and Cambridge University Press (CUP)) carry more weight.37 No empirical data exists in Switzerland on how much value legal scholars attach to different types of publications. What is known from the literature, though, is that Swiss legal scholars publish in a wide variety of outlets, including law journals, commentaries, monographs, Festschriften (commemorative publications), and advisory opinions.38 “Core journals”, typical of the natural sciences, do not exist in Swiss legal research. What appears to differ from the situation in the Netherlands is that in Switzerland books are considered more important than journals, whereas in the Netherlands there seems to be a greater preference for publishing in international law journals (see Tables 2 and 3).

Table 2

Respondents indicate at which audience their publications are aimed.

My research is mainly focused on

Strongly disagree (%) Partly disagree (%) Neither agree nor disagree (%) Partly agree (%) Strongly agree (%)

The Dutch debate with other scholars (n = 624) 13.6 6.9 9.1 34.5 35.9
The Dutch debate with practitioners (n = 611) 18.7 12.4 13.6 32.7 22.6
A European/international debate with other scholars (n = 633) 9.2 7.7 8.5 31.0 43.8
A European/international debate with practitioners (n = 602) 26.4 18.9 14.5 27.6 12.6

Table 3

How do legal scholars rank different types of publications?

As concerns my own publications, I attach the most (=1) and the least (=8) value to writing:

LAWYERS
(n = 371)*
RANK NON-LAWYERS
(n = 89)*

Contributions in Dutch journals 1 Contributions in international journals
Contributions in international journals 2 Contributions in books in a foreign language
Handbooks (or parts of handbooks) 3 Contributions in Dutch journals
Contributions in Dutch books 4 Handbooks (or parts of handbooks)
Contributions in books in a foreign language 5 Textbooks (or parts of textbooks)
Case notes 6 Contributions in Dutch books
Textbooks (or parts of textbooks) 7 Commentaries
Commentaries 8 Case notes

* We only included respondents who ranked all items.

3.2.2.2 The Netherlands

What is interesting about the situation in the Netherlands is not only that practice-oriented national law journals have lost ground over the last decades to (international) academic law journal articles, but also that the publication culture has become less individualistic due to the greater emphasis on research programming required by the national RAE.39 Both developments probably also have to do with the way legal research is evaluated via national research assessments based on research programmes developed by departments or research institutes within law faculties. Apart from that, the internal publication guidelines of various law schools seem to influence the preferences of individual scholars regarding what (not) to publish. One complaint from publishers, for example, is that since the national research evaluation protocol does not reward books for educational purposes, several faculties no longer give credit for textbooks. Consequently, the market for textbooks has shrunk significantly.40

3.2.2.3 Comparison

Switzerland and the Netherlands share a similar publication culture, but whereas the Swiss remain heavily focused on books and practice-oriented publications, the Dutch appear to be moving away from practice in order to concentrate more on academic legal journals. In both countries, however, scholars and administrators struggle with the question of where to draw the line between academic and professional publications. As we will see, the criteria for qualifying legal publications as scholarly, academic or scientific are extremely vague. Besides, scholars in both countries face a dilemma: if they move too far away from legal practice, they are no longer taken seriously by professionals in the field, but if they stay too close to practice, critics will question the scientific relevance of their work. This also reveals that as long as legal publications are divided into academic and professional, and the latter do not count in the evaluation system, the struggle over the demarcation line will continue, because academics will want to maximize their number of academic publications. This automatically puts pressure on the evaluation system and on the criteria used to detect academic research quality.

3.3 Research quality

3.3.1 Introduction

Not only in law, but also in other disciplines, assessing the quality of academic research is considered difficult.41 Substantive quality (e.g. the quality of argumentation and interpretation) is especially hard to measure. Review by peers is still the most important method to evaluate research in the humanities and social sciences.42 This is also true for legal research, even though one can still find many law journals, in both Switzerland and the Netherlands, which prefer editorial review to assessment by independent peer reviewers. Peer review has its own problems, however.43 Two main issues play an important role here. First, peer review requires a certain consensus within the scholarly forum regarding the criteria that should be applied by referees to recognize quality. These criteria or indicators do not yet seem to exist in the field of law. Second, peer review requires a policy from journals and publishers to select independent and capable referees. If this selection process is flawed, peer review runs the risk of being biased.44 Moreover, a problem in the field of law is that in many countries it is difficult to decide who belongs to the forum of legal scholars and hence who could qualify as a possible referee. This has to do with the fact that academia and legal practice are heavily intertwined.

3.3.2 Quality criteria of scholarly legal publications

In the Swiss and Dutch surveys, we tried to get a better understanding about how legal scholars perceive research quality and which criteria, indicators or standards they would like to see applied when it comes to the evaluation of their own work.45 The tables below show some important outcomes.

3.3.2.1 Switzerland

Table 4 shows an overlap between the criteria that both professors and practicing lawyers find most important, namely: argumentative reproducibility, clear and precise language, and substantive correctness. Structure also seems to play an important role.

Table 4

Professors’ and practicing lawyers’ perception of research quality of scholarly legal publications.

Standards for the publication of academic legal research Professors Practicing lawyers

Either important or very important Very important Total Either important or very important Very important Total

N % N % N N % N % N

Argumentative reproducibility 111 100 83 75 111 211 100 144 68 212
Clear and precise language 111 100 61 55 111 218 100 133 61 219
Substantive correctness of argumentation 106 95 87 78 111 215 100 179 83 215
Structure 106 96 68 61 111 214 100 109 51 215
Legal craftsmanship (e.g. shown through citations and use of sources) 104 95 48 44 110 197 92 81 38 213
Clear research question 101 93 60 55 109 175 87 65 32 201
Methodological rigor 98 90 52 48 109 167 83 53 26 202
Implementation of formal requirements 83 78 33 31 106 114 62 25 14 184
Critical reflection 104 78 54 49 110 171 84 49 24 204
Theoretical relevance 101 65 44 40 109 177 86 53 26 207
Relevance for current debates 67 61 19 17 109 171 80 67 31 213
Originality/Innovation 101 54 43 39 110 118 59 17 8 201

Of course, the question is: what do these indicators actually mean, do respondents interpret them in the same way, and how could one avoid a purely subjective interpretation and application of the criteria? There is not one proper answer here, apart from the fact that this should be tested in future research. The surveys provide the starting point for this. If, for example, most scholars believe that a clear research question is important for academic legal publications, do they share a more or less similar understanding of what a clear research question entails? If not, why is that?

On a higher level of abstraction, it is difficult to find any of these indicators unimportant, but as soon as one starts to break down indicators like originality and methodological rigor into more specific guidelines or standards,46 the situation becomes even more complex. How should one, for instance, go about requiring methodological accountability in academic publications? In the US, there is a lot of criticism of the footnote fetishism caused by the so-called Bluebook, which dictates how to use footnotes.47 One of the main points of criticism is that student editors, because of their lack of substantive knowledge, focus too much on footnote checking and thereby require references for sometimes the most absurd claims, such as that Plato was an influential philosopher or that one of the core values of American life is equality.48 Do we want this in Europe, or do we think it would run against the normative character and argumentative nature of academic legal research, in which “the art of persuasion” is also a very important feature?

3.3.2.2 The Netherlands

In Table 5, respondents had to rank a number of general indicators for quality of scholarly legal publications. Remarkably, thoroughness and profundity have a clearly higher score among lawyers than among (academic) non-lawyers working in law schools. The latter group feels that originality should be the primary trademark of scholarly legal research. Both lawyers and non-lawyers view the presence of a clear and well-defined research question as an important indicator for the quality of legal publications. Non-lawyers, such as criminologists, seem to pay more attention to methodology and research design than lawyers, while the latter group sets higher standards for clear and precise language. Methodological rigor scores relatively low on the list of possible quality indicators, while at the same time both groups rank convincing results and conclusions very high. This raises the question of how one can accomplish convincing research outcomes without methodological rigor. It could also imply that respondents have a very different idea about what, for instance, theory-building or methodological rigor entails. This can only be brought to light via other research methods, such as interviews. With regard to methodological rigor and use of sources, a recent study shows that on an abstract level there is a lot of agreement, but as soon as one confronts legal scholars with certain dilemmas (e.g. when to use or not use self-citation or how to deal with blogs or other “grey” sources where the level of editorial control might be limited) things become more complicated and disagreement increases.49

Table 5

Quality indicators for academic legal publications.

In my view, the quality of legal research is best (=1) reflected in:

LAWYERS
(n = 375)*
RANK NON-LAWYERS
(n = 90)*

Thoroughness and profundity 1 Originality (adding something to the body of knowledge)
Originality (adding something to the body of knowledge) 2 Convincing results and conclusions
Convincing results and conclusions 3 Thoroughness and profundity
Theory-building 4 Methodological rigor
Methodological rigor 5 Theory-building
Societal impact 6 International recognition
International recognition 7 Societal impact

* We only included respondents who ranked all items.

Table 6 shows how academic lawyers and non-lawyers rate indicators of the substantive quality of legal publications. What is interesting is that both groups put the presence of a clear research question at the top of the list, while the way in which the use of sources is accounted for is seen as less important. The reason could be that legal scholars believe the evidence presented in the footnotes to articles and books needs no further explication. This was, at least until recently, common practice in the Netherlands: one would hardly find books that explained the selection of sources. The importance of a solid research question, by contrast, has been discussed much more in the literature over the last ten years. In PhD dissertations, for instance, this became an issue after Tijssen’s research on the quality of dissertations.50

Table 6

What determines the substantive quality of academic legal publications?

With respect to the quality of the content of scholarly legal research I attach great (=1) or little (=5) value to:

LAWYERS
(n = 351)*
RANK NON-LAWYERS
(n = 66)*

The presence of a clear research question 1 The presence of a clear research question
The use of clear and precise language 2 The presence of solid research methodology
The presence of theory-building 3 The presence of theory-building
The way in which the use of sources is accounted for 4 The use of clear and precise language
The presence of solid research methodology 5 The way in which the use of sources is accounted for

* We only included respondents who ranked all items.

3.3.2.3 Comparison

It is interesting that legal scholars in both countries seem to be more focused on the quality of interpretation and argumentation in legal scholarship than on methodological rigor, correct use of sources and accountability, even though one would suspect a link between the two types of indicators. After all, how do you know that the argumentation in legal publications is valid when authors do not make their implicit methodological choices explicit?

The only way to assess the correctness of a publication from a substantive perspective is by verifying the content, weighing the arguments and checking the sources. Only scholars with more or less the same expertise as the authors are able to do that. They can perhaps see, based on the references in footnotes, whether the most important sources are included, but others who lack this specific expertise will not be able to tell what the quality of the publication is. For them, the most important indicator of research quality will probably be the methodological exposition: how did the author go about answering the research question, which steps were taken during the research process, and why? This may also partly explain why the research question is valued so highly by Dutch legal scholars. If the question is unclear, it is often very difficult to tell whether the results of a research paper are convincing. After all, a research outcome is only meaningful when the problem the research intends to answer is clearly laid out.

3.3.3 Quality of law journals

Law journals play a vital role in the evaluation of academic legal research in Switzerland, the Netherlands and the rest of Europe, yet little information is available on how the quality of these journals is perceived by academics and practitioners. Unlike in the US, there is no generally accepted ranking of law journals in Europe, so the question is how legal scholars decide where to publish, and which quality criteria matter to them and to the practitioners who use law journals.

3.3.3.1 Switzerland

Since journals decide whether an article is published, professors’ and practitioners’ perceptions of quality criteria for law journals can be taken as a proxy for their views on the research quality of scholarly legal publications.

Again, what we can observe here is that Swiss scholars and practitioners consider the substantive quality of the contributions to be the most decisive factor in deciding which journals are the best (see Table 7). As such, this says little about how scholars recognize substantive quality: if scholars were to give different answers to the question of which journals publish the best articles, one would be going around in circles. Interestingly, criteria that are used in other disciplines to weigh the quality of journals, such as the use of peer review, impact factors or rejection rates, hardly play a role in Switzerland.

Table 7

Professors and practicing lawyers – criteria for the assessment of law journals.

Professors Practicing lawyers

Either important or very important Very important Total Either important or very important Very important Total

N % N % N N % N % N

Substantive quality 111 100 100 90 111 214 100 180 84 215
Publisher is part of the academic community 89 83 35 33 107 103 54 15 8 190
Systematic peer review 61 59 15 15 103 92 65 18 13 142
International exposure/reputation 51 48 10 9 107 23 13 1 1 183
Information about the selection procedure 47 47 7 7 99 57 35 7 4 161
Number of citations/impact factor 49 47 7 7 105 122 64 18 9 192
Rejection rate 37 38 5 5 98 47 32 1 1 148
Language 38 37 5 5 103 143 73 49 25 197
Number of copies 30 29 0 0 102 67 37 6 3 182
Frequency of appearance 24 23 1 1 103 76 41 4 2 185

As might be expected, academics consider the international exposure of a journal to be more important than practitioners do. The reverse is true for language: practitioners find the language in which the journal is published more important than academics do. At the same time, one could ask what language or frequency of appearance has to do with quality at all. Are papers published in English-language journals for a broad audience and with a high frequency necessarily better than articles written in German or French for a specialized journal that has fewer readers and appears less often? Or should we argue that the language in which an article is written tells us nothing about the quality of the content?

Perhaps the latter depends on what is considered quality. Research published in English reaches a larger audience and achieves wider dissemination, which often means that its impact is higher. Where the number of citations of a book or an article counts as a quality criterion, language also becomes important, but this tells us very little about the value of the content of a publication. It is, for example, well known that scholars cite other publications for many different reasons, and citation does not always indicate quality; just think of negative citations that signal disagreement. If someone makes an absurd argument, that argument is likely to receive an above-average number of citations, but this certainly does not imply high quality.51

Last but not least, (double) blind peer review is considered an important quality indicator of journals by both legal scholars and practitioners in Switzerland. This is remarkable since there are very few (double) blind peer-reviewed law journals in Switzerland. It raises the question: if both groups find this such an important quality indicator, why have they not urged more journals to adopt peer review? Do they perhaps feel the initiative should come from law schools or from publishers?

3.3.3.2 The Netherlands

When asked about the relevant aspects of the quality of law journals, lawyers and non-lawyers stated different priorities. In the Netherlands, lawyers consider the reputation of the editorial board to be the most important quality indicator, while non-lawyers seem to find external blind (peer) review more important (see Table 8).

Table 8

What determines the quality of law journals?

If I want to assess the quality of a journal, I attach great (=1) or little (=6) value to:

LAWYERS
(n = 340)*
RANK NON-LAWYERS
(n = 91)*

The reputation that the journal has among my peers 1 Whether the external referees conduct a ‘blind’ review
The expertise of the editorial board 2 The reputation that the journal has among my peers
Whether the external referees conduct a ‘blind’ review 3 Whether the editorial board does the assessment itself or uses external referees
Whether the editorial board does the assessment itself or uses external referees 4 The reputation the journal has according to quantitative indicators (e.g. impact factor)
Whether the entire editorial board assesses a paper or not 5 The expertise of the editorial board
The reputation the journal has according to quantitative indicators (e.g. impact factor) 6 Whether the entire editorial board assesses a paper or not

* We included only respondents who scored all items.

An interesting difference between lawyers and non-lawyers is the importance given to a journal on the basis of quantitative indicators, such as its impact factor. Non-lawyers rate this as much more relevant than lawyers. The reason for this might be that non-lawyers, such as criminologists, write more for international journals. Law journals in the Netherlands hardly work with impact factors and citation scores, but this is different for foreign journals, especially in the US and UK, as well as journals focused on international and European law. Some of these journals openly advertise their impact factors.52

3.3.3.3 Comparison

The expertise of the editorial board is seen as important in both Switzerland and the Netherlands. Although there are few (double) blind reviewed law journals in either country, independent peer review is considered a relevant quality indicator. This is not so strange if one takes into account that the content of publications scores very highly on the list of quality criteria. After all, quantitative criteria, such as citation scores, may say something about the impact of a journal in the field, but they should not be confused with substantive quality. Controversial papers, for example, may attract many readers, but that does not imply that the ideas expressed in them are innovative or otherwise valuable. What the surveys do not explain, however, is why peer review is still rather rare for Swiss and Dutch journals even though academics and practitioners consider it so important. For Dutch journals, one reason might be that the pool of potential reviewers in most fields is limited, and it is difficult to organize truly blind reviews because most scholars in the field know each other very well; reviewers may recognize the style of their colleagues even if an article is anonymized. Moreover, Swiss journals and editorial boards may also be afraid of the bureaucracy and workload that come with the introduction of (double) blind peer review.53 In the Netherlands, however, this does not appear to be a problem, as more and more journals are turning to peer review. This might be because peer-reviewed journal articles receive extra weight in the RAE.

3.3.4 Potential difficulties in the assessment of legal research

In the field of research evaluation, experts have long discussed the difficulties of assessing research quality in a reliable manner. Recently, there has been much debate on the importance of impact factors and citation scores in the hard sciences because of the perverse effects the application of bibliometric indicators may have on the behavior of researchers.54 For law, however, this debate is relatively new, because law schools, law journals and legal publishers have so far spent little time and energy on considering which methods are the most appropriate for research evaluation.
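For readers less familiar with the indicator: a journal’s two-year impact factor is the number of citations received in a given year to items the journal published in the two preceding years, divided by the number of citable items it published in those two years. The sketch below, with invented numbers, illustrates the arithmetic:

```python
def two_year_impact_factor(citations_in_year, citable_items):
    """Citations received in year Y to items published in years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_in_year / citable_items


# Invented example: 320 citations in 2016 to articles from 2014-2015,
# which together comprised 160 citable items.
print(two_year_impact_factor(320, 160))  # prints 2.0
```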

3.3.4.1 Switzerland

The law professors’ perception of potential difficulties in the assessment of legal research corresponds with the previous results.55 Apparently, professors do not believe that suitable quality indicators exist (yet). They show a strong distrust of quantitative (bibliometric) indicators for evaluating legal research and fear all sorts of adverse effects of research evaluation on the research and publication behavior of scholars, including strategies to bypass the evaluation system, such as salami slicing (see Table 9). Moreover, they are against excessive external influence on the goals of the legal research community. Interestingly, professors seem to think that there are enough (potential) referees for the assessment of legal research.

Table 9

Professors’ perception of difficulties in the practice of research evaluation.

Difficulties in the practice of research evaluation Either totally agree or partly agree N % Either disagree or totally disagree N %

Current evaluation practice is too much focused on quantitative criteria/numbers. 84 77 25 23
Lack of suitable quality indicators. 82 73 31 27
Research assessment is too time-consuming. 79 72 31 28
Research evaluation has adverse effects on publication strategies (e.g. salami tactics, citation cartels…). 71 71 29 29
Current evaluation practices put too much emphasis on external expectations instead of on the goals of the research community. 64 72 25 28
Research evaluation has adverse effects on the publication behavior (e.g. choice of topics). 66 69 30 31
There is no suitable evaluation infrastructure (e.g. citation databases). 46 45 56 55
Insufficient number of experts for the assessment of legal research. 42 39 66 61
Current evaluation practice is too much focused on qualitative criteria. 24 24 76 76

3.3.4.2 The Netherlands

What catches the eye in the Dutch survey is that more than 75 percent of the respondents feel that law faculties put too much emphasis on measuring the number of publications their staff produce (see Table 10). A little over 40 percent of the respondents claim that proper quality indicators for scholarly legal publications do not exist at all. It is interesting that more than 50 percent of the respondents feel that double blind peer review leads to higher quality publications, while at the same time there is disagreement over whether peer review takes too much time from all parties involved. One wonders what lies behind these opinions. Are they based on personal experience, or are scholars perhaps wary because of stories about flawed peer review systems in other disciplines? It is also interesting to note that almost 65 percent of the respondents suggest that the current method of quality evaluation leads to strategic behavior.56

Table 10

Scholarly opinions concerning quality measurement.

My opinion about the following proposition is:

(n = 440) Strongly disagree Partly disagree Neither agree nor disagree Partly agree Strongly agree

There are no proper indicators for measuring the quality of legal publications. 11.1% 28.4 18.0 31.1 11.4
Law faculties put too much emphasis on measuring the number of publications. 2.3 7.3 14.1 38.9 37.5
There are too few independent experts in the Netherlands to use peer review as a standard procedure. 9.1 13.0 30.9 33.0 14.1
Assessing publications through peer review is too time-consuming for all parties involved. 13.4 22.7 24.8 26.8 12.3
Double blind review by external referees leads to better publications than non-blind review conducted by editors. 6.1 15.5 24.8 31.4 22.3
The way in which the quality of publications is measured leads to undesirable strategic behavior. 1.8 7.5 27.5 38.9 24.3
I would rather submit my articles to a journal whose editors I know personally than to a journal with an unfamiliar editorial board. 29.5 23.9 23.9 18.6 4.1

3.3.4.3 Comparison

Remarkably, both the Swiss and the Dutch respondents have doubts about whether appropriate quality indicators exist for assessing legal publications. At the same time, we have seen that both groups appear to have faith in, and a strong preference for, peer review over metrics-based evaluation methods. In Switzerland, professors show a strong reluctance towards bibliometric evaluation methods. This raises the question of which standards or quality indicators referees should then apply in the peer review process. A major worry of the scholarly legal community in both countries is that a stronger emphasis on quantitative evaluation methods will lead to strategic behavior aimed at avoiding the consequences of evaluation. It is unclear what has inspired these opinions. Is it personal experience, fear of what has happened in other disciplines, or perhaps a general distrust of the monitoring and evaluation of legal research by outsiders? Again, only further qualitative research (e.g. interviews) can yield answers here.

3.4 Goals of quality assessment of scholarly legal research

In the end, the most important question about research evaluation is probably: what purpose should it serve? Research assessment cannot be a goal in itself. However, it is far from self-evident for which types of problems research evaluation should be viewed as the solution. After all, law as a discipline has paid little attention to research evaluation so far and has managed to survive for centuries. One could therefore ask: why do we need changes? Is it primarily to satisfy politicians or university managers who want to see whether legal scholars provide value for money? Is it to distribute scarce research funds in the fairest way, or is it perhaps to make the criteria for what counts as the “best legal research” more transparent, in order to inform other scholars and the public about which publications are most worth reading?

3.4.1 Switzerland

Swiss professors seem to think that the primary aim of research evaluation is the assessment of existing and new staff members (promotion and appointments) (see Table 11). Next, they rank transparency about what counts as proper legal research, closely followed by facilitating the personal and professional development of researchers. Intriguingly, the comparison of departments and research units is considered the least important from the perspective of legal academics. What this means is not immediately clear, but it could indicate that Swiss scholars have more faith in research cooperation than in competition, although it is of course uncertain whether university or faculty managers, or government policy-makers, would feel the same way.

Table 11

Professors’ opinions about the purpose of research evaluation.

Purpose of research evaluation N/N total %

Evaluation of existing staff and appointment/promotion of (new) staff members 72/89 81
Stimulating transparency of what counts as proper legal research 65/85 76
Personal/professional development 63/87 72
Incentive to stimulate quality awareness of the research community 55/83 66
Fulfilment of legislative obligations and policy targets 50/85 59
Promotion of research cooperation 42/79 53
Distribution of research funds 44/88 50
Accountability towards politics and the larger public 35/81 43
Performance-based funding 31/80 39
Positioning towards other academic disciplines 26/81 32
Comparison between departments/research units in law schools 12/80 15

3.4.2 The Netherlands

More than 90 percent of the Dutch scholars regard warranting a certain minimum quality of legal publications as important, making it the highest rated of the four potential aims of research evaluation (see Table 12). At the same time, more than 80 percent see promoting research excellence as a prominent aim of research evaluation.

Table 12

What is the purpose of evaluation of academic legal publications?

The assessment of the quality of scholarly legal research may serve various goals. I find the following goals:

Very unimportant Somewhat unimportant Neither important nor unimportant Somewhat important Very important

Warranting a certain minimum quality of legal publications (n = 535) 1.1% 0.9 5.2 34.4 58.3
Promoting research excellence (n = 532) 1.5 4.5 12.8 44.7 36.5
Accountability for the use of public money (n = 529) 2.6 6.8 16.8 49.9 23.8
Efficient and well-targeted allocation of funding (n = 535) 2.4 9.3 23.9 44.7 19.6

The other goals are viewed as significantly less important. Nevertheless, we do not know what would have happened if we had asked respondents to choose between the goals provided. This is relevant since, for example, warranting a certain minimum quality of publications will probably require very different measures than stimulating research excellence. Apart from that, it might be that scholars agree on the goals of research evaluation at a rather high level of abstraction (e.g. who could be against warranting a certain minimum quality of legal publications?) but would disagree once the goals are translated into more specific requirements (e.g. requirements regarding methodological accountability and explanation of the research design in legal publications) or specific evaluation methods.

Table 13 not only shows that Dutch scholars, just like their Swiss colleagues, prefer evaluations based on substance to quantitative assessments. It also reveals that lawyers and non-lawyers do not necessarily see peer review as a more reliable method of research evaluation than assessment of scholarly legal research by a professional editorial board. Lawyers are, however, much more outspoken here than non-lawyers. When it comes to the benefits of ranking law journals, opinions become even more divided, although non-lawyers seem to have more faith in ranking than lawyers. Moreover, lawyers in particular are divided when it comes to national research assessment exercises: roughly as many respondents are skeptical about the benefit of such exercises as are in favor of them. This shows again that the situation becomes more complex as soon as the goals of evaluation are made more specific and respondents actually have something to choose between.

Table 13

Scholarly preferences concerning research evaluation methods.

With regard to the way in which research is assessed I feel that:

(n = 381/72) Strongly disagree Partly disagree Neither agree nor disagree Partly agree Strongly agree

LAWYER/NON-LAWYER L N-L L N-L L N-L L N-L L N-L

An assessment of the substance of publications should prevail over the use of citation scores. 0.3% 0.0 2.6 6.9 5.5 11.1 24.4 29.2 67.2 52.8
The assessment of journal articles by a professional editorial board is at least as good as assessment by external referees. 4.2 16.7 11.3 26.4 12.6 16.7 32.8 33.3 39.1 6.9
The introduction of a ranking of journals in the Netherlands would be a welcome development. 18.4 5.6 23.9 15.3 23.1 34.7 26.2 33.3 8.4 11.1
A nationwide research assessment exercise is a suitable way to compare the quality of research groups. 11.3 1.4 21.0 20.8 35.2 26.4 27.3 44.4 5.2 6.9

Differences in distributions between lawyers and non-lawyers are significant for all reported items.

3.4.3 Comparison

It is very difficult to draw general lessons from the surveys about what the Swiss and the Dutch perceive to be the most important aims of research evaluation. What seems fair to say, though, is that scholars in both countries appear to be somewhat skeptical about the idea that competition automatically leads to better research quality. In Switzerland, research evaluation is primarily seen as a way to select academics with potential. It is also telling that the Swiss gave a low ranking to comparing departments and research units via research evaluation. Apparently, they do not believe in organizing a competition between research groups in order to raise the quality of legal publications, because academic legal research is still viewed as a largely individualistic activity. Scholars in both countries prefer research evaluation based on an assessment of the substance of the research rather than quantitative assessment using bibliometric indicators, such as citation scores. The Dutch survey, however, shows that this does not automatically imply that scholars are in favor of introducing (double blind) peer review as a method to select papers for publication. Interestingly, Dutch scholars have just as much faith in the professional judgment of an editorial board composed of experts as they do in reviews by anonymous external referees.

4 Conclusions and follow-up

Although the Swiss and Dutch surveys differ somewhat in research design, we were nonetheless able to obtain meaningful and comparable data. Notably, there are a few striking similarities in the outcomes. Undoubtedly, the most important resemblance is that legal scholars in both countries have a strong preference for qualitative research evaluation methods over quantitative measures such as journal rankings, citation counts and paper downloads. It is not clear, however, why many researchers seem to prefer (double) blind peer review over existing non-blind review by professional editorial boards. There may be different reasons for this, which need further investigation. It could be that scholars think that independent (external) peer reviewers are less biased than members of professional editorial boards, but it is also possible that scholars do not have a clear idea of what the different forms of peer review entail and how they would function in daily practice.57

Interviews with the law deans in Switzerland, conducted in addition to the survey, revealed that simple bibliometric data about the number of publications in different types of outlets are considered important by administrators, especially in the selection and appointment of candidates for academic positions. Nevertheless, legal scholars seem to have a strong distrust of the quantitative research evaluation methods that are being used in other (social and natural) sciences. This distrust is not entirely without reason. Recently, there has been considerable protest in science, technology and medicine against the perverse effects of impact factors and h-indexes.58 In these fields, quantitative research evaluation has led to undesirable strategic behavior by scholars, editorial boards and publishers, who pump up research output in order to score highly on the relevant performance indicators.59 Some even claim that further increasing the publication pressure may lead to more research fraud, although there is little hard evidence to this effect.60

Against this background, it is not so strange that Swiss and Dutch legal scholars have more sympathy for peer review than for citation counts, impact factors, rejection rates, and the like. An empirical study of administrative knowledge politics and changing epistemic cultures in Dutch law schools revealed that the way in which publications are categorized as professional or academic is heavily politicized by the administration in certain law schools:

[S]ome coordinators police the boundary between scholarly and professional publications very strictly, they tend to count articles as scholarly if they are in line with a particular vision of legal scholarship, that is research that is empirically oriented, conceptually robust, and primarily relevant to international academic audiences. However, there are diverse other strategies of using indicators that are connected to very different intellectual outlooks. For example, one of our informants pursues an alternative vision of a ‘scientific turn’ that actually depends on engaging non-academic audiences on a national level.61

The authors of this study warn that whereas discussions about the nature of legal scholarship and the way it should be valued used to take place through scholarly communication in academic journals and at conferences, they now seem to have been replaced by routinized administrative procedures that are often invisible to legal scholars. This implies that researchers without a particular institutional function (dean or vice dean, programme manager, research coordinator, etc.) now have a much weaker position in the debate on how publications should be weighed.62

The question is, of course: is turning to (double blind) peer review the answer to these sorts of issues?63 The institutional weighing of publications will, for instance, often take place after peer review, but apart from that, peer review is not without its own problems. In order to make the outcomes of peer review predictable and objective, one needs specific quality indicators that referees can use in their assessment of publications. Moreover, journals and publishers have to think about how they are going to select referees in a way that best matches the referee's expertise with the submitted work and avoids, as much as possible, biases on the side of the referees (e.g. authors and referees can be in competition with each other, and referees might recognize authors even when their names are anonymized). Finally, there needs to be clarity about what happens when referees contradict each other, or when there is a serious conflict of opinion between the referees and the editorial board, or between the referee and the author(s).

Even though respondents claim to prefer (double) blind peer review by external referees to non-blind review by editorial boards, it is remarkable that they simultaneously doubt whether peer review by external referees has, in (Dutch) practice, sufficient added value compared to assessment by an editorial board consisting of experts in the same field. In the Netherlands, respondents expect difficulties in finding enough independent referees who have the relevant expertise and are also unbiased. This concern was not shared by the Swiss respondents, who reject the idea that there is a shortage of qualified experts to review legal research. In both the Netherlands and Switzerland, however, respondents mostly fear that the introduction of peer review might bring about time constraints and bureaucracy. Perhaps it is safe to expect that the introduction of peer review will develop gradually, since more and more scholars apparently focus on publishing for an international audience in English. This may drive young scholars in Switzerland and the Netherlands towards publishing styles other than the ones they are used to in their own countries. Many UK journals and journals focused on European and international law are already peer reviewed, and the same goes for English academic publishers such as Oxford University Press and Cambridge University Press. If Dutch and Swiss legal scholars want to enter this "market", they will have to get used to it.

This does not, however, imply that Swiss and Dutch journals and publishers should simply copy what English-language law journals and international academic book publishers do. In this regard, Swiss and Dutch journals and publishers have the advantage of lagging behind, which means they can learn from the mistakes that others have made before them. Most current law journals and publishers, for example, have not specified their assessment criteria, have not made very clear what they expect from authors in terms of methodological accountability (e.g. to what extent should authors explain their research design?), and have not clarified what is expected from referees in terms of feedback and response to criticism from scholars who have submitted contributions for peer review. Being more explicit about this might affect the writing process of potential authors, who would then be able to anticipate the requirements that journals and publishers expect to see fulfilled. However, there is still a long way to go before there is consensus about more specific quality indicators that translate abstract criteria, such as originality, thoroughness and profundity, into more concrete guidelines. The same goes for the development of policies to select, instruct, and evaluate referees for peer review. Without a minimum level of agreement on quality indicators and peer review procedures, the introduction of (double blind) peer review will probably not lead to increased trustworthiness of legal scholarship.

Perhaps the most important challenge for Swiss and Dutch legal scholars that can be deduced from the surveys is how they can keep their close relationship with legal practice and at the same time meet the demands that globalization puts on the academic study of law. Is it possible to maintain a healthy national publication culture and a fruitful dialogue with legal practitioners, while concurrently publishing more often in English-language law journals and with international academic publishers in order to share ideas with a growing transnational forum of legal scholars? In addition, what about the relationship between academic research and teaching? How do we prepare students for a job in (national) legal practice while simultaneously teaching them that traditional legal concepts are undergoing fundamental changes due to the forces of globalization?

In order to answer these and other related questions, more qualitative empirical research is vital.64 Research quality and research evaluation are far too important to leave to bibliometricians, university managers, and higher education policy analysts. What is needed most of all is more data about how legal scholars, but also practitioners, view the challenges facing legal scholarship. The pressure on law to introduce standards for research evaluation is mounting. Experience in the humanities has shown that in order for research assessment practices to become accepted, they need to be supported by the forum of academics. Choosing a bottom-up approach means that the evaluation procedures and criteria should first be determined by the researchers themselves, especially because of the close relationship between research evaluation and academic freedom. The experience in the humanities has also shown that research evaluation practices should take the particularities of a discipline into account. For law, these include not only the unique relationship between legal scholars and lawmakers (think, for example, of case notes commenting upon judicial decisions), but also the normative nature of the discipline, in which "ought questions" play an important role.65 It is therefore the duty of the academic community to decide upon and continually update these methods and criteria.66

What law as a discipline should perhaps try to avoid is that legal scholarship and legal practice drift apart in a way that makes academic research useless for practitioners and legal practice uninteresting for academics – this has been a matter of complaint in the US for years.67 This probably implies that we should cherish the diversity of the academic legal research landscape, because different types of legal research (doctrinal, empirical, comparative) serve different functions and should therefore be evaluated partly according to different standards (e.g. methodological rigor in doctrinal publications means something different than in empirical legal publications, where replicability is, for instance, an important requirement). Developing tailor-made research evaluation methods for the various types of research cannot be done without the expertise and involvement of the academic legal community.