Recent publications

Aagaard, K. (2017). The Evolution of a National Research Funding System: Transformative Change Through Layering and Displacement. Minerva, 1-19. doi:10.1007/s11024-017-9317-1

Abstract:
This article outlines the evolution of a national research funding system over a timespan of more than 40 years and analyzes the development from a rather stable Humboldt-inspired floor funding model to a complex multi-tiered system where new mechanisms continually have been added on top of the system. Based on recent contributions to Historical Institutionalism it is shown how layering and displacement processes gradually have changed the funding system along a number of dimensions and thus how a series of minor adjustments over time has led to a transformation of the system as a whole. The analysis also highlights the remarkable resistance of the traditional academically oriented research council system towards restructuring. Due to this resistance the political system has, however, circumvented the research council system and implemented change through other channels of the funding system. For periods of time these strategies have marginalized the role of the councils.

Full text: https://link.springer.com/article/10.1007/s11024-017-9317-1


Franssen, T., Scholten, W., Hessels, L.K. & de Rijcke, S. (2018). The Drawbacks of Project Funding for Epistemic Innovation: Comparing Institutional Affordances and Constraints of Different Types of Research Funding. Minerva.

Abstract:
Over the past decades, science funding shows a shift from recurrent block funding towards project funding mechanisms. However, our knowledge of how project funding arrangements influence the organizational and epistemic properties of research is limited. To study this relation, a bridge between science policy studies and science studies is necessary. Recent studies have analyzed the relation between the affordances and constraints of project grants and the epistemic properties of research. However, the potentially very different affordances and constraints of funding arrangements such as awards, prizes and fellowships, have not yet been taken into account. Drawing on eight case studies of funding arrangements in high performing Dutch research groups, this study compares the institutional affordances and constraints of prizes with those of project grants and their effects on organizational and epistemic properties of research. We argue that the prize case studies diverge from project-funded research in three ways: 1) a more flexible use, and adaptation of use, of funds during the research process compared to project grants; 2) investments in the larger organization which have effects beyond the research project itself; and 3), closely related, greater deviation from epistemic and organizational standards. The increasing dominance of project funding arrangements in Western science systems is therefore argued to be problematic in light of epistemic and organizational innovation. Funding arrangements that offer funding without scholars having to submit a project-proposal remain crucial to support researchers and research groups to deviate from epistemic and organizational standards.

Full text: https://doi.org/10.1007/s11024-017-9338-9


Aagaard, K.; Schneider, J. W. (2017). Some considerations about causes and effects in studies of performance-based research funding systems. Journal of Informetrics 11(3):923-926.

Full text: https://doi.org/10.1016/j.joi.2017.05.018


Giménez-Toledo, Elea; Mañana-Rodríguez, Jorge; Sivertsen, Gunnar (2017). Scholarly book publishing: Its information sources for evaluation in the social sciences and humanities. Research Evaluation 26(2):91-101.

Abstract:
In the past decade, a number of initiatives have been taken to provide new sources of information on scholarly book publishing. Thomson Reuters (now Clarivate Analytics) has supplemented the Web of Science with a Book Citation Index (BCI), while Elsevier has extended Scopus to include books from a selection of scholarly publishers. More complete metadata on scholarly book publishing can be derived at the national level from non-commercial databases such as Current Research Information System in Norway and the VIRTA (Higher Education Achievement Register, Finland) publication information service, including the Finnish Publication Forum (JUFO) lists (Finland). The Spanish Scholarly Publishers Indicators provides survey-based information on the prestige, specialization profiles from metadata, and manuscript selection processes of national and international publishers that are particularly relevant for the social sciences and humanities (SSH). In the present work, the five information sources mentioned above are compared in a quantitative analysis identifying overlaps and uniqueness as well as differences in the degrees and profiles of coverage. In a second-stage analysis, the geographical origin of the university presses (UPs) is given a particular focus. We find that selection criteria strongly differ, ranging from a set of a priori criteria combined with expert-panel review in the case of commercial databases to in principle comprehensive coverage within a definition in the Nordic countries and an open survey methodology combined with metadata from the book industry database and questionnaires to publishers in Spain. Larger sets of distinct book publishers are found in the non-commercial databases, and greater geographical diversity is observable among the UPs in these information systems. While a more locally oriented set of publishers which are relevant to researchers in the SSH is present in non-commercial databases, the commercial databases seem to focus on highly selective procedures by which the coverage concentrates on prestigious international publishers, mainly based in the USA or UK and serving the natural sciences, engineering, and medicine.

Full text: https://doi.org/10.1093/reseval/rvx007


Hammarfelt, B.; de Rijcke, S.; Wouters, P.F. (2017). From eminent men to excellent universities: University rankings as calculative devices. Minerva 55(4):391–411.

Abstract:
Global university rankings have become increasingly important ‘calculative devices’ for assessing the ‘quality’ of higher education and research. Their ability to make characteristics of universities ‘calculable’ is here exemplified by the first proper university ranking ever, produced as early as 1910 by the American psychologist James McKeen Cattell. Our paper links the epistemological rationales behind the construction of this ranking to the sociopolitical context in which Cattell operated: an era in which psychology became institutionalized against the backdrop of the eugenics movement, and in which statistics of science became used to counter a perceived decline in ‘great men.’ Over time, however, the ‘eminent man,’ shaped foremost by heredity and upbringing, came to be replaced by the excellent university as the emblematic symbol of scientific and intellectual strength. We also show that Cattell’s ranking was generative of new forms of the social, traces of which can still be found today in the enactment of ‘excellence’ in global university rankings.

Full text: https://doi.org/10.1007/s11024-017-9329-x


Lavik, Gry Ane Vikanes; Sivertsen, Gunnar (2017). ERIH PLUS – Making the SSH Visible, Searchable and Available. Procedia Computer Science 106:61-65.

Abstract:
The European Reference Index for the Humanities and the Social Sciences (ERIH PLUS) may provide national and institutional CRIS systems with a well-defined, standardized and dynamic register of scholarly journals and series in the social sciences and humanities. The register goes beyond the coverage in commercial indexing services to provide a basis for standardizing the bibliographic data and making them available and comparable across different CRIS systems. The aims and organization of the ERIH PLUS project is presented for the first time at an international conference in this paper.

Full text: https://doi.org/10.1016/j.procs.2017.03.035


Müller, R.; De Rijcke, S. (2017). Thinking with indicators. Exploring the Epistemic Impacts of Academic Performance Indicators in the Life Sciences. Research Evaluation 26(3):157–168.

Abstract:
While quantitative performance indicators are widely used by organizations and individuals for evaluative purposes, little is known about their impacts on the epistemic processes of academic knowledge production. In this article we bring together three qualitative research projects undertaken in the Netherlands and Austria to contribute to filling this gap. The projects explored the role of performance metrics in the life sciences, and the interactions between institutional and disciplinary cultures of evaluating research in these fields. Our analytic perspective is focused on understanding how researchers themselves give value to research, and in how far these practices are related to performance metrics. The article zooms in on three key moments in research processes to show how ‘thinking with indicators’ is becoming a central aspect of research activities themselves: (1) the planning and conception of research projects, (2) the social organization of research processes, and (3) determining the endpoints of research processes. Our findings demonstrate how the worth of research activities becomes increasingly assessed and defined by their potential to yield high value in quantitative terms. The analysis makes visible how certain norms and values related to performance metrics are stabilized as they become integrated into routine practices of knowledge production. Other norms and criteria for scientific quality, e.g. epistemic originality, long-term scientific progress, societal relevance, and social responsibility, receive less attention or become redefined through their relations to quantitative indicators. We understand this trend to be in tension with policy goals that seek to encourage innovative, societally relevant, and responsible research.

Full text: https://doi.org/10.1093/reseval/rvx023


Rushforth, A.; De Rijcke, S. (2017). Quality Monitoring in Transition: The Challenge of Evaluating Translational Research Programs in Academic Biomedicine. Science and Public Policy 44(4):513–523.

Abstract:
While the efficacy of peer review for allocating institutional funding and benchmarking is often studied, not much is known about issues faced in peer review for organizational learning and advisory purposes. We build on this concern by analyzing the largely formative evaluation by external committees of new large, ‘translational’ research programs in a University Medical Center in the Netherlands. By drawing on insights from studies which report problems associated with evaluating and monitoring large, complex, research programs, we report on the following tensions that emerged in our analysis: (1) the provision of self-evaluation information to committees and (2) the selection of appropriate committee members. Our article provides a timely insight into challenges facing organizational evaluations in public research systems where pushes toward ‘social’ accountability criteria and large cross-disciplinary research structures are intensifying. We end with suggestions about how the procedure might be improved.

Full text: https://doi.org/10.1093/scipol/scw078


Sivertsen, Gunnar (2017). Unique, but still best practice? The Research Excellence Framework (REF) from an international perspective. Palgrave Communications 3.

Abstract:
Inspired by The Metric Tide report (2015) on the role of metrics in research assessment and management, and Lord Nicholas Stern’s report Building on Success and Learning from Experience (2016), which deals with criticisms of REF2014 and gives advice for a redesign of REF2021, this article discusses the possible implications for other countries. It also contributes to the discussion of the future of the REF by taking an international perspective. The article offers a framework for understanding differences in the motivations and designs of performance-based research funding systems (PRFS) across countries. It also shows that a basis for mutual learning among countries is more needed than a formulation of best practice, thereby both contributing to and correcting the international outlook in The Metric Tide report and its supplementary Literature Review.

Full text: https://www.nature.com/articles/palcomms201778


Zhang, L.; Rousseau, R.; Sivertsen, G. (2017). Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen’s work on journal impact and research evaluation. PLoS ONE 12(3): e0174205.

Abstract:
The scientific foundation for the criticism on the use of the Journal Impact Factor (JIF) in evaluations of individual researchers and their publications was laid between 1989 and 1997 in a series of articles by Per O. Seglen. His basic work has since influenced initiatives such as the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for research metrics, and The Metric Tide review on the role of metrics in research assessment and management. Seglen studied the publications of only 16 senior biomedical scientists. We investigate whether Seglen’s main findings still hold when using the same methods for a much larger group of Norwegian biomedical scientists with more than 18,000 publications. Our results support and add new insights to Seglen’s basic work.

Full text: http://dx.doi.org/10.1371/journal.pone.0174205


Piro, Fredrik Niclas; Aksnes, Dag W.; Rørstad, Kristoffer (2016). How does prolific professors influence on the citation impact of their university departments? Scientometrics 107(3):941-961.

Abstract:
Professors and associate professors (“professors”) in full-time positions are key personnel in the scientific activity of university departments, both in conducting their own research and in their roles as project leaders and mentors to younger researchers. Typically, this group of personnel also contributes significantly to the publication output of the departments, although there are also major contributions by other staff (e.g. PhD-students, postdocs, guest researchers, students and retired personnel). The scientific productivity is however, very skewed at the level of individuals, also for professors, where a small fraction of the professors, typically account for a large share of the publications. In this study, we investigate how the productivity profile of a department (i.e. the level of symmetrical/asymmetrical productivity among professors) influences on the citation impact of their departments. The main focus is on contributions made by the most productive professors. The findings imply that the impact of the most productive professors differs by scientific field and the degree of productivity skewness of their departments. Nevertheless, the overall impact of the most productive professors on their departments’ citation impact is modest.

Full text: https://doi.org/10.1007/s11192-016-1900-y


Piro, Fredrik Niclas; Sivertsen, Gunnar (2016). How can differences in international university rankings be explained? Scientometrics 109(3):2263–2278.

Abstract:
University rankings are typically presenting their results as league tables with more emphasis on final scores and positions, than on the clarification of why the universities are ranked as they are. Finding out the latter is often not possible, because final scores are based on weighted indicators where raw data and the processing of these are not publically available. In this study we use a sample of Scandinavian universities, explaining what is causing differences between them in the two most influential university rankings: Times Higher Education and the Shanghai-ranking. The results show that differences may be attributed to both small variations on what we believe are not important indicators, as well as substantial variations on what we believe are important indicators. The overall aim of this paper is to provide a methodology that can be used in understanding universities’ different ranks in global university rankings.

Full text: https://doi.org/10.1007/s11192-016-2056-5