Recent publications

Schneider, J. W., van Leeuwen, T., Visser, M. & Aagaard, K. (2019). Examining national citation impact by comparing developments in a fixed and a dynamic journal set. Scientometrics 119(2):973–985.

Abstract:
In order to examine potential effects of methodological choices influencing developments in relative citation scores for countries, a fixed journal set comprising 3232 journals continuously indexed in the Web of Science from 1981 to 2014 is constructed. From this restricted set, a citation database depicting the citing relations between the journal publications is formed and relative citation scores based on full and fractional counting are calculated for the whole period. Previous longitudinal studies of citation impact show stable rankings between countries. To examine whether such findings, coming from a dynamic set of journals, reflect potential “database effects”, we compare them to our fixed set. We find that relative developments in impact scores, country profiles and rankings are both very stable and very similar within and between the two journal sets as well as counting methods. We do see a small “inflation factor”, as citation scores generally are somewhat lower for high-performing countries in the fixed set compared to the dynamic set. Consequently, using an ever-decreasing set of journals compared to the dynamic set, we are still able to accurately reproduce the developments in impact scores and the rankings between the countries found in the dynamic set. Hence, potential effects of methodological choices seem to be of limited importance compared to the stability of citation networks.

Full text: https://doi.org/10.1007/s11192-019-03082-3
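
The abstract above refers to relative citation scores based on full and fractional counting. As a rough, hypothetical sketch only (the country codes, publication records and the simple world-average normalization are invented for illustration and are not the paper's data or pipeline), the difference between the two counting methods can be shown in a few lines of Python:

```python
# Hypothetical illustration of full vs. fractional counting of publications
# by country; all records and numbers below are invented.
from collections import defaultdict

publications = [
    {"countries": ["NO", "DK"], "citations": 12},        # internationally co-authored
    {"countries": ["NO"], "citations": 3},
    {"countries": ["DK", "NL", "NO"], "citations": 30},
]

full = defaultdict(lambda: {"pubs": 0.0, "cites": 0.0})
frac = defaultdict(lambda: {"pubs": 0.0, "cites": 0.0})

for pub in publications:
    share = 1.0 / len(pub["countries"])                  # equal fraction per contributing country
    for country in pub["countries"]:
        full[country]["pubs"] += 1                        # full counting: each country gets whole credit
        full[country]["cites"] += pub["citations"]
        frac[country]["pubs"] += share                    # fractional counting: credit is split
        frac[country]["cites"] += pub["citations"] * share

world_avg = sum(p["citations"] for p in publications) / len(publications)

for label, counts in (("full", full), ("fractional", frac)):
    for country, c in sorted(counts.items()):
        relative = (c["cites"] / c["pubs"]) / world_avg   # citations per paper relative to world average
        print(f"{label:10s} {country}: relative citation score = {relative:.2f}")
```

With full counting, a country is credited with every paper it participates in, whereas fractional counting splits both publications and citations between the contributing countries, which is one reason the two methods can yield slightly different country scores.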


Langfeldt, L., Nedeva, M., Sörlin, S. & Thomas, D. A. (2019). Co-existing notions of research quality: A framework to study context-specific understandings of good research. Minerva.

Abstract:
Notions of research quality are contextual in many respects: they vary between fields of research, between review contexts and between policy contexts. Yet, the role of these co-existing notions in research, and in research policy, is poorly understood. In this paper we offer a novel framework to study and understand research quality across three key dimensions. First, we distinguish between quality notions that originate in research fields (Field-type) and in research policy spaces (Space-type). Second, drawing on existing studies, we identify three attributes (often) considered important for ‘good research’: its originality/novelty, plausibility/reliability, and value or usefulness. Third, we identify five different sites where notions of research quality emerge, are contested and institutionalised: researchers themselves, knowledge communities, research organisations, funding agencies and national policy arenas. We argue that the framework helps us understand processes and mechanisms through which ‘good research’ is recognised as well as tensions arising from the co-existence of (potentially) conflicting quality notions.

Full text: https://doi.org/10.1007/s11024-019-09385-2


Piro, F. N. (2019). The R&D composition of European countries: concentrated versus dispersed profiles. Scientometrics 119(2):1095–1119.

Abstract:
In this study, we use a unique dataset covering all higher education institutions, public research institutions and private companies that have applied for funding to the European Framework Programs for Research and Innovation in the period 2007–2017. The first aim of this study is to show the composition of R&D performing actors per country, which to the best of our knowledge has never been done before. The second aim is to compare country profiles in R&D composition, so that we may analyse whether the countries differ in the concentration of R&D performing institutions. The third aim is to investigate whether different R&D country profiles are associated with how the R&D systems perform, i.e. whether the profiles are associated with research and innovation performance indicators. Our study shows that the concentration of R&D actors at country level and within the sectors differs across European countries, with the general conclusion being that countries that can be characterized as well-performing on citation and innovation indicators seem to combine (a) high shares of Gross Domestic Expenditure on R&D as a percentage of GDP with (b) a highly skewed R&D system, where a small part of the R&D performing actors accounts for a very high share of the national R&D performance. This indicates a dual R&D system which combines a few large R&D performing institutions with a very large number of small actors.

Full text: https://doi.org/10.1007/s11192-019-03062-7


Franssen, T. & de Rijcke, S. (2019). The rise of project funding and its effects on the social structure of academia. In F. Cannizzo & N. Osbaldiston (Eds.), The social structures of global academia (chapter 9). London: Routledge.

Abstract:
In this chapter we analyse the effects of the rise of project funding on the social structure of academia. We show that more temporary positions are created and that the temporary phase of the academic career is extended. Short-term contracts increase job and grant market participation among early career researchers, which in turn establishes competition as a mode of governance, reaffirms the individual as the primary epistemic subject, and increases anxiety and career uncertainty, all of which affect the social fabric of research groups and departments. Communitarian ideals are promoted by senior staff members, which is necessary to establish the research group as a community, but this cannot resolve the inherent tension because of the structural nature of the mechanisms we describe. We conclude that individual research groups are unlikely to be able to solve these problems and that a more radical shift in the distribution of research funding is necessary.

Full text: https://doi.org/10.4324/9780429465857


Franssen, T. & Wouters, P. (2019). Science and its significant other: Representing the humanities in bibliometric scholarship. Journal of the Association for Information Science and Technology.

Abstract:
The cognitive and social structures, and publication practices, of the humanities have been studied bibliometrically for the past 50 years. This article explores the conceptual frameworks, methods, and data sources used in bibliometrics to study the nature of the humanities, and its differences and similarities in comparison with other scientific domains. We give a historical overview of bibliometric scholarship between 1965 and 2018 that studies the humanities empirically and distinguishes between two periods in which the configuration of the bibliometric system differs remarkably. The first period, 1965 to the 1980s, is characterized by bibliometric methods embedded in a sociological theoretical framework, the development and use of the Price Index, and small samples of journal publications from which references are used as data sources. The second period, the 1980s to the present day, is characterized by a new intellectual hinterland—that of science policy and research evaluation—in which bibliometric methods become embedded. Here metadata of publications becomes the primary data source with which publication profiles of humanistic scholarly communities are analyzed. We unpack the differences between these two periods and critically discuss the analytical avenues that different approaches offer.

Full text: https://doi.org/10.1002/asi.24206


Borlaug, S. B. & Langfeldt, L. (2019). One model fits all? How centres of excellence affect research organisation and practices in the humanities. Studies in Higher Education.

Abstract:
Centres of Excellence (CoE) have become a common research policy instrument in several OECD countries over the last two decades. The CoE schemes are generally modelled on the organisational and research practices of the natural and life sciences. Compared to ‘Big science’, the humanities have been characterised by more individual research, flat structures, and usually less integration and coordination of research activities. In this article we ask: How does the introduction of CoEs affect the organisation of research and research practices in the humanities? By comparing Norwegian CoEs in different fields of research and studying the specific challenges of the humanities, we find that CoEs increase collaboration between different fields and make disciplinary and organisational boundaries more permeable, but so far they do not substantially alter individual collaboration patterns in the humanities CoEs. They further seem to generate more tensions in their adjacent environments compared to CoEs in other fields.

Full text: https://doi.org/10.1080/03075079.2019.1615044


Aksnes, D. W., Langfeldt, L. & Wouters, P. (2019). Citations, citation indicators, and research quality: An overview of basic concepts and theories. SAGE Open 9(1):1–17.

Abstract:
Citations are increasingly used as performance indicators in research policy and within the research system. Usually, citations are assumed to reflect the impact of the research or its quality. What is the justification for these assumptions, and how do citations relate to research quality? These and similar issues have been addressed through several decades of scientometric research. This article provides an overview of some of the main issues at stake, including theories of citation and the interpretation and validity of citations as performance measures. Research quality is a multidimensional concept, where plausibility/soundness, originality, scientific value, and societal value are commonly perceived as key characteristics. The article investigates how citations may relate to these various research quality dimensions. It is argued that citations reflect aspects related to scientific impact and relevance, although with important limitations. By contrast, there is no evidence that citations reflect other key dimensions of research quality. Hence, an increased use of citation indicators in research evaluation and funding may imply less attention to these other research quality dimensions, such as solidity/plausibility, originality, and societal value.

Full text: https://doi.org/10.1177/2158244019829575


Borlaug, S. B. & Gulbrandsen, M. (2018). Researcher identities and practices inside centres of excellence. Triple Helix 5(14):1–19.

Abstract:
Many science support mechanisms aim to combine excellent research with explicit expectations of societal impact. Temporary research centres such as ‘Centres of Excellence’ and ‘Centres of Excellence in Research and Innovation’ have become widespread. These centres are expected to produce research that creates future economic benefits and contributes to solving society’s challenges, but little is known about the researchers that inhabit such centres. In this paper, we ask how and to what extent centres affect individual researchers’ identity and scientific practice. Based on interviews with 33 researchers affiliated with 8 centres in Sweden and Norway, and on institutional logics as the analytical framework, we find 4 broad types of identities with corresponding practices. The extent to which individuals experience tensions depends upon the compatibility and centrality of the two institutional logics of excellence and innovation within the centre context. Engagement in innovation seems unproblematic and common in research-oriented centres where the centrality of the innovation logic is low, while individuals in centres devoted to both science and innovation in emerging fields of research, or with weak social ties to their partners, more frequently expressed tension and dissatisfaction.

Full text: https://doi.org/10.1186/s40604-018-0059-3


Franssen, T., Scholten, W., Hessels, L. K. & de Rijcke, S. (2018). The drawbacks of project funding for epistemic innovation: Comparing institutional affordances and constraints of different types of research funding. Minerva 56(1):11–33.

Abstract:
Over the past decades, science funding has shifted from recurrent block funding towards project funding mechanisms. However, our knowledge of how project funding arrangements influence the organizational and epistemic properties of research is limited. To study this relation, a bridge between science policy studies and science studies is necessary. Recent studies have analyzed the relation between the affordances and constraints of project grants and the epistemic properties of research. However, the potentially very different affordances and constraints of funding arrangements such as awards, prizes and fellowships have not yet been taken into account. Drawing on eight case studies of funding arrangements in high-performing Dutch research groups, this study compares the institutional affordances and constraints of prizes with those of project grants and their effects on the organizational and epistemic properties of research. We argue that the prize case studies diverge from project-funded research in three ways: (1) a more flexible use, and adaptation of use, of funds during the research process compared to project grants; (2) investments in the larger organization that have effects beyond the research project itself; and (3), closely related, greater deviation from epistemic and organizational standards. The increasing dominance of project funding arrangements in Western science systems is therefore argued to be problematic in light of epistemic and organizational innovation. Funding arrangements that offer funding without requiring scholars to submit a project proposal remain crucial for enabling researchers and research groups to deviate from epistemic and organizational standards.

Full text: https://doi.org/10.1007/s11024-017-9338-9


Aagaard, K. (2017). The evolution of a national research funding system: Transformative change through layering and displacement. Minerva 55(3):279–297.

Abstract:
This article outlines the evolution of a national research funding system over a timespan of more than 40 years and analyzes the development from a rather stable Humboldt-inspired floor funding model to a complex multi-tiered system where new mechanisms have continually been added on top of the system. Based on recent contributions to Historical Institutionalism, it is shown how layering and displacement processes have gradually changed the funding system along a number of dimensions and thus how a series of minor adjustments over time has led to a transformation of the system as a whole. The analysis also highlights the remarkable resistance of the traditional, academically oriented research council system towards restructuring. Due to this resistance, however, the political system has circumvented the research council system and implemented change through other channels of the funding system. For periods of time these strategies have marginalized the role of the councils.

Full text: https://doi.org/10.1007/s11024-017-9317-1


Aagaard, K. & Schneider, J. W. (2017). Some considerations about causes and effects in studies of performance-based research funding systems. Journal of Informetrics 11(3):923–926.

Full text: https://doi.org/10.1016/j.joi.2017.05.018


Giménez-Toledo, E., Mañana-Rodríguez, J. & Sivertsen, G. (2017). Scholarly book publishing: Its information sources for evaluation in the social sciences and humanities. Research Evaluation 26(2):91–101.

Abstract:
In the past decade, a number of initiatives have been taken to provide new sources of information on scholarly book publishing. Thomson Reuters (now Clarivate Analytics) has supplemented the Web of Science with a Book Citation Index (BCI), while Elsevier has extended Scopus to include books from a selection of scholarly publishers. More complete metadata on scholarly book publishing can be derived at the national level from non-commercial databases such as the Current Research Information System in Norway and the VIRTA publication information service (Higher Education Achievement Register, Finland), including the Finnish Publication Forum (JUFO) lists. The Spanish Scholarly Publishers Indicators provides survey-based information on the prestige, specialization profiles from metadata, and manuscript selection processes of national and international publishers that are particularly relevant for the social sciences and humanities (SSH). In the present work, the five information sources mentioned above are compared in a quantitative analysis identifying overlaps and uniqueness as well as differences in the degrees and profiles of coverage. In a second-stage analysis, the geographical origin of the university presses (UPs) is given particular focus. We find that the selection criteria differ strongly, ranging from a set of a priori criteria combined with expert-panel review in the case of the commercial databases, to in principle comprehensive coverage within a definition in the Nordic countries, and to an open survey methodology combined with metadata from the book industry database and questionnaires to publishers in Spain. Larger sets of distinct book publishers are found in the non-commercial databases, and greater geographical diversity is observable among the UPs in these information systems. While a more locally oriented set of publishers which are relevant to researchers in the SSH is present in the non-commercial databases, the commercial databases seem to focus on highly selective procedures by which the coverage concentrates on prestigious international publishers, mainly based in the USA or UK and serving the natural sciences, engineering, and medicine.

Full text: https://doi.org/10.1093/reseval/rvx007


Hammarfelt, B., de Rijcke, S. & Wouters, P. F. (2017). From eminent men to excellent universities: University rankings as calculative devices. Minerva 55(4):391–411.

Abstract:
Global university rankings have become increasingly important ‘calculative devices’ for assessing the ‘quality’ of higher education and research. Their ability to make characteristics of universities ‘calculable’ is here exemplified by the first proper university ranking ever, produced as early as 1910 by the American psychologist James McKeen Cattell. Our paper links the epistemological rationales behind the construction of this ranking to the sociopolitical context in which Cattell operated: an era in which psychology became institutionalized against the backdrop of the eugenics movement, and in which statistics of science became used to counter a perceived decline in ‘great men.’ Over time, however, the ‘eminent man,’ shaped foremost by heredity and upbringing, came to be replaced by the excellent university as the emblematic symbol of scientific and intellectual strength. We also show that Cattell’s ranking was generative of new forms of the social, traces of which can still be found today in the enactment of ‘excellence’ in global university rankings.

Full text: https://doi.org/10.1007/s11024-017-9329-x


Lavik, G. A. V. & Sivertsen, G. (2017). ERIH PLUS – Making the SSH visible, searchable and available. Procedia Computer Science 106:61–65.

Abstract:
The European Reference Index for the Humanities and the Social Sciences (ERIH PLUS) may provide national and institutional CRIS systems with a well-defined, standardized and dynamic register of scholarly journals and series in the social sciences and humanities. The register goes beyond the coverage of commercial indexing services to provide a basis for standardizing the bibliographic data and making them available and comparable across different CRIS systems. The aims and organization of the ERIH PLUS project are presented in this paper for the first time at an international conference.

Full text: https://doi.org/10.1016/j.procs.2017.03.035


Müller, R. & de Rijcke, S. (2017). Thinking with indicators. Exploring the epistemic impacts of academic performance indicators in the life sciences. Research Evaluation 26(3):157–168.

Abstract:
While quantitative performance indicators are widely used by organizations and individuals for evaluative purposes, little is known about their impacts on the epistemic processes of academic knowledge production. In this article we bring together three qualitative research projects undertaken in the Netherlands and Austria to contribute to filling this gap. The projects explored the role of performance metrics in the life sciences, and the interactions between institutional and disciplinary cultures of evaluating research in these fields. Our analytic perspective is focused on understanding how researchers themselves give value to research, and to what extent these practices are related to performance metrics. The article zooms in on three key moments in research processes to show how ‘thinking with indicators’ is becoming a central aspect of research activities themselves: (1) the planning and conception of research projects, (2) the social organization of research processes, and (3) determining the endpoints of research processes. Our findings demonstrate how the worth of research activities becomes increasingly assessed and defined by their potential to yield high value in quantitative terms. The analysis makes visible how certain norms and values related to performance metrics are stabilized as they become integrated into routine practices of knowledge production. Other norms and criteria for scientific quality, e.g. epistemic originality, long-term scientific progress, societal relevance, and social responsibility, receive less attention or become redefined through their relations to quantitative indicators. We understand this trend to be in tension with policy goals that seek to encourage innovative, societally relevant, and responsible research.

Full text: https://doi.org/10.1093/reseval/rvx023


Rushforth, A. & de Rijcke, S. (2017). Quality monitoring in transition: The challenge of evaluating translational research programs in academic biomedicine. Science and Public Policy 44(4):513–523.

Abstract:
While the efficacy of peer review for allocating institutional funding and benchmarking is often studied, not much is known about issues faced in peer review for organizational learning and advisory purposes. We build on this concern by analyzing the largely formative evaluation by external committees of new large, ‘translational’ research programs in a University Medical Center in the Netherlands. By drawing on insights from studies which report problems associated with evaluating and monitoring large, complex research programs, we report on the following tensions that emerged in our analysis: (1) the provision of self-evaluation information to committees and (2) the selection of appropriate committee members. Our article provides a timely insight into challenges facing organizational evaluations in public research systems where pushes toward ‘social’ accountability criteria and large cross-disciplinary research structures are intensifying. We end with suggestions about how the procedure might be improved.

Full text: https://doi.org/10.1093/scipol/scw078


Sivertsen, G. (2017). Unique, but still best practice? The Research Excellence Framework (REF) from an international perspective. Palgrave Communications 3.

Abstract:
Inspired by The Metric Tide report (2015) on the role of metrics in research assessment and management, and Lord Nicholas Stern’s report Building on Success and Learning from Experience (2016), which deals with criticisms of REF2014 and gives advice for a redesign of REF2021, this article discusses the possible implications for other countries. It also contributes to the discussion of the future of the REF by taking an international perspective. The article offers a framework for understanding differences in the motivations and designs of performance-based research funding systems (PRFS) across countries. It also shows that a basis for mutual learning among countries is more needed than a formulation of best practice, thereby both contributing to and correcting the international outlook in The Metric Tide report and its supplementary Literature Review.

Full text: https://doi.org/10.1057/palcomms.2017.78


Zhang, L., Rousseau, R. & Sivertsen, G. (2017). Science deserves to be judged by its contents, not by its wrapping: Revisiting Seglen’s work on journal impact and research evaluation. PLoS ONE 12(3): e0174205.

Abstract:
The scientific foundation for the criticism on the use of the Journal Impact Factor (JIF) in evaluations of individual researchers and their publications was laid between 1989 and 1997 in a series of articles by Per O. Seglen. His basic work has since influenced initiatives such as the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto for research metrics, and The Metric Tide review on the role of metrics in research assessment and management. Seglen studied the publications of only 16 senior biomedical scientists. We investigate whether Seglen’s main findings still hold when using the same methods for a much larger group of Norwegian biomedical scientists with more than 18,000 publications. Our results support and add new insights to Seglen’s basic work.

Full text: http://dx.doi.org/10.1371/journal.pone.0174205
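
Seglen's central point, echoed in the abstract above, is that the journal impact factor is a mean over a highly skewed citation distribution and therefore says little about individual articles. A minimal sketch with invented numbers illustrates the arithmetic of the standard two-year JIF (citations in one year to the journal's items from the two preceding years, divided by the number of citable items) and how far a typical article can sit from that mean:

```python
# Hypothetical two-year journal impact factor (JIF) versus the skewed citation
# distribution behind it; all counts below are invented.
from statistics import median

# Citations received in 2016 by the journal's citable items published in 2014-2015.
citations_2016 = {
    "article_01": 120,   # a couple of highly cited papers dominate the mean
    "article_02": 45,
    "article_03": 8,
    "article_04": 3,
    "article_05": 2,
    "article_06": 1,
    "article_07": 0,
    "article_08": 0,
    "article_09": 0,
    "article_10": 0,
}

jif = sum(citations_2016.values()) / len(citations_2016)  # mean citations per citable item
med = median(citations_2016.values())                     # a more typical article

print(f"JIF (mean citations per item) = {jif:.1f}")  # 17.9
print(f"Median citations per item     = {med}")      # 1.5, far below the JIF
```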


Piro, F. N., Aksnes, D. W. & Rørstad, K. (2016). How does prolific professors influence on the citation impact of their university departments? Scientometrics 107(3):941–961.

Abstract:
Professors and associate professors (“professors”) in full-time positions are key personnel in the scientific activity of university departments, both in conducting their own research and in their roles as project leaders and mentors to younger researchers. Typically, this group of personnel also contributes significantly to the publication output of the departments, although there are also major contributions from other staff (e.g. PhD students, postdocs, guest researchers, students and retired personnel). Scientific productivity is, however, very skewed at the level of individuals, also among professors, where a small fraction typically accounts for a large share of the publications. In this study, we investigate how the productivity profile of a department (i.e. the level of symmetrical/asymmetrical productivity among professors) influences the citation impact of the department. The main focus is on contributions made by the most productive professors. The findings imply that the impact of the most productive professors differs by scientific field and by the degree of productivity skewness of their departments. Nevertheless, the overall impact of the most productive professors on their departments’ citation impact is modest.

Full text: https://doi.org/10.1007/s11192-016-1900-y


Piro, F. N. & Sivertsen, G. (2016). How can differences in international university rankings be explained? Scientometrics 109(3):2263–2278.

Abstract:
University rankings typically present their results as league tables, with more emphasis on final scores and positions than on clarifying why the universities are ranked as they are. Finding out the latter is often not possible, because final scores are based on weighted indicators for which the raw data and their processing are not publicly available. In this study we use a sample of Scandinavian universities to explain what causes differences between them in the two most influential university rankings: Times Higher Education and the Shanghai Ranking. The results show that differences may be attributed both to small variations in indicators that we believe are not important and to substantial variations in indicators that we believe are important. The overall aim of this paper is to provide a methodology that can be used to understand universities’ different ranks in global university rankings.

Full text: https://doi.org/10.1007/s11192-016-2056-5
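
As a purely hypothetical illustration of why weighted composite scores are hard to unpack (the indicator names, weights and values below are invented and do not reproduce the methodology of Times Higher Education or the Shanghai Ranking), a final ranking score is essentially a weighted sum of normalized indicator values, so small shifts in a heavily weighted indicator can reorder the league table:

```python
# Hypothetical composite ranking score as a weighted sum of indicator values
# (assumed pre-normalized to a 0-100 scale); names, weights and scores are invented.
INDICATOR_WEIGHTS = {
    "teaching": 0.30,
    "research": 0.30,
    "citations": 0.30,
    "international_outlook": 0.10,
}

universities = {
    "University A": {"teaching": 55, "research": 70, "citations": 88, "international_outlook": 60},
    "University B": {"teaching": 60, "research": 65, "citations": 80, "international_outlook": 75},
}

def composite_score(scores: dict) -> float:
    """Weighted sum of the (pre-normalized) indicator scores."""
    return sum(weight * scores[name] for name, weight in INDICATOR_WEIGHTS.items())

# Rank universities by their composite score, highest first.
for university, scores in sorted(universities.items(), key=lambda kv: -composite_score(kv[1])):
    print(f"{university}: {composite_score(scores):.1f}")
```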