How to assess researchers on quality not quantity

23 May 2022 | Story Lyn Horn and Lex Bouter. Photo UCT. Read time 4 min.

How do you assess academic researchers for promotion or funding? This question has become ever more central in higher education settings since the 1980s, which saw substantial growth in investment in research. That growth significantly increased the number of researchers in the academic workforce, and with it the need to assess their output for employment, promotion and other career advancement.

One response to the need to “scale up” researcher assessments was to introduce publication metrics. These range from simple counts of publications and citations to more complex measures such as the Hirsch index and the journal impact factor. Metrics allowed for relatively easy assessment and comparison of researchers’ careers, and were seen as both more objective and less time-consuming than traditional assessments, in which narrative bio sketches were peer reviewed subjectively.
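
Even the “more complex” measures are mechanical. The Hirsch index (h-index), for example, is the largest number h such that a researcher has h papers each cited at least h times. A minimal sketch of that calculation, given only a list of per-paper citation counts, might look like this:

```python
# Minimal, illustrative sketch: compute an h-index from per-paper citation counts.
# The h-index is the largest h such that h papers each have at least h citations.

def h_index(citations: list[int]) -> int:
    """Return the Hirsch index for a list of per-paper citation counts."""
    ranked = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers each have at least `rank` citations
        else:
            break
    return h


# Example: papers cited 10, 8, 5, 3 and 1 times give an h-index of 3,
# because three papers have at least 3 citations each, but not four with at least 4.
print(h_index([10, 8, 5, 3, 1]))  # -> 3
```

The journal impact factor is similarly a simple ratio: roughly, citations in one year to the articles a journal published in the two preceding years, divided by the number of those articles. That simplicity is exactly what made such metrics attractive for large-scale assessment.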

 

But it’s now widely accepted that the metrics approach to assessment can negatively affect the research system and research outputs. It values quantity over quality and creates perverse incentives that easily lead to questionable research practices. Relying too much on metrics has led researchers to engage in practices that reduce the trust in, and quality of, research. These include “salami slicing” (spreading the results of a single study over as many publications as possible) and selective reporting.

The pressure to publish also makes researchers vulnerable to predatory journals. Because so much importance is attached to having many publications and many citations, the pressure to cut corners is high. This can lead to low-quality, flawed research that typically overstates effects and downplays limitations. When the findings of such research are implemented, harm is done to patients, society or the environment.

Researcher assessment criteria and practices need to be overhauled. We believe the best way to do this is by using the Hong Kong Principles for Assessing Researchers, which emerged from the 6th World Conference on Research Integrity in 2019. The principles were developed to reinforce the need to reward researchers for practices that promote trustworthy research. “Trustworthy research” is relevant and valid, and is done in a transparent and accountable way, without researchers being distracted by other interests.

 

These principles move beyond merely questioning the use of research metrics for assessment. Instead, they offer alternative indicators to assess researchers and reward desirable behaviour. The idea is to foster research integrity and the responsible conduct of research.

We believe they should be widely adopted. But there are gaps that must be addressed to ensure that the principles don’t leave institutions in the global south, including those in Africa, out in the cold.

A possible way forward

The Hong Kong Principles and similar initiatives are gaining traction and changing researcher assessment in many countries and institutions worldwide.

The principles are:

  • Assess researchers on responsible practices from conception to delivery. That includes the development of the research idea, research design, methodology, execution and effective dissemination.

  • Value the accurate and transparent reporting of all research, regardless of the results.

  • Value the practices of open science (open research) such as open methods, materials and data.

  • Value a broad range of research and scholarship, such as replication, innovation, translation and synthesis.

  • Value a range of other contributions to responsible research and scholarly activity, such as peer review for grants and publications, mentoring, outreach and knowledge exchange.

The principles also include a strong focus on practical implementation, with an understanding that this is not a straightforward process. They call for the sharing of practices around implementation.

The challenge of implementation

The movement to change the way researchers are measured should undoubtedly be embraced. But it’s important this be done in a way that doesn’t leave poorly resourced institutions in the global south behind. Even for researchers in the global north, the sorts of new expectations contained in the principles can be frustrating, because they require additional time and resources.

The most obvious example of this is Principle Three: value the practices of open science. A researcher cannot do this alone. They need to be supported by adequate infrastructure, skills, funding, and even discipline-specific training to ensure their data are published in a way that is FAIR (findable, accessible, interoperable and reusable). There are some initiatives in Africa to build this kind of infrastructure and skills. But this demand may prove an insurmountable challenge for many African researchers.

African institutions often have a shortage of skilled research management staff to support researchers and ensure their research practices remain in line with international trends. This means researchers from under-resourced institutions may risk losing opportunities as their institutions fail to keep up with changing international demands.

International funding body Wellcome, for instance, has stated that all the institutions it funds must publicly commit to responsible and fair research assessment by signing up to the San Francisco Declaration on Research Assessment, the Leiden Manifesto or an equivalent. Researchers and organisations that do not comply with this policy will be subject to sanctions, including not having new grant applications accepted or having their funding suspended.

 

African researchers may join international collaborations because they see this as important for their own careers and for accessing the funding needed to address important questions within the communities in which they work. Funders and research team leaders from wealthier countries must ensure that the research systems needed to support, recognise and adequately acknowledge collaborators from less resourced places are in place. If they are not, capacity development must be funded and implemented as needed.

A balance

This issue will be among those tabled at the 7th World Conference on Research Integrity in Cape Town, South Africa, from 29 May to 1 June. Its theme, Fostering Research Integrity in an Unequal World, offers an ideal opportunity to discuss how best to balance the necessity of changing research assessment practices with the risk to poorer institutions and less resourced researchers. A special symposium will be dedicated to the implementation of the Hong Kong Principles in an African context.

Lyn Horn, Director, Office of Research Integrity, University of Cape Town, and Lex Bouter, Professor of Methodology and Integrity, Vrije Universiteit Amsterdam.

This article was published in The Conversation, a collaboration between editors and academics to provide informed news analysis and commentary.

 







 

 




 