Responsible Metrics in the Assessment of Research

University of Leeds Responsible Research Metrics Statement

The assessment of the quality and impact of research is a normal component within the research activity cycle. Assessments are used by universities and funders of research to guide a wide range of decisions, some of which can have a bearing on how researchers' careers develop over time.

The University is fully committed to the use of peer review and expert judgement as the central component in the assessment of research outputs and wider research performance. However, it recognises the supporting role that quantitative metrics can play in research assessment.

Quantitative metrics relating to research activities and outputs include (but are not restricted to): bibliometrics (citation analysis); grant application, grant award, and grant income data; offers to give invited, plenary, or keynote addresses at conferences; and evidence for, and success in, postgraduate development and supervision.

In the evaluation of its research activities, the University of Leeds supports the use of “responsible metrics” as defined in the HEFCE-commissioned report “The Metric Tide” (2015)[1]. The report defines responsible metrics as an approach to “framing appropriate uses of quantitative indicators in the governance, management and assessment of research”. In addition, the Leiden Manifesto for Research Metrics[2] presents ten principles which largely correspond to those outlined in The Metric Tide.

Guiding principles

The University of Leeds endorses principles from these reports and encourages their practical application:

  1. Expert Judgement. Quantitative measures can support, but not replace, the evaluation and critique provided through expert judgement (a judgement based upon a specific set of criteria and/or expertise acquired in a specific knowledge area, discipline or industry).
  2. Robustness. All data should be robust (i.e. accurate, and of an appropriate scale and scope).
  3. Transparency. Data collection and analytical processes should be fully open and transparent so that those being evaluated can test and verify the results. This should include sharing details of the metrics themselves and their source.
  4. Appropriateness. The use of particular metrics may be inappropriate in the assessment of different types of research output; research management in Schools and Faculties is therefore best placed to determine where and when the use of metrics is appropriate and can add value to expert judgement.
  5. Diversity. We support the use of a range of indicators to reflect and take account of discipline, interdisciplinarity, career stage, full-/part-time status and any other relevant factors, including gender and ethnicity. This will support a diverse and inclusive research community.
  6. Variety. Variations between disciplines should be acknowledged by using normalised or field-weighted data; a hypothetical worked sketch of field normalisation follows this list.
  7. Relevance. Indicators and their associated evaluation methods should be reviewed and updated regularly, recognising the systemic effects of indicators.
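
As an illustration of the Variety principle, field-weighted indicators divide an output’s raw citation count by the citation rate expected for outputs of the same field, document type and publication year. The sketch below is a minimal, hypothetical Python example: the field baselines and citation counts are invented for illustration only, and in practice expected values would be drawn from a bibliometric data source.

    # A minimal sketch of field normalisation. All numbers are invented;
    # real baselines (mean citations for outputs of the same field, type
    # and year) would come from a bibliometric database.
    EXPECTED_CITATIONS = {
        "cell biology": 24.0,      # hypothetical high-citation field
        "pure mathematics": 3.0,   # hypothetical low-citation field
    }

    def field_weighted_impact(citations: int, field: str) -> float:
        """Raw citations divided by the field's expected citation rate.

        1.0 means the output is cited at the average rate for its field;
        2.0 means twice the field average.
        """
        return citations / EXPECTED_CITATIONS[field]

    # Two outputs with identical raw counts differ once normalised:
    print(field_weighted_impact(12, "cell biology"))      # 0.5, below average
    print(field_weighted_impact(12, "pure mathematics"))  # 4.0, above average

This is why raw citation counts should not be compared across disciplines: the same count of twelve citations signals below-average attention in one field and well-above-average attention in another.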

Applications of Responsible Metrics

Staff Recruitment, Performance Assessment, Probation and Promotion

The use of research metrics in researcher assessment should adhere to the seven guiding principles above. It should be an open process, declared in advance, that enables those evaluated to verify the data and analysis. Assessment of individual researchers will be based on a qualitative judgement of their portfolio, including their outputs, impact and wider contribution to their field. When research metrics are used, they will be normalised to account for variation by field in publication and citation practices. For more information, see the Sway presentation Using metrics responsibly: recruitment and promotion panels.

Where to publish

Choosing where to publish should involve consideration of many factors, including the aim and scope of the publication, the readership, the rigour of the peer-review process, compliance with University and funder open access policies, and the reach of the publication. Quantitative indicators can be used to supplement these factors. However, journal-based metrics, such as the Journal Impact Factor (JIF) or equivalent, should not be a deciding factor in choosing where to publish.

Research Outputs

High-quality research outputs are central to the University’s vision and to the career development of our researchers.

The University recognises that article-level metrics are more appropriate than journal-level metrics for informing the assessment of individual outputs, and consequently it will not use the Journal Impact Factor as an indicator of output quality. Article-level citation counts can inform the peer-review assessment of output quality, but all such indicators will be normalised to account for variation by field in publication and citation practices.

Postgraduate Researchers and Research Income

When the University uses metrics based on the volume of research income and numbers of postgraduate researchers supervised by staff, it will normalise these metrics to account for variations between disciplines and career stages.

Context and Implementation

The University of Leeds endorses the Leiden Manifesto’s emphasis on measuring performance against the research mission of the university and the importance of protecting excellence in locally valued and relevant research. In addition, the University is a signatory to the San Francisco Declaration on Research Assessment (2012), known as DORA[3], which focuses particularly on the use of Journal Impact Factors and other journal-based metrics. DORA’s key tenet is to “not use journal-based metrics, such as Journal Impact Factors (JIFs), as surrogate measures of the quality of individual research articles, to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”.

We support DORA’s encouragement to consider all forms of research output in the assessment of any research project, in order to support the best decision-making. We support the view that the value of any research resides in the knowledge it generates, not in publication metrics or the identity of the journal in which it is published. We recognise that, regardless of a journal’s impact factor, the choice of where to publish (subject to funders’ open access requirements) is best determined by the researcher themselves, and that this is central to preserving the freedom of researchers.

Faculties, Research Groups, and Schools at the University of Leeds are encouraged to develop local, more detailed policies that are consistent with the institutional position outlined in this statement and make these widely known to staff.

University of Leeds Responsible Research Metrics Group

September 2021

More information and support

Responsible Research Metrics: reporting issues and concerns

References

[1] The Metric Tide: Report of the Independent Review of the Role of Metrics in Research Assessment and Management (2015)

[2] Bibliometrics: The Leiden Manifesto for research metrics

[3] San Francisco Declaration on Research Assessment