On Monday, I was at the LIS-Bibliometrics 10th anniversary conference, “The Future of Research Evaluation”.
Dr Steven Hill* and Prof James Wilsdon** highlighted the rise of AI in research evaluation as a major trend in their keynotes. This is particularly significant as they are two of the most prominent figures in research evaluation in the UK. They both cited the recent Research 4.0: Research in the age of automation interim report by Demos.
They pointed out how AI can already do or assist with research (e.g. discovering new antibiotics) and how AI could potentially revolutionise research evaluation. Regarding the latter, systems are being developed to:
- generate research grant applications and evaluate them
- spot relevant papers missed from citation lists
- identify reviewers
- identify journals to publish in
- undertake actual reviews of research papers, perhaps as a complement to human peer review
It was noted that AI could potentially reduce existing biases in some of these processes, but that it could equally introduce new biases or entrench existing ones. There are also numerous issues with the transparency of AI.
The responsible use of metrics may well need to cover the responsible use of AI in future, and the UK Forum for Responsible Research Metrics is already looking into this.
Please see Steven Hill's post about his talk (link to slides included) for more information. I will share James Wilsdon's slides when they become available.
*Director of Research at Research England, chair of the steering group for the 2021 Research Excellence Framework (REF)
** Digital Science Professor of Research Policy in the University of Sheffield's Department of Politics, Director of the Research on Research Institute, and chair of the independent review of the role of metrics in the management of the research system, which resulted in The Metric Tide report