Riffing again on the REF Consultation

Professor Richard Holliman, The Open University.

The Research Excellence Framework is an exercise in identifying and rewarding excellence in research. It is, of course, also about resource allocation and therefore longer-term planning for research.

Hence, whether we like it or not, REF 2021 (like research assessments before it) will result in cultural and organisational changes in UK universities. For those who do well, REF 2021 will lead to changes, effects and (we hope) benefits to the ways these UK universities, Units of Assessment (UoAs) and the researchers working for them conduct research, how they engage with non-academic beneficiaries, and how they derive social and/or economic impacts from that research. For those who do badly, research will have to be funded from sources other than quality-related (QR) funding; either that, or this work could be de-prioritised.

Research Excellence Framework – REF 2021.

Given the power of the REF to shape research priorities, it is important that the assessment system is equitable, and that the guidance promotes rigour, fairness, transparency and consistency. Although it doesn’t specifically say so in the documentation, it seems reasonable to assume that the current REF 2021 Consultation is an attempt to promote these principles.

It’s fair to note up front, therefore, that this post is prompted by some significant concerns about the current guidance and what it could mean, in particular, for the research impact agenda.

My principal concern in what follows is that the REF should not be about ‘boundary work’: setting up de facto restrictive practices prior to the assessment process that unfairly favour one set of impact-generating practices over another.

In the current version of the consultation documents, considerable progress has been made in relation to the areas, types and indicators of impact (Draft Panel Criteria and Working Methods; Annex A). As I argued in a recent post, this inclusive approach is commendable.

In adopting an inclusive approach, researchers are left with a realistic prospect of selecting their strongest evidence of impact. So far, so good on the impact front, notwithstanding a need to address impacts derived from public engagement in a more consistent way, an issue on which the NCCPE is currently working constructively.

The problems start to emerge when there is evidence of divergence between the main panels. I argue that divergence in the guidance has the potential to lead to an uneven playing field for the assessment of impact. The danger is that this divergence results in further unintended consequences for different approaches to impactful research.

In effect, the examples of divergence have the potential to result in restrictive practices (e.g. through de-selection of existing research practices where QR funding is not secured) within the following areas: 1) the ways that researchers evidence social and/or economic impacts derived from research; 2) whether and how testimonials are deemed to be suitable evidence; and 3) whether continuing case studies (i.e. those previously submitted to REF 2014) will be judged to have equal value to new case studies.

The question is whether divergence between the panels on these issues is justified. (Spoiler alert: I don’t think it is justified.)

Underpinning evidence of impact
One of the arguments put forward in defence of Panel A’s divergence from the other main panels on the issue of underpinning evidence for impact is that the research submitted to this panel follows a homogeneous epistemological tradition, i.e. positivism, supported by quantitative evidence.

I’m not convinced this is a fair representation of the totality of the potential diversity in high-quality research that could be submitted to Panel A. Further, this approach underplays the potential for novel approaches to emerge and generate evidence of high-quality research.

But these are arguments for another day; my main concern here is the impact agenda. Panel A’s approach is really problematic for me when we consider the evidence that could underpin high-quality impact derived from research.

In terms of the impact agenda, Panel A’s position assumes that the underpinning epistemology that generates the research will automatically be the same as that which generates impact. This is not a given; REF 2021 should not make it a requirement. If nothing else, it could stifle creativity, limit partnership-working with particular publics who do not value positivist approaches, and hinder interdisciplinarity (the latter counter to the spirit of the Stern Review).

Public Engagement: Attitudes, Culture and Ethos (STFC, 2016).

We know that researchers likely to submit to Panel B are struggling to come to terms with some aspects of change in relation to the impact agenda (STFC, 2016). Are Panel A researchers also struggling? Anecdotally, I’ve found that many of them are. Of course, it would be great to have robust evidence of this. Several years ago, I advised on the early stages of a project to explore these issues; the MRC were interested, but unfortunately the project stalled.

Researchers need continuing support to change their practices in relation to research impact. They should be encouraged to plan upstream for impact and in nuanced ways that include relevant publics in meaningful ways (e.g. Holliman et al. 2017; 2015; Holliman and Davies, 2015). The ways that impacts could be evidenced should be part of this upstream planning. Restricting these approaches to the generation of quantitative evidence of impact is unnecessarily limiting.

As a further illustration, I attended a REF Consultation event hosted by the NCCPE last Friday where we discussed how public engagement could and should be evidenced. We discussed a number of relevant issues in relation to the REF consultation. One of these, following a discussion about how rigour might be assessed, focused on the need for robust evidence to underpin claims of impact.

The question that follows is, “What counts as robust evidence?” And, related to this, is the HE sector’s understanding of the epistemology of impact sufficiently mature to answer this question with any confidence?

I don’t think the HE sector has reached this level of maturity. The solution, therefore, is to allow researchers to submit what they consider to be their strongest evidence. It then falls to the panels to ensure that they have access to sufficient expertise to assess different forms of evidence, and to make these assessments without fear or favour.

To fail to do this could result in high-quality work being omitted from REF 2021 (a form of self-censorship based on a risk assessment: “The Panel does not value qualitative evidence.”), or in panels overlooking the quality of more diverse approaches to evidencing impact. In the longer term, the danger is that excellent impactful research that doesn’t generate quantitative evidence could be de-prioritised. Alternatively, researchers could look to ‘game the system’, submitting case studies to panels where the evidence of impact is most likely to be accepted as ‘robust’.

The solution: The requirement to provide underpinning evidence of impact should apply consistently to all panels. Panel C offers clear guidance, whilst Panel D offers the most inclusive approach; combining the two would offer a clear and inclusive approach. Assessors should then be recruited with different forms of expertise to ensure that all types of evidence can be assessed fairly and equitably.

The use (or not) of testimonials
To allow, or not to allow, testimonials: that is one question. The answer is clear. Testimonials should be allowed, but with clear guidance about the types of evidence that demonstrate high-quality reach and significance.

Hence, a further question relates to how testimonials could offer evidence of high-quality impact. Each of the main panels takes a slightly different approach to this issue. The guidance on this issue should be simplified and standardised across all the panels.

The solution: An agreement should be reached across all the panels that testimonials are allowed, but that they should meet standardised conditions to score well, e.g. evidence over opinion; testimonials from non-academic members of research teams should be allowed; researchers should state who the testimonial represents; conflicts of interest should be declared; and evidence should only inform grades when it is included in the narrative.

Continuing case studies
Panel A don’t like the idea of continuing case studies, i.e. the submission of impact case studies from REF 2014 to REF 2021. Why? To count, the evidence of the impacts clearly has to fall into different census periods. In short, panels will not be assessing the same evidence of impact. That’s the key issue here. It should be the only issue.

True, you could argue that a UoA could be confident that a REF 2014 Impact Case Study that was awarded a classification (1*-4*) under that assessment guidance would have underpinning research (at 2* or better) in place. But a UoA could call on underpinning research that was classified as 2* or better from RAE 2001 (publications with a date stamp of 2000 onwards), RAE 2008 or REF 2014 and be confident that impacts derived from it in the REF 2021 census period would receive a classification (1*-4*). This doesn’t affect the assessment of the impacts, which would be new for REF 2021.

Let’s assume that the guidance doesn’t change following the completion of the consultation. It could result in two damaging outcomes. The first is that UoAs will seek to ‘game the system’ by blurring the distinction between what’s new for REF 2021 and what falls under REF 2014; there’s a danger that the distinction between ‘new’ and ‘continuing’ becomes a minefield of inconsistency for panels.

The other damaging possibility is that UoAs will de-select continuing case studies and de-prioritise high-quality impactful work going forwards. This runs counter to the best principles of engagement, i.e. that partnership working over time, where it is desirable/appropriate for the participants, can result in productive collaborations. For the avoidance of doubt, I’m not suggesting that continuing case studies should be privileged in REF 2021, just that they should be assessed on a level playing field with new impact case studies.

This is a solution in search of a problem. Assess the evidence within the REF 2021 impact census period.

The solution: The submission of continuing case studies should be consistent across all panels, i.e. Panel A should adopt the position of Panels B, C and D on this issue.

Reviewing submissions to the REF 2021 consultation
As a final point, whilst I applaud Research England for their consultative approach, I have some concerns about the ways that panels will review and respond to comments on the draft guidance.

The following paragraph describes how the responses to the consultation will be addressed:

“We will commit to read, record and analyse responses to this consultation in a consistent manner. For reasons of practicality, usually a fair and balanced summary of responses rather than the individual responses themselves will inform any decision made. In most cases the merit of the arguments made is likely to be given more weight than the number of times the same point is made. Responses from organisations or representative bodies with high interest in the area under consultation, or likelihood of being affected most by the proposals, are likely to carry more weight than those with little or none.”

This paragraph has been itching away in my brain since it was published. What are the implications of this approach in terms of which voices are heard in the consultation?

The solution: Panels should respond favourably to changes that improve on the current guidance, e.g. in terms of improving the rigour, consistency, fairness and transparency of the system.