The recently published edition of Widening Participation and Lifelong Learning (WPLL) draws our attention to some of the dilemmas and challenges facing providers of higher education in responding to the drive by policy-makers (namely the Office for Students) for evidence-based practice. Articles by Thiele and Johnson present a mix of qualitative and quantitative approaches to evaluating the impact of widening participation initiatives. Johnson’s approach is purely qualitative, using narrative and exploratory research to provide an in-depth understanding of the lives of mature learners from disadvantaged backgrounds as they progress from a Foundation programme onto undergraduate study. Thiele offers a mixed-methods approach, combining quantitative analysis of institutional data to inform practice with qualitative approaches, through interviews, to assess the impact of subsequent interventions. Combinations of these approaches can be found in much of the widening participation literature, in an attempt to provide the evidence of the impact of widening participation spend so eagerly sought by the Office for Students. But do these approaches go far enough? Are they robust enough to provide any meaningful insight into what works and what doesn’t?
Apparently not. There appears to be a drive by policy-makers and some practitioners towards the use of randomised controlled trials (RCTs), as presented in Sanders’ article in the recent edition of WPLL. RCTs adopt a more scientific approach to evaluation, and their use in the evaluation of widening participation initiatives seeks to provide the hard evidence that is seemingly lacking from current approaches.
Whilst attempts to overcome the challenges that evaluation officers face should be welcomed, the trouble with RCTs is that they do not factor in the uniqueness of individuals and their different lived experiences. Trying to control for these other influences is therefore problematic, and any reported impact of interventions must be seen within this context. There are also ethical dilemmas in approaches that seek to ‘treat’ an individual or group of individuals whilst excluding others with similar characteristics, in order that impact can be measured. I wonder how it feels to be part of the group who are not being ‘treated’, and how it feels to be ‘treated’ as if suffering from some sort of illness. Presenting the use of RCTs in this way does nothing to address the issues of deficit models which the sector, seemingly, has been working hard to redress.
Irrespective of the approach adopted, colleagues involved in evaluating the impact of widening participation spend need to ask the question: What are we trying to measure? Is it, for instance…
• the impact of the intervention on students submitting a UCAS application;
• the impact in terms of raising aspirations to attend university;
• the impact of the intervention to raise attainment;
• the impact on an individual’s sense of self-esteem, self-worth and self-confidence;
• an evaluation of the process of delivering the intervention;
• a combination of all of these?
Answering these questions may help providers to identify the most appropriate approach to evaluation, but isolating those involved from any external influence is always going to be problematic, given the inter-connected social context within which individuals live. This is likely to be more apparent for students identified as coming from disadvantaged backgrounds, who face additional challenges likely to affect their engagement with education and, possibly, their future plans. It is particularly relevant to interventions aimed at adult learners who, as Butcher (2015) suggests, are more likely to study part-time and face additional pressures in terms of financial, caring and work responsibilities.
Can RCTs realistically deliver in response to the drive for evidence-based practice? I doubt it, given the complex world in which we live and the influences to which we all, as individuals, are exposed.