I’m participating in the ‘activity week’ of the JISC online conference ‘Innovating e-Learning: shaping the future’ and yesterday I caught just 30 minutes of an incredibly interesting session on ‘Analysing feedback’. I have just watched the recording of the rest of it, but wish I’d been able to participate fully.
The session was led by Gwyneth Hughes (Institute of Education) and Rola Ajjawi and Karen Barton (Centre for Medical Education, University of Dundee) and described two tools for analysing and auditing feedback that have been developed and used at their respective universities as part of the JISC Assessment and Feedback Programme. The projects are the Assessment Careers Project (IOE) and InterACT (Dundee). The tools developed differ between the two, but the findings (that most feedback relates to the current task rather than being ipsative or forward-looking in nature) were depressingly similar. Similar, too, to what was found in the OU/Sheffield Hallam FAST (Formative Assessment in Science Teaching) Project. The FAST Project’s legacy website is at http://www.open.ac.uk/fast/ if anyone is interested – it was around the time of the FAST Project that we started to talk about the need for ‘feed-forward’.
I think audits of feedback practice are useful but, much like feedback interventions themselves, they are only useful if they make a difference to practice. In this case, that means getting tutors to realise that their feedback may not be having quite the impact they think it is. Anyone for self-audit or peer-audit? And I like the idea, raised by someone else, that we should look at individual students’ ‘feedback careers’ as they pass from tutor to tutor, each with a different approach.