This will be my final post that picks up a theme from CAA 2011, but the potential implications of this one are massive. For the past few weeks I have been trying to get my head around the significance of the ideas I was introduced to by John Kleeman’s presentation ‘Recent cognitive psychology research shows strongly that quizzes help retain learning’. I’m ashamed to admit that the ideas John was presenting were mostly new to me. They chime with a lot of what we do at the UK Open University in encouraging students to learn actively, but they go further. Thank you John!
John has written more on the Questionmark blog – go to http://blog.questionmark.com/ and search for ‘Roediger’; much of the recent work has been done by Professor Roddy Roediger and his team at Washington University in St Louis (for a complete bibliography of Professor Roediger’s work see http://psych.wustl.edu/memory/publications/). I’ve done some reading around the subject, but still have a lot more to do, so what follows is just a summary of the points that have struck me so far:
Retrieval practice, i.e. practising retrieving something (as in an online test), aids long-term retention more than further studying does.
Thus, the assumption that underpins much summative testing – that testing is ‘neutral’, measuring a student’s learning without affecting that learning – is false.
This is not a newly discovered effect. I’ve found references to the work of Gates (1917), Jones (1923) and Spitzer (1939). And apparently Aristotle said ‘Exercise in repeatedly recalling a thing strengthens the memory’.
The effect has been found both in the psychological laboratory (in impressively controlled experiments) and in the classroom, and for students of all ages.
The effect persists even after allowing for the fact that testing provides extra exposure to course content, and it applies to the learning of concepts as well as facts; indeed, testing appears to be more effective than getting students to do a concept-mapping exercise after studying (Karpicke and Blunt, 2011). The effect also persists when the questions asked in the final test are different (in both format and content) from the questions used in the intermediate test. And although early work only looked at the impact on retention a few hours or days after the intermediate test, more recent work has found the same effect over a much longer time period (McDaniel et al, 2007).
In general, it appears that short-answer tests are more effective than multiple-choice tests, and that providing feedback is helpful. Several authors stress the importance of feedback as a means of stopping students from remembering distractors (delightfully called ‘lures’ by American authors) in multiple-choice questions. However, the findings here are slightly confusing – it seems that feedback is actually more useful when used with short-answer questions (perhaps because students are less likely to get them right), and one recent paper (Butler & Roediger, 2007) reports that feedback made no difference. Another paper (Kang, McDermott & Roediger, 2007) appears to report that, without feedback, multiple-choice questions are at least as effective as short-answer questions.
The reasons behind the testing effect are also a bit confusing, at least for a non-psychologist like me. It appears that it’s the effort that goes into retrieval that matters (which is why short-answer questions are more effective than multiple-choice questions) – a concept of ‘desirable difficulty’ is introduced. But I struggle a bit to see how the role of feedback (where ‘feedback’ here seems to mean giving the correct answer) fits into this argument.
So I’m still thinking through the details – and work is ongoing, e.g. on the optimum number of intermediate tests and the spacing between them. But the basic effect and its implications for educational practice are mind-blowing, and some of the less clear-cut areas may have links to other aspects of my work (e.g. the use of short-answer free-text e-assessment questions, and investigation into what students actually do with the feedback we provide). It’s exciting stuff.
Since this is my final reflection on CAA 2011, I’ll close with a photograph of me looking rather intense at the conference, alongside Matt Haigh, John Dermo, Simon Cross and others. Until next year!