The testing effect

This will be my final post that picks up a theme from CAA 2011, but the potential implications of this one are massive. For the past few weeks I have been trying to get my head around the significance of the ideas I was introduced to by John Kleeman’s presentation ‘Recent cognitive psychology research shows strongly that quizzes help retain learning’. I’m ashamed to admit that the ideas John was presenting were mostly new to me. They echo a lot of what we do at the UK Open University in encouraging students to learn actively, but they go further. Thank you, John!

John has written more on the Questionmark blog – go to http://blog.questionmark.com/ and search for ‘Roediger’; much of the recent work has been done by Professor Roddy Roediger and his team at Washington University in St Louis (for a complete bibliography of Professor Roediger’s work see http://psych.wustl.edu/memory/publications/). I’ve done some reading around the subject, but still have a lot more to do, so what follows is just a summary of the points that have struck me so far:

Retrieval practice, i.e. practising retrieving something (as in an online test), aids long-term retention more than further study.

Thus, the assumption that underpins much summative testing, that testing is ‘neutral’ (measuring a student’s learning without affecting that learning), is false.

This is not a newly discovered effect. I’ve found references to the work of Gates (1917), Jones (1923) and Spitzer (1939). And apparently Aristotle said ‘Exercise in repeatedly recalling a thing strengthens the memory’.

The effect has been found both in the psychological laboratory (in impressively controlled experiments) and in the classroom, and for students of all ages.

The effect persists even after allowing for the fact that testing provides extra exposure to course content, and it applies to the learning of concepts as well as facts; retrieval practice appears to be more effective than getting students to do a concept-mapping exercise after studying (Karpicke and Blunt, 2011). The effect also persists when the questions asked in the final test are different (in both format and content) from the questions used in the intermediate test. And although early work only looked at the impact on retention a few hours or days after the intermediate test, more recent work has found the same effect after a much longer time period (McDaniel et al., 2007).

In general, it appears that short answer tests are more effective than multiple choice tests, and that providing feedback is helpful. Several authors stress the importance of feedback as a means of stopping students from remembering distractors (delightfully called ‘lures’ by American authors) in multiple choice questions. However, the findings here are slightly confusing: it seems that feedback is actually more useful with short answer questions (perhaps because students are less likely to get them right), and one recent paper (Butler & Roediger, 2007) reports that feedback made no difference. Another paper (Kang, McDermott & Roediger, 2007) appears to report that, without feedback, multiple choice questions are at least as effective as short answer questions.

The reasons behind the testing effect are also a bit confusing, at least for a non-psychologist like me. It appears that it’s the effort that goes into retrieval that matters (which is why short answer questions are more effective than multiple choice questions) – a concept of ‘desirable difficulty’ is introduced. But I struggle a bit to see how the role of feedback (where ‘feedback’ seems to mean giving the correct answer) fits into this argument.

So I’m still thinking through the details – and work is ongoing, e.g. looking at the optimum number of intermediate tests and the spacing between them. But the basic effect and its implications for educational practice are mind-blowing, and some of the less clear-cut areas may have links to other aspects of my work (e.g. the use of short answer free-text e-assessment questions, and the investigation into what students actually do with the feedback we provide). It’s exciting stuff.

Since this is my final reflection on CAA 2011, I’ll close with a photograph of me looking rather intense at the conference, with Matt Haigh, John Dermo, Simon Cross and others. Until next year!


6 Responses to The testing effect

  1. Sander says:

    Great post again, Sally! Next post about the spacing effect? John Kleeman mentions that too in his blog posts about Professor Roediger’s work. I found this article about the spacing effect very interesting. It could have implications for the best way to set up formative tests.

  2. Sally Jordan says:

    Thanks Sander – the article you refer to is great! The implications of all of this are massive – perhaps we’ll be able to ‘do better’ at last.

  3. John Kleeman says:

    Sally

    Good article. I agree that this is massive and startling. Thank you for sharing.

    I suspect Sander has it right on feedback, that the prime benefit there is due to spacing – seeing the information again after a time – as well as correcting misconceptions.

    John Kleeman

  4. Hi Sally,

    Just wanted to clarify the role of feedback in learning from tests. Practicing retrieval (i.e. taking a test) without feedback tends to produce better retention than re-studying the information for an equivalent amount of time. However, the critical mechanism is successful retrieval – if the test is too difficult and students fail to retrieve the information, they will not benefit from taking the test. In such instances, feedback is critical to ensuring the benefits of testing.

    Kang, McDermott, and Roediger (2007) show that multiple-choice testing can yield better retention than short answer testing when performance on the short answer test is very poor (multiple-choice tends to be easier than short answer). However, this pattern is reversed when feedback is provided, because students have recourse to recover the information that they had forgotten. Overall, the main conclusion is that testing without feedback improves retention, but providing feedback further boosts retention because students can recover information that they forgot and correct errors.

    The finding that feedback did not help in the Butler & Roediger (2007) study is somewhat of an anomaly, because many other studies show that feedback is highly effective. We speculate in the article about why this odd result might have occurred, but I do not have a good explanation. Probably best to just ignore it. If you want more information about feedback, I have some articles on my website that contain good references to the key works in the feedback literature.

    Thanks for blogging about this research!

    -Andrew Butler
    http://duke.edu/~ab259/index.html

  5. Sally Jordan says:

    Thank you John and Andrew for these very useful comments. Many thanks for taking the time to respond.

    Andrew – I will certainly take a look at the references you suggest.

    best wishes

    Sally

  6. Pingback: e-assessment (f)or learning » Blog Archive » Quote of the day
