I’m indebted to Silvester Draaijer for leading me towards an interesting article:
Funk, S.C. & Dickson, K.L. (2011) Multiple-choice and short-answer exam performance in a college classroom. Teaching of Psychology, 38 (4), 273-277.
The authors used exactly the same questions in multiple-choice and short-answer free-text response format – except (obviously) the short-answer questions did not provide answer choices. Fifty students in an ‘introduction to personality’ psychology class attempted both versions of each question, with half the students completing a 10-question short-answer pre-test before a 50-question multiple-choice exam and half completing the 10 short-answer questions as a post-test after the multiple-choice exam. The experiment was run twice (‘Exam 2’ and ‘Exam 3’, where students didn’t know what format to expect in Exam 2, but did in Exam 3).
In each case the performance on multiple-choice items was significantly higher (p<0.001) than performance on the same items in the short-answer test. The mean scores are given below:
Note that I’ve reported the data exactly as published even though I think they have given too many significant figures!
The authors summarise the results as follows (p.275) ‘In short, students who were unable to answer several short-answer items were able to answer significantly more of the same items when presented in a multiple-choice format several minutes later…[and] students who were able to answer multiple-choice items were unable to answer a few of those same items in a short-answer format when presented minutes later.’
Moving on to reflections on this result – first of all, my own. This result seems very plausible – if you provide prompts, students are far more likely to recognise the correct response than they are to work it out for themselves. But it’s not the same result as I found when looking at a range of different free-text and multiple-choice questions. Not all short-answer questions are harder than multiple-choice questions, and not all multiple-choice questions are easier than short-answer questions; we happened to have asked a selection of short-answer questions which, overall, were easier than our multiple-choice questions. See my previous posting on this.
Now the authors’ reflection. They say that it is a simplification to equate multiple-choice questions with recognition processes and short-answer questions with recall processes. They go on to say ‘students often need to understand and interpret information when answering multiple-choice questions in addition to merely recognizing correct answers’. I agree and think I’d take this point slightly further – I’m disappointed to see such a lot of emphasis on recognition and recall. They’re implying that recall is ‘better’ than recognition, which is probably true, but surely assessment should be about more than either recognition or recall. My memory is hopeless but I can still do physics (I do appreciate that recall is more necessary in some disciplines).
In other respects I agree wholeheartedly with what the authors say: ‘…we know that multiple-choice questions allow for a variety of strategies for identifying correct answers: recognition, recall, analysis and other test-taking strategies such as eliminating wrong answers or guessing’; ‘[multiple-choice questions] may inadvertently foster dependency’ (I think what they’re saying here is that students learn strategies for passing multiple-choice exams, so success in a multiple-choice exam doesn’t necessarily imply knowledge or understanding of the course); ‘Performance on multiple-choice exams may provide inaccurate information to instructors concerning student learning and overestimate students’ learning of course information’.
I know that there are bad multiple-choice questions and not-so-bad ones, and I also know that other techniques (e.g. certainty-based marking) can help to reduce some of the validity issues with multiple-choice questions, but this paper definitely provides food for thought.