Open-ended and multiple-choice versions of the same test

I’ve just read an excellent paper. It’s rather old, so old indeed that I might have been one of the ‘first year secondary school pupils’ involved in the evaluation! (though I don’t think that I was). The full reference is:

Bishop, A.J., Knapp, T.R. & McIntyre, D.I. (1969) A comparison of the results of open-ended and multiple-choice versions of a mathematics test. International Journal of Educational Science, 3, 147-54.

The first thing they did was to produce a multiple-choice version of a mathematics test by using, as distractors, actual wrong answers commonly given to the same questions in open-ended form. I don’t think this is common practice (and it raises some issues – see below) but it seems a good idea. Interestingly, the final section of the paper tells us that two ‘experts’ were asked to suggest the same number of wrong answers to be used as distractors for each of the questions. Out of a total of 70, their suggestions coincided with the empirically derived distractors in 26 and 30 cases respectively.
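Their empirical approach to distractor construction could, in principle, be automated if you had the open-ended responses in electronic form. Here’s a minimal sketch of the idea – not the paper’s procedure (they worked from pupils’ scripts), and the function name, question and responses are all invented for illustration:

```python
from collections import Counter

def pick_distractors(responses, correct_answer, k=4):
    """Return the k most common wrong answers as distractors.

    A sketch of the empirical approach: use real errors,
    ranked by how often pupils actually made them.
    """
    wrong = [r for r in responses if r != correct_answer]
    return [answer for answer, _ in Counter(wrong).most_common(k)]

# Invented open-ended responses to a hypothetical fraction question
responses = ["5/4", "4/6", "5/4", "4/6", "4/8", "5/4", "3/8", "4/6"]
print(pick_distractors(responses, correct_answer="5/4", k=3))
# ['4/6', '4/8', '3/8']
```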

Back to the main substance of the paper. The two versions of the test were distributed randomly among 300 school pupils and the results compared, both in terms of total scores and in terms of how students’ answers to each question were distributed across the answer choices. The total scores on the multiple-choice version were significantly higher than those on the open-ended version, 14 of the 20 questions showed a significantly different distribution of answers, and five questions showed a significant difference in the proportion getting the correct answer.
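The paper doesn’t set out its statistics in a form I can reproduce here, but comparing how answers distribute across the choices for a question is the sort of thing a chi-squared test of a contingency table handles. A minimal sketch, assuming scipy and using invented counts (these are not the paper’s data):

```python
from scipy.stats import chi2_contingency

# Invented answer counts for one question: rows are the two test
# versions, columns are the answer options (correct answer, three
# distractors, 'none of these')
open_ended      = [55, 35, 28, 20, 12]
multiple_choice = [82, 30, 22, 11,  5]

chi2, p, dof, expected = chi2_contingency([open_ended, multiple_choice])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p suggests the two versions produce genuinely different
# answer distributions for this question.
```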

Why? The explanations suggested by the authors include:

  1. A ‘none of these’ option was included in the multiple-choice versions of the questions, to allow for the less commonly made errors. Four questions were added in which ‘none of these’ was the correct answer (although these questions were not included in the final analysis – presumably because it was not possible to know the students’ actual answers to them). But the ‘none of these’ option was not a popular choice!
  2. Seven of the twenty questions included distractors which the authors describe as ‘non-parallel’, i.e. they ‘look different’ from the other options. Usual good practice for the construction of multiple-choice questions says that this sort of distractor should be avoided, but when ‘real answers’ are being used as distractors, that’s not always possible. That’s an interesting tension.

Further examination of the questions for which the two forms produced similar results led the researchers to conclude that multiple-choice and open-ended questions give similar results when each distractor corresponds to a distinct mathematical error. However, when students are asked, for example, to do a calculation, there is a continuum of possible errors and the chosen distractors will capture only some of them – so there is unlikely to be a good match between the results of the multiple-choice and open-ended versions.

This all feels very relevant both for test design and for my analysis of students’ mathematical misunderstandings, based on their responses to iCMA questions.
