Archive for the ‘multiple-choice questions’ Category

Six Geese a-Laying

Sunday, December 30th, 2012

Day 6. Making multiple-choice better. Although I don’t, in general, much like multiple-choice questions, I have to admit that they can sometimes work very well. In conventional face-to-face settings, the use of electronic voting systems (‘clickers’) can bring lectures alive as well as inform lecturers about student misunderstandings. And it gets better – we don’t have conventional lectures at the Open University, but we are making more and more use of synchronous conferencing tools such as Elluminate Live! (now known elsewhere as Blackboard Collaborate), which has a quiz function. I have seen the quiz function used very effectively to make Elluminate sessions much more interactive. (more…)

Five Gold Rings

Saturday, December 29th, 2012

Day 5. Go beyond multiple choice. I’ve been reading a lot recently about the pros and cons of multiple choice (selected response) and constructed response questions. If you’re a regular reader of this blog, you’ll realise that I am not a great fan of multiple choice questions. I’ve already given some of my reasons, but I’ll attempt to summarise here. (more…)

Think before you assess

Friday, December 21st, 2012

As well as the reading that has sparked my recent posts on Learning Outcomes and Revolution or Evolution?, I’ve been reading articles about multiple-choice questions and about assessing practical work. I’m fairly sure that I’ll be saying more about both of those topics during 2013, if not sooner. But there’s a common theme. Honest, there is.

When you want to assess practical work, it can be very easy to assess the underlying subject knowledge rather than the practical skills, and there are decisions to be taken about whether you want to assess practical experimental skills, report-writing skills or both. If you choose to use multiple-choice assessment questions, whether you like or loathe MCQs per se, it is again sensible to stop and think about what you are actually assessing. An interesting paper that I read today:

Martinez, M.E. (1999) Cognition and the question of test item format. Educational Psychologist, 34(4), 207-218.

points out that ‘data bearing on the hypothesis of equivalence between constructed response and multiple-choice questions have been equivocal.’ Some people think the two question types can be equivalent; others disagree. For simple items, a constructed-response question assesses recall whereas a multiple-choice question assesses recognition. So although scores obtained on tests comprising multiple-choice questions may correlate with scores obtained on tests comprising constructed-response questions, the two tests are unlikely to be actually assessing the same thing.
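As an aside, a quick simulation makes the point that correlation does not imply equivalence. This is purely my own sketch, not anything from the Martinez paper: two scores driven by the same underlying ability will correlate strongly even if one rewards recall and the other the (easier) task of recognition.

```python
# A sketch (not from the paper): two test scores driven by the same latent
# ability correlate strongly even though one rewards recall and the other
# the easier task of recognition.
import numpy as np

rng = np.random.default_rng(0)
n_students = 1000

ability = rng.normal(0, 1, n_students)                  # shared latent trait
recall = ability + rng.normal(0, 0.5, n_students)       # constructed response
recognition = ability + 0.8 + rng.normal(0, 0.5, n_students)  # MCQ, easier

r = np.corrcoef(recall, recognition)[0, 1]
print(f"correlation between the two scores: {r:.2f}")   # roughly 0.8
print(f"mean recall score: {recall.mean():.2f}")
print(f"mean recognition score: {recognition.mean():.2f}")
# High correlation, yet the tests differ systematically in what (and how
# easily) they measure: correlated is not the same as equivalent.
```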

My common theme is that you need to think carefully about WHAT you want to assess, and check that you are actually assessing this in the tasks that you produce for your students. And I think the easiest way to do this is to think in terms of learning outcomes. What is it that you hope your students have learnt: recall or recognition? Practical skills, report-writing skills or knowledge?

Distractors for multiple-choice questions

Friday, October 26th, 2012

I’ve just been asked a question (well, actually three questions) about the summative use of multiple-choice questions. I don’t know the answer. Can anyone help?

If we want 3 correct answers, what’s the recommended number of distractors?

If we want 4 correct answers, what’s the recommended number of distractors?

If we want 5 correct answers, what’s the recommended number of distractors?

I have [31st October] now found out a little more: the answer has to be completely correct and no partial credit is allowed. I have also realised that the answer lies in our own previous work on random guess scores. We’ve repeated the sums for these particular examples (see multiple-choice-distractors) and my recommendation to the module team is that they use 8 options for each question (so, if they require 3 correct answers there are 5 distractors; if they require 4 correct answers there are 4 distractors, etc.). The probability of getting the question completely right by chance is then always less than 2%, and the probability of getting multiple questions right in this way is vanishingly small.
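For what it’s worth, here is a quick check of the sums (my own sketch in Python, not the original spreadsheet): with no partial credit, the probability of guessing a ‘select k correct answers from n options’ question completely correctly is 1/C(n, k).

```python
# My own check of the random-guess arithmetic: with no partial credit, the
# chance of guessing a 'select k correct from n options' question is 1/C(n,k).
from math import comb

n_options = 8
for k_correct in (3, 4, 5):
    p_guess = 1 / comb(n_options, k_correct)
    print(f"{k_correct} correct from {n_options} options "
          f"({n_options - k_correct} distractors): "
          f"P(guess) = {p_guess:.4f} ({p_guess:.1%})")

# 3 from 8: 1/56 ≈ 1.8%;  4 from 8: 1/70 ≈ 1.4%;  5 from 8: 1/56 ≈ 1.8%
# All below 2%, and the chance of guessing several such questions is tiny.
```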

Multiple choice questions in Peerwise

Thursday, August 2nd, 2012

Yesterday morning I participated in a wonderful webinar on Peerwise (http://peerwise.cs.auckland.ac.nz/), led by Paul Denny from the University of Auckland. The more I see of it, the more I am impressed by Peerwise – yesterday I attempted to write questions for myself for the first time, and also reviewed other people’s questions. It was tremendous fun, and we all agreed that students would be likely to learn by authoring questions and by attempting and reviewing other students’ questions.

Someone asked Paul if he has plans to add question types other than multiple choice (the answer is yes, but not many). However, this led to an interesting point – Paul explained that multiple choice is good for student authoring of questions, because students have to think about the distractors. He could be right!

More about guessing and blank/repeated responses

Tuesday, February 7th, 2012

Depressingly, this post reports a similar finding to the last one.

For the question shown (one of a series of linked questions on the Maths for Science formative-only practice assessment), 62% of students are right at the first attempt but 22% remain incorrect after the two allowed responses. At the response level, whilst 60.2% of responses are correct, the other options are selected in approximately equal numbers. The details are below:

P > 0.1: 12.4% of responses
0.1 > P > 0.05: 14.0% of responses
0.05 > P > 0.01: 60.2% of responses (the correct option)
P < 0.01: 13.5% of responses
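A back-of-envelope check (my own sketch, not part of the original analysis): if the incorrect responses were pure guesses spread uniformly across the three wrong options, each would attract roughly a third of them.

```python
# My own back-of-envelope check: are the incorrect responses consistent with
# uniform guessing among the three wrong options?
wrong_observed = [12.4, 14.0, 13.5]          # % of responses per wrong option
total_wrong = sum(wrong_observed)            # 39.9% of all responses
expected_each = total_wrong / len(wrong_observed)

print(f"total incorrect responses: {total_wrong:.1f}%")
print(f"uniform guessing predicts about {expected_each:.1f}% per wrong option")
# Observed 12.4 / 14.0 / 13.5 against ~13.3 each: close enough to suggest
# guessing rather than one dominant misconception.
```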

So what’s this saying? (more…)

‘A nice demonstration of a problem with multiple-choice questions’

Monday, February 6th, 2012

That was my husband’s comment when we were analysing responses to the question shown on the left. Start by noting that although this is a drag-and-drop question, it is effectively a multiple-choice (or multiple-response) question: you can choose from eight pre-determined options for each ‘blank’ that you are required to fill. It is also worth noting that this question is on the formative-only practice assessment.

62% of students get the question right at first attempt. However, 23% are still wrong after three attempts, having received all our carefully crafted feedback. And whilst 54.1% of responses are completely correct, all the other responses appear to have been guessed from the plausible options. Not really how you want students to be answering questions. Not good! I wrote the question, so I’m allowed to say that.
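To put a rough number on the guessing, here is a sketch of mine. It assumes a single blank and a guesser who picks blindly from the eight options without repeating a choice, which is a simplification of the real question.

```python
# A rough sketch (my own, assuming a single blank and a guesser who never
# repeats a choice): the chance of stumbling on the right option within the
# three allowed attempts, picking blindly from eight options.
from fractions import Fraction

n_options = 8
p_fail = Fraction(1)
for attempt in range(3):                    # three attempts allowed
    remaining = n_options - attempt
    p_fail *= Fraction(remaining - 1, remaining)   # miss on this attempt

p_success = 1 - p_fail
print(f"P(correct within 3 attempts by pure guessing) = {p_success} "
      f"≈ {float(p_success):.1%}")          # 3/8 = 37.5%
# With feedback ruling out implausible options, the real figure will be
# higher still, which is consistent with guessing inflating the success rate.
```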

Multiple choice vs short answer questions

Thursday, January 19th, 2012

I’m indebted to Silvester Draaijer for leading me towards an interesting article:

Funk, S.C. & Dickson, K.L. (2011) Multiple-choice and short-answer exam performance in a college classroom. Teaching of Psychology, 38(4), 273-277.

The authors used exactly the same questions in multiple-choice and short-answer free-text response format – except (obviously) the short-answer questions did not provide answer choices. 50 students in an ‘introduction to personality’ psychology class attempted both versions of each question, with half the students completing a 10-question short-answer pretest before a 50-question multiple-choice exam and half completing the 10 short-answer questions as a post-test after the multiple-choice exam. The experiment was run twice (‘Exam 2’ and ‘Exam 3’, where students didn’t know what format to expect in Exam 2, but did in Exam 3). (more…)

Open-ended and multiple-choice versions of the same test

Friday, August 19th, 2011

I’ve just read an excellent paper. It’s rather old, so old indeed that I might have been one of the ‘first year secondary school pupils’ involved in the evaluation! (though I don’t think that I was). The full reference is:

Bishop, A.J., Knapp, T.R. & McIntyre, D.I. (1969) A comparison of the results of open-ended and multiple-choice versions of a mathematics test. International Journal of Educational Science, 3, 147-154.

The first thing they did was to produce a multiple-choice version of a mathematics test by using, as distractors, actual wrong answers commonly given to the same questions in open-ended form. (more…)
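In outline, the method might look something like this. The code is my own illustration of the idea, with made-up responses; it is not the authors’ procedure.

```python
# A minimal sketch (my reading of the method, not the authors' code): build
# distractors for a multiple-choice item from the wrong answers most commonly
# given to the same question in its open-ended form.
from collections import Counter

# Hypothetical open-ended responses to '3/4 + 1/2 = ?'
responses = ["5/4", "4/6", "5/4", "4/6", "3/8", "4/6", "5/4", "1", "4/6"]
correct = "5/4"

wrong = Counter(r for r in responses if r != correct)
distractors = [answer for answer, _ in wrong.most_common(3)]

options = [correct] + distractors
print(options)   # e.g. ['5/4', '4/6', '3/8', '1']
# Distractors drawn from real errors should be more plausible than invented
# ones, so the multiple-choice version better mirrors the open-ended test.
```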

Bad questions

Saturday, June 18th, 2011

As part of a ‘Refreshing Assessment’ Project, the Institute for Educational Technology at the Open University is hosting three talks during June. The first of these, last Wednesday, was from Helen Ashton, head of eAssessment for the SCHOLAR programme at Heriot-Watt University, with the subject ‘Exploring Assessment Design’. It was a good talk, highlighting many points that I bang on about myself, but sometimes we need to hear things from a different perspective (in this case, from Helen’s experience of authoring questions for use by a wide range of schoolchildren). (more…)