Archive for the ‘statistics’ Category

Distractors for multiple-choice questions

Friday, October 26th, 2012

I’ve just been asked a question (well, actually three questions) about the summative use of multiple-choice questions. I don’t know the answer. Can anyone help?

If we want 3 correct answers, what’s the recommended number of distractors?

If we want 4 correct answers, what’s the recommended number of distractors?

If we want 5 correct answers, what’s the recommended number of distractors?

Update [31st October]: I have now found out a little more; an answer has to be completely correct and no partial credit is allowed. I have also realised that the answer lies in our own previous work on random guess scores. We've repeated the sums for these particular examples (see multiple-choice-distractors) and my recommendation to the module team is that each question should have 8 options (so if they require 3 correct answers there are 5 distractors; if they require 4 correct answers there are 4 distractors, etc.). The probability of getting a question completely right by chance will then always be less than 2%, and the probability of getting multiple questions right in this way is vanishingly small.
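For anyone wanting to check the arithmetic, here is a minimal sketch (not part of the original post) of the guess probabilities behind that 2% figure, assuming the student is told how many options are correct and simply selects that many of the 8 options at random:

```python
from math import comb

# Probability of answering a question completely correctly by blind guessing,
# assuming the student knows how many options are correct and picks exactly
# that many of the n options uniformly at random: 1 / C(n, k).
n_options = 8
for n_correct in (3, 4, 5):
    ways = comb(n_options, n_correct)
    print(f"{n_correct} correct out of {n_options} options: "
          f"p = 1/{ways} = {1 / ways:.2%}")

# Output: 1/56 (1.79%), 1/70 (1.43%), 1/56 (1.79%) -- all below 2%.
# Getting two such questions right purely by chance is already down at
# roughly (1/70) * (1/56), i.e. about 0.03%.
```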

Random guess scores

Tuesday, May 31st, 2011

As an extension to my daughter Helen's iCMA statistics project, random guess scores were calculated for multiple choice, multiple response and drag and drop questions in a number of different situations (e.g. different numbers of attempts, different scoring algorithms, different numbers of options to select from, different numbers of options being correct, and students being told how many options were correct, or not).

The random guess score for a question is essentially the score that you would expect from someone who is completely logical in working through the question but knows absolutely nothing about the subject matter.
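As an illustration only (the function and the marking scheme below are my own assumptions, not taken from Helen's report), here is a minimal sketch of the calculation for the simplest case: a single-choice question attempted up to three times by a logical guesser who never repeats an option already known to be wrong.

```python
from fractions import Fraction

def random_guess_score_single_choice(n_options, attempt_marks):
    """Expected mark for a 'logical but clueless' guesser on a single-choice
    question. The guesser never repeats an option already known to be wrong,
    so the probability that the first correct answer comes on any particular
    attempt is exactly 1/n_options. attempt_marks[i] is the mark awarded if
    the correct answer is first given on attempt i + 1 (an assumed scheme,
    not necessarily the one analysed in the report)."""
    return sum(Fraction(1, n_options) * mark for mark in attempt_marks)

# Hypothetical marking scheme: full marks on the first try, two thirds on
# the second, one third on the third.
marks = [Fraction(1), Fraction(2, 3), Fraction(1, 3)]
print(random_guess_score_single_choice(4, marks))  # 1/2
```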

Helen’s report is here.

(more…)

iCMA statistics

Tuesday, April 19th, 2011

This work was originally reported on the website of COLMSCT (the Centre for Open Learning of Mathematics, Science, Computing and Technology), and other work was reported on the piCETL (the Physics Innovations Centre for Excellence in Teaching and Learning) website. Unfortunately, the whole of the OpenCETL website had to be taken down. The bare bones are back (and I'm very grateful for this) but the detail isn't, so I have decided to start re-reporting some of my previous findings here. This has the advantage of enabling me to update the reports as I go.

I’ll start by reporting a project on iCMA statistics which was carried out back in 2009, with funding from COLMSCT, by my daughter Helen Jordan (now doing a PhD in the Department of Statistics at the University of Warwick; at the time she did the work she was an undergraduate student of mathematics at the University of Cambridge). Follow the link for Helen’s project report, but I’ll try to report the headline details here – well, as much as I can understand them! (more…)

Overall impact of different variants of questions

Sunday, August 29th, 2010

You may be relieved to hear that this will be my final posting (at least for a while) on our use of different variants of interactive computer-marked assignment (iCMA) questions. We know that, whilst the different variants of many questions are of equivalent difficulty, we can’t claim this for all our questions. How much does this matter? A typical iCMA might include 10 questions and contribute between 2% and 5% to a student’s overall score. My Course Team colleagues and I have reassured ourselves that, averaged over a large number of questions, the differing difficulty of some variants of some questions will have minimal impact on a student’s overall score. But is this true? (more…)
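By way of a rough check (this is a simulation sketch of my own, not the Course Team's analysis, and every number in it is invented), one can simulate students being allocated variants at random and see how much their expected iCMA scores spread purely because of the variants they happen to receive:

```python
import random

# Rough simulation sketch with invented numbers: 10 questions, each with 4
# variants whose facilities (the probability that a typical student answers
# correctly) differ by up to +/-0.05 from the question's mean facility.
random.seed(1)
n_questions, n_variants, n_students = 10, 4, 10_000

facilities = []
for _ in range(n_questions):
    mean = random.uniform(0.5, 0.9)
    facilities.append([min(1.0, max(0.0, mean + random.uniform(-0.05, 0.05)))
                       for _ in range(n_variants)])

# For each student, pick one variant of each question at random and record
# the expected fraction of the iCMA they would score with those variants.
expected_scores = []
for _ in range(n_students):
    total = sum(random.choice(variants) for variants in facilities)
    expected_scores.append(total / n_questions)

spread = max(expected_scores) - min(expected_scores)
print(f"Spread in expected iCMA score due to variant allocation alone: {spread:.3f}")
```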

Investigating whether variants of a question are of equivalent difficulty

Sunday, August 29th, 2010

We have devised a range of tools to determine whether or not the variants of a question are of equivalent difficulty. (more…)
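One plausible check, offered here as a sketch rather than a description of the actual tools (the response counts below are invented), is a chi-squared test on the numbers of correct and incorrect responses to each variant:

```python
from scipy.stats import chi2_contingency

# Invented counts of [correct, incorrect] first responses for each of the
# four variants of a single question.
counts = [
    [182, 45],
    [176, 52],
    [150, 78],
    [179, 48],
]
chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
# A small p-value suggests the variants are not of equivalent difficulty.
```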