Author Archives: Sally Jordan

Short-answer questions: how far can you go?

Finally for today, I’d like to talk about where I believe the limits currently sit in the use of short-answer free-text questions. I have written questions where the correct response requires three separate concepts. For example, I have written a … Continue reading

Posted in short-answer free text questions

Short-answer questions: when humans mark more accurately than computers

Hot on the heels of my previous post, I’d like to make it clear that human markers sometimes do better than computers in marking short-answer (fewer than 20 words) free-text questions. I have found this to be the case in two situations … Continue reading

Posted in human marking, short-answer free text questions

Short-answer questions: when computers mark more accurately than humans

Back to short-answer free-text questions. One of the startling findings of my work in this area was that computerised marking (whether provided by Intelligent Assessment Technologies’ FreeText Author or OpenMark PMatch) was consistently more accurate and reliable than human markers. At the time, … Continue reading

Posted in human marking, short-answer free text questions

Two more talks

We’ve now had two more talks as part of the OU Institute for Educational Technology’s ‘Refreshing Assessment’ series. First we had Lester Gilbert from the University of Southampton on ‘Understanding how to make interactive computer-marked assessment questions more reliable and … Continue reading

Posted in assessment design

Making the grades

I’ve been lent a copy of Todd Farley’s book ‘Making the grades: my misadventures in the standardized testing industry’ (published by PoliPointPress in 2009). The blurb on the back of the book says ‘Just as American educators, parents and policymakers … Continue reading

Posted in human marking

Bad questions

As part of a ‘Refreshing Assessment’ project, the Institute for Educational Technology at the Open University is hosting three talks during June. The first of these, last Wednesday, was from Helen Ashton, head of eAssessment at the SCHOLAR programme at Heriot … Continue reading

Posted in multiple-choice questions

Random guess scores

As an extension to my daughter Helen’s iCMA statistics project, random guess scores were calculated for multiple choice, multiple response and drag and drop questions in a number of different situations (e.g. with different numbers of attempts, different scoring algorithms, different … Continue reading

Posted in statistics

Fair or equal?

This post returns to ideas from Gipps and Murphy’s book ‘A fair test?’. We use words like ‘equality’, ‘equity’ and ‘equal opportunities’ frequently, in the context of assessment and elsewhere. Gipps and Murphy deliberately talk about ‘equity’ not ‘equal opportunities’ … Continue reading

Posted in Uncategorized

Not like Moses

One of the joys of trying to catch up with others who have been working in the field of assessment for much longer than me is finding books and articles that were written some time ago but which still seem pertinent today. … Continue reading

Posted in assessment design, quality, question writing

Top tips

I’ve recently been asked for my ‘top tips’ for writing interactive computer-marked assignment (iCMA) questions. I thought I might as well nail my colours to the mast and post them here too: • Before you start writing iCMA questions, think … Continue reading

Posted in e-assessment, question writing