Archive for the ‘item format’ Category

Does a picture paint a thousand words?

Tuesday, August 2nd, 2011

One of the things that Matt Haigh looked at when considering the impact of item format (see previous post) was whether the presence of a picture made the question easier or harder. He started with a very simple multiple choice item:

Which of the following is a valid argument for using nuclear power stations?

  • for maximum efficiency, they have to be sited on the coast
  • they have high decommissioning costs
  • they use a renewable energy source
  • they do not produce gases that pollute the atmosphere

All the students received this question, but half had a version which showed a photograph of a nuclear power station. Not surprisingly, the students liked the presence of the photograph. The version with the photograph also had a slightly lower item difficulty (when calculated under either the Classical Test Theory or the Item Response Theory paradigm), but not significantly so.
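For readers unfamiliar with the jargon: in Classical Test Theory, an item's difficulty is simply the proportion of students who answer it correctly, and the question of whether a photograph "makes a difference" comes down to whether the two proportions differ by more than chance would allow. A minimal sketch of that calculation, using entirely made-up response data (the counts below are hypothetical, not Matt Haigh's results):

```python
import math

def ctt_difficulty(responses):
    """CTT item difficulty: proportion correct (1 = everyone right, 0 = everyone wrong)."""
    return sum(responses) / len(responses)

# Hypothetical 0/1 (wrong/right) response vectors for the two versions.
with_photo = [1] * 40 + [0] * 16      # 40 of 56 students correct
without_photo = [1] * 36 + [0] * 20   # 36 of 56 students correct

p1 = ctt_difficulty(with_photo)
p2 = ctt_difficulty(without_photo)

# Two-proportion z-test: is the difference bigger than chance variation?
n1, n2 = len(with_photo), len(without_photo)
p_pool = (sum(with_photo) + sum(without_photo)) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

print(f"with photo: {p1:.2f}, without: {p2:.2f}, z = {z:.2f}")
# Here |z| falls below 1.96, so the difference is not significant at the 5% level.
```

With these invented numbers the photo version looks slightly easier (a higher proportion correct), but the z statistic is well inside the "could be chance" zone, which is exactly the pattern described above. IRT difficulty estimation is a model-fitting exercise and is not sketched here.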

When compared with aspects of my work that I’ve already described briefly (it’s the dreaded sandstone question again – see Helpful and unhelpful feedback : a story of Sandstone), it is perhaps surprising that the presence of the photograph does not confuse people and so make the question more difficult.

The impact of item format

Tuesday, August 2nd, 2011

One of the things I’ve found time and time again in my investigations into student engagement with e-assessment is that little things can make a difference. So the research by Matt Haigh of Cambridge Assessment into the impact of question format, which I’ve heard Matt speak about a couple of times (most recently at CAA 2011), was well overdue. It’s hard to believe that so few people have done work in this area.

Matt compared the difficulty (as measured by performance on the questions) of ten pairs of question types – e.g. with or without a picture, drag and drop vs tick box, drag and drop vs drop-down selection, multiple-choice with only a single selection allowed vs multiple-choice with multiple selections enabled – when administered to 112 students at secondary schools in England. In each case the actual question asked was identical. The quantitative evaluation was followed by focus group discussions.

This work is very relevant to what we do at the OU (since, for example, we use drop-down selection as the replacement for drag and drop questions for students who need to use a screen reader to attempt the questions). Happily, Matt’s main conclusion was that the variations of item format explored here had very little impact on difficulty – even where there appeared to be some difference, it was not statistically significant. The focus group discussions led to general insights into what makes a question difficult (not surprisingly, ‘lack of clarity’ came top) and also to some suggested explanations for the observed differences, and lack of differences, in difficulty between the parallel forms of the questions.

I’d very much like to do some work in this area myself, looking at the impact of item format on our rather different (and vast) student population. I’d also like to observe people doing questions in parallel formats, to see what clues that might give.