
Assessing with confidence

Multiple-choice questions (MCQs) are the staple of e-assessment. They are robust and easy to implement, but pedagogically they are not ideal: open questions are preferable, yet automated marking of free-text answers is problematic.

A possible way to square this circle is to appropriate the technique of confidence-based marking (CBM).

In CBM, a student selects both an answer and a level of confidence in that answer: they score full marks for a confident correct answer, some credit for a tentative correct answer, but are penalised if they are confident yet wrong. There are several motivations for CBM: it rewards care and effort, so engendering greater engagement, and it encourages reflective learning.
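The project description does not fix an exact mark scheme, so the sketch below assumes a three-level tariff along the lines of one widely used CBM scheme (1/2/3 marks for a correct answer at low/mid/high confidence, 0/-2/-6 for an incorrect one); the function name and mark values are illustrative only.

    # Illustrative CBM scoring function (Python). The mark values follow one
    # widely used three-level tariff (correct: 1/2/3, incorrect: 0/-2/-6);
    # the scheme actually adopted by the project may differ.

    CORRECT_MARKS = {1: 1, 2: 2, 3: 3}      # confidence level -> mark if correct
    INCORRECT_MARKS = {1: 0, 2: -2, 3: -6}  # confidence level -> mark if wrong

    def cbm_score(chosen, correct, confidence):
        """Return the CBM mark for one response; confidence is 1 (low) to 3 (high)."""
        if confidence not in CORRECT_MARKS:
            raise ValueError("confidence must be 1, 2 or 3")
        table = CORRECT_MARKS if chosen == correct else INCORRECT_MARKS
        return table[confidence]

    # A confident correct answer scores full marks; a confident wrong answer is penalised.
    assert cbm_score("B", "B", 3) == 3
    assert cbm_score("A", "B", 3) == -6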

I believe we can appropriate CBM and, with one simple change, enlist it for quite a different end. Here the MCQ is presented in two stages. Initially, only the question stem is presented, with no answer options visible; instead, the student must set their confidence level that they know the answer. Only then are the possible answers revealed, and the student responds as to a normal MCQ. The marking scheme follows standard CBM practice.
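As a concrete illustration of the two-stage presentation, here is a minimal console mock-up, not the OpenMark question type the project proposes; it reuses the cbm_score sketch above, and all names are illustrative.

    # Minimal mock-up of the two-stage flow: confidence is captured before the
    # options are shown, then the response is marked as a standard MCQ using
    # the cbm_score sketch above. Not the proposed OpenMark implementation.

    def ask_two_stage(stem, options, correct):
        # Stage 1: show only the question stem and record confidence.
        print(stem)
        confidence = int(input("How confident are you that you know the answer (1-3)? "))

        # Stage 2: reveal the options and accept an answer as a normal MCQ.
        for key in sorted(options):
            print("  %s. %s" % (key, options[key]))
        chosen = input("Your answer: ").strip().upper()

        return cbm_score(chosen, correct, confidence)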

Consider what this means. Mechanically the question remains a simple MCQ: answer matching is trivial and robust, questions are easy to implement, and existing question banks can be reused. To the student, however, the question is effectively transformed from a closed MCQ into an open question. They must formulate an answer before they can decide their confidence in it, so they decide their answer in the absence of any positive or negative cues. They cannot work back from the answer options, and they will not be led into misconceptions; instead they have little choice but to answer the question as set.

This project plans to:

  • implement a CBM question type in OpenMark
  • pilot small-scale evaluation by observation
  • trial introduction in course assessment, with controlled experimental design if possible
  • record measures such as assignment scores, time spent on task
  • survey or interview to probe attitudinal aspects.

The premise is that students using CBM will engage better with questions, improve their learning, and become more reflective learners. Measures will be collected to probe these claims, although, since demonstrating learning gains is notoriously difficult, it may not be possible to show clear-cut outcomes. However, since any CBM question can also be presented as a standard MCQ, there is potential for a controlled experimental design, which is rarely the case for pedagogic interventions.


Rosewell poster (PDF)