Can online selected response questions really provide useful formative feedback?

The title of this post comes from the title of a thoughtful paper by John Dermo and Liz Carpenter at CAA 2011. In his presentation, John asked whether automated e-feedback can create 'moments of contingency' (Black & Wiliam 2009). This is something I've reflected on a lot – in some senses the ideas seem worlds apart.

John and Liz's paper reports on the introduction of formative e-assessment on a foundation-level module in biology, with detailed and extensive feedback on each question, designed to help focus students' learning and revision in preparation for an online summative assessment. They found an association between students' progress and their engagement with the formative feedback task – and the association persisted even after correcting for student ability and similar factors. The formative assessment was very well received by students. However, there were some anomalies, including the fact that students claimed to be using the feedback as an integral part of their revision, which didn't really tally with the fact that they were accessing it immediately prior to the final exam. As the paper says, 'It would appear that there might be a discrepancy between what students report and what they actually do.' Ah yes…

Two of the questions John posed in his conclusions were:

1. What exactly constitutes quality automated electronic feedback?

4. How can we research this further? How do we know what students are really doing with feedback?

I'd perhaps put these two points the other way round, but I agree with him absolutely.

