While I’ve been absent from this blog I’ve been doing lots of analysis of student responses to our questions. More on that to follow. I’ve also been reading a variety of stuff, including the report on the JISC-funded project ‘Scoping a vision for formative e-assessment’ and a selection of papers about the same work. I don’t agree with everything in this report, but I do absolutely agree that ‘no assessment technology is in itself formative, but almost any technology can be used in a formative way – if the right conditions are set in place’. I also agree that problems arise when practices are driven by ‘state-of-the-art technological know-how rather than pedagogy’.
The report identifies design ‘patterns’ of processes in formative assessment that can be supported by software tools. Some of the patterns seem rather obvious (nothing wrong with that), e.g. ‘feedback on feedback’, which highlights the importance of giving feedback to tutors on the feedback that they give to students. As an Open University staff tutor (the line-manager of around 50 part-time tutors) I know that this happens all the time within our system; tutors’ work is monitored (usually by peers) and I return the monitoring reports to the tutors, usually after taking a look myself too. I’d assumed that this sort of thing happened in other institutions too; maybe it doesn’t.
Another pattern that is identified is ‘soft scaffolding’, and the report sensibly points out that ‘technology should be designed to scaffold learners’ progress, but an interface that is too rigid impedes individual expression, exploration and innovation’. ‘Try once, refine once’ is nice too (it’s very similar to what we do in our iCMA questions), though I’m not convinced we have any evidence that a two-step question-answering process (as described) is any more effective than a three-step question-answering process (ours). But some of the other patterns seem rather obscure, and I can’t help but wonder if they have actually been used for real, as opposed to in development or in someone’s imagination.
I just can’t get excited about the ‘design pattern methodology’ per se, so I am slightly perturbed by the report’s recommendation that this approach be recognised, adopted and further developed. Yes, share good ideas. Yes, look for interesting similarities and differences between different approaches (especially if you can evaluate the impact of the differences). But I’m not convinced that an approach is good just because someone has been able to pigeon-hole it in a particular way. Maybe I’m missing the point!