The other thing that was discussed at yesterday’s ‘Analysing feedback’ session at the JISC online conference ‘Innovating e-Learning: shaping the future’ was the role of praise in feedback.
Archive for the ‘conferences’ Category
This is an unashamed advertisement for the Assessment in HE Conference, to be held on 26th-27th June 2013. This is the 4th such conference, but it is moving from a 1-day to a 2-day event and from Carlisle to Birmingham. I think it will be good.
There is more information on the 2013 Assessment in HE Conference – Call for Papers and the conference website.
Last week I attended the International Computer Assisted Assessment Conference in Southampton. This is the third consecutive year I’ve attended this conference and I enjoyed it, even if it was sometimes challenging to the point of being depressing.
So what is there to be depressed about? Bobby Elliott from the Scottish Qualifications Authority said ‘CAA 2002 would be disappointed in CAA 2012’ – not because of the conference itself, but because computer aided assessment has not achieved as much as was hoped 10 years ago. Sue Timmis from the University of Bristol summed up the problem by saying that, in reviewing the literature on the use of digital technologies in assessment, she and her colleagues have not yet found evidence of a transformative effect. Steve Draper from the University of Glasgow, the keynote speaker, raised another issue: there is not much evidence of the effectiveness of feedback given from tutor to student.
So, on one level, has all of our work been a waste of time? I’d be slightly more optimistic than that, if only because most of the conference attendees were interested in these issues, rather than talking about a wish to use technology whether or not it is the best solution from the students’ point of view. So at least our focus is on learning and teaching, and we are looking for evidence of effectiveness rather than sailing on regardless – now we just have to get it right!
One good thing that came out of the conference is that John Kleeman told me about his Assessment Prior Art wiki – do take a look.
I was at a meeting in Bristol yesterday ‘Using assessment to engage students and enhance their learning’. Much of the discussion was on the use of peer assessment (and plenty of interesting stuff), with a keynote from Paul Orsmond, considering student and tutor behaviour inside and outside the formal curriculum.
However, what struck me most was something reported in a presentation from Harriet Jones of the Biosciences Department at the University of East Anglia (UEA). They want students to make their own notes so have made a conscious decision to stop giving out lecture notes (though copies of presentations used in lectures are available on their VLE 48 hours before each lecture, for those who want to download a copy and also for students who want to check something later). It’s a brave decision but also, I think, a very sensible one.
This will be my final post that picks up a theme from CAA 2011, but the potential implications of this one are massive. For the past few weeks I have been trying to get my head around the significance of the ideas I was introduced to by John Kleeman’s presentation ‘Recent cognitive psychology research shows strongly that quizzes help retain learning’. I’m ashamed to admit that the ideas John was presenting were mostly new to me. They echo a lot of what we do at the UK Open University in encouraging students to learn actively, but they go further. Thank you John!
One of the things I’ve found time and time again in my investigations into student engagement with e-assessment is that little things can make a difference. Therefore the research done by Matt Haigh of Cambridge Assessment into the impact of question format, which I’ve heard Matt speak about a couple of times, most recently at CAA 2011, was long overdue. It’s hard to believe that so few people have done work in this area.
Matt compared the difficulty (as measured by performance on the questions) of ten pairs of question types – e.g. with or without a picture; drag and drop vs tick box; drag and drop vs drop-down selection; multiple-choice with only a single selection allowed vs multiple-choice with multiple selections enabled – when administered to 112 students at secondary schools in England. In each case the actual question asked was identical. The quantitative evaluation was followed by focus group discussions.
This work is very relevant to what we do at the OU (since, for example, we use drop-down selection as the replacement for drag and drop questions for students who need to use a screen reader to attempt the questions). Happily, Matt’s main conclusion was that the variations of item format explored had very little impact on difficulty – even where there appeared to be some difference, it was not statistically significant. The focus group discussions led to general insight into what makes a question difficult (not surprisingly, ‘lack of clarity’ came top) and also to some suggested explanations for the observed differences, and lack of differences, in difficulty in the parallel forms of the questions.
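The kind of comparison Matt made – is the difference in facility (proportion of students answering correctly) between two parallel forms of a question statistically significant? – can be sketched with a two-proportion z-test. This is just an illustration of the general technique, not Matt’s actual analysis, and the counts below are invented:

```python
import math

def two_proportion_z(correct_a, n_a, correct_b, n_b):
    """Two-sided two-proportion z-test on facility values
    (proportion of students answering each form correctly)."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal distribution.
    p_value = math.erfc(abs(z) / math.sqrt(2))
    return z, p_value

# Invented counts: 60/112 correct on a drag-and-drop form
# vs 55/112 on the tick-box form of the same question.
z, p = two_proportion_z(60, 112, 55, 112)
print(f"z = {z:.2f}, p = {p:.3f}")  # p well above 0.05: no significant difference
```

With a sample of this size, quite large-looking differences in facility turn out not to be significant – which chimes with Matt’s finding that apparent format effects often disappeared under statistical testing.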
I’d very much like to do some work in this area myself, looking at the impact of item format on our rather different (and vast) student population. I’d also like to observe people doing questions in parallel formats, to see what clues that might give.
In describing a presentation by Margit Hofler of the Institute for Information Systems and Computer Media at Graz University of Technology, Austria, the CAA 2011 Conference Chair Denise Whitelock used the words ‘holy grail’, and this is certainly interesting and cutting-edge stuff. The work is described in the paper ‘Investigating automatically and manually generated questions to support self-directed learning’ by Margit and her colleagues, available at http://caaconference.co.uk/proceedings/
An ‘enhanced automatic question creator’ has been used to create questions from a piece of text, and the content quality of 120 automatically created test items has been compared with 290 items created by students.
The title of this post comes from the title of a thoughtful paper by John Dermo and Liz Carpenter at CAA 2011. In his presentation, John asked whether automated e-feedback can create ‘moments of contingency’ (Black & Wiliam 2009). This is something I’ve reflected on a lot – in some senses the ideas seem worlds apart.
Just home from CAA 2011 (the International Computer Assisted Assessment Conference in Southampton). Attendance was quite low (probably a victim of the current economic climate) but the conference was good, with some very thoughtful presentations and extremely useful conversations. I’ll post more in the coming days, but for the moment I’ll just reflect on John Dermo’s summary at the end of the conference. John had used wordle.net to create a word cloud from the papers. Amazingly, the ‘biggest’ (i.e. most common) word in the cloud was ‘student’, whilst ‘technology’ was tiny.
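For anyone curious, the word-frequency count underlying a wordle-style cloud is easy to sketch. This isn’t what wordle.net does internally, just a minimal illustration, and the sample text and stop-word list are invented:

```python
import re
from collections import Counter

def word_frequencies(text, stop_words=frozenset({"the", "of", "and", "a", "in", "to"})):
    """Count word frequencies - the raw input for a wordle-style cloud,
    where each word's size is proportional to its count."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w not in stop_words)

# Invented snippet standing in for the text of the conference papers.
sample = "student feedback and student engagement shape assessment; technology supports the student"
for word, count in word_frequencies(sample).most_common(3):
    print(word, count)
```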
I did occasionally feel that some presenters were still seeing technology as a solution in need of a problem (and also seeing ‘evaluation’ as something that we do to convince others that what we’re doing is the sensible way forward – surely honest evaluation has to accept that our use of technology might not always be the best solution). However, the overall focus on students not technology was refreshing. Hurrah!