Archive for the ‘conferences’ Category

Staff engagement with e-assessment

Thursday, July 11th, 2013

More reflections from CAA2013 (held in Southampton, just down the road from the Isle of Wight ferry terminal – shown)…

In the opening keynote, Don Mackenzie talked about the ‘rise and rise of multiple-choice questions’. This was interesting because he was talking in the context of more innovative question types having been used back in the 1990s than are used now. I wasn’t working in this area in the 1990s so I don’t know what things were like then, but somehow what Don said didn’t surprise me.

Don went on to identify three questions that each of us should ask ourselves, implying that these were the stumbling blocks to better practice. The questions were:

  • Have you got the delivery system that you need?
  • Have you got the institutional support that you need?
  • Have you got the peer support that you need?

I wouldn’t argue with those, but I think I can say ‘yes’ to all three in the context of my own work – so why aren’t we doing better?

I think I’d identify two further issues:

1. It takes time to write good questions and this needs to be recognised by all parties;

2. There is a crying need for better staff development.

I’d like to pursue the staff development theme a little more. I think there is a need firstly for academics to appreciate that they can and should ‘do better’ (otherwise people do what is easy and we end up with lots of multiple-choice questions, and not necessarily even good multiple-choice questions), but then we need to find a way of teaching people how to do better. In my opinion this is about engaging academics, not software developers – and in the best possible world the two would work together to design good assessments. That means that staff development is best delivered by people who actually use e-assessment in their teaching, i.e. people like me. The problem is that people like me are busy doing their own jobs so don’t have any time to advise others. Big sigh. Please someone, find a solution – it is beyond me.

I ended up talking a bit about the need for staff development in my own presentation ‘Using e-assessment to learn about learning’ and in her closing address Erica Morris pulled out the following themes from the conference:

  • Ensuring student engagement
  • Devising richer assessments
  • Unpacking feedback
  • Revisiting frameworks and principles
  • and… Extending staff learning and development

I agree with Erica entirely, I just wonder how we can make it happen.

The Cargo Cult

Thursday, July 11th, 2013

I suspect that this reflection from the 14th International Computer Assisted Assessment Conference (CAA2013) may not go down well with all of my readers. I refer to the mention in several papers of the use of technology in teaching and learning as a ‘cargo cult’.

Perhaps I’d better start by saying what the term ‘cargo cult’ is being used to mean. Lester Gilbert et al. (2013) explained that ‘cargo cults refer to post-World-War II Melanesian movements whose members believe that various ritualistic acts will lead to a bestowing of material wealth’ and, by analogy, ‘cargo cult science is a science with no effective understanding of how a domain works’. Lester then quoted Feynman (1985):

‘I found things that even more people believe, such as that we have some knowledge of how to educate. There are big schools of reading methods and mathematics methods, and so forth, but if you notice, you’ll see the reading scores keep going down–or hardly going up–in spite of the fact that we continually use these same people to improve the methods. There’s a witch doctor remedy that doesn’t work. [This is an] example of what I would like to call cargo cult science.’

I’m not sure that my understanding is the same as Lester Gilbert’s or Richard Feynman’s, but the point that struck me forcibly was the reminder of the ritualistic, ‘witch-doctor’ approach of much of what we do. Actually it doesn’t just apply to our use of technology. We have a mantra that doing such-and-such, or using such-and-such a technical solution, will improve the quality of our teaching and the quality of our students’ learning, yet we are very often low on understanding of the underlying pedagogy. We are also pretty low on evidence of impact, but we keep on doing things differently just because we feel that it ought to work – or perhaps because we hope that it will.

Tom Hench ended his presentation (which I’ll talk about in another post) by saying that we need ‘research, research and research’ into what we do in teaching. I agree.

Feynman, R. (1985). Cargo cult science. In Surely You’re Joking, Mr. Feynman! W. W. Norton.

Gilbert, L., Wills, G. and Sitthisak, O. (2013). Perpetuating the cargo cult: Never mind the pedagogy, feel the technology. In Proceedings of the CAA2013 International Conference, 9th–10th July, Southampton.

Oral feedback and assessment

Sunday, July 7th, 2013

As discussed in my previous post, the Assessment in Higher Education Conference was excellent. I helped Tim Hunt to run a ‘MasterClass’ (workshop!) on ‘Producing high quality computer-marked assessment’ and, with Janet Haresnape, ran a practice exchange on the evaluation of our faculty-wide move to formative thresholded assessment. As a member of the organising committee I also ran around chairing sessions, judging posters etc. and I have to say I loved every minute of it. I see from the conference website that another delegate has said it was the best conference they have ever attended, and I would agree with this.

I could talk more about a number of the presentations I heard, but for now I will just reflect on two themes. Here’s the first.

I have read a fair amount about the use of audio files and/or screencasts to give feedback, and enjoyed the presentation from Janis MacCallum (and Charlotte Chalmers) from Edinburgh Napier University on ‘An evaluation of the effectiveness of audio feedback, and of the language used, in comparison with written feedback’. One of Janis and Charlotte’s findings is that many more words of feedback are given when it is delivered as an audio file. Another point, widely made, is that students like audio feedback because they can hear the tone of the marker’s voice. In the unlikely event of finding spare time, the use of audio feedback is something I’d like to investigate in the context of the OU’s Science Faculty.

There is a sense in which oral assessment (i.e. assessing by viva) is just the next step. There are issues, especially to do with student anxiety and the possibility of examiner bias. However, if you are there with a student, you can tease out how much they know and understand. I find it an exciting possibility. Gordon Joughin from the University of Queensland, who is an expert on oral assessment, gave an excellent keynote on the subject (though, being a dimwit, I didn’t understand his title: ‘Plato versus AHELO: The nature and role of the spoken word in assessment and promoting learning’). His slides are here. Lots to think about.

The 5th Assessment in Higher Education Conference will run in 2015 – be there!

Good

Friday, November 16th, 2012

The other thing that was discussed at yesterday’s ‘Analysing feedback’ session at the JISC online conference ‘Innovating e-Learning: shaping the future’ was the role of praise in feedback.

Assessment in HE Conference

Friday, November 2nd, 2012

This is an unashamed advertisement for the Assessment in HE Conference, to be held on 26th-27th June 2013. This is the 4th such conference, but it is moving from a 1-day to a 2-day event and from Carlisle to Birmingham. I think it will be good.

There is more information on the 2013 Assessment in HE Conference – Call for Papers and the conference website.

CAA 2012

Sunday, July 15th, 2012

Last week I attended the International Computer Assisted Assessment Conference in Southampton. This is the third consecutive year I’ve attended this conference and I enjoyed it, even if it was sometimes challenging to the point of being depressing.

So what is there to be depressed about? Bobby Elliott from the Scottish Qualifications Authority said ‘CAA2002 would be disappointed in CAA2012’ – not because of the conference itself, but because computer aided assessment has not achieved as much as was hoped 10 years ago. Sue Timmis from the University of Bristol summed up the problem by saying that, in reviewing the literature relating to the use of digital technologies in assessment, she and her colleagues have not yet found evidence of a transformative effect. Steve Draper from the University of Glasgow, the keynote speaker, raised another issue in saying that there is not much evidence of the effectiveness of feedback given from tutor to student.

So, on one level, has all of our work been a waste of time? I think I’d be slightly more optimistic, if only because most of the conference attendees were interested in these issues, rather than talking about a wish to use technology whether or not that is the best solution from the students’ point of view. So at least our focus is on learning and teaching, and we are looking for evidence of effectiveness rather than sailing on regardless – now we just have to get it right!

One good thing that came out of the conference is that John Kleeman told me about his Assessment Prior Art wiki – do take a look.

Throw away the handouts

Friday, September 23rd, 2011

I was at a meeting in Bristol yesterday, ‘Using assessment to engage students and enhance their learning’. Much of the discussion was on the use of peer assessment (with plenty of interesting stuff), and there was a keynote from Paul Orsmond considering student and tutor behaviour inside and outside the formal curriculum.

However, what struck me most was something reported in a presentation from Harriet Jones of the Biosciences Department at the University of East Anglia (UEA). They want students to make their own notes, so they have made a conscious decision to stop giving out lecture notes (though copies of presentations used in lectures are available on their VLE 48 hours before each lecture, for those who want to download a copy and also for students who want to check something later). It’s a brave decision but also, I think, a very sensible one.

The testing effect

Tuesday, August 16th, 2011

This will be my final post that picks up a theme from CAA 2011, but the potential implications of this one are massive. For the past few weeks I have been trying to get my head around the significance of the ideas I was introduced to by John Kleeman’s presentation ‘Recent cognitive psychology research shows strongly that quizzes help retain learning’. I’m ashamed to admit that the ideas John was presenting were mostly new to me. They echo a lot of what we do at the UK Open University in encouraging students to learn actively, but they go further. Thank you John!

The impact of item format

Tuesday, August 2nd, 2011

One of the things I’ve found time and time again in my investigations into student engagement with e-assessment is that little things can make a difference. The research done by Matt Haigh of Cambridge Assessment into the impact of question format – which I’ve heard Matt speak about a couple of times, most recently at CAA 2011 – was therefore well overdue. It’s hard to believe that so few people have done work in this area.

Matt compared the difficulty (as measured by performance on the questions) of ten pairs of question types, e.g. with or without a picture, drag and drop vs tick box, drag and drop vs drop-down selection, multiple-choice with only a single selection allowed vs multiple-choice with multiple selections enabled, when administered to 112 students at secondary schools in England. In each case the actual question asked was identical. The quantitative evaluation was followed by focus group discussions.

This work is very relevant to what we do at the OU (since, for example, we use drop-down selection as the replacement for drag and drop questions for students who need to use a screen reader to attempt the questions). Happily, Matt’s main conclusion was that the variations of item format explored had very little impact on difficulty – even where there appeared to be some difference, it was not statistically significant. The focus group discussions led to general insights into what makes a question difficult (not surprisingly, ‘lack of clarity’ came top) and also to some suggested explanations for the observed differences, and lack of differences, in difficulty between the parallel forms of the questions.
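
Just to make the comparison concrete, here is a minimal sketch (in Python, and emphatically not Matt’s actual analysis – the response counts below are invented) of how one might test whether a format variation affects the difficulty of an item, given simple right/wrong counts for each variant:

```python
# Toy comparison of two parallel formats of the same question.
# The counts are invented for illustration only.
from scipy.stats import chi2_contingency

# [correct, incorrect] counts for each (hypothetical) format variant
drag_and_drop = [41, 15]
drop_down = [38, 18]

def facility(counts):
    """Proportion of students answering the item correctly."""
    return counts[0] / sum(counts)

print(f"drag and drop facility: {facility(drag_and_drop):.2f}")
print(f"drop-down facility:     {facility(drop_down):.2f}")

# Chi-square test of independence: is performance associated with format?
chi2, p, dof, expected = chi2_contingency([drag_and_drop, drop_down])
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
# A p-value above 0.05 would mirror Matt's finding: the apparent
# difference in facility is not statistically significant.
```

Of course, with samples of this size even a real effect could easily fail to reach significance, which is worth bearing in mind whenever a ‘no difference’ result is reported.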

I’d very much like to do some work in this area myself, looking at the impact of item format on our rather different (and vast) student population. I’d also like to observe people doing questions in parallel formats, to see what clues that might give.

Automatically generated questions

Tuesday, August 2nd, 2011

In describing a presentation by Margit Hofler of the Institute for Information Systems and Computer Media at Graz University of Technology, Austria, the CAA 2011 Conference Chair Denise Whitelock used the words ‘holy grail’, and this is certainly interesting and cutting-edge stuff. The work is described in the paper ‘Investigating automatically and manually generated questions to support self-directed learning’ by Margit and her colleagues at http://caaconference.co.uk/proceedings/

An ‘enhanced automatic question creator’ has been used to create questions from a piece of text, and the content quality of 120 automatically created test items has been compared with 290 items created by students.
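
I don’t know how the ‘enhanced automatic question creator’ actually works, but to make the general idea concrete, here is a toy sketch of the crudest possible approach: turning declarative sentences into cloze-style items by blanking out a candidate term. The heuristics (and everything else here) are my own invention, purely for illustration:

```python
# A deliberately crude illustration of automatic question generation:
# split text into sentences, then blank out one capitalised term per
# sentence to form a fill-in-the-blank item (stem plus expected answer).
import random
import re

def make_cloze_items(text, seed=0):
    random.seed(seed)
    items = []
    for sentence in re.split(r"(?<=[.!?])\s+", text.strip()):
        # Candidate answers: capitalised words, ignoring the sentence-initial word
        candidates = [m.group() for m in re.finditer(r"\b[A-Z][a-z]+\b", sentence)
                      if m.start() > 0]
        if not candidates:
            continue
        answer = random.choice(candidates)
        stem = sentence.replace(answer, "_____", 1)
        items.append((stem, answer))
    return items

text = ("The capital of Austria is Vienna. "
        "Graz is the second-largest city in Austria.")
for stem, answer in make_cloze_items(text):
    print(stem, "->", answer)
```

Even a toy like this makes it obvious why the interesting question is the one the paper asks – how the quality of machine-generated items compares with human-written ones – because generating items is easy; generating good items is not.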