Archive for the ‘conferences’ Category

The Unassessors

Friday, October 31st, 2014

Radcliffe Camera, Oxford

Following a small group discussion at the 8th EDEN (European Distance and E-learning Network) Research Workshop in Oxford earlier in the week, I accepted the task of standing up to represent our group in saying that our radical change would be to do away with assessment. It was, of course, somewhat tongue in cheek, and we didn’t really mean that we would do away with assessment entirely, rather that we would radically alter its current form. Assessment is so often seen as “the problem” in education, “the tail wagging the dog”, and we spend a huge amount of money and time on it, so a radical appraisal is perhaps overdue, as others wiser than me have said before. We should, at the very least, stop and think about what we really want from our assessment; despite the longstanding assessment for learning/assessment of learning debate, I still don’t think we really know.

You’ll note that I am using the word “we” in the previous paragraph. That’s deliberate, because I am including the whole assessment community in this (researchers and practitioners); I am certainly not just talking about my own University. I feel the need to explain that point because the rapporteur at the EDEN Research Workshop rather misunderstood my paper and so criticised the Open University’s current assessment practice as being the same as it was 25 years ago. It is entirely my fault for not making it clearer who I am and what I was trying to say; because I am basically a practitioner, and proud of it, I suffer quite a lot from people not appreciating the amount of reading and thinking that I have done. The rapporteur was absolutely right to be critical; that’s what the role is about and I am very grateful to him for making me review my standpoint. It is also true – as I say frequently – that we all, including those of us at the Open University, should learn from others. However, I’d ask whether any distance learning provider does much better.

There is a related point, concerning the extent to which change should be evolutionary or revolutionary. It is simply not true that Open University assessment practice is the same as it was 25 years ago: 25 years ago, our tuition was all face to face (we now make extensive use of synchronous and asynchronous online tools); our tutor-marked assignments were submitted through the post; our use of computer-marked assessment was limited to multiple-choice questions with responses recorded with a pencil on machine-readable forms (no instantaneous, graduated and targeted feedback; no constructed response questions and certainly no short-answer free text questions); and we made considerably less use of end-of-module assignments, oral assessment, assessment of collaborative activity and peer assessment. Things have changed quite a lot! However, the fundamental structures and many of the policies remain the same. Our students seem happy with what we do, but nevertheless perhaps it is time for change. Perhaps that’s true of other universities too!

ViCE/PHEC 2014

Friday, September 5th, 2014

The ‘interesting’ title of this post relates to the joint Variety in Chemistry Education/Physics Higher Education Conference that I was on my way home from a week ago. Apologies for my delay in posting, but since then I have celebrated my birthday, visited my elderly in-laws, moved into new Mon-Fri accommodation, joined a new choir, celebrated Richard’s and my 33rd wedding anniversary – and passed the viva for my PhD by publication, with two typos to correct and one minor ‘point of clarification’. It has been an amazing week!

The conference was pretty good too. It was held at the University of Durham whose Physics Department (and, obviously, Cathedral) is much as it was when I graduated more than 36 years ago. However most of the sessions were held in the new and shiny Calman Learning Centre (with the unnervingly named Arnold Wolfendale Lecture Theatre, since I remember Professor Wolfendale very well from undergraduate days). There were lots more chemists than physicists, I don’t really know why, and lots of young enthusiastic university teaching fellows. Great!

Sessions that stood out for me include the two inspirational keynotes and both of the workshops that I attended, plus many conversations with old and new friends. The first keynote was given by Simon Lancaster from UEA and its title was ‘Questioning the lecture’. He started by telling us not to take notes on paper, but instead to get onto social media. I did, though I find it difficult to listen and to post meaningful tweets at the same time. Is that my age? However I agree with a huge amount of what Simon said, in particular that we should cut out lots of the content that we currently teach.

Antje Kohnle’s keynote on the second day had a very different style. Antje is from the University of St Andrews and she was talking about the development of simulations to make it easier for students to visualise some of the counterintuitive concepts in quantum mechanics. The resource that has been developed is excellent, but the important point that Antje emphasised is the need to develop resources such as this iteratively, making use of feedback from students. Absolutely!

The two workshops that I so much enjoyed were (1) ‘Fostering learning improvements in physics’, a thoughtful reflection, led by Judy Hardy and Ross Galloway from the University of Edinburgh, on the implications of the FLIP Project; and (2) the interestingly named (from a student comment) ‘I don’t know much about physics, but I do know buses’, led by Peter Sneddon at the University of Glasgow, looking at questions designed to test students’ estimation skills and their confidence in estimation.

The quality of the presentations was excellent, bearing in mind that some people were essentially enthusiastic teachers whilst others were further advanced in their understanding of educational research. I raised the issue of correlation not implying causality at one stage, but immediately wished that I hadn’t. I think that, by and large, the interventions that were being described are ‘good things’ and of course it is almost impossibly difficult to prove that it is your intervention that has resulted in the improvement that you see.

In sessions and informal discussion with colleagues, the topics that kept striking me were (1) the importance of student confidence; and (2) reasons for the underperformance (by several measures) of female students. We are already planning a workshop for next year!

Oh yes, and Durham’s hills have got hillier…

Staff engagement with e-assessment

Thursday, July 11th, 2013

More reflections from CAA2013 (held in Southampton, just down the road from the Isle of Wight ferry terminal)…

In the opening keynote, Don Mackenzie talked about the ‘rise and rise of multiple-choice questions’. This was interesting, because he was talking in the context of more innovative question types having been used back in the 1990s than are used now. I wasn’t working in this area in the 1990s so I don’t know what things were like then, but somehow what Don said didn’t surprise me.

Don went on to identify three questions that each of us should ask ourselves, implying that these were the stumbling blocks to better practice. The questions were:

  • Have you got the delivery system that you need?
  • Have you got the institutional support that you need?
  • Have you got the peer support that you need?

I wouldn’t argue with those, but I think I can say ‘yes’ to all three in the context of my own work – so why aren’t we doing better?

I think I’d identify two further issues:

1. It takes time to write good questions and this needs to be recognised by all parties;

2. There is a crying need for better staff development.

I’d like to pursue the staff development theme a little more. I think there is a need firstly for academics to appreciate that they can and should ‘do better’ (otherwise people do what is easy and we end up with lots of multiple-choice questions, and not necessarily even good multiple-choice questions), but then we need to find a way of teaching people how to do better. In my opinion this is about engaging academics, not software developers – and in the best possible world the two would work together to design good assessments. That means that staff development is best delivered by people who actually use e-assessment in their teaching, i.e. people like me. The problem is that people like me are busy doing their own jobs so don’t have any time to advise others. Big sigh. Please someone, find a solution – it is beyond me.

I ended up talking a bit about the need for staff development in my own presentation ‘Using e-assessment to learn about learning’ and in her closing address Erica Morris pulled out the following themes from the conference:

  • Ensuring student engagement
  • Devising richer assessments
  • Unpacking feedback
  • Revisiting frameworks and principles
  • and… Extending staff learning and development

I agree with Erica entirely, I just wonder how we can make it happen.

The Cargo Cult

Thursday, July 11th, 2013

I suspect that this reflection from the 14th International Computer Aided Assessment Conference (CAA2013) may not go down well with all of my readers. I refer to the mention in several papers of the use of technology in teaching and learning as a ‘cargo cult’.

Perhaps I’d better start by saying what the term ‘cargo cult’ is being used to mean. Lester Gilbert et al. (2013) explained that ‘cargo cults refer to post-World-War II Melanesian movements whose members believe that various ritualistic acts will lead to a bestowing of material wealth’ and, by analogy, ‘cargo cult science is a science with no effective understanding of how a domain works’. Lester then quoted Feynman (1985):

‘I found things that even more people believe, such as that we have some knowledge of how to educate. There are big schools of reading methods and mathematics methods, and so forth, but if you notice, you’ll see the reading scores keep going down–or hardly going up–in spite of the fact that we continually use these same people to improve the methods. There’s a witch doctor remedy that doesn’t work. [This is an] example of what I would like to call cargo cult science.’

I’m not sure that my understanding is the same as Lester Gilbert’s or Richard Feynman’s, but the point that struck me forcibly was the reminder of the ritualistic, ‘witch-doctor’ approach of much of what we do. Actually it doesn’t just apply to our use of technology. We have a mantra that doing such-and-such or using such-and-such a technical solution will improve the quality of our teaching and the quality of our students’ learning, and we are very often low on understanding of the underlying pedagogy. We are also pretty low on evidence of impact, but we keep on doing things differently just because we feel that it ought to work – or perhaps because we hope that it will.

Tom Hench ended his presentation (which I’ll talk about in another post) by saying that we need ‘research, research and research’ into what we do in teaching. I agree.

Feynman, R. (1985). Cargo cult science. In Surely You’re Joking, Mr. Feynman! W. W. Norton.

Gilbert, L., Wills, G. and Sitthisak, O. (2013). Perpetuating the cargo cult: never mind the pedagogy, feel the technology. In Proceedings of the CAA2013 International Conference, 9th-10th July, Southampton.

Oral feedback and assessment

Sunday, July 7th, 2013

As discussed in my previous post, the Assessment in Higher Education Conference was excellent. I helped Tim Hunt to run a ‘MasterClass’ (workshop!) on ‘Producing high quality computer-marked assessment’ and, with Janet Haresnape, ran a practice exchange on the evaluation of our faculty-wide move to formative thresholded assessment. As a member of the organising committee I also ran around chairing sessions, judging posters etc. and I have to say I loved every minute of it. I see from the conference website that another delegate has said it was the best conference they have ever attended, and I would agree with this.

I could talk more about a number of the presentations I heard, but for now I will just reflect on two themes. Here’s the first.

I have read a fair amount about the use of audio files and/or screencasts to give feedback and enjoyed the presentation from Janis MacCallum (and Charlotte Chalmers) from Edinburgh Napier University on ‘An evaluation of the effectiveness of audio feedback, and of the language used, in comparison with written feedback’. One of Janis and Charlotte’s findings is that many more words of feedback are given when the feedback is given as an audio file. Another point, widely made, is that students like audio feedback because they can hear the tone of the marker’s voice. In the unlikely event of finding spare time, the use of audio feedback is something I’d like to investigate in the context of the OU’s Science Faculty.

There is a sense in which oral assessment (i.e. assessing by viva) is just the next step. There are issues, especially to do with student anxiety and the possibility of examiner bias. However, if you are there with a student, you can tease out how much they know and understand. I find it an exciting possibility. Gordon Joughin from the University of Queensland, who is an expert on oral assessment, gave an excellent keynote on the subject (though being a dim-twit I didn’t understand his title: ‘Plato versus AHELO: The nature and role of the spoken word in assessment and promoting learning’). His slides are here. Lots to think about.

The 5th Assessment in Higher Education Conference will run in 2015 – be there!

Good

Friday, November 16th, 2012

The other thing that was discussed at yesterday’s ‘Analysing feedback’ session at the JISC online conference ‘Innovating e-Learning: shaping the future’ was the role of praise in feedback.

Assessment in HE Conference

Friday, November 2nd, 2012

This is an unashamed advertisement for the Assessment in HE Conference, to be held on 26th-27th June 2013. This is the 4th such conference, but it is moving from a 1-day to a 2-day event and from Carlisle to Birmingham. I think it will be good.

There is more information on the 2013 Assessment in HE Conference – Call for Papers and the conference website.

CAA 2012

Sunday, July 15th, 2012

Last week I attended the International Computer Assisted Assessment Conference in Southampton. This is the third consecutive year I’ve attended this conference and I enjoyed it, even if it was sometimes challenging to the point of being depressing.

So what is there to be depressed about? Bobby Elliott from the Scottish Qualifications Agency said ‘CAA2002 would be disappointed in CAA 2012’ – not because of the conference itself, but because computer aided assessment has not achieved as much as was hoped 10 years ago. Sue Timmis from the University of Bristol summed up the problem by saying that, in reviewing the literature relating to the use of digital technologies in assessment, she and colleagues have not yet found evidence of a transformative effect. Steve Draper from the University of Glasgow, the keynote speaker, raised another issue in saying that there is not much evidence of the effectiveness of feedback given from tutor to student.

So, on one level, has all of our work been a waste of time? I think I’d be slightly more optimistic, if only because most of the conference attendees were interested in these issues, rather than talking about a wish to use technology whether or not that is the best solution from the students’ point of view. So at least our focus is on learning and teaching and we are looking for evidence of effectiveness rather than sailing on regardless – now we just have to get it right!

One good thing that came out of the conference is that John Kleeman told me about his Assessment Prior Art wiki – do take a look.

Throw away the handouts

Friday, September 23rd, 2011

I was at a meeting in Bristol yesterday ‘Using assessment to engage students and enhance their learning’. Much of the discussion was on the use of peer assessment (and plenty of interesting stuff), with a keynote from Paul Orsmond, considering student and tutor behaviour inside and outside the formal curriculum.

However, what struck me most was something reported in a presentation from Harriet Jones of the Biosciences Department at the University of East Anglia (UEA). They want students to make their own notes so have made a conscious decision to stop giving out lecture notes (though copies of presentations used in lectures are available on their VLE 48 hours before each lecture, for those who want to download a copy and also for students who want to check something later). It’s a brave decision but also, I think, a very sensible one.

The testing effect

Tuesday, August 16th, 2011

This will be my final post that picks up a theme from CAA 2011, but the potential implications of this one are massive. For the past few weeks I have been trying to get my head around the significance of the ideas I was introduced to by John Kleeman’s presentation ‘Recent cognitive psychology research shows strongly that quizzes help retain learning’. I’m ashamed to admit that the ideas John was presenting were mostly new to me. The ideas echo a lot of what we do at the UK Open University in encouraging students to learn actively, but they go further. Thank you John!