Archive for the ‘conferences’ Category

Reflections on AHEC 2: Assessment transparency

Saturday, June 27th, 2015

I should start by saying that Tim Hunt’s summary of last week’s Assessment in Higher Education Conference is excellent, so I don’t know why I’m bothering! Seriously, we went to some different sessions, in particular Tim went to many more sessions on feedback than I did, so do take a look at his blog posting.

Moving on to “Assessment transparency”. I’m picking up here on one of the themes that Tim also alludes to: the extent to which our students do, or don’t, understand what is required of them in assessed tasks. The fact that students don’t understand what we expect them to do is one of the findings I reported on in my presentation “Formative thresholded evaluation: Reflections on the evaluation of a faculty-wide change in assessment practice”, which is on Slideshare here. Similar issues were raised in the presentation I attended immediately beforehand (by Anke Buttner, entitled “Charting the assessment landscape: Preliminary evaluations of an assessment map”). This is not complicated stuff we’re talking about – nothing as sophisticated as having a shared understanding of the purpose of assessment (though that would be nice!).

It might seem obvious that we want students to know what they have to do in assessment tasks, but there is actually a paradox in all of this. To quote Tim Hunt’s description of a point in Jo-Anne Baird’s final keynote: “if assessment is too transparent it encourages pathological teaching to the test. This is probably where most school assessment is right now, and it is exacerbated by the excessive ways school exams are made high stakes, for the student, the teacher and the school. Too much transparency (and risk averseness) in setting assessment can lead to exams that are too predictable, hence students can get a good mark by studying just those things that are likely to be on the exam. This damages validity, and more importantly damages education.” Suddenly things don’t seem quite so straightforward.

Reflections on AHEC 1: remembering that students are individuals

Thursday, June 25th, 2015

I’ve been at the 5th Assessment in Higher Education Conference, now truly international, and a superb conference. As in 2013, the conference was at Maple House in Birmingham. With 200 delegates we filled the venue completely, but it was a deliberate decision to use the same venue and to keep the conference relatively small. As the conference goes from strength to strength we will need to review that decision again for 2017, but a small and friendly conference has a lot to commend it. We had some ‘big names’, with Masterclasses from David Boud, David Carless, Tansy Jessop and Margaret Price, and keynotes from Maddalena Taras and Jo-Anne Baird. There were also practitioners from a variety of backgrounds and with varying knowledge of assessment literature.

For various reasons I attended some talks that only attracted small audiences, but I learnt a lot from these. One talk that had a lot of resonance with my own experience was Robert Prince’s presentation on “Placement for Access and a fair chance of success in South African Higher Education institutions”.  Robert talked about the different educational success of students of different ethnicity, both at School and at South African HE institutions. The differences are really shocking. They are seeking to address the situation at school level, but Robert rightly recognises that universities also need to be able to respond appropriately to students from different backgrounds, perhaps allowing the qualification to be completed over a longer period of time.

Robert went on to talk about the ‘National Benchmark Tests (NBT)’ Project, which has produced tests of academic literacy, quantitative literacy and mathematics. The really scary, though sadly predictable, finding is that the National Benchmark Tests are extremely good at predicting outcome. But the hope is that the tests can be used to direct students to extended or flexible programmes of study.

In my mind, Robert’s talk sits alongside Gwyneth Hughes’s talk on ipsative assessment i.e. assessing the progress of an individual student (looking for ‘value added’). Gwyneth talked about ways in which ipsative assessment (with a grade for progress) might be combined with conventional summative assessment, but that for me is the problem area. If we are assessing someone’s progress and they have just not progressed far enough I’m not convinced it is helpful to use the same assessment as for students who are flying high.

But the important thing is that we are looking at the needs of individual students rather than teaching and assessing a phantom average student.

Dé Onderwijsdagen Pre Conference : ‘Digital testing’

Saturday, November 15th, 2014

It has been quite a week. On Wednesday I sat on the edge of my seat in the Berrill Lecture theatre at the UK Open University waiting to see if Rosetta’s lander Philae, complete with the Ptolemy instrumentation developed by, amongst others, colleagues in our Department of Physical Sciences, would make it to the surface of Comet 67P/Churyumov–Gerasimenko. I’m sure that everyone knows by now that it did, and despite the fact that the lander bounced and came to rest in a non-optimal position, some incredible scientific data has been received; so there is lots more for my colleagues to do! Incidentally the photo shows a model of the lander near the entrance to our Robert Hooke building.

Then on Friday, we marked the retirement of Dr John Bolton, who has worked for the Open University for a long time and made huge contributions. In particular, John is one of the few who has really analysed student engagement with interactive computer-marked assessment questions. More on that to follow in a later posting; John has been granted visitor status and we are hoping to continue to work together.

However, a week ago I was just reaching Schiphol airport prior to a day in Amsterdam on Sunday and then delivering the keynote presentation at Dé Onderwijsdagen (‘Education Days’) Pre Conference: ‘Digital testing’ at the Beurs World Trade Center in Rotterdam. It was wonderful to be speaking to an audience of about 250 people, all of whom had chosen to come to a meeting about computer-based assessment and its impact on learning. Even more amazing if you consider that the main conference language was Dutch, so these people were all from The Netherlands, a country with a total population about a quarter the size of the UK.

There is some extremely exciting work going on in the Netherlands, with a programme on ‘Testing and Test-Driven Learning’ run by SURF. Before my keynote we heard about the testing of students’ interpretation of radiological images – it was lovely to see the questions close to the images (one of the things I went on to talk about was the importance of good assessment design) – and about ‘the Statistics Factory’, running an adaptive test in a gaming environment. This linked nicely to my finding that students find quiz questions ‘fun’ and that even simple question types can lead to deep learning. Most exciting is the emphasis on learning rather than on the use of technology for the sake of doing so.

I would like to finish this post by highlighting some of the visions/conclusions from my keynote:

1. To assess MOOCs and other large online courses, why don’t we start off by using peer assessment to mark short-answer questions? Because of the large student numbers this would lead to accurate marking of a large number of responses, with only minimal input from an expert marker. Then we could use these marked responses and machine learning to develop Pattern Match type answer matching, to allow automatic marking for subsequent cohorts of students.

2. Instead of sharing completed questions, let’s share the code behind the questions so that users can edit as appropriate. In other words, let’s be completely open.

3. It is vitally important to evaluate the impact of what we do and to develop questions iteratively. And whilst the large student numbers at the UK Open University mean that the use of computer-marked assessment has saved us money, developing high-quality questions does not come cheap.

4. Computer-marked assessment has a huge amount to commend it, but I still don’t see it as a panacea. I still think that there are things (e.g. the marking of essays) that are better done by humans. I still think it is best to use computers to mark and provide instantaneous feedback on relatively simple question types, freeing up human time to help students in the light of improved knowledge of their misunderstandings (from the simple questions) and to mark more sophisticated tasks.
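Point 1 of the list above describes a two-stage pipeline: peer-marked responses first, then machine learning over those marks to bootstrap automatic marking. Here is a deliberately minimal toy sketch of the second stage, using naive bag-of-words grade profiles – this is not the OU’s actual Pattern Match system, and all the data, grade labels and function names are invented for illustration:

```python
from collections import Counter

def tokens(text):
    """Crude tokeniser: lower-case words with surrounding punctuation stripped."""
    return [w.strip(".,;:!?").lower() for w in text.split()]

def train(marked_responses):
    """Build a word-frequency profile per grade from peer-marked responses."""
    profiles = {}
    for response, grade in marked_responses:
        profiles.setdefault(grade, Counter()).update(tokens(response))
    return profiles

def auto_mark(response, profiles):
    """Assign the grade whose word profile overlaps most with the response."""
    words = Counter(tokens(response))
    def overlap(grade):
        profile = profiles[grade]
        total = sum(profile.values()) or 1  # normalise so grade sizes don't bias
        return sum(n * profile[w] / total for w, n in words.items())
    return max(profiles, key=overlap)

# Hypothetical peer-marked responses to "Why does light bend on entering glass?"
seed = [
    ("light bends because its speed changes between media", "correct"),
    ("refraction occurs because light changes speed in the new medium", "correct"),
    ("light bends because it is attracted by the glass", "incorrect"),
    ("the glass pulls the light towards it", "incorrect"),
]
profiles = train(seed)
print(auto_mark("the ray slows so its speed changes", profiles))  # prints "correct"
```

A real system would of course need richer features and an expert-moderated training set; the sketch is only meant to show how a pool of peer-marked responses could seed automatic marking for later cohorts.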

The videos from my keynote and the other presentations are at

MOOCs: same or different?

Sunday, November 2nd, 2014

Last week’s EDEN Research Workshop was thought-provoking in many ways. Incidentally, I think that was largely because of the format that discouraged long presentations and encouraged discussion and reflection. I thought this would irritate me but it didn’t.

One of the questions that the workshop prompted for me (and, if the ‘fishbowl’ discussion at the end is to be believed, for others too) is the extent to which our wealth of previous research into student engagement with open and distance learning (especially when online) is relevant to MOOCs. Coincidentally, my [paper!] copy of  the November issue of Physics World arrived yesterday, and a little piece entitled “Study reveals value of online learning” lept out and hit me. It’s about work at MIT that has pre- and post-tested participants on a mechanics MOOC. The details are at:

Colvin, K. F., Champaign, J., Liu, A., Zhou, Q., Fredericks, C., & Pritchard, D. E. (2014). Learning in an introductory physics MOOC: All cohorts learn equally, including an on-campus class. The International Review of Research in Open and Distance Learning, 15(4), 263-282.
They found that students, whether well or poorly prepared, learnt well. The Physics World article comments that David Pritchard, the leader of the MIT researchers, “finds it ‘rather dismaying’ that no-one else has published about whether learning takes place, given that there are thousands of MOOCs available”. I agree with Pritchard that we need more robust research into the effectiveness of MOOCs. However, I come back to the same point: to what extent does everything we know about open and distance learning apply?

I used to get really annoyed that people talked about MOOCs as if what they are doing is entirely new, and when the Physics World article goes on to compare MOOCs with traditional classroom learning, as if nothing else has existed, I feel that annoyance surfacing. However, at EDEN, I suddenly realised that there are some fundamental differences. Most people studying MOOCs are already well qualified; that is increasingly not the case for our typical Open University students. I accept that the MIT work has looked at “less well prepared” MOOC-studiers, and that is very encouraging, but I wonder if it is appropriate to generalise, or to attempt to support such a wide spectrum of different learners in the same way. Secondly, most work on the impact of educational interventions considers students who are retained, and the MIT study is no exception; they only considered students who had engaged with 50% or more of the tasks; if my maths is right, that was about 6% of those initially registered. Much current work at the Open University rightly focuses on retaining our students; all our students. Then of course there are differences in length of module and typical study intensity, and so on.

I suppose an appropriate conclusion is that MOOC developers should both learn from and inform developers of more conventional open and distance learning modules. And I note that the issue of The International Review of Research in Open and Distance Learning that follows the one in which the MIT work is reported is a special issue looking at research into MOOCs. That’s good.

The Unassessors

Friday, October 31st, 2014

Following a small group discussion at the 8th EDEN (European Distance and E-learning Network) Research Workshop in Oxford earlier in the week, I accepted the task of standing up to represent our group in saying that our radical change would be to do away with assessment. It was, of course, somewhat tongue in cheek, and we didn’t really mean that we would do away with assessment entirely, rather that we would radically alter its current form. Assessment is so often seen as “the problem” in education, “the tail wagging the dog” and we spend a huge amount of money and time on it, so a radical appraisal is perhaps overdue; as others who are wiser than me have said before. We should, at the very least, stop and think what we really want from our assessment; despite the longstanding assessment for learning/assessment of learning debate, I still don’t think we really know.

You’ll note that I am using the word “we” in the previous paragraph. That’s deliberate, because I am including the whole assessment community in this (researchers and practitioners); I am certainly not just talking about my own University. I feel the need to explain that point because the rapporteur at the EDEN Research Workshop managed to rather misunderstand my paper and so to criticise the Open University’s current assessment practice as being the same as it was 25 years ago. It is my fault entirely for not making it clearer who I am and what I was trying to say; because I am basically a practitioner, and proud of it, I suffer quite a lot from people not appreciating the amount of reading and thinking that I have done.  The rapporteur was absolutely right to be critical; that’s what the role is about and I am very grateful to him for making me review my standpoint. It is also true – as I say frequently – that we all, including those of us at the Open University, should learn from others. However, I’d ask whether any distance learning provider does much better.

There is a related point, concerning the extent to which change should be evolutionary or revolutionary. It is simply not true that Open University assessment practice is the same as it was 25 years ago: 25 years ago, our tuition was all face to face (we now make extensive use of synchronous and asynchronous online tools); our tutor-marked assignments were submitted through the post; our use of computer-marked assessment was limited to multiple-choice questions with responses recorded with a pencil on machine-readable forms (no instantaneous, graduated and targeted feedback; no constructed response questions and certainly no short-answer free-text questions); and we made considerably less use of end-of-module assignments, oral assessment, assessment of collaborative activity and peer assessment. Things have changed quite a lot! However, the fundamental structures and many of the policies remain the same. Our students seem happy with what we do, but nevertheless perhaps it is time for change. Perhaps that’s true of other universities too!

ViCE/PHEC 2014

Friday, September 5th, 2014

The ‘interesting’ title of this post relates to the joint Variety in Chemistry Education/Physics Higher Education Conference that I was on my way home from a week ago. Apologies for my delay in posting, but since then I have celebrated my birthday, visited my elderly in-laws, moved into new Mon-Fri accommodation, joined a new choir, celebrated Richard’s and my 33rd wedding anniversary – and passed the viva for my PhD by publication, with two typos to correct and one minor ‘point of clarification’. It has been an amazing week!

The conference was pretty good too. It was held at the University of Durham, whose Physics Department (and, obviously, Cathedral) is much as it was when I graduated more than 36 years ago. However, most of the sessions were held in the new and shiny Calman Learning Centre (with the unnervingly named Arnold Wolfendale Lecture Theatre, since I remember Professor Wolfendale very well from undergraduate days). There were lots more chemists than physicists, I don’t really know why, and lots of young enthusiastic university teaching fellows. Great!

Sessions that stood out for me include the two inspirational keynotes and both of the workshops that I attended, plus many conversations with old and new friends. The first keynote was given by Simon Lancaster from UEA and its title was ‘Questioning the lecture’. He started by telling us not to take notes on paper, but instead to get onto social media. I did, though I find it difficult to listen and to post meaningful tweets at the same time. Is that my age? However I agree with a huge amount of what Simon said, in particular that we should cut out lots of the content that we currently teach.

Antje Kohnle’s keynote on the second day had a very different style. Antje is from the University of St Andrews and she was talking about the development of simulations to make it easier for students to visualise some of the counterintuitive concepts in quantum mechanics. The resource that has been developed is excellent, but the important point that Antje emphasised is the need to develop resources such as this iteratively, making use of feedback from students. Absolutely!

The two workshops that I so much enjoyed were (1) ‘Fostering learning improvements in physics’, a thoughtful reflection, led by Judy Hardy and Ross Galloway from the University of Edinburgh, on the implications of the FLIP Project; and (2) the interestingly named (from a student comment) ‘I don’t know much about physics, but I do know buses’, led by Peter Sneddon at the University of Glasgow, looking at questions designed to test students’ estimation skills and their confidence in estimation.

The quality of the presentations was excellent, bearing in mind that some people were essentially enthusiastic teachers whilst others were further advanced in their understanding of educational research. I raised the issue of correlation not implying causality at one stage, but immediately wished that I hadn’t. I think that, by and large, the interventions that were being described are ‘good things’ and of course it is almost impossibly difficult to prove that it is your intervention that has resulted in the improvement that you see.

In sessions and informal discussion with colleagues, the topics that kept striking me were (1) the importance of student confidence; and (2) reasons for underperformance (by several measures) of female students. We are already planning a workshop for next year!

Oh yes, and Durham’s hills have got hillier…

Staff engagement with e-assessment

Thursday, July 11th, 2013

More reflections from CAA2013 (held in Southampton, just down the road from the Isle of Wight ferry terminal)…

In the opening keynote, Don Mackenzie talked about the ‘rise and rise of multiple-choice questions’. This was interesting, because he was talking in the context of more innovative question types having been used back in the 1990s than are used now. I wasn’t working in this area in the 1990s so I don’t know what things were like then, but somehow what Don said didn’t surprise me.

Don went on to identify three questions that each of us should ask ourselves, implying that these were the stumbling blocks to better practice. The questions were:

  • Have you got the delivery system that you need?
  • Have you got the institutional support that you need?
  • Have you got the peer support that you need?

I wouldn’t argue with those, but I think I can say ‘yes’ to all three in the context of my own work – so why aren’t we doing better?

I think I’d identify two further issues:

1. It takes time to write good questions and this needs to be recognised by all parties;

2. There is a crying need for better staff development.

I’d like to pursue the staff development theme a little more. I think there is a need firstly for academics to appreciate that they can and should ‘do better’ (otherwise people do what is easy and we end up with lots of multiple-choice questions, and not necessarily even good multiple-choice questions), but then we need to find a way of teaching people how to do better. In my opinion this is about engaging academics, not software developers – and in the best possible world the two would work together to design good assessments. That means that staff development is best delivered by people who actually use e-assessment in their teaching, i.e. people like me. The problem is that people like me are busy doing their own job so don’t have any time to advise others. Big sigh. Please someone, find a solution – it is beyond me.

I ended up talking a bit about the need for staff development in my own presentation ‘Using e-assessment to learn about learning’ and in her closing address Erica Morris pulled out the following themes from the conference:

  • Ensuring student engagement
  • Devising richer assessments
  • Unpacking feedback
  • Revisiting frameworks and principles
  • and… Extending staff learning and development

I agree with Erica entirely, I just wonder how we can make it happen.

The Cargo Cult

Thursday, July 11th, 2013

I suspect that this reflection from the 14th International Computer Assisted Assessment Conference (CAA2013) may not go down well with all of my readers. I refer to the mention in several papers of the use of technology in teaching and learning as a ‘cargo cult’.

Perhaps I’d better start by saying what the term ‘cargo cult’ is being used to mean. Lester Gilbert et al. (2013) explained that ‘cargo cults refer to post-World-War II Melanesian movements whose members believe that various ritualistic acts will lead to a bestowing of material wealth’ and, by analogy, ‘cargo cult science is a science with no effective understanding of how a domain works’. Lester then quoted Feynman (1985):

‘I found things that even more people believe, such as that we have some knowledge of how to educate. There are big schools of reading methods and mathematics methods, and so forth, but if you notice, you’ll see the reading scores keep going down–or hardly going up–in spite of the fact that we continually use these same people to improve the methods. There’s a witch doctor remedy that doesn’t work. [This is an] example of what I would like to call cargo cult science.’

I’m not sure that my understanding is the same as Lester Gilbert’s or Richard Feynman’s, but the point that struck me forcibly was the reminder of the ritualistic, ‘witch-doctor’ approach of much of what we do. Actually it doesn’t just apply to our use of technology. We have a mantra that doing such-and-such or using such-and-such a technical solution will improve the quality of our teaching and the quality of our students’ learning, and we are very often low on understanding of the underlying pedagogy. We are also pretty low on evidence of impact, but we keep on doing things differently just because we feel that it ought to work – or perhaps because we hope that it will.

Tom Hench ended his presentation (which I’ll talk about in another post)  by saying that we need ‘research, research and research’ into what we do in teaching. I agree.

Feynman, R. (1985). Cargo cult science. In Surely You’re Joking, Mr. Feynman! W. W. Norton.

Gilbert, L., Wills, G., & Sitthisak, O. (2013). Perpetuating the cargo cult: Never mind the pedagogy, feel the technology. In Proceedings of the CAA2013 International Conference, 9th–10th July, Southampton.

Oral feedback and assessment

Sunday, July 7th, 2013

As discussed in my previous post, the Assessment in Higher Education Conference was excellent. I helped Tim Hunt to run a ‘MasterClass’ (workshop!) on ‘Producing high quality computer-marked assessment’ and, with Janet Haresnape, ran a practice exchange on the evaluation of our faculty-wide move to formative thresholded assessment. As a member of the organising committee I also ran around chairing sessions, judging posters etc. and I have to say I loved every minute of it. I see from the conference website that another delegate has said it was the best conference they have ever attended, and I would agree with this.

I could talk more about a number of the presentations I heard, but for now I will just reflect on two themes. Here’s the first.

I have read a fair amount about the use of audio files and/or screencasts to give feedback and enjoyed the presentation from Janis MacCallum (and Charlotte Chalmers) from Edinburgh Napier University on ‘An evaluation of the effectiveness of audio feedback, and of the language used, in comparison with written feedback’. One of Janis and Charlotte’s findings is that many more words of feedback are given when the feedback is given as an audio file. Another point, widely made, is that students like audio feedback because they can hear the tone of the marker’s voice. In the unlikely event of finding spare time, the use of audio feedback is something I’d like to investigate in the context of the OU’s Science Faculty.

There is a sense in which oral assessment (i.e. assessing by viva) is just the next step. There are issues, especially to do with student anxiety and the possibility of examiner bias. However, if you are there with a student, you can tease out how much they know and understand. I find it an exciting possibility. Gordon Joughin from the University of Queensland, who is an expert on oral assessment, gave an excellent keynote on the subject (though being a dimwit I didn’t understand his title: ‘Plato versus AHELO: The nature and role of the spoken word in assessment and promoting learning’). His slides are here. Lots to think about.

The 5th Assessment in Higher Education Conference will run in 2015 – be there!


Friday, November 16th, 2012

The other thing that was discussed at yesterday’s ‘Analysing feedback’ session at the JISC online conference ‘Innovating e-Learning: shaping the future’ was the role of praise in feedback. (more…)