Archive for the ‘conferences’ Category

EAMS 2016

Friday, November 25th, 2016

Back in September I was delighted to be a keynote speaker at EAMS: the first ever conference on E-Assessment in the Mathematical Sciences. For more information about the conference in general click here; for a recording of my keynote click here.

The conference was a pleasant surprise in many ways. Firstly, it was truly international, with attendees from Australia, Finland, Ireland, Japan, Norway, the Netherlands, South Africa and the US as well as from all over the UK. Secondly, it was not as “techy” as I’d feared it might have been, nor did the maths go right over my head (well, not very often…). Thirdly, and most importantly, I had to tone down my keynote, which had been written to be rather critical of much e-assessment practice, because there was some fantastic stuff reported at the conference! I was particularly pleased to be a keynote speaker alongside Christian Lawson-Perfect, Chris Sangwin and Michael Gage. Thus we heard about some of the very good e-assessment systems and question types for mathematical sciences: NUMBAS, STACK and WeBWorK. I added in some detail on Pattern Match.

As I’ve said before in this blog, I am particularly impressed by STACK, and its author Chris Sangwin gave a particularly thoughtful talk on “the interplay between calculation and reasoning”, which fed very neatly into my discussion of “how far is it appropriate to go” in assessing automatically.

Of course there were talks that were less good; the one point that I’d still want to emphasise is the need to monitor student use of questions very closely, and not to assume that they are behaving in the way that you think they are. However, it was a joy to be at a meeting with such lovely people, most of whom seemed driven by a desire to improve their students’ learning. It was also a great pleasure that Cliff Beevers, who was already an expert in this area when I was just setting out in the very early 2000s, was also at the meeting.

Keys to transforming assessment at institutional level: selected debates from AHE2016

Sunday, September 11th, 2016

Hot on the heels of my post about Sue Bloxham’s keynote at the Assessment in Higher Education workshop in June, this post is about the follow-up Transforming Assessment webinar “Keys to transforming assessment at institutional level: selected debates from AHE2016”.

Three talks from the AHE workshop had been selected for the webinar on the basis of the fact that they really did focus on change at the institutional level, and I thoroughly enjoyed chairing the session. If you want to watch the whole thing, click here for more information and the recordings.

The first of the three talks that we’d selected was “Changing feedback practice at an institutional level” in which Sally Brown talked about work at the University of Northumbria, Leeds Met (now Beckett) University and Anglia Ruskin University. Kay Sambell had given this talk at the earlier workshop and their conclusions were that

  • Slow transformative development has more impact than attempts at quick fixes;
  • Having money to support activities and personnel is important, but large amounts of cash don’t necessarily lead to major long-term impact;
  • Long-term ownership by senior managers is essential for sustainability;
  • To have credibility, activities need to be based on evidence-based scholarship;
  • Committed, passionate and convincing change agents achieve more than top-down directives.

The third of the three talks was “Changing colours: what happens when you make enhancement an imperative?” in which Juliet Williams talked about the impact of the TESTA (Transforming the Experience of Students through Assessment) Project at the University of Winchester.

However, from the conversations that I had at the workshop in June, it was the middle talk (given at the Webinar by Dave Morrison of the University of Plymouth because Amanda Sykes from the University of Glasgow was unavailable) that had inspired many of the attendees – bearing in mind that these were largely assessment practitioners not experts. The title was “Half as much but twice as good” and the important points I picked up were that

  • Timely feedback is more important than detailed feedback
  • [students are as busy as we are so] Less feedback can be more effective. If a student only reads your feedback for 30 seconds, what do you want them to take?


We ended with a good discussion of how to bring true institutional change.

Central Challenges in Transforming Assessment at the Departmental and Institutional Level

Saturday, September 10th, 2016

Back on 30th June, Assessment in Higher Education (AHE) held a seminar in Manchester with the theme of “Transforming assessment and feedback in Higher Education on a wider scale: the challenge of change at institutional level”. The idea behind the seminar was partly to hold a smaller-scale event between our increasingly large biennial conferences, though we had well over 100 attendees.

AHE are now working in collaboration with Transforming Assessment, and we live-streamed Sue Bloxham’s keynote to a further twenty or so people around the world. Then on 13th July we had a dedicated webinar to which three selected presenters from the seminar contributed. My involvement in both of these events meant that I had a double ‘bite at the cherry’. I heard one set of presentations at the seminar itself, then I heard the three presentations on 13th July, and chaired a discussion of overlapping themes. There was some fantastic stuff.

As I try to catch up with this blog, I’ll start by describing my take on Sue’s keynote. She started from the precept that assessment remains unfit for purpose – and change is slow. She went on to outline what she described as key barriers to assessment enhancement, of which the two that resonate most with my own experience are:


  • centrally imposed change, which produces resistance. Sue’s proposed solution is that we should put the focus for change on small low-level workgroups.
  • the need for assessment literacy for staff. Here the focus must be on adequate professional development.

Sue went on to describe a framework which might be drawn upon to create the conditions for transformation at institutional or departmental level, based around

  • key principles
  • infrastructure
  • strategy
  • assessment literacy.


It was inspirational; now we just need to make the change happen. Don’t take my word for it though; you can watch the recording of Sue’s keynote here.


Reflections on AHEC 2: Assessment transparency

Saturday, June 27th, 2015

I should start by saying that Tim Hunt’s summary of last week’s Assessment in Higher Education Conference is excellent, so I don’t know why I’m bothering! Seriously, we went to some different sessions, in particular Tim went to many more sessions on feedback than I did, so do take a look at his blog posting.

Moving on to “Assessment transparency”. I’m picking up here on one of the themes that Tim also alludes to, the extent to which our students do, or don’t, understand what is required of them in assessed tasks. The fact that students don’t understand what we expect them to do is one of the findings I reported on in my presentation “Formative thresholded evaluation: Reflections on the evaluation of a faculty-wide change in assessment practice” which is on Slideshare here. Similar issues were raised in the presentation I attended immediately beforehand (by Anke Buttner and entitled “Charting the assessment landscape: Preliminary evaluations of an assessment map”). This is not complicated stuff we’re talking about – not anything as sophisticated as having a shared understanding of the purpose of assessment (though that would be nice!).

It might seem obvious that we want students to know what they have to do in assessment tasks, but there is actually a paradox in all of this. To quote Tim Hunt’s description of a point in Jo-Anne Baird’s final keynote: “if assessment is too transparent it encourages pathological teaching to the test. This is probably where most school assessment is right now, and it is exacerbated by the excessive ways school exams are made high stakes, for the student, the teacher and the school. Too much transparency (and risk averseness) in setting assessment can lead to exams that are too predictable, hence students can get a good mark by studying just those things that are likely to be on the exam. This damages validity, and more importantly damages education.” Suddenly things don’t seem quite so straightforward.

Reflections on AHEC 1: remembering that students are individuals

Thursday, June 25th, 2015

I’ve been at the 5th Assessment in Higher Education Conference, now truly international, and a superb conference. As in 2013, the conference was at Maple House in Birmingham.  With 200 delegates we filled the venue completely, but it was a deliberate decision to use the same venue and to keep the conference relatively small. As the conference goes from strength to strength we will need to review that decision again for 2017, but a small and friendly conference has a lot to commend it. We had some ‘big names’, with Masterclasses from David Boud, David Carless, Tansy Jessop and Margaret Price, and keynotes from Maddalena Taras and Jo-Anne Baird. There were also practitioners from a variety of backgrounds and with varying knowledge of assessment literature.

For various reasons I attended some talks that only attracted small audiences, but I learnt a lot from these. One talk that had a lot of resonance with my own experience was Robert Prince’s presentation on “Placement for Access and a fair chance of success in South African Higher Education institutions”.  Robert talked about the different educational success of students of different ethnicity, both at School and at South African HE institutions. The differences are really shocking. They are seeking to address the situation at school level, but Robert rightly recognises that universities also need to be able to respond appropriately to students from different backgrounds, perhaps allowing the qualification to be completed over a longer period of time.

Robert went on to talk about the ‘National Benchmark Tests (NBT)’ Project, which has produced tests of academic literacy, quantitative literacy and mathematics. The really scary, though sadly predictable, finding is that the National Benchmark Tests are extremely good at predicting outcome. But the hope is that the tests can be used to direct students to extended or flexible programmes of study.

In my mind, Robert’s talk sits alongside Gwyneth Hughes’s talk on ipsative assessment i.e. assessing the progress of an individual student (looking for ‘value added’). Gwyneth talked about ways in which ipsative assessment (with a grade for progress) might be combined with conventional summative assessment, but that for me is the problem area. If we are assessing someone’s progress and they have just not progressed far enough I’m not convinced it is helpful to use the same assessment as for students who are flying high.

But the important thing is that we are looking at the needs of individual students rather than teaching and assessing a phantom average student.

Dé Onderwijsdagen Pre Conference: ‘Digital testing’

Saturday, November 15th, 2014

It has been quite a week. On Wednesday I sat on the edge of my seat in the Berrill Lecture theatre at the UK Open University waiting to see if Rosetta’s lander Philae, complete with the Ptolemy instrumentation developed by, amongst others, colleagues in our Department of Physical Sciences, would make it to the surface of Comet 67P/Churyumov–Gerasimenko. I’m sure that everyone knows by now that it did, and despite the fact that the lander bounced and came to rest in a non-optimal position, some incredible scientific data has been received; so there is lots more for my colleagues to do! Incidentally the photo shows a model of the lander near the entrance to our Robert Hooke building.

Then on Friday, we marked the retirement of Dr John Bolton, who has worked for the Open University for a long time and made huge contributions. In particular, John is one of the few who has really analysed student engagement with interactive computer-marked assessment questions. More on that to follow in a later posting; John has been granted visitor status and we are hoping to continue to work together.

However, a week ago I was just reaching Schiphol airport prior to a day in Amsterdam on Sunday and then delivering the keynote presentation at Dé Onderwijsdagen (‘Education Days’) Pre Conference: ‘Digital testing’ at the Beurs World Trade Center in Rotterdam. It was wonderful to be speaking to an audience of about 250 people, all of whom had chosen to come to a meeting about computer-based assessment and its impact on learning. Even more amazing if you consider that the main conference language was Dutch, so these people were all from The Netherlands, a country with a total population about a quarter the size of the UK.

There is some extremely exciting work going on in the Netherlands, with a programme on ‘Testing and Test-Driven Learning’ run by SURF. Before my keynote we heard about the testing of students’ interpretation of radiological images – it was lovely to see the questions close to the images (one of the things I went on to talk about was the importance of good assessment design) – and about ‘the Statistics Factory’, running an adaptive test in a gaming environment. This linked nicely to my finding that students find quiz questions ‘fun’ and that even simple question types can lead to deep learning. Most exciting is the emphasis on learning rather than on the use of technology for the sake of doing so.

I would like to finish this post by highlighting some of the visions/conclusions from my keynote:

1. To assess MOOCs and other large online courses, why don’t we start off by using peer assessment to mark short-answer questions? Because of the large student numbers this would lead to accurate marking of a large number of responses, with only minimal input from an expert marker. Then we could use these marked responses and machine learning to develop Pattern Match type answer matching, to allow automatic marking for subsequent cohorts of students.

2. Instead of sharing completed questions, let’s share the code behind the questions so that users can edit as appropriate. In other words, let’s be completely open.

3. It is vitally important to evaluate the impact of what we do and to develop questions iteratively. And whilst the large student numbers at the UK Open University mean that the use of computer-marked assessment has saved us money, developing high-quality questions does not come cheap.

4. Computer-marked assessment has a huge amount to commend it, but I still don’t see it as a panacea. I still think that there are things (e.g. the marking of essays) that are better done by humans. I still think it is best to use computers to mark and provide instantaneous feedback on relatively simple question types, freeing up human time to help students in the light of improved knowledge of their misunderstandings (from the simple questions) and to mark more sophisticated tasks.
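The first of these visions is essentially a bootstrapping workflow: human (peer) marks from one cohort become the training data for an automatic marker used with the next. As a purely illustrative sketch – this is not the actual Pattern Match system, and the toy data, question and labels are entirely invented – it might look something like this with scikit-learn:

```python
# Illustrative sketch only: train a simple text classifier on
# peer-marked short-answer responses from one cohort, then use it
# to mark responses from a later cohort automatically.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Responses peer-marked in year 1 (1 = correct, 0 = incorrect);
# invented data for a hypothetical chemistry question.
peer_marked_responses = [
    ("the rate doubles because the concentration doubles", 1),
    ("it gets faster", 0),
    ("rate is proportional to concentration so it doubles", 1),
    ("the reaction stops", 0),
]
texts, marks = zip(*peer_marked_responses)

# Word-level tf-idf features feeding a logistic-regression marker.
marker = make_pipeline(TfidfVectorizer(), LogisticRegression())
marker.fit(texts, marks)

# Automatic marking of a new cohort's responses.
new_answers = ["doubling the concentration doubles the rate"]
print(marker.predict(new_answers))
```

In practice a real system would need far more training responses, an expert-marker check on disputed cases, and much more careful feature engineering, but the overall shape – seed with human marks, then generalise by machine – is the point being made.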

The videos of my keynote and the other presentations are available online.

MOOCs: same or different?

Sunday, November 2nd, 2014

Last week’s EDEN Research Workshop was thought-provoking in many ways. Incidentally, I think that was largely because of the format that discouraged long presentations and encouraged discussion and reflection. I thought this would irritate me but it didn’t.

One of the questions that the workshop prompted for me (and, if the ‘fishbowl’ discussion at the end is to be believed, for others too) is the extent to which our wealth of previous research into student engagement with open and distance learning (especially when online) is relevant to MOOCs. Coincidentally, my [paper!] copy of the November issue of Physics World arrived yesterday, and a little piece entitled “Study reveals value of online learning” leapt out and hit me. It’s about work at MIT that has pre- and post-tested participants on a mechanics MOOC. The details are at:

Colvin, K. F., Champaign, J., Liu, A., Zhou, Q., Fredericks, C., & Pritchard, D. E. (2014). Learning in an introductory physics MOOC: All cohorts learn equally, including an on-campus class. The International Review of Research in Open and Distance Learning, 15(4), 263-282.

They found that students, whether well or poorly prepared, learnt well. The Physics World article comments that David Pritchard, the leader of the MIT researchers, “finds it ‘rather dismaying’ that no-one else has published about whether learning takes place, given that there are thousands of MOOCs available”. I agree with Pritchard that we need more robust research into the effectiveness of MOOCs. However, I come back to the same point: to what extent does everything we know about open and distance learning apply?

I used to get really annoyed that people talked about MOOCs as if what they are doing is entirely new, and when the Physics World article goes on to compare MOOCs with traditional classroom learning, as if nothing else has existed, I feel that annoyance surfacing. However, at EDEN, I suddenly realised that there are some fundamental differences. Most people studying MOOCs are already well qualified; that is increasingly not the case for our typical Open University students. I accept that the MIT work has looked at “less well prepared” MOOC-studiers, and that is very encouraging, but I wonder if it is appropriate to generalise, or to attempt to support such a wide spectrum of different learners in the same way. Secondly, most work on the impact of educational interventions considers students who are retained, and the MIT study is no exception; they only considered students who had engaged with 50% or more of the tasks – if my maths is right, that was about 6% of those initially registered. Much current work at the Open University rightly focuses on retaining our students; all our students. Then of course there are differences in length of module and typical study intensity, and so on.

I suppose an appropriate conclusion is that MOOC developers should both learn from and inform developers of more conventional open and distance learning modules. And I note that the issue of The International Review of Research in Open and Distance Learning that follows the one in which the MIT work is reported is a special issue looking at research into MOOCs. That’s good.

The Unassessors

Friday, October 31st, 2014

Following a small group discussion at the 8th EDEN (European Distance and E-learning Network) Research Workshop in Oxford earlier in the week, I accepted the task of standing up to represent our group in saying that our radical change would be to do away with assessment. It was, of course, somewhat tongue in cheek, and we didn’t really mean that we would do away with assessment entirely, rather that we would radically alter its current form. Assessment is so often seen as “the problem” in education, “the tail wagging the dog”, and we spend a huge amount of money and time on it, so a radical appraisal is perhaps overdue; as others who are wiser than me have said before. We should, at the very least, stop and think what we really want from our assessment; despite the longstanding assessment for learning/assessment of learning debate, I still don’t think we really know.

You’ll note that I am using the word “we” in the previous paragraph. That’s deliberate, because I am including the whole assessment community in this (researchers and practitioners); I am certainly not just talking about my own University. I feel the need to explain that point because the rapporteur at the EDEN Research Workshop managed to rather misunderstand my paper and so to criticise the Open University’s current assessment practice as being the same as it was 25 years ago. It is my fault entirely for not making it clearer who I am and what I was trying to say; because I am basically a practitioner, and proud of it, I suffer quite a lot from people not appreciating the amount of reading and thinking that I have done.  The rapporteur was absolutely right to be critical; that’s what the role is about and I am very grateful to him for making me review my standpoint. It is also true – as I say frequently – that we all, including those of us at the Open University, should learn from others. However, I’d ask whether any distance learning provider does much better.

There is a related point, relating to the extent to which change should be evolutionary or revolutionary. It is simply not true that Open University assessment practice is the same as it was 25 years ago: 25 years ago, our tuition was all face to face (we now make extensive use of synchronous and asynchronous online tools); our tutor-marked assignments were submitted through the post; our use of computer-marked assessment was limited to multiple-choice questions with responses recorded with a pencil on machine-readable forms (no instantaneous, graduated and targeted feedback; no constructed response questions and certainly no short-answer free-text questions); and we made considerably less use of end-of-module assignments, oral assessment, assessment of collaborative activity and peer assessment. Things have changed quite a lot! However, the fundamental structures and many of the policies remain the same. Our students seem happy with what we do, but nevertheless perhaps it is time for change. Perhaps that’s true of other universities too!

ViCE/PHEC 2014

Friday, September 5th, 2014

The ‘interesting’ title of this post relates to the joint Variety in Chemistry Education/Physics Higher Education Conference that I was on my way home from a week ago. Apologies for my delay in posting, but since then I have celebrated my birthday, visited my elderly in-laws, moved into new Mon-Fri accommodation, joined a new choir, celebrated Richard’s and my 33rd wedding anniversary – and passed the viva for my PhD by publication, with two typos to correct and one minor ‘point of clarification’. It has been an amazing week!

The conference was pretty good too. It was held at the University of Durham whose Physics Department (and, obviously, Cathedral) is much as it was when I graduated more than 36 years ago. However most of the sessions were held in the new and shiny Calman Learning Centre (with the unnervingly named Arnold Wolfendale Lecture Theatre, since I remember Professor Wolfendale very well from undergraduate days). There were lots more chemists than physicists, I don’t really know why, and lots of young enthusiastic university teaching fellows. Great!

Sessions that stood out for me include the two inspirational keynotes and both of the workshops that I attended, plus many conversations with old and new friends. The first keynote was given by Simon Lancaster from UEA and its title was ‘Questioning the lecture’. He started by telling us not to take notes on paper, but instead to get onto social media. I did, though I find it difficult to listen and to post meaningful tweets at the same time. Is that my age? However, I agree with a huge amount of what Simon said, in particular that we should cut out lots of the content that we currently teach.

Antje Kohnle’s keynote on the second day had a very different style. Antje is from the University of St Andrews and she was talking about the development of simulations to make it easier for students to visualise some of the counterintuitive concepts in quantum mechanics. The resource that has been developed is excellent, but the important point that Antje emphasised is the need to develop resources such as this iteratively, making use of feedback from students. Absolutely!

The two workshops that I so much enjoyed were (1) ‘Fostering learning improvements in physics’, a thoughtful reflection, led by Judy Hardy and Ross Galloway from the University of Edinburgh, on the implications of the FLIP Project; and (2) the interestingly named (from a student comment) ‘I don’t know much about physics, but I do know buses’, led by Peter Sneddon at the University of Glasgow, looking at questions designed to test students’ estimation skills and their confidence in estimation.

The quality of the presentations was excellent, bearing in mind that some people were essentially enthusiastic teachers whilst others were further advanced in their understanding of educational research. I raised the issue of correlation not implying causality at one stage, but immediately wished that I hadn’t. I think that, by and large, the interventions that were being described are ‘good things’, and of course it is almost impossible to prove that it is your intervention that has resulted in the improvement that you see.

In sessions and informal discussion with colleagues, the topics that kept striking me were (1) the importance of student confidence; (2) reasons for underperformance (by several measures) of female students. We are already planning a workshop for next year!

Oh yes, and Durham’s hills have got hillier…

Staff engagement with e-assessment

Thursday, July 11th, 2013

More reflections from CAA2013 (held in Southampton, just down the road from the Isle of Wight ferry terminal)…

In the opening keynote, Don Mackenzie talked about the ‘rise and rise of multiple-choice questions’. This was interesting, because he was talking in the context of more innovative question types having been used back in the late 1990s than are used now. I wasn’t working in this area in the late 1990s so I don’t know what things were like then, but somehow what Don said didn’t surprise me.

Don went on to identify three questions that each of us should ask ourselves, implying that these were the stumbling blocks to better practice. The questions were:

  • Have you got the delivery system that you need?
  • Have you got the institutional support that you need?
  • Have you got the peer support that you need?

I wouldn’t argue with those, but I think I can say ‘yes’ to all three in the context of my own work – so why aren’t we doing better?

I think I’d identify two further issues:

1. It takes time to write good questions and this needs to be recognised by all parties;

2. There is a crying need for better staff development.

I’d like to pursue the staff development theme a little more. I think there is a need firstly for academics to appreciate that they can and should ‘do better’ (otherwise people do what is easy and we end up with lots of multiple-choice questions, and not necessarily even good multiple-choice questions), but then we need to find a way of teaching people how to do better. In my opinion this is about engaging academics, not software developers – and in the best possible world the two would work together to design good assessments. That means that staff development is best delivered by people who actually use e-assessment in their teaching, i.e. people like me. The problem is that people like me are busy doing their own job so don’t have any time to advise others. Big sigh. Please someone, find a solution – it is beyond me.

I ended up talking a bit about the need for staff development in my own presentation ‘Using e-assessment to learn about learning’ and in her closing address Erica Morris pulled out the following themes from the conference:

  • Ensuring student engagement
  • Devising richer assessments
  • Unpacking feedback
  • Revisiting frameworks and principles
  • and… Extending staff learning and development

I agree with Erica entirely, I just wonder how we can make it happen.