Archive for the ‘assessment design’ Category

SIREJ

Tuesday, February 19th, 2013

I’ve been on the road even more than usual recently, dealing with a domestic crisis of a rather unusual nature (a tree through the roof of my elderly in-laws’ house down in Sussex). They’re OK, but assessment work is low priority right now. Apart from thinking about it.

And when I think, I always come to the same conclusion – what we do isn’t really very good. There’s lots of theorising out there, but our assessment practice still isn’t great. So, after my previous timid posting, let’s think big. This is my personal manifesto for change. In order to improve our assessment practice, we need to be truly:

Student centred (which isn’t the same as saying that we should always do what students say they want, but our students and their learning should come first. Always).

Innovative (or strictly, open to innovation, making change when that’s appropriate, but not when it isn’t).

Reflective

Evaluative

Joined up (trying to make our practice more coherent – I’m not saying that all practice should be identical, but rather that we should think about the big picture).

Think SIREJ – my initials with anger in the middle!

JISC Assessment and Feedback Programme

Tuesday, April 24th, 2012

I’m just back from my second attendance at a JISC Learning and Teaching Practice Experts Group meeting. It was excellent – it is inspiring to be surrounded by people who know such a lot about learning and teaching and, more importantly, actually care about the student experience.

Half the day was spent discussing some of the early outcomes from the new JISC Assessment and Feedback Programme. I was involved in an unsuccessful bid to this programme, but can truly say that I am happy not to have been successful (not sure what my colleagues think!).  I’m just SO busy and it is nice to learn from others rather than being in the thick of it. (more…)

A consistent approach to assessment design

Sunday, January 15th, 2012

Last week, I participated in a ‘Quality Enhancement Seminar’ on ‘Effective use of interactive computer-marked assessment’. A list of tips I gave included the following two points:

11. Embed iCMAs carefully within the module’s assessment strategy, considering whether you want them to be formative-only, summative, thresholded etc.;

12. Work towards a consistent approach at the qualification, programme and Faculty level (at least).

Afterwards I was asked how the above two points can be reconciled. The questioner had a point. We want each module to have a carefully thought-out assessment strategy, using e-assessment when, but only when, it is appropriate. And yet I have become more and more worried about the confusion we cause by having lots of different models for different modules. I’ve become convinced that we need to take a more joined-up approach. Which is more important: having the detailed assessment strategy that is absolutely right for each module, or having a consistent approach across a qualification? I’m not sure. It is at least good that so many people are giving careful consideration to these matters.

Two more talks

Thursday, June 30th, 2011

We’ve now had two more talks as part of the OU Institute for Educational Technology’s ‘Refreshing Assessment’ series. First we had Lester Gilbert from the University of Southampton on ‘Understanding how to make interactive computer-marked assessment questions more reliable and valid: an introduction to test psychometrics’. Then, yesterday, Don Mackenzie from Professional e-Assessment Services (which I think is a University of Derby spin-off), with the title ‘From trivial pursuit to serious e-assessment: authoring and monitoring quality questions for online delivery’ (more…)

Not like Moses

Friday, May 6th, 2011

One of the joys of trying to catch up with others who have been working in the field of assessment for much longer than me is finding books and articles that were written some time ago but which still seem pertinent today. I’d definitely put the following book into this category (and more thoughts from it will follow):

Gipps, C. and Murphy, P. (1994) A fair test? Assessment, achievement and equity. Buckingham: Open University Press.

For now, I’d like to highlight a particularly memorable quote from Gipps and Murphy, originally from the Times Educational Supplement back in November 1988, expressing scepticism about the ‘Code of Fair Testing Practices in Education’ in the USA. As a former Assistant Secretary of Education put it:

If all the maxims are followed I have no doubt the overall quotient of goodness and virtue should be raised. Like Moses, the test makers have laid down ten commandments they hope everyone will obey. This doesn’t work very well in religion – adultery continues.

So I’d like to emphasise that my ‘top tips’ in the previous post are not commandments! – apart perhaps from my final tip (monitoring the questions when in use) which I think ought to be made compulsory.

In general, though, although the ‘top tips’ have worked well for me, and I hope that others might find them useful, perhaps it is more important that question authors take responsibility for the quality of their own work rather than mindlessly following ‘rules’ written by others. This wish reflects most of my practice, in writing e-assessment questions and in everything else: for example, I far prefer helping people to write questions in workshops (when they are writing questions ‘for real’) to providing rules for them to follow. Sadly, I think a wish to improve the quality of our e-assessment may be leading to a more dictatorial approach – I’m not convinced it will work.

Is it worth the effort?

Saturday, February 19th, 2011

I’m taking a short break from reporting findings from my analysis into student engagement with short-answer free-text questions to reflect on a couple of things following the HEA UK Physical Sciences Centre workshop on ‘More effective assessment and feedback’ at the University of Leicester on Wednesday. It was an interesting meeting – initially people sat very quietly listening to presentations, but by the afternoon there was lots of discussion. I spoke twice – in the morning I wittered on about the problem of students not answering the question you (the academic) thought you had asked; in the afternoon I was on the more familiar ground of writing short-answer free-text e-assessment questions, with feedback.

Steve Swithenby ran two discussion sessions and at the end he got us classifying various ‘methods of providing feedback’ as high/medium/low in terms of ‘importance and value to student’, ‘level of resources needed’ and ‘level of teacher expertise required’. Obviously, in the current economic climate we’re looking for something that is high, low, low. I agree with Steve that e-assessment, done properly, is high, high, high.

Right at the end, someone asked me ‘Is it worth the effort?’ It’s a fair point. On one level, in my own context at the UK Open University, I know that all the considerable effort I have put into writing good e-assessment questions has been worthwhile, on financial as well as pedagogical grounds – simply because we have so many students and can re-use low-stakes questions from year to year. I’m quite used to explaining that this is not necessarily the case in other places, with smaller student numbers. However, the question went deeper than that – do we over-assess? Is the effort that we put into assessment per se worth it, in terms of improving learning? It’s a truism that assessment drives learning, and I have certainly seen learning take place as students are assessed, in a way that I doubt would happen otherwise. But is this generally true? What would be the effect of reducing the amount of assessment on our modules and programmes? I don’t know.

Challenging my own practice

Friday, February 4th, 2011

The JISC ‘From challenge to change’ workshop yesterday (see previous post) started with an invitation to record aspects of assessment and feedback provision in our own context that we felt to be strengths or remained a challenge.

I was there as the case study speaker, outlining the challenges of my situation (huge student numbers, distance, openness) and then going on to describe what we have done to address the challenges. It’s all positive stuff – we have provided thousands of students with instantaneous feedback, we have helped them to pace their study, we have freed up staff time to do better things, we have marked consistently and reliably. Students and tutors like our questions and we have even saved the University money.

So why is it that, when asked to identify the strengths and the challenges, I still see more challenges than strengths? We are scaffolding learning, but are we constraining it too much? We are using short-answer free-text questions because we want to ‘go beyond’ multiple-choice computer-marked questions, but are we really causing students to think any more than they would when confronted by a good multiple-choice question? We are using ‘low-stakes summative’ iCMA questions to encourage students to engage more deeply with the process (and we know that, at a certain level, this works), but are they really learning? I have similar anxieties about our tutor-marked assignments – are we giving just too much feedback? Are we overwhelming our students? Are we smothering them? Most fundamentally, do we have a shared understanding with our students (and our part-time tutors) about what assessment is for and what it is about? If I’d like this blog to achieve one thing, it would be to challenge all of us to reflect more and to evaluate more. Then perhaps we’ll get some answers.

Challenging received wisdom

Friday, February 4th, 2011

Our case study ‘Designing interactive assessments to promote independent learning’ from the JISC guide Effective Assessment in a Digital Age featured at the JISC Birmingham Assessment Workshop ‘From challenge to change’ yesterday, so I was speaking at the workshop.  These workshops are thoughtful and thought-provoking and there are some wonderful resources at http://jiscdesignstudio.pbworks.com/w/page/33596916/Effective-Assessment-in-a-Digital-Age-Workshops.

However, whenever I start thinking too deeply I end up worrying that we are not ‘getting it right’. That goes for my own work as much as anyone else’s (see next post), but I do wonder whether all the noise being made in the ‘well tramped’ field of assessment is really making things better. In particular, we have principles of good assessment design, conditions under which assessment supports learning, guides to good assessment for learning etc. from every expert you can think of. I quote them regularly! But are these lists of sound underpinning principles really enabling us to deliver assessment that is more useful to our students and ourselves? I know that there is some inspirational work going on and that there have been some improvements over the years, but can we link improvement in learning to the underpinning principles? Where’s the evidence? If you have some, please do add a comment.