Archive for the ‘quality’ Category

Revolution or evolution?

Friday, December 21st, 2012

I seem to have taken part in a lot of discussions recently in which I, or others, have talked about the need for a real ‘shake-up’ of what we do in assessment. It is indeed depressing that we continue to talk about the problems, yet we don’t seem able to do much better. I am very definitely not an expert, but I have read masses of papers written by those I consider to be experts, so I am somewhat embarrassed to admit that I am no longer really sure what assessment is for. I’ve heard all the arguments, but the more I read the more confused I become. And I suspect that, at least at institutional level, I am not alone in my confusion. I am no longer convinced that I know what we should alter in the ‘big shake-up’, and I fear that some of those who think they know may be driven by rhetoric rather than by evidence.

In the absence of a revolution, I think we could make significant improvements by an evolutionary approach, i.e. by making a series of smaller changes to our practice. In my own context, small changes might include more ‘little and often’ assessment, more use of oral feedback, and assessment that is designed in a coherent way throughout a student’s programme of study, with more opportunities for reflection and perhaps with tutors being able to see feedback provided by tutors on previous modules. Some of these little changes are quite big! Your ‘little changes’ would be different.

Evolution has to do with the survival of the fittest, and in educational terms this reminds me of the importance of evaluating each of the ‘little changes’ (and, in an ideal world, not making too many changes at once – hmmm) and only persevering with a change if it is shown to be effective. Then, step by step, we can work towards better assessment practice.

Poor quality assessment – inescapable and memorable

Tuesday, August 9th, 2011

David Boud famously said ‘Students can, with difficulty, escape from the effects of poor teaching, they cannot (by definition if they want to graduate) escape the effects of poor assessment.’

Boud, D. (1995) Assessment and learning: contradictory or complementary? In P. Knight (ed.) Assessment for learning in higher education. Kogan Page in association with SEDA, p. 35.

Poor assessment is also memorable. (more…)

Not like Moses

Friday, May 6th, 2011

One of the joys of trying to catch up with others who have been working in the field of assessment for much longer than me is finding books and articles that were written some time ago but which still seem pertinent today. I’d definitely put the following book into this category (and more thoughts from it will follow):

Gipps, C. and Murphy, P. (1994) A fair test? Assessment, achievement and equity. Buckingham: Open University Press.

For now, I’d like to highlight a particularly memorable quote from Gipps and Murphy, originally from the Times Educational Supplement back in November 1988, expressing scepticism about the ‘Code of Fair Testing Practices in Education’ in the USA. As a former Assistant Secretary of Education put it:

If all the maxims are followed I have no doubt the overall quotient of goodness and virtue should be raised. Like Moses, the test makers have laid down ten commandments they hope everyone will obey. This doesn’t work very well in religion – adultery continues.

So I’d like to emphasise that my ‘top tips’ in the previous post are not commandments – apart, perhaps, from my final tip (monitoring the questions when in use), which I think ought to be made compulsory.

In general, though, while the ‘top tips’ have worked well for me, and I hope others might find them useful, perhaps it is more important that question authors take responsibility for the quality of their own work rather than mindlessly following ‘rules’ written by others. This wish reflects most of my practice, in writing e-assessment questions and in everything else: for example, I far prefer helping people to write questions in workshops (when they are writing questions ‘for real’) to providing rules for them to follow. Sadly, I think a wish to improve the quality of our e-assessment may be leading to a more dictatorial approach – I’m not convinced it will work.

Is it worth the effort?

Saturday, February 19th, 2011

I’m taking a short break from reporting findings from my analysis into student engagement with short-answer free-text questions to reflect on a couple of things following the HEA UK Physical Sciences Centre workshop on ‘More effective assessment and feedback’ at the University of Leicester on Wednesday. It was an interesting meeting – initially people sat very quietly listening to presentations, but by the afternoon there was lots of discussion. I spoke twice – in the morning I wittered on about the problem of students not answering the question you (the academic) thought you had asked; in the afternoon I was on the more familiar ground of writing short-answer free-text e-assessment questions, with feedback.

Steve Swithenby ran two discussion sessions and at the end he got us classifying various ‘methods of providing feedback’ as high/medium/low in terms of ‘importance and value to student’, ‘level of resources needed’ and ‘level of teacher expertise required’. Obviously, in the current economic climate we’re looking for something that is high, low, low. I agree with Steve that e-assessment, done properly, is high, high, high.

Right at the end, someone asked me ‘Is it worth the effort?’ It’s a fair point. On one level, in my own context at the UK Open University, I know that all the considerable effort I have put into writing good e-assessment questions has been worthwhile, on financial as well as pedagogical grounds, simply because we have so many students and can re-use low-stakes questions from year to year. I’m quite used to explaining that this is not necessarily the case in other places, with smaller student numbers. However, the question went deeper than that – do we over-assess? Is the effort that we put into assessment per se worth it, in terms of improving learning? It’s a truism that assessment drives learning, and I have certainly seen learning take place as students are assessed, in a way that I doubt would happen otherwise. But is this generally true? What would be the effect of reducing the amount of assessment on our modules and programmes? I don’t know.

Writing good interactive computer-marked assessment questions

Thursday, October 21st, 2010

I run a lot of workshops trying to help colleagues to write good e-assessment questions. There are usually lots of brilliant ideas in the workshop, but somehow we end up slipping back into using lots of multiple choice questions because people think they are reliable.

I suppose it is true that the answer matching is easier to set up for multiple-choice and multiple-response questions, but beware – just because you (as author) can easily identify one response as ‘correct’ and others as ‘incorrect’, it doesn’t mean that the question is behaving in the way you expect. The question might be ambiguous, or there might actually be more than one correct option. Or – the most common problem – whilst some options are definitely correct and others are definitely incorrect, there may also be options which could be either correct or incorrect, depending on your interpretation of them. (more…)

Dependability – the one-handed clock

Wednesday, September 8th, 2010

This is my final post relating directly to the Earli/Northumbria Assessment Conference. Well, that’s a relief, I hear you say. It was an amazing conference for me, coming at just the right time in my thinking about broader issues in assessment.

This post continues the theme of ‘quality’. In the final keynote, instead of dealing separately with issues of validity, reliability, authenticity and manageability (in practical terms), Professor Gordon Stobart talked about a ‘one-handed clock’. You have to decide where on the face to place the single hand, with construct validity, reliability and manageability equally spaced (at 120-degree intervals) around it. This is a useful way of thinking, capturing the tensions I was trying to describe in my previous post.

Do we know what we mean by ‘quality’ in e-assessment?

Wednesday, September 8th, 2010

This was the topic of my roundtable at the Earli/Northumbria Assessment Conference and I am very grateful to the 10 people who attended one of the two wonderful discussions we had on the topic.

The obvious answer is that, no, we don’t know what we mean by ‘quality’. We don’t even know what we mean by ‘e-assessment’. Having discussed this a bit, we moved on to consider different aspects of ‘quality’. I note that both groups, whilst mentioning that validity and reliability are important, also emphasised the role of e-assessment in transforming learning, in particular through its ability to provide instantaneous feedback. (more…)