Archive for February, 2013

Quote of the day

Wednesday, February 27th, 2013

‘Assessment is not working, or at least it is not working as it should. In our attempt to generate forms of assessment capable of addressing all the purposes for which we use assessment, we have produced a Frankenstein that preys on the educational process, reducing large parts of teaching and learning to mindless mechanistic process whilst sapping the transformative power of education.’

Broadfoot, P. (2008) Assessment for learners: assessment literacy and the development of learning power. In Havnes, A. and McDowell, L. (eds) Balancing dilemmas in assessment and learning in contemporary education. Routledge/Taylor and Francis. pp. 213–224.

Quote of the day

Monday, February 25th, 2013

Reading through my notes on some of the many assessment papers I have read, I’m finding a few of those ‘sit up and take note’ quotes; things (sometimes very obvious) that other people somehow manage to say so much better than I can. So, I bring you the first of an occasional series of ‘Quote of the day’:

‘…summative assessment is itself “formative”. It cannot help but be formative. This is not an issue. At issue is whether that formative potential of summative assessment is lethal or emancipatory. Does summative assessment exert its power to discipline and control, a power so possibly lethal that the student may be wounded for life?’

Barnett, R. (2007) Assessment in higher education: an impossible mission? In Boud, D. and Falchikov, N. (eds) Rethinking assessment in higher education. London: Routledge. p. 37.

Selected response or constructed response?

Saturday, February 23rd, 2013

I have had an interesting debate with colleagues about whether questions in which you have to drag one or more markers to appropriate places on an image (see example below) are selected response or constructed response questions.

I am of the opinion that this is a constructed response question, because students are not given clues as to where the markers should go. It is fundamentally different to the question below (‘drag and drop onto image’), which is selected response because there are only a limited number of places where the labels can go.

However, during this debate, my colleague pointed out that the boundaries between constructed and selected response question types are not that clear cut. In a sense the top image is selected response because there are a finite number of pixels in the image. Similarly, if you ask a numerical question in which you want an answer that’s an integer between 1 and 9, there are actually only nine options available to you. For what it’s worth, I still think both of these are constructed response questions, but the debate is an interesting one.
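To make the pixel argument concrete, here is a minimal sketch (in Python, with made-up coordinates and a hypothetical tolerance value) of how a drag-the-marker question might be scored. The point is that the response is judged by its distance from a target position, not matched against an enumerated list of options:

```python
import math

def score_marker(response, target, tolerance=15):
    """Return True if a dragged marker lands close enough to the target.

    response, target: (x, y) positions in image pixels.
    tolerance: radius in pixels treated as 'correct' (a made-up value).
    """
    dx = response[0] - target[0]
    dy = response[1] - target[1]
    return math.hypot(dx, dy) <= tolerance

# On a 600 x 400 image there are 240,000 possible marker positions:
# technically finite, but far too many to present as a list of options.
print(score_marker(response=(312, 178), target=(310, 180)))  # True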

The hidden curriculum

Saturday, February 23rd, 2013

This morning I’ve been reading an oldish paper (Sambell & McDowell, 1998) about work on the ‘hidden curriculum’, an even older phenomenon (Snyder, 1971). 

The hidden curriculum can be thought of in terms of the distinction between ‘what is meant to happen’, i.e. the curriculum stated officially by the educational system or institution, and what students actually experience ‘on the ground’. Assessment is very important in determining the hidden curriculum.

Sambell & McDowell take this one step further. They point out that every student has a different hidden curriculum; the same assessment is interpreted differently not just by ‘staff’ and ‘students’ but by individuals. Students bring with them a different range of experiences, motivations and perspectives, which influence their responses.

In the light of this, how are we to work to improve the assessment experience (and hence the hidden curriculum) for all our students, especially in a large and diverse university such as the Open University? Dealing with each individual student’s previous experiences and perceptions is challenging. However, I think we are still missing some very obvious tricks. We assume that our students share our understanding of what assessment is for, but many probably don’t. I am interested in doing some work to improve that understanding, perhaps by running online tutorials for students before their first assignment, to discuss the assessment process rather than the assessment itself. Then we might do similarly after students have got their first piece of work back, to discuss what they have learnt from it and from the feedback.

Anyone want to join in the fun?

Sambell, K. and McDowell, L. (1998) The construction of the hidden curriculum: messages and meanings in the assessment of student learning. Assessment and Evaluation in Higher Education, 23(4), pp. 391–402.

Snyder, B.R. (1971) The hidden curriculum. New York: Alfred A. Knopf.

SIREJ

Tuesday, February 19th, 2013

I’ve been on the road even more than usual recently, dealing with a domestic crisis of a rather unusual nature (a tree through the roof of my elderly in-laws’ house down in Sussex). They’re OK, but assessment work is low priority right now. Apart from thinking about it.

And when I think, I always come to the same conclusion – what we do isn’t really very good. There’s lots of theorising out there, but our assessment practice still isn’t great. So, after my previous timid posting, let’s think big. This is my personal manifesto for change. In order to improve our assessment practice, we need to be truly:

Student centred (which isn’t the same as saying that we should always do what students say they want, but our students and their learning should come first. Always).

Innovative (or strictly, open to innovation, making change when that’s appropriate, but not when it isn’t).

Reflective

Evaluative

Joined up (trying to make our practice more coherent – I’m not saying that all practice should be identical, but rather that we should think about the big picture).

Think SIREJ – my initials with anger in the middle!

Automatic generation of assessment items

Monday, February 11th, 2013

Last week I participated in a fascinating Transforming Assessment webinar on ‘The semi-automatic generation of assessment items: objectives, challenges, and perspectives’. The presenter was Muriel Foulonneau from the Henri Tudor Research Centre, Luxembourg, and I was left feeling very much an amateur in a ‘room’ full of very clever professionals. The way in which assessment items and distractors are automatically generated is really very clever – with my ‘day job’ and so many wide-ranging interests in assessment, I feel I will never be able to keep up with what is going on.
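The webinar didn’t share code, but to give a flavour of the general idea, here is a minimal sketch of one common approach to item generation: a question stem is instantiated from a structured fact, and distractors are drawn from the answers to sibling facts. Everything here (the function, the fact pool, the stem) is made up for illustration; the methods discussed in the webinar are far more sophisticated than this:

```python
import random

def generate_item(stem_template, facts, n_distractors=3):
    """Generate one multiple-choice item from a pool of facts.

    stem_template: question stem with a {subject} placeholder.
    facts: dict mapping each subject to its correct answer; answers
           for the other subjects serve as plausible distractors.
    """
    subject, answer = random.choice(list(facts.items()))
    pool = [value for key, value in facts.items() if key != subject]
    options = random.sample(pool, n_distractors) + [answer]
    random.shuffle(options)
    return {"stem": stem_template.format(subject=subject),
            "options": options,
            "answer": answer}

facts = {"France": "Paris", "Spain": "Madrid", "Italy": "Rome",
         "Portugal": "Lisbon", "Greece": "Athens"}
item = generate_item("What is the capital of {subject}?", facts)
print(item["stem"])
print(item["options"], "->", item["answer"])
```

Even in a toy example like this, the hard part is clearly not producing candidate items but ensuring the distractors are plausible yet unambiguously wrong, which is presumably where much of the research effort lies.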

However, I was absolutely amazed to discover that one of the drivers for this work is the cost of producing test items: $1,500–$2,500 per item. How so? We are only talking about simple multiple-choice items here; how can it possibly cost that much?